11943272

DETAILED DESCRIPTION

FIG. 1 illustrates an example information distribution network 100 on which many of the various features described herein may be implemented. Network 100 may be any type of information distribution network, such as satellite, telephone, cellular, wireless, etc. One example may be a wireless network, an optical fiber network, a coaxial cable network, or a hybrid fiber/coax (HFC) distribution network. Such networks 100 use a series of interconnected communication links 101 (e.g., coaxial cables, optical fibers, wireless links, etc.) to connect multiple homes 102 or other user locations to a local office or headend 103. The local office 103 may transmit downstream information signals onto the links 101, and each home 102 may have a receiver used to receive and process those signals. There may be one link 101 originating from the local office 103, and it may be split a number of times to distribute the signal to various homes 102 in the vicinity (which may be many miles) of the local office 103. Although the term home is used by way of example, locations 102 may be any type of user premises, such as businesses, institutions, etc. The links 101 may include components not illustrated, such as splitters, filters, amplifiers, etc. to help convey the signal clearly, but in general each split introduces a bit of signal degradation. Portions of the links 101 may also be implemented with fiber-optic cable, while other portions may be implemented with coaxial cable, other lines, or wireless communication paths. The local office 103 may include an interface 104, such as a cable modem termination system (CMTS), which may be a computing device configured to manage communications between devices on the network of links 101 and backend devices such as servers 105-107 (to be discussed further below). 
The CMTS may be as specified in a standard, such as, in an example of an HFC-type network, the Data Over Cable Service Interface Specification (DOCSIS) standard, published by Cable Television Laboratories, Inc. (a.k.a. CableLabs), or it may be a similar or modified device instead. The CMTS may be configured to place data on one or more downstream channels or frequencies to be received by devices, such as modems at the various homes 102, and to receive upstream communications from those modems on one or more upstream frequencies. The local office 103 may also include one or more network interfaces 108, which can permit the local office 103 to communicate with various other external networks 109. These networks 109 may include, for example, networks of Internet Protocol devices, telephone networks, cellular telephone networks, fiber optic networks, local wireless networks (e.g., WiMAX), satellite networks, and any other desired network, and the interface 108 may include the corresponding circuitry needed to communicate on the network 109, and to other devices on the network such as a cellular telephone network and its corresponding cell phones, or other network devices. For example, the network 109 may communicate with one or more content sources, such as multicast or unicast video sources, which can supply video streams for ultimate consumption by the various client devices in the homes 102. As noted above, the local office 103 may include a variety of servers 105-107 that may be configured to perform various functions. For example, the local office 103 may include a push notification server 105 that can generate push notifications to deliver data and/or commands to the various homes 102 in the network (or more specifically, to the devices in the homes 102 that are configured to detect such notifications). The local office 103 may also include a content server 106 configured to provide content to users in the homes. 
This content may be, for example, video on demand movies, television programs, songs, text listings, etc. The content server may include software to validate user identities and entitlements, locate and retrieve requested content, encrypt the content, and initiate delivery (e.g., streaming) of the content to the requesting user and/or device. The local office 103 may also include one or more application servers 107. An application server 107 may be a computing device configured to offer any desired service, and may run various languages and operating systems (e.g., servlets and JSP pages running on Tomcat/MySQL, OSX, BSD, Ubuntu, Redhat, HTML5, JavaScript, AJAX and COMET). For example, an application server 107 may be used to implement a cache server for the content found on the content server 106. Other example application servers may be responsible for collecting data such as television program listings information and generating a data download for electronic program guide listings. Another application server may be responsible for monitoring user viewing habits and collecting that information for use in selecting advertisements. Another application server may be responsible for formatting and inserting advertisements in a video stream being transmitted to the homes 102. And as will be discussed in greater detail below, another application server may be responsible for receiving user remote control commands, and processing them to provide an intelligent remote control experience. An example home 102a may include an interface 120. The interface 120 may comprise a device 110, such as a modem, which may include transmitters and receivers used to communicate on the links 101 and with the local office 103. The device 110 may be, for example, a coaxial cable modem (for coaxial cable links 101), a fiber interface node (for fiber optic links 101), or any other desired device having similar functionality. The device 110 may be connected to, or be a part of, a gateway interface device 111. 
The gateway interface device 111 may be a computing device that communicates with the device 110 to allow one or more other devices in the home to communicate with the local office 103 and other devices beyond the local office. The gateway 111 may be a set-top box (STB), digital video recorder (DVR), computer server, or any other desired computing device. The gateway 111 may also include local network interfaces (not shown) to provide communication signals to devices in the home, such as televisions 112, additional STBs 113, personal computers 114, laptop computers 115, wireless devices 116 (wireless laptops and netbooks, tablet computers, mobile phones, mobile televisions, personal digital assistants (PDA), etc.), and any other desired devices. Examples of the local network interfaces include Multimedia Over Coax Alliance (MoCA) interfaces, Ethernet interfaces, universal serial bus (USB) interfaces, wireless interfaces (e.g., IEEE 802.11), Bluetooth interfaces, and others. Any of the devices in the home, such as the gateway 111, STB 113, computer 114, etc., can include an application software client that can make use of the video images captured by the image capture servers. FIG. 2 illustrates general hardware elements, software elements, or both that can be used to implement any of the various computing devices and/or software discussed herein. The computing device 200 may include one or more processors 201, which may execute instructions of a computer program to perform any of the features described herein. The instructions may be stored in any type of computer-readable medium or memory, to configure the operation of the processor 201. For example, instructions may be stored in a read-only memory (ROM) 202, random access memory (RAM) 203, or removable media 204, such as a Universal Serial Bus (USB) drive, compact disk (CD) or digital versatile disk (DVD), hard drive, floppy disk drive, or any other desired electronic storage medium. 
Instructions may also be stored in an attached (or internal) hard drive 205. The computing device 200 may include one or more output devices, such as a display 206, such as an external monitor or television, and may include one or more output device controllers 207, such as a video processor. There may also be one or more user input devices 208, such as a remote control, keyboard, mouse, touch screen, microphone, etc. The computing device 200 may also include one or more network interfaces, such as input/output (I/O) interface 209 (e.g., a network card) to communicate with an external network 210. The network interface may be a wired interface, wireless interface, or a combination of the two. In some embodiments, the interface 209 may include a modem (e.g., a cable modem), and network 210 may include the communication links 101 discussed above, the external network 109, an in-home network, a provider's wireless, coaxial, fiber, or hybrid fiber/coaxial distribution system (e.g., a DOCSIS network), or any other desired network. Various features described herein offer improved remote control functionality to users accessing content from the local office 103 or another content storage facility or location. For example, one such user may be a viewer who is watching a television program being transmitted from the local office 103. In some embodiments, the user may be able to control his/her viewing experience (e.g., changing channels, adjusting volume, viewing a program guide, etc.) using any networked device, such as a cellular telephone, personal computer, personal data assistant (PDA), netbook computer, etc., aside from (or in addition to) the traditional infrared remote control that may have been supplied together with a television or STB. FIG. 3a illustrates an example distribution hierarchy of computing devices, such as caching servers for an example content delivery network (CDN). 
The hierarchy of caching servers may be used to help with the distribution of content (e.g., online content) by servicing requests for that content on behalf of the content's source. At the top of the hierarchy is origin server 300. The origin server 300 may be a server that provides one or more pieces of content of a content provider. For example, origin server 300 may be implemented using content server 106 of FIG. 2, and can supply a variety of content to requesting users and devices. The content may be, for example, movies or other data available for on demand access by clients (not shown), and the origin server 300 for a movie may be a server that has access to a copy of the movie or initially makes the movie available. Although FIG. 3a illustrates a single origin server 300 for a given distribution hierarchy, multiple origin servers may be used for the given distribution hierarchy. The origin server 300 may also establish the caching hierarchy that will be used to distribute content. For example, the server 300 may transmit messages to one or more cache servers, requesting that the cache servers assist in caching the origin server's content, and instructing the cache server as to where that cache server should be in the hierarchy for the origin's content domain(s), what domain(s) of content it should (or should not) cache, access restrictions, authentication requirements for granting access, etc. To facilitate distribution of the content, one or more top level cache servers 301a-b can be communicatively coupled to the origin server 300, and each can store a copy of the file(s) containing the content of the origin server 300 (e.g., a copy of the movie files for an on-demand movie). These top level servers can be co-located at the same premises as the origin server 300 and implemented as application servers 107, or they can be located at locations remote from the origin server 300, such as at a different local office connected to the network 109. 
The top level cache servers 301a-b may also receive and respond to client requests for the content. Further down in the hierarchy may be one or more intermediate level cache servers 302a-c and edge cache servers 303a-d, and these may be implemented using the same hardware as the cache servers 301a-b, and can be configured to also receive and respond to client requests for content. Additional layers of caches in the hierarchy may also be used as desired, and the hierarchical arrangement allows for the orderly distribution of the content, updates to the content, and access authorizations. For example, lower level servers may rely on higher level servers to supply source files for the content and to establish access authorization parameters for the content. In some embodiments, the top and intermediate level servers may refrain from interaction with clients, and may instead limit their content delivery communications to communicating with other servers in the distribution network. Limiting client access to just the lowest level, or edge, servers can help to maintain security and assist with scaling. The CDN may also include one or more service routers 304, which may be communicatively coupled to all of the cache servers, but which need not be a part of the caching hierarchy. The service router 304 may facilitate communications between the members of the hierarchy, allowing them to coordinate. For example, the servers in the hierarchy may periodically transmit announcement messages to other cache servers, announcing their respective availabilities (e.g., announcing the content that they are able to provide, or the address domains or routes that they support). In some embodiments, the caches may simply transmit a single announcement to a service router 304, and the service router 304 may handle the further distribution of the announcement messages to other servers in the hierarchy. 
In this manner, the service router 304 may act as a route reflector in a border gateway protocol, reducing the amount of traffic passing directly between the caches. FIG. 3a illustrates a logical hierarchy for a given content delivery network, from its origin 300 (e.g., a server storing the original copy of a particular video asset) down through layers of caches. Each individual piece of content can have its own hierarchy, however, and a single server (e.g., a computing device) can play different roles in the hierarchies for different pieces of content. So, for example, while a first computing device can act as the origin 300 for a first piece of content (e.g., videos from “ESPN.COM”), that same device can act as an intermediate-level cache 302b for a different piece of content (e.g., videos from “NBC.COM”). For some aspects of the disclosure, the physical arrangement of the servers need not resemble FIG. 3a at all. As illustrated in FIG. 3b, each of the cache servers may be coupled through its own corresponding router 305 to one or more communication network(s). The routers 305 may contain the necessary routing tables and address information to transmit messages to other devices on the network, such as the other caches, client devices, etc. FIG. 4 illustrates an example timeline for a streamlined content delivery technique (e.g., a streamlined HTTP enhancement proxy content delivery technique), in which content is requested and delivered between a client device (e.g., device 200 shown in FIG. 2, gateway device 111 shown in FIG. 1), a server (e.g., device 200 shown in FIG. 2, edge cache server 303 shown in FIG. 3), and a remote server (e.g., device 200 shown in FIG. 2, intermediate level cache server 302 shown in FIG. 3). 
While a client device and a server are shown in the example streamlined HTTP enhancement proxy content delivery timeline of FIG. 4, the technique may also be used for content delivery among servers, such as server 303 and server 302 shown in FIG. 3, which may or may not be part of the same content delivery network. As illustrated in FIG. 4, data transmissions as a function of time are shown for the client device and the remote server. Data received by the client device (e.g., “Client Device RX”) is illustrated as the portion of FIG. 4 below line 451. Data transmission by the client device (e.g., “Client Device TX”) is illustrated as the portion of FIG. 4 above line 451 and below line 452. Data received by the server (e.g., “Server RX”) is illustrated as the portion of FIG. 4 above line 452 and below line 453. Data transmission by the server (e.g., “Server TX”) is illustrated as the portion of FIG. 4 above line 453. Data receipt and transmission by additional servers, such as remote cache or origin servers, are not shown for the sake of brevity. The client device may transmit content request 401 at time 402 to a server, for example, in response to a user requesting content using an input device (e.g., input device 208 shown in FIG. 2). Content request 401 may be, for example, an HTTP GET request for a video asset (e.g., a particular movie or program from a website) or a fragment of the video asset. The content fragment may be, for example, any time-based data unit of arbitrary size, such as a two-second video fragment of the user-requested movie or program. In some embodiments, content request 401 may be an HTTP request for a data unit. The data unit may be, for example, a defined unit of content, such as a high definition video asset, a three-dimensional video asset, a two-second video fragment, or any other suitable content or content fragment provided using an HTTP protocol. 
Content request 401 may be transmitted by the client device using any suitable transmission device or network, such as network I/O interface 209 shown in FIG. 2, over any suitable communications path or network, such as network 210 shown in FIG. 2, using any suitable communications or network protocol. For example, the client device may open a network protocol session, such as a transmission control protocol (TCP) session or a user datagram protocol (UDP) session, with the server. In another example, content request 401 may be transmitted by the client device to an intermediate device (e.g., modem 110 shown in FIG. 1, interface 104 shown in FIG. 1, network I/O interface 209 shown in FIG. 2, service router 304 shown in FIG. 3, etc.) communicatively coupled between the client device and the server. In some embodiments, content request 401 may include an address identifier of the client device making the request. The address identifier may contain routing information to indicate how the client device can be contacted. For example, the address identifier may be a dotted-decimal internet protocol (IP) address for the client device or an intermediate device that handles the client device's communications. In some embodiments, content request 401 may include a uniform resource identifier (URI). For example, content request 401 may include the URI “http://www.videocontent.com/tv/NCIS/11915/12128607/Restless/videos” corresponding to a user request for an episode of the television program NCIS. In another example, content request 401 may include the URI “http://videocontent.net/movies/Avatar/140880/full-movie” corresponding to a user request for the motion picture Avatar. In some embodiments, content request 401 may include a URI prefix, such as “videocontent/tv/NCIS/videos” or “videocontent/movies/Avatar/full-movie”, to identify a domain or subdomains of the requested content. 
Although the example URI prefixes use text and the forward slash “/” to indicate sub-domain relationships, other forms of notation may be used. For example, the domain name can be represented in an order that increases in specificity from left to right, such as “net.videocontent.tv.ncis” or “net.videocontent.movies.avatar,” or vice versa. In some embodiments, a fully qualified domain name (FQDN) can be used as a URI prefix. For example, the URI prefix may be the FQDN “videocontent.net.” or any other suitable FQDN or domain name associated with content request 401. In some embodiments, the URI prefix can be provided in a shortened form to reduce code bloat. For example, a URI prefix for the movie Avatar could simply be “videocontentavatar.” In some embodiments, content request 401 may include a time duration, a data range, or both corresponding to a fragment of the requested content (e.g., a two-second fragment of requested video content). For example, content request 401 may include a URI appended with the time duration “#t=2,4” corresponding to a two-second video fragment for seconds 2-4 of the requested content. In another example, content request 401 may include the data range “bytes=6000001-12000000” corresponding to the two-second video fragment for seconds 2-4 of a video with an associated data rate of approximately 3 MB/s. In some embodiments, content request 401 may include or be associated with a timeout, such as timeout 413. Timeout 413 may correspond to a time duration after which the requesting client will respond as if an error has occurred with the original request, and may occur at a specified time after request 401 has been transmitted by the client device. In certain embodiments, a timeout period may correspond to the amount of time between request time 402 and timeout 413. For example, timeout 413 may occur two seconds after request time 402 in association with a two-second timeout period. 
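The mapping above between a time range and a byte range can be sketched as follows. This is an illustrative helper, not code from the patent: it assumes a constant data rate (here 3 MB/s is taken as 3,000,000 bytes per second) and a 1-indexed, inclusive byte range, which together reproduce the “bytes=6000001-12000000” example for seconds 2-4.

```python
def fragment_byte_range(start_s: int, end_s: int, bytes_per_second: int) -> str:
    """Map a time range (in seconds) to an HTTP-style byte range string.

    Assumes a constant data rate and a 1-indexed inclusive range,
    matching the "bytes=6000001-12000000" example in the text.
    """
    first = start_s * bytes_per_second + 1
    last = end_s * bytes_per_second
    return f"bytes={first}-{last}"


def fragment_time_suffix(start_s: int, end_s: int) -> str:
    """Build the "#t=2,4"-style time-duration suffix for the same fragment."""
    return f"#t={start_s},{end_s}"


print(fragment_byte_range(2, 4, 3_000_000))  # bytes=6000001-12000000
print(fragment_time_suffix(2, 4))            # #t=2,4
```

A real client would, of course, derive the data rate from the manifest or encoding parameters rather than hard-code it.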
If the requested content is not received by the client device before the timeout period ends, content request 401 may time out and transmission of an additional request may begin (e.g., possibly for the requested content at a lower quality or bit rate). If the additional request also times out, the client device's request for content may be abandoned. The server may receive content request 403 at time 404 from the client device using any suitable receiver device, such as network I/O interface 209 shown in FIG. 2. Content request 403 may be substantially the same as, or a modified version of, content request 401 transmitted by the client device. For example, content request 403 may contain transmission errors (e.g., noise, jitter, etc.) resulting from, for example, electromagnetic interference in the transmission medium between the client device and the server. In another example, content request 403 may be received from an intermediate device communicatively coupled between the client device and the server. In some embodiments, content request 403 may be received from another server. For example, content request 403 may be received at a higher-level (parent) cache server or origin server from a lower-level (child) cache server. In some embodiments, the server may check to determine whether it possesses or stores the requested content in its cache memory (e.g., ROM 202, RAM 203, removable media 204, or hard drive 205 shown in FIG. 2), using any suitable technique or circuitry (e.g., processor 201 shown in FIG. 2). For example, the server may decode request 403 to identify the address information (e.g., URI, URI prefix) of the requested content and perform a search of its memory based on the identified address information. If the requested content is stored in its memory, the server can begin transmitting the requested content to the requesting client (or server); that transmission is not shown for the sake of brevity. 
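The serve-or-forward decision just described can be sketched in a few lines. The class and method names below are hypothetical, chosen only to mirror the text: the server searches its local store by the address information decoded from the request and either serves the cached copy or signals that the request must go upstream.

```python
class CacheServer:
    """Sketch of the cache-lookup step: serve locally cached content,
    otherwise indicate the request must be forwarded to a parent/origin."""

    def __init__(self, store=None):
        # Local cache memory, keyed by URI (or URI prefix) of the content.
        self.store = store or {}

    def handle_request(self, uri: str):
        # Search memory using the address information from the request.
        if uri in self.store:
            # Cached: begin transmitting to the requesting client (or server).
            return ("serve", self.store[uri])
        # Not cached: the caller should forward the request upstream.
        return ("forward", None)
```

For example, a server holding only the NCIS fragment would answer `("serve", …)` for that URI and `("forward", None)` for anything else.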
If it is not, the server can transmit content request 405 to another server, such as a parent cache server or origin server. The server may transmit content request 405 at time 406 to a remote server, such as a higher-level cache server or origin server, in response to determining that the requested content is not stored in its memory. For example, content request 405 may be an HTTP GET request for the requested video asset or a fragment of the video asset. Content request 405 may be transmitted by the server using any suitable transmission device, such as network I/O interface 209 shown in FIG. 2, over any suitable communications path or network, such as network 210 shown in FIG. 2. For example, the server may open a network protocol session with the remote server. In some embodiments, request 405 may be sent to a router (e.g., service router 304 shown in FIG. 3) for redistribution to other servers in the CDN hierarchy. In some embodiments, request 405 may include an address identifier of the server making the request. This identifier can contain routing information to indicate how the server can be contacted. This address can be, for example, a dotted-decimal internet protocol (IP) address for the server, or for a router, modem, or network interface that handles the server's communications. In some embodiments, the server may identify the remote server to which content request 405 will be transmitted using any suitable server selection technique, path selection technique, or both. For example, the server may consult an index of cache servers (e.g., an index or database maintained by service router 304 shown in FIG. 3) to determine which next higher level cache server should service the request. The cache server index may identify, for example, cache destinations for each listed domain as indicated by its URI or URI prefix. The cache server index may also provide path information for obtaining the domain's content when servicing a client request for the content. 
In some embodiments, various other factors may be used in the identification and selection of the remote server to which request 405 is to be directed. One factor may be the processing capability of the remote server. For example, a remote server with a faster processing capability may be preferred over a remote server with a slower processing capability. Another factor may be the length of a physical connection between the server and the remote server. The length may include the number of intermediate routers and network legs, and the path selection process may prefer a shorter path. For example, a remote server located in the same domain or CDN may be preferred over a remote server located in a different domain or CDN. Another factor may be the cost (e.g., financial cost, computing cost, time cost) associated with accessing the respective caches. For example, the path selection process conducted by the cache server on the index may select the least costly route. In some embodiments, other business rules, processes, or techniques may be used for server selection, path selection, or both. The server may receive content 407 at time 408 from the remote server or from an intermediate device using any suitable receiver device. In some embodiments, the server may initially receive portion 420 of content 407. Portion 420 may be, for example, a portion of the data unit requested by the client device. Portion 420 may be of an arbitrarily small size. For example, portion 420 may be the first 64 bits of a 6 MB video fragment. In some embodiments, the server may store or buffer portion 420 in a temporary storage device (e.g., RAM 203 shown in FIG. 2). In some embodiments, the server may store portion 420, the entire amount of content 407, or both in its cache memory. In some embodiments, the server may identify the portion or portions of the data unit that it has received. For example, the server may store information identifying the portions of the data unit that have been received. 
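One way to combine the selection factors above is a simple weighted score per candidate server. The field names and weights below are assumptions for illustration only; the patent does not prescribe any particular weighting, only that faster processing, shorter paths, same-CDN locality, and lower access cost may each be preferred.

```python
def select_remote_server(candidates):
    """Pick a remote server by weighing the selection factors from the text.

    Each candidate is a dict with illustrative fields:
      processing_speed - higher is better (processing capability factor)
      hop_count        - intermediate routers/legs; shorter path preferred
      access_cost      - financial/computing/time cost; cheaper preferred
      same_cdn         - bonus for staying within the same domain or CDN
    The weights are hypothetical, not values from the patent.
    """
    def score(c):
        return (
            c["processing_speed"] * 1.0
            - c["hop_count"] * 2.0
            - c["access_cost"] * 1.5
            + (5.0 if c["same_cdn"] else 0.0)
        )
    return max(candidates, key=score)
```

In practice such scores could equally be replaced by business rules or an explicit routing policy, as the text notes.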
In certain implementations, the server may associate a flag with an index of the data unit. The server may analyze the flag data to determine whether one or more of the portions of the data unit have been received. The server may also, for example, use the flag data to determine whether any portions of a requested data unit are stored in the server's memory. If the server determines that one or more of the portions are stored locally, it may transmit the locally-stored portions to the requesting client device and request additional portions of the data unit from any suitable remote server or content delivery network. The server may begin transmitting portion 421 of content 409 at time 410 to the requesting client device (e.g., by opening a network protocol session with the client device) upon receiving and buffering portion 420 of content 407. For example, the difference between time 408 and time 410 may be arbitrarily small and limited only by system capabilities. Portion 421 may be substantially the same as, or a modified version of, portion 420. In some embodiments, portion 421 may correspond to data read from a buffered version of portion 420 stored in the server's memory. For example, the size of portion 421 may correspond to the smallest bufferable data unit. In another example, the size of portion 421 may correspond to a maximum transmission unit (MTU). In some embodiments, the transmission of portion 421 begins in advance of completely receiving content 407, because waiting to completely receive content 407 at time 414 may result in transmission of content 407 beginning after timeout 413 has occurred. In some embodiments, the server may begin a looping process to receive, buffer, and transmit subsequent portions of content 407 until all portions of content 407 have been received. For example, the looping process may proceed until an end of transmission (EOT) character in content 407 has been received and decoded (if needed) by the server. 
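The receive-buffer-transmit loop just described can be sketched with two stream-like objects. This is a minimal illustration, not the patent's implementation: `chunk_size` stands in for the MTU or smallest bufferable data unit, and end of input stands in for the EOT character.

```python
def relay(upstream, downstream, chunk_size=1500):
    """Forward each buffered portion to the requesting client as soon as
    it arrives from the remote server, instead of waiting for the full
    data unit. Returns the total number of bytes relayed."""
    total = 0
    while True:
        portion = upstream.read(chunk_size)   # receive next portion
        if not portion:                       # end of transmission reached
            break
        downstream.write(portion)             # transmit immediately
        total += len(portion)
    return total
```

Because each portion is forwarded as soon as it is buffered, the client sees first bytes after roughly one portion's worth of delay rather than after the whole fragment has arrived, which is what lets the transfer beat the client's timeout.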
The client device may receive portion 422 of content 411 at time 412 from the server using any suitable receiver device, such as network I/O interface 209 shown in FIG. 2. Portion 422 may be substantially the same as, or a modified version of, portion 421 transmitted by the server. As illustrated in the example streamlined HTTP enhancement proxy content delivery timeline of FIG. 4, the client device receives portion 422 before timeout 413, and timeout 413 may be avoided. In some embodiments, the client device may present the received content for display on a display device (e.g., television 112, personal computer 114, laptop computer 115, or wireless device 116 shown in FIG. 1, or display 206 shown in FIG. 2) once it has received all of content 411. If content 411 is a content fragment, the client device may begin a looping process to request the next content fragment of the user-requested content. For example, the client device may begin a looping process after decoding an EOT in content 411 to request the next two-second video fragment of the movie or high-definition television program requested by the user. The looping process may proceed by requesting subsequent content fragments until all content fragments of the user-requested content have been received. FIG. 5 illustrates an example process flow for providing content using a streamlined HTTP enhancement proxy content delivery technique. In step 501, the server receives an HTTP request for a data unit from a client device, such as content request 401 shown in FIG. 4. The data unit may be, for example, a user-requested video asset or a fragment of the user-requested video asset. The request may include address information (e.g., URI, URI prefix) to identify the requested data unit. In some embodiments, the request may be received from an intermediate device or from another cache server. 
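The client-side fragment loop can be sketched as follows. The function and parameter names are hypothetical; `fetch` stands in for the HTTP GET of a single fragment, and the `#t=start,end` suffix follows the time-duration notation used earlier in the text.

```python
def fetch_all_fragments(base_uri, duration_s, fragment_s=2, fetch=None):
    """Client-side looping process (sketch): request successive fragments
    of the user-requested content until the full duration is covered.
    fragment_s defaults to the two-second fragments used in the examples."""
    fragments = []
    start = 0
    while start < duration_s:
        end = min(start + fragment_s, duration_s)
        # Request the next fragment, e.g. ".../full-movie#t=2,4".
        fragments.append(fetch(f"{base_uri}#t={start},{end}"))
        start = end
    return fragments
```

A 5-second asset with two-second fragments would thus generate three requests, the last covering the shorter tail `#t=4,5`.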
In step 502, the server determines whether its memory contains a copy of the requested data unit based on address information or other identifying information extracted from the received request. If the server determines that its memory includes a copy of the requested data unit, the process proceeds to step 503. If the server determines that its memory does not include a copy of the requested data unit, the process proceeds to step 504. In step 503, the server returns a copy of the requested data unit to the requesting client device in response to determining that the data unit was found in the server's memory. For example, the server may return the entire data unit requested by the client device. In step 504, the server transmits a request for the data unit to a remote server, such as request 405 shown in FIG. 4. In some embodiments, the server may transmit the request to an origin or cache server identified using any suitable server selection technique, path selection technique, or both. For example, the server may search a cache server index using a longest match lookup routine to identify the best matching remote server that supports the data unit. If a match is found, then the longest match can be used to generate the request (e.g., request 405 shown in FIG. 4) to the identified remote server. In step 505, the server begins to receive a portion of the data unit from the remote server, such as portion 420 of content 407 shown in FIG. 4. During receipt of the requested data unit, the server may proceed to step 506 and transmit a portion of the data unit (e.g., portion 421 of content 409) to the client device even though the entire data unit has not been fully received. For example, the server may transmit a fraction of a requested two-second video fragment to the client device as soon as it is received, limited only by the server's performance capabilities. 
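The longest-match lookup in step 504 can be sketched as a scan of the cache server index for the most specific URI prefix with a registered destination. The index contents below are illustrative, not from the patent.

```python
def longest_match(index, uri):
    """Longest-match lookup over a cache server index: among all URI
    prefixes in `index` that match the requested URI, the longest
    (most specific) prefix determines the remote server to contact.
    Returns None when no prefix matches."""
    best = None
    for prefix, server in index.items():
        if uri.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, server)
    return best[1] if best else None
```

So with both "videocontent/tv" and "videocontent/tv/NCIS" listed, a request for an NCIS video is routed to the server registered for the longer, more specific prefix. A production index would typically use a trie or sorted prefixes rather than a linear scan.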
The transmission size may be arbitrarily small, or may be based on an MTU, smallest bufferable data unit, or other suitable parameter. In certain implementations, the transmission may begin as soon as the server receives the portion of the data unit. In certain implementations, if the MTU buffer is filled before the server receives the entire portion of the data unit, the transmission may begin as soon as the MTU buffer is filled. In step507, the server determines whether it has received the entire data unit (e.g., by decoding an EOT in content407shown inFIG.4). If the cache server determines that it has not received the end of the data unit, the process returns to step505. If the cache server determines that it has received the end of the data unit, the process ends. With the features described above, various advantages may be achieved. An advantage of the present technique is that timeout of the request is avoided in some instances as a result of transmission by the server of an initial portion of the content beginning before the entire requested content has been received at the server (e.g., before timeout has occurred). As a result, a user can request and receive high quality (e.g., high bit rate) videos using an HTTP protocol without, in some instances, the user's client device experiencing timeout. Another advantage of the present technique is that delay in the delivery of content in a CDN is reduced because data is transmitted to the next layer in the CDN hierarchy as soon as or shortly after it is received. Accordingly, the network is not overloaded with redundant content requests and the user's viewing experience is enhanced. The various features described above are merely nonlimiting examples, and can be rearranged, combined, subdivided, omitted, and/or altered in any desired manner. For example, features of the servers can be subdivided among multiple processors and computing devices. 
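The behavior of steps505-507 (forwarding data downstream without waiting for the entire data unit, flushing whenever an MTU-sized buffer fills) can be sketched as below. This is an illustrative model only; the `relay` name, the chunk-iterable interface, and the `send` callback are assumptions.

```python
def relay(chunks, send, mtu=1500):
    """Forward received data downstream without waiting for the whole
    data unit: flush the buffer each time it reaches the MTU, and flush
    any remainder at end of transmission."""
    buf = bytearray()
    for chunk in chunks:        # portions arriving from the remote server
        buf.extend(chunk)
        while len(buf) >= mtu:  # MTU buffer filled: transmit immediately
            send(bytes(buf[:mtu]))
            del buf[:mtu]
    if buf:                     # end of data unit: flush the remainder
        send(bytes(buf))
```

The sketch illustrates why timeout can be avoided: the first downstream transmission occurs as soon as one MTU of data has arrived, long before the entire data unit is received.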
The true scope of this patent should only be defined by the claims that follow.
11943273
DETAILED DESCRIPTION
Before the present methods and systems are disclosed and described, it is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not. Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes. Disclosed are components that can be used to perform the disclosed methods and systems.
These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc. of these components are disclosed that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, steps in disclosed methods. Thus, if there are a variety of additional steps that can be performed it is understood that each of these additional steps can be performed with any specific embodiment or combination of embodiments of the disclosed methods. The present methods and systems may be understood more readily by reference to the following detailed description of preferred embodiments and the examples included therein and to the Figures and their previous and following description. As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices. Embodiments of the methods and systems are described below with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses and computer program products. 
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. 
It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. As described in more detail below, a system for processing a data stream can comprise an encoder/transcoder to condition fragments of the data stream and/or encode information relating to each of the fragments for downstream processing of the fragments. FIG.1illustrates various aspects of an exemplary network and system in which the present methods and systems can operate. Those skilled in the art will appreciate that the present methods may be used in systems that employ both digital and analog equipment. One skilled in the art will appreciate that provided herein is a functional description and that the respective functions can be performed by software, hardware, or a combination of software and hardware. FIG.1is a block diagram of a data stream fragmentation network and system10. As shown, the system10can comprise an input12for receiving a data stream, an encoder14in signal communication with the input12, and a fragmentor16in signal communication with the encoder14. It is understood that the network and system10can include other components such as processors, routers, network devices and the like. The input12can be any device, system, apparatus, or the like to provide a signal communication between a data source17and the encoder14and to transmit the data stream to the encoder14for signal processing/conditioning. In an aspect, the data source17is a content provider that provides content such as, but not limited to, data, audio content, video content, news, and sports that can be made available to various end-users.
As an example, the data stream can comprise a service identification18(shown inFIG.2) that represents information about the content represented by the data stream. By way of example, the service identification18can be a predefined alphanumeric identifier such as CNN and ESPN. In an aspect, the data source17can transmit or allow access to the data stream in a standard format (e.g., MPEG-4 or a single MPEG-4 video encapsulated in an MPEG-2 transport stream over UDP MCAST). However, the encoder14can receive the data stream from any source having any format. The encoder14can be any device, system, apparatus, or the like to encode and/or transcode the data stream. In an aspect, the encoder14can convert a data stream input having a single bit rate (by way of example, high bit rate video), to an output of one or more data streams of other bitrates (by way of example, lower bit rate video). As an example, the encoder14can convert the data stream from the input format received from the data source to a transport format for distribution to consumers. In an aspect, the encoder14can comprise a device such as a transcoder that conditions streaming data and/or changes data from one format to another. In an aspect, the encoder14can comprise a separate encoder and transcoder for conditioning streaming data and/or changing the data from one format to another. As an example, the encoder14can receive or access the data stream from the input12and encode/transcode information onto the data stream. As a further example, the encoder14can add information to the stream relating to content fragments24. In an aspect, the encoder14can comprise an encoder identification (ID)20. The encoder ID20can be a pre-defined alphanumeric identifier, an Internet protocol address, or other identifying information. As an example, the encoder14can comprise encoder software22for controlling an operation of the encoder14, the encoder software having an identifiable software version or revision23.
Turning now toFIG.2, content fragments will be discussed in greater detail. As shown inFIG.2, the exemplary data stream comprises content that can be divided into one or more content fragments24. As an example, each of the content fragments24can comprise a bit rate and resolution. As a further example, the data stream can be an adaptive data stream and each of the content fragments24can have variable characteristics such as bit rate and resolution relative to each other. In an aspect, the encoder14can be configured to insert/encode a fragment time code26into the data stream to identify each of the content fragments24. As an example, the time code26can be encoded into private data of the data stream, such as, for example, the MPEG-2 transport stream. As a further example, the time code can be derived from a continuously increasing time scale or time standard (e.g., UNIX-based time). However, other timing scales and standards, now known or later developed, can be used, as can other systems of providing unique identifiers to data. In an aspect, the time code26can be in Hundred Nano Seconds (HNS or 10,000,000 ticks per second) format. As an example, the UNIX time can be converted to HNS format. In an aspect, the actual time of Thursday, January 6 14:18:32 2011 has a Unix time value of 1294348712. The time code for a fragment created on that date can be 1294348712×10,000,000. The next fragment time code can be 1294348712×10,000,000 plus the duration of the content fragment24in presentation time stamp (PTS) units, i.e., “1294348712×10,000,000+abcdefghi” (where “abcdefghi” is the PTS duration value for that fragment). If the time code26for the first one of the content fragments24is referred to as “time code x,” then a time code for a second one of the content fragments24can be “time code x”+a duration of the second one of the content fragments24in PTS units.
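The HNS time-code arithmetic described above can be expressed compactly as follows; the function name and the generator interface are illustrative assumptions, but the scaling (10,000,000 ticks per second) and the PTS-duration accumulation follow the text.

```python
HNS_PER_SECOND = 10_000_000  # 100-nanosecond "ticks" per second

def fragment_time_codes(unix_start, pts_durations):
    """Yield the HNS-format time code for each successive content
    fragment: the first code is the UNIX creation time scaled to HNS,
    and each subsequent code adds the prior fragment's PTS duration."""
    code = unix_start * HNS_PER_SECOND
    for duration in pts_durations:
        yield code
        code += duration  # "time code x" + fragment duration in PTS units
```

Because UNIX time increases monotonically, the resulting codes keep counting forward even when PTS values wrap, which is the property the text relies on for unique fragment identification.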
As an example, even though the PTS units reset (or wrap) every twenty-six hours, the use of UNIX time nevertheless ensures unique values for the time code. In an aspect, downstream devices (e.g., fragmentors) can use the same time code values as established by the encoder14, thereby allowing inter-fragmentor service and redundancy. In some systems, a name associated with each of the content fragments24is not standardized among various data feeds and must be modified when switching between feeds such as a live feed, a network digital recorder feed, and a video on demand feed, for example. However, the unique time codes provided by the exemplary encoder14can allow for a simpler network digital video recorder (NDVR) implementation, as time codes can always count forward. Accordingly, a changing of namespace when going from live or linear content to NDVR to video on demand (VOD) is not necessary. Further, when the fragmentor16(or other downstream device) restarts (hardware wise or software wise), or experiences another event that affects the fragmentor's internal systems, the timing mechanism can ensure that all devices (e.g., fragmentors) receiving the same content can nevertheless generate substantially similarly or commonly named fragments, because the time code associated with a particular one of the content fragments24is determined by the encoder14, rather than the fragmentor16. In an aspect, the encoder14can be configured to insert service descriptor data28, such as a Service Descriptor Table (SDT), in the data stream or information into in an existing SDT. As an example, the data stream can comprise an SDT and the encoder can “tag” or encode information (e.g., tracking information) into the existing SDT or some other data field. As a further example, the fragmentor16can be configured to tag and/or add information to the existing SDT or some other data field. 
In an aspect, the SDT28can be information provided in a format such as a data table that describes the services that are part of an MPEG transport stream. The SDT28can include the name of the service, service identifier, service status, and whether or not the service is scrambled. (See ETSI EN 300 468, hereby incorporated herein by reference in its entirety). In an aspect, the SDT28can include information such as service identification18(e.g., the provider of the content), the identity of the encoder that generated the stream20, and the encoder software revision23. In an aspect, the encoder14can be configured to insert the associated quality level information30in the data stream to identify characteristics of each of the content fragments24, such as their resolution and bit rate. By way of example, the quality level information30can be encoded into the private data (e.g., MPEG-2 transport stream private data) of the data stream by the encoder14and can be subsequently extracted by the fragmentor16. In an aspect, the quality level information30can be propagated to the fragmentor16(or other device or system such as a server31) to generate manifest information and/or a name (e.g., URL) for the associated content fragment24. In an aspect, the encoder14can be configured to insert a horizon window42(FIG.3) or information relating to sequential content fragments that are expected to follow the content fragment24that is being encoded/tagged. As an example, the horizon window42can comprise a next-in-time content fragment and/or sequence of subsequent content fragments. As a further example, the horizon window42can comprise information relating to the various quality levels of the next-in-time fragment(s) so that a downstream device can selectively process the content fragments based upon a desired and/or optimal quality level. In an aspect, the horizon window42can represent/identify missing information or “potholes” for particular content fragments.
For example, the horizon window42can have information for two consecutive next-in-time content fragments for all but the highest quality level. In use, a downstream device requesting the content fragment from the missing quality level may receive an error. Likewise, if a large number of downstream devices request the same missing content fragment, the network can be overloaded with error messages, thereby delaying processing and consuming bandwidth. However, the horizon window42can comprise information relating to each of the content fragments and the various quality levels and can identify if any of the content fragments associated with a particular quality level are missing. In this way, a downstream device can process the horizon window42to identify which fragments are available and which fragments are missing. Accordingly, the downstream device will not request a fragment that is identified as missing. Instead, the downstream device can be configured to request another suitable content fragment such as the content fragment associated with the next lower level of quality (or a higher quality level). Returning toFIG.1, the fragmentor16can be in signal communication with the encoder14and receive the conditioned data stream therefrom. By way of example, the encoder14can transmit a plurality of MPEG-4 videos encapsulated in an MPEG-2 transport stream over UDP MCAST; however, other protocols and standards can be used. The fragmentor16can condition the data stream for downstream processing and distribution by a computing device such as server31. In an aspect, the fragmentor16can separate or fragment the data stream into content fragments24based upon information encoded onto the data stream by the encoder14. As an example, the fragmentor16can be configured to access the information encoded/inserted in the data stream by the encoder14to define the content fragments24from the conditioned data stream.
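The fallback selection driven by the horizon window can be sketched as follows. The data shapes (a mapping from fragment identifier to the set of advertised quality levels, and an ordered list of levels) are assumptions introduced for illustration.

```python
def select_fragment(horizon, fragment_id, desired_quality, quality_order):
    """Pick the best available quality for a fragment using
    horizon-window availability data, falling back to the next lower
    quality level when the desired one is marked missing.

    `horizon` maps fragment_id -> set of available quality levels;
    `quality_order` lists levels from highest to lowest and must
    contain `desired_quality`."""
    available = horizon.get(fragment_id, set())
    start = quality_order.index(desired_quality)
    for quality in quality_order[start:]:  # desired level, then lower ones
        if quality in available:
            return quality
    return None  # no suitable quality advertised for this fragment
```

Selecting from the horizon window this way avoids issuing a request for a fragment known to be missing, which is the error-avoidance behavior the text describes.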
Once the content fragments24are generated, the content fragments24can be transmitted to a content distribution network (CDN)32, or another intermediary or storage device, for delivery to a user device33or any client or consumer device for playback. In an aspect, the fragmentor16can then provide the content fragments to the server31, which, as an example, can be an origin hypertext transfer protocol (HTTP) server. However, other servers can be used. The fragmentor16can transmit the content fragments to the server31using HTTP commands, such as the POST command. However, other network protocols and communication methods can be used. The server31can then provide the content fragments24to the CDN32for distribution to users. By way of example, the CDN32can obtain content from the server31using an HTTP GET command, and the devices33can then obtain content from the CDN32using an HTTP GET command. However, other network protocols and communication methods can be used to provide the network communications and file transfers between the various portions of the system. As an example, the use of a push and/or pull data transfer model is merely an implementation detail, and any mechanism for the transfer of data over a network can be used. As described in further detail below, the fragmentor16can include a fragmentor identification34, such as a pre-defined alphanumeric identifier, an Internet protocol address, or other identifying information. The fragmentor16can also include fragmentor software36for controlling an operation of the fragmentor16, the fragmentor software36having an identifiable software version or revision38. FIG.3is a block diagram illustrating at least a portion of data comprised in an exemplary content fragment24. In an aspect, the fragmentor information can comprise at least one of the fragmentor identification34and the fragmentor software revision38. In an aspect, the fragmentor16can encode the data stream and/or content fragments24with fragmentor information.
As an example, the fragmentor information can be encoded by the fragmentor16into a metadata header of the content fragment24such as a MOOF header or ‘mfhd’ of each of the content fragments24, for example. The use of a metadata header such as the MOOF header can be beneficial to system performance because the information bits can be part of a video packet and precede the video data where it is not necessary to look further into the video data to discover the encoded information. In an aspect, the fragmentor information can be encoded by the fragmentor16as a custom UUID “box.” For example, MPEG-4 part 12 (14496-12) allows a UUID box to be optionally added at each level of an MPEG-4 message. In an aspect, the UUID box can comprise underlying information associated with a 128-bit number used to uniquely identify the underlying information for subsequent reference and retrieval. As an example, the UUID box can comprise a reference to the network address of the host that generated the UUID, a timestamp (a record of the precise time of a transaction), and a randomly generated component. Because the network address identifies a unique computer, and the timestamp is unique for each UUID generated from a particular host, those two components sufficiently ensure uniqueness. However, the randomly generated element of the UUID can be added as a protection against any unforeseeable problem. As an example, the encoded fragmentor information can allow the processing history of a content fragment24, including, without limitation, the fragmentor16that processed the fragment and when the processing occurred, to be identified even after the content fragment has been downloaded to a remote storage device. In an aspect, the fragmentor16can be configured to encode the SDT28or similar data as a custom universally unique identifier (UUID) box in the header of the associated content fragment24.
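The three components the text attributes to the UUID box (host network address, timestamp, and a random element) can be sketched as below. The field names, the HNS timestamp scale, and the 32-bit random width are illustrative assumptions, not normative box layout.

```python
import random
import time

def make_uuid_components(host_addr):
    """Compose the three elements the text attributes to the UUID box:
    the generating host's network address, a timestamp, and a randomly
    generated component guarding against unforeseeable collisions."""
    return {
        "host": host_addr,  # identifies a unique machine on the network
        "timestamp_hns": int(time.time() * 10_000_000),  # 100 ns ticks
        "random": random.getrandbits(32),
    }
```

As the text notes, the host address and timestamp alone are generally sufficient for uniqueness; the random component is an extra safeguard.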
As an example, the SDT28can be encoded in a transport header of the data stream by the encoder14, extracted from the transport header by the downstream fragmentor16, and encoded into a MOOF header of an associated one of the content fragments24by the fragmentor16. In an aspect, the fragmentor16can be configured to encode an authenticated checksum40into at least one of the content fragments24. As an example, the checksum40can be a fixed-size datum computed from an arbitrary block of digital data for the purpose of detecting accidental errors that may have been introduced during its transmission or storage. The integrity of the data can be checked at any later time by recomputing the checksum40and comparing it with the stored one. If the checksums match, the data were almost certainly not altered (either intentionally or unintentionally). By way of example, the checksum40can be an MD5 checksum or other cryptographic hash function. In an aspect, the checksum40can be encapsulated in the MOOF header of at least one of the content fragments24. By analyzing the checksum40in the MOOF header of the content fragment24, it can be determined whether the content fragments24were received with incorrect/malformed/corrupted or hacked MOOFs. As described in more detail below, a method for processing a data stream can comprise encoding information relating to each of a plurality of content fragments of the data stream for downstream processing of the content stream. FIG.4illustrates a method400of data stream fragmentation.FIG.4will be discussed, for illustrative purposes only, with reference toFIG.1,FIG.2, andFIG.3. In step402, the input12can receive the data stream from the data source17. In step404, the encoder14can receive the data stream from the input12and can encode/transcode information onto the data stream(s).
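The recompute-and-compare integrity check described above can be sketched with an MD5 checksum, which the text names as one possible choice. The function name and interface are assumptions for illustration.

```python
import hashlib

def verify_fragment(payload, stored_checksum):
    """Recompute an MD5 checksum over a fragment's data and compare it
    with the checksum carried alongside the fragment (e.g., in its MOOF
    header), flagging corrupted or tampered data."""
    return hashlib.md5(payload).hexdigest() == stored_checksum
```

If the recomputed digest matches the stored one, the fragment was almost certainly received unaltered; a mismatch indicates corruption or tampering, per the text.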
By way of example, the information inserted/encoded on the data stream by the encoder14can comprise the time code26, the SDT28or similar data, and/or the quality level information30. As a further example, additional information can be inserted in the data stream by the encoder14. In step406, the encoded data stream can be received by the fragmentor16to fragment the data stream in response to the encoded information and to define the content fragments24of the data stream. By way of example, the fragmentor16can be configured to separate or fragment the data stream into each of the content fragments24represented by the data stream based at least in part upon at least a portion of the information (e.g., time code26) encoded onto the data stream by the encoder14. In an aspect, the fragmentor16can encode/encapsulate information onto at least one of the content fragments24constructed/defined by the fragmentor16, as shown in step408. For example, the fragmentor16can insert information into each of the content fragments24, including the SDT28, the fragmentor identification34, the fragmentor software revision38, and the checksum40. However, other information can be inserted into at least one of the content fragments24by the fragmentor16. As an example, any of the information inserted by the fragmentor16can be included in the SDT28. In an aspect, once the content fragments24are generated, the content fragments24can be distributed to the consumer devices33or other client devices (for example, an end-user, client, or consumer device), as shown in step410. In step412, an end-user device can receive the content fragments24and can adaptively select the most appropriate sequence of the content fragments24to reconcile the content fragments as a substantially seamless media playback.
As described in more detail below, a method for locating a content fragment can comprise querying a desired fragment by an associated identifier and/or time code, wherein the fragment associated with the identifier or a suitable alternative can be returned to a requesting device. In an aspect,FIG.5illustrates a method500of locating a fragment, such as a particular one of the content fragments24.FIG.5will be discussed, for illustrative purposes only, with reference toFIG.1,FIG.2, andFIG.3. In step502, a user or user device can submit a fragment request to a computing device such as the server31or the content distribution network (CDN)32associated with distributing the content fragments24. By way of example, the fragment request can comprise a particular time code associated with one of the content fragments24defined by the encoder14or fragmentor16. As an additional example, the fragment request can include a “best guess” of a time code associated with one of the content fragments24. As a further example, the “best guess” can be determined based upon a known time value and the time duration between the known time value and the requested content. Other means of approximating a time code for the purposes of requesting one of the content fragments24can be used. In step504, the fragment request can be propagated or otherwise transmitted through the CDN32and to the server31. As an example, the fragment request can be transmitted to other locations having information relating to the naming convention of each of the content fragments24. In step506, the server31(or other device having the requisite information) can analyze the request to extract a time code identified in the fragment request. In step508, the server31can return one of the content fragments24having the time code identified in the fragment request.
In the event the time code identified in the fragment request is not identical to a time code of one of the content fragments24, the server31can return the one of the content fragments having the time code nearest to the time code identified in the fragment request. In an aspect, each of the content fragments24includes a UUID box which contains a name (e.g. time code) of the subsequent one of the content fragments relative to time. Accordingly, once the user receives the content fragment24returned by the server31using the method500, the user device can parse out the encoded UUID box which identifies the name of the next one of the content fragments24so a duplication of content in the CDN32is minimized. As an example a user/device can initiate a fragment request for any time code. As a further example, any user, client, and/or system can initiate a fragment request. In an aspect, a given content delivery system can have multiple redundant computing devices such as server31, for example. A user device33may be requesting content from a first such server (e.g., server31) which, during playback, fails. The user device33may direct the content request to a second server that contains the same content. Similar to step508, the second server can attempt to return a content fragment24having the time code identified in the fragment request. In the event the time code in the fragment request does not exist in one of the content fragments24, the second server can return the one of the content fragments having the time code nearest to the time code identified in the fragment request. Because each fragment24includes information that identifies the next fragment24in the stream, and because the fragments are identified with unique time codes that relate to a common position within the original content, playback will resume for the consumer device33at or near the point at which the first server failed, and will continue uninterrupted. 
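The nearest-time-code fallback behavior described above can be sketched as follows; the function name and the list-of-codes interface are assumptions introduced for illustration.

```python
def nearest_fragment(time_codes, requested):
    """Return the stored fragment time code closest to the one in the
    request, mirroring the described server behavior when no exact
    match exists. `time_codes` is a non-empty collection of the time
    codes of stored content fragments."""
    return min(time_codes, key=lambda tc: abs(tc - requested))
```

An exact match is returned unchanged; otherwise the closest available code is chosen, so a redundant server can resume playback at or near the failure point.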
In this way, the user will experience a substantially seamless transition in the content being delivered, even though the origin server has changed. The present methods and systems can be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like. The processing of the disclosed methods and systems can be performed by software components. The disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media including memory storage devices. While the methods and systems have been described in connection with preferred embodiments and specific examples, it is not intended that the scope be limited to the particular embodiments set forth, as the embodiments herein are intended in all respects to be illustrative rather than restrictive. 
Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; the number or type of embodiments described in the specification. It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope or spirit. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims.
11943274

DETAILED DESCRIPTION OF THE INVENTION

The following description of embodiments relating to the first aspect of the present application preliminarily resumes the description of the handling of encryption relating to portioned or tile-based video streaming set out above in the introductory portion of the specification. To this end, possible modifications of the known techniques in the environment of MPEG are presented. These modifications, thus, represent embodiments of the first aspect of the present application, and they are abstracted thereinafter as the modifications are not restricted to be used in the MPEG environment, but may be advantageously used elsewhere. In particular, embodiments described further below enable content media encryption in tile-based video streaming systems across a wider set of available platforms in an efficient manner and overcome the shortcomings of the encryption schemes presented in the introductory portion of the specification in this regard. In particular, this encompasses tile-based streaming services with:
- CTR based encryption of all sub-pictures
- Encrypted media (CTR or CBC) with DASH Preselections

A first tool which is used in accordance with a subsequently described modifying embodiment, which allows for ‘cbcs’ all subsample encryption with preselection, is called the mandatory subsample identification concept or algorithm in the following. This algorithm allows the use of CBC based encryption schemes when preselections are used in the MPD. Common encryption [3] offers two ways to identify subsample boundaries and, hence, the byte ranges of encrypted and un-encrypted data, as reproduced for reference in the following: A decryptor can decrypt by parsing NAL units to locate video NALs by their type header, then parse their slice headers to locate the start of the encryption pattern, and parse their Part 15 NAL size headers to determine the end of the NAL and matching Subsample protected data range.
It is therefore possible to decrypt a track using either (a) this algorithm, i.e. by parsing, ignoring the Sample Auxiliary Information or (b) the Sample Auxiliary Information, ignoring this algorithm. The Sample Auxiliary Information (SAI) consists of the two boxes ‘saiz’ and ‘saio’ defined in [4] that together indicate the location and ranges of the bytes of encrypted and un-encrypted data. However, in a tile-based streaming scenario with preselections, it is not possible to know the bitrate (and hence byte size) of each sub-picture/tile in the resulting client-side bitstream. Hence, it is not possible for the extractor track to include correct SAI beforehand. Therefore, in accordance with embodiments described herein, it is signalled or mandated in an application format specification such as OMAF that, if present, the incorrect SAI parameters related to clear/protected byte ranges within the extractor track are not to be regarded and instead the above algorithm is to be used for derivation of the location and ranges of the bytes of encrypted and un-encrypted data. In accordance with a first embodiment, this concept is used along with encrypting the video content portion/tile wise as described in the following. In particular,FIG.7shows a collection of data10for downloading an ROI-specific video stream by tile-based video streaming. Embodiments for the actual streaming and embodiments for the entities involved therein are described further below. Data10comprises bit streams12each having encoded thereinto one of portions14of a video picture area16which portions may be tiles as taught hereinafter, so that each portion14of the video picture area16is encoded into a subset18of the bit streams12at different qualities. The subsets18, thus, form portion specific subsets18. 
These subsets18may, in terms of adaptive streaming and the description in the manifest24, be treated as individual adaptation sets as depicted inFIGS.3and4where, exemplarily, one adaptation set (thus, forming a subset18) was present per tile (thus forming a portion14), each tile forming a tile-specific set of representations (thus forming bit streams12). In particular, there was exemplarily one adaptation set, AdaptationSet 1, for tile 1 and another adaptation set, AdaptationSet 2, for tile 2. The bit streams12may, thus, be treated as representations in the MPD24or, alternatively speaking, same may be distributed onto different representations. The data10further comprises at least one extractor20, i.e. extractor data or extractor file or extractor track, associated with an ROI22of the video picture area, and a manifest file24. The latter identifies, for the predetermined ROI22, as illustrated by arrow26, a set of bit streams12, the set being composed of one bit stream12per subset18so as to have encoded thereinto the different portions14into which the video picture area16is partitioned in a manner focusing on the ROI22. This focusing is done, for instance, by composing the set such that for subsets18within the ROI, the one bit stream out of this subset18, which contributes to the composed set, is of higher quality compared to subsets18pertaining to portions14outside ROI22, where the one bit stream selected out of the corresponding subsets18and comprised by the ROI specific set is of lower quality. The set, thus formed by referencing26and indicated by manifest24, is a ROI specific set of bit streams. An example is depicted inFIG.8and will be further discussed below. Note that the bit streams12may, for instance, be formed by M independently coded tiles of N video data streams each having video picture area16encoded thereinto in units of these M tiles14, but at different quality levels.
Thus, N times M bit streams would result withFIG.7illustrating M=16, with N being, for instance, the number of bit streams12per subset18. The ROI specific set would comprise M bit streams: one out of each subset18. This is, however, only an example and others would be feasible as well. For instance, N may vary among the M portions14. The ROI specific set may be composed of merely a subset of the subsets18pertaining to portions14covering, at least, ROI22. The bit streams12may be stored on a storage for being downloaded, in pieces and selectively, by a client as taught later on, and might be treated, though, as individual representations in the MPD24which is also stored for download by the client and indicates to the client addresses for the download of the bit streams12. The representations corresponding to bit streams12may, however, be indicated as being not dedicated for being played out individually, i.e. not for play-out without being part of an ROI specific set formed by an adaptation set. The extractor20is also stored for download by the clients either separately by addresses being indicated in the manifest24, or along with any of the bit streams, such as in a track of a media file. In the further description herein, the extractor20has also been denoted as FF extractor file. The quality levels which the representations in one subset18relate to may vary in terms of, for instance, SNR and/or spatial resolution and/or colorness. The extractor file20is, so to speak, a constructor for constructing a compiled bit stream out of the ROI specific set. It may be downloaded by the client along with the ROI specific set of bit streams12.
It indicates, by way of pointers and/or construction instructions, a compilation of the compiled bitstream out of the ROI specific set of bitstreams by identifying26, for each of the subsets18of bitstreams, out of the one bitstream of the respective subset18of bitstreams, comprised by the ROI specific set, a picture portion relating to a current picture frame and signalling a compilation of the compiled bitstream out of the identified picture portions so that the compiled bitstream comprises a sub-picture portion for the picture portion of the selected bitstream of each of the subsets18of bitstreams the compiled bitstream is formed of. InFIG.7, for instance, three consecutive picture frames are illustrated.FIG.8shows one such picture frame30, the ROI specific set32of bit streams12and the picture portion34in each bit stream of set32which relates to the picture frame30. The picture portions34may, as illustrated exemplarily for the bit stream12of set32relating to portion No. 13, be partitioned, spatially, into one or more than one units, such as NAL units36, each unit encoding a corresponding partition38of the portion14which the respective picture portion relates to. When composed together according to extractor20, a composed bit stream40results which has an access unit42(or, speaking in the file format domain as used herein elsewhere, a sample) for each picture frame such as picture frame30. Each access unit42has encoded thereinto the picture area16in a spatially varying quality with increased quality within the ROI22, and subdivided into one sub-picture portion44per portion14, each sub-picture portion44formed by the corresponding picture portion34, i.e. the one which concerns the same portion14. Note that in case of preselection whichFIG.4refers to, the extractor20is associated with the ROI, but that this extractor20is used to compose different ROI specific sets32all of which have increased quality, i.e.
select bitstreams of increased quality among the subsets18within ROI22. That is, a kind of freedom exists for the client to choose the set32for the wanted ROI. In the case of defining, for ROI22in the manifest24, one adaptation set per pair of one specific ROI22with one of the different ROI specific sets32, whichFIG.3refers to, the extractor20is associated with that ROI and the corresponding ROI specific set32, specifically, while another extractor20might be present which corresponds to another pair of that ROI22and another ROI specific set32differing from the former set32in, for example, the chosen bitstream12in the subsets18concerning portions14within the ROI and/or the chosen bitstream12in the subsets18concerning portions14outside the ROI. Besides, as noted below, more than one ROI22may be envisaged in data10, so that for each of these ROIs one or more than one extractor20may be present in the data, with the manifest comprising corresponding information. A coding payload section of the picture portion34of each bitstream12of each subset18of bitstreams is encrypted by using block-wise encryption by use of sequential variation of a plaintext mask and/or block-encryption key, and by reinitializing the sequential variation for each picture portion34. That is, instead of encrypting the coding payload sections of the picture portions34of a collection of bit streams, the portions14of which together cover the picture area16and all belong to a common picture frame30, sequentially without reinitializing the sequential variation therebetween, such as for the set32, the encryption is done for each picture portion34separately.
It should be noted that the encryption of the coding payload section may be restricted to picture portions34of bit streams12belonging to any of an “encrypted set” of one or more of the subsets18of bitstreams, such as to subsets18relating to portions14in the middle of picture area16or subsets18relating to every second portion14distributed over the area16like a checkerboard pattern, for instance. FIG.9, for instance, shows a picture portion34which may contribute to a composed bit stream40. It is exemplarily composed of a sequence of more than one unit36. Each unit (such as a NAL unit) comprises a header46and a payload section48. The latter may comprise all the prediction parameters and prediction residual related syntax elements, such as motion information and residual data, having encoded thereinto the corresponding partition38of the portion14of area16which the picture portion34relates to, and the former may contain coding settings valid for the whole partition38which its payload section48encodes. The concatenation50of the payload sections48of the picture portion34, which forms a sub-portion44, in turn, is encrypted. In a deterministic manner, a sequential variation of a plaintext mask and/or a block-decryption key takes place in the block-wise encryption of concatenation50. That is, concatenated data50is portioned into blocks52, which were called plaintext blocks inFIGS.1aand1b, and from one block to the next, an incremental change of the cipher (non-linear bijection) input, so as to obtain different block-encryption keys for consecutive plaintext blocks, takes place in the case of CTR.
That is, the non-linear function or cipher function, controlled by a certain general key (the function being called CIPH and the general key being called key inFIG.1b), is fed with an increment or counter value, called counter, which changes from one plaintext block to the next, thereby obtaining different en/decryption keys for consecutive blocks; each plaintext block is XORed with the corresponding en/decryption key to obtain the respective encrypted cipher block. The intermediate encryption keys (output at “output block #” inFIG.4for the successive plaintext blocks “plaintext #”) are the same as the decryption keys used for decryption. In CBC, the predecessor cipher block, i.e. the encrypted version of the predecessor block52, is used as plaintext mask for masking the current plaintext block before the latter is subject to ciphering using the non-linear bijective function. It might be that sections48have been generated by an encoder in a manner to have a length corresponding to an integer multiple of a block length of the encryption so that the borders between payload sections48coincide with block borders. This is especially advantageous when using the above-mentioned algorithm of alternating between decryption and parsing for border detection. In particular, a receiving entity, such as the client, needs to detect the borders54between consecutive payload sections as well as the border56at the end of concatenation50, i.e. the end border of the last payload section, for instance. Thus, the ROI specific set32of bit streams, in its not yet decrypted form, and the extractor20together represent an encrypted video stream. The ROI specific set32of bitstreams12has encoded thereinto the portions14of video picture area16, and the extractor20indicates the compilation of the compiled bitstream out of this set32.
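The per-portion reinitialization of the sequential variation can be illustrated with a toy CTR sketch. This is not the real ‘cenc’/‘cbcs’ scheme: a SHA-256 based keystream stands in for the AES block cipher CIPH ofFIG.1b, and the key, counter start values and payloads are made up for illustration only.

```python
import hashlib

BLOCK = 16  # block size in bytes

def ciph(key, counter):
    # Toy stand-in for the block cipher CIPH of FIG.1b (a real system uses AES).
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()[:BLOCK]

def ctr_crypt(key, counter_start, data):
    """Encrypt/decrypt one picture portion's payload concatenation (50).

    The counter restarts from counter_start for every picture portion, i.e.
    the sequential variation is reinitialized per portion rather than
    continued across the portions of a frame.
    """
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        keystream = ciph(key, counter_start + i // BLOCK)  # varies per block
        block = data[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(block, keystream))
    return bytes(out)

key = b"general-key"
portion_a = b"slice payload of tile 1, frame n"
portion_b = b"slice payload of tile 2, frame n"
# Each portion is encrypted independently with its own reinitialized counter,
# so a client that downloads only some of the tiles can still decrypt them.
enc_a = ctr_crypt(key, 1000, portion_a)
enc_b = ctr_crypt(key, 2000, portion_b)
assert ctr_crypt(key, 1000, enc_a) == portion_a  # CTR is its own inverse
```

Because no portion's decryption depends on the ciphertext or counter state of another portion, any ROI specific combination of the encrypted bitstreams remains decryptable.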
The coding payload section48of the picture portion34of each bitstream12out of set32(or merely of the encrypted set of bitstreams thereamong) is encrypted by using the block-wise encryption using the sequential variation of plaintext mask and/or block-encryption key and by reinitializing the sequential variation for each picture portion. FIG.10shows an embodiment of an apparatus80for downloading an ROI-specific video stream by tile-based video streaming. The apparatus may, thus, be called a client or client device. As shown, same may be composed of a concatenation of a DASH client82, a file handler84, and a decryptor86, and, optionally, a decoder88. Note that DASH is merely an example for an adaptive streaming environment. Another may be used as well. File handler84and decryptor86may operate in parallel or, differently speaking, need not operate strictly sequentially, and the same applies when considering the file handler84, the decryptor86, and the decoder88. The apparatus is able to handle, i.e. download and decrypt, a video scene prepared as described with respect toFIG.7which might, as described, end up in a downloaded composed stream40having all sub-samples44, i.e. all portions14, encrypted irrespective of the currently envisaged ROI or viewport. Without having mentioned it above, it is clear that the data ofFIG.7has further extractors20for, and has the manifest file24indicating bit stream12sets32for, more than one ROI, namely a set of ROIs distributed over the area16so as to be able to follow a view direction of a user in the scene, for instance. The apparatus80has access to the data10via a network90such as the internet, for instance.
The DASH client82downloads and inspects the manifest file24so as to, depending on an ROI which is currently of interest because of, for instance, the user looking at the corresponding viewport, such as22inFIG.7, identify and download the ROI specific set32of bit streams12along with the extractor file20, both being associated with that ROI22. The file handler84compiles, using the extractor file20, the compiled bitstream40out of the ROI specific set32of bitstreams12by extracting, from each of these bitstreams, the picture portion34relating to a current picture frame30by parsing the respective bitstream and forming the compiled bitstream40out of the extracted picture portions34so that the compiled bitstream40is composed of the corresponding sub-picture portions44, one for each portion14. Note that at the time of receiving the bitstreams of ROI specific set32, the picture portions' payload sections are still encrypted. The picture portions are, however, packetized so that the file handler is able to handle them though. The decryptor86decrypts the encrypted coding payload section48of each subpicture portion44by using block-wise decryption by use of sequential variation of a plaintext mask and/or block-decryption key. To this end, the decryptor86reinitializes the sequential variation for each subpicture portion44to be decrypted, i.e. at the beginning92of concatenation50or the start border of the payload section48of the first unit36. It finds the borders54,56of the coding payload section(s) of each subpicture portion44to be decrypted by parsing the coding payload section of the respective subpicture portion44up to a currently decrypted position or, differently speaking, by alternatingly decrypting and parsing the payload section(s) of concatenation50. See, for instance,FIG.11showing that the decryptor, after having initialized the plaintext mask and/or block-decryption key for the sequential variation for the first block of payload data50, decrypts100, using e.g. 
CTR or CBC as described above, a current block to obtain its plaintext version, with subsequent parsing102of the latter, i.e. pursuing the parsing done for the current payload section48of the current unit36so far up to the currently decrypted block's end. It is checked at104if the end of the current block52represents the end of the current payload section48, and if not, the procedure steps106to the next block52in the current section48. If yes, however, it is checked at108whether the end of the last section48of the concatenation50has been reached, and if yes, the current section's48border or end has been found and the procedure is finished for the current subpicture portion44; if not, processing proceeds at110with the first block of the next section48or next unit36. It could be that, by default, each picture portion34or sub-picture portion44is merely composed of one unit36, in which case steps108and110could be omitted. In effect, the procedure finds, in this way, a begin and an end of payload sections48. Note that the payload data sections48were denoted video slice data inFIG.2. The sub-picture portions44were denoted above as subsamples. Note that the way the manifest24defines the relationships between the ROI22and the ROI specific set32and the extractor may be according to the concept of pre-selections shown inFIG.4, or according to the concept ofFIG.3. Note also that, although the above description assumed the download to pertain to the whole video picture area16available, merely a section thereof which includes the ROI may be covered by the downloaded stream40. That is, the borders are found by alternatingly decrypting and continuing the parsing so as to decide whether another block52of the respective subpicture portion's coding payload section48is to be decrypted or not.
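The loop ofFIG.11can be sketched as follows. The framing is deliberately simplified: a toy XOR "cipher" stands in for CBC/CTR block decryption, and each payload section is assumed to begin with a 2-byte length field in place of real slice-header parsing; only the alternation of decrypting (100), parsing (102) and the border checks (104/108) is meant to be illustrated.

```python
BLOCK = 16

def toy_decrypt(block, key=0x5A):
    # Stand-in for one block-wise decryption step (100); a real decryptor
    # would apply AES-CBC or AES-CTR here.
    return bytes(b ^ key for b in block)

def find_sections(ciphertext):
    """Alternate decryption and parsing to locate section borders 54/56.

    Assumed toy framing: each payload section starts with a 2-byte length
    field standing in for the parseable slice header of the real bitstream.
    No SAI byte ranges are consulted.
    """
    sections, pos, plain = [], 0, b""

    def decrypt_until(need):          # decrypt further blocks only as needed
        nonlocal plain
        while len(plain) < need:
            plain += toy_decrypt(ciphertext[len(plain):len(plain) + BLOCK])

    while pos < len(ciphertext):
        decrypt_until(pos + 2)                      # parse the length field
        length = int.from_bytes(plain[pos:pos + 2], "big")
        decrypt_until(pos + 2 + length)             # reach the section end (104)
        sections.append((pos, pos + 2 + length))    # border 54/56 found
        pos += 2 + length                           # next section (110)
    return sections

# Hypothetical concatenation 50: two sections of 17 and 6 payload bytes.
plaintext = b"\x00\x11" + b"tile-1 slice data" + b"\x00\x06" + b"tile-2"
ciphertext = toy_decrypt(plaintext)   # XOR toy cipher: encrypt == decrypt
assert find_sections(ciphertext) == [(0, 19), (19, 27)]
```

The point of the interleaving is that each length field only becomes parseable after the blocks containing it have been decrypted, so decryption and parsing cannot be separated into two passes.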
In effect, the concatenation or combination of file handler84and decryptor86forms an apparatus for recovering a video stream from a downloaded ROI specific set32of bit streams12and a corresponding extractor20. The video stream may be fed into decoder88which may optionally be part of that apparatus or not. The file handler performs the compilation using the extractor file20, and the decryptor86performs the decryption of the coding payload sections48using the alternating parsing/decryption concept ofFIG.11. The decryptor86, in turn, represents an apparatus for recovering a video stream for being decoded by a decoder88from the compiled bitstream40, the apparatus being configured to decrypt the coding payload sections of each subpicture portion44using the alternating parsing/decryption concept ofFIG.11. Note that, as described, the parsing of the coding payload section48according toFIG.11for the sake of finding the payload section borders may be accompanied by a disregarding of explicit border location information possibly comprised in the extractor20which, however, might be wrong and merely present therein for the sake of file format standard conformance. The above embodiments enabled an encryption of all subsamples44downloaded. However, in accordance with embodiments described next, encryption may be focused onto one sub-sample44, for instance. Again, the above description of the introductory specification is initially resumed before presenting broadening embodiments. In particular, here, an index of an encrypted subsample is used for addressing, and allowing alternation of, single (one|most important|high-res) subsample encryption, wherein this is combinable with CTR or cbc1 encryption and the usage of preselections.
Based on the subsample identification algorithm illustrated inFIG.11, an encryption scheme with preselection in the manifest24is achieved in which encryption is applied on a sub-picture basis to varying tiles14within the picture plane16in an alternating fashion, selecting tiles and pictures in a strategy that might regard:
- their relative ‘importance’ to the coding structures and dependencies. For instance, a key frame with a lower temporal level is much more important to the decoding result, e.g. in terms of error propagation.
- the relative ‘importance’ of the depicted content. For instance, higher resolution tiles depicting the current or an expected viewport or a director's cut in 360° video applications.

To enable this subsample encryption, an index to the encrypted subsample is signalled so that the decryptor can identify the encrypted subsample44. For instance, the decryptor may simply count through the subsamples44within a sample42until the decryptor reaches the signalled index of the encrypted subsample and, by way of gathering the NALU length from the Part 15 header and by identifying how many bytes to decrypt as taught with respect toFIG.11, it may decrypt the section48of that subsample44. One embodiment would be for the OMAF specification to define a FF Box to indicate the index of the encrypted subsample44or to improve the ‘senc’ box defined in Common encryption [3] that is used to derive encrypted and unencrypted bytes from SAI.
The current ‘senc’ box is defined as follows:

aligned(8) class SampleEncryptionBox
   extends FullBox('senc', version=0, flags)
{
   unsigned int(32) sample_count;
   {
      unsigned int(Per_Sample_IV_Size*8) InitializationVector;
      if (flags & 0x000002)
      {
         unsigned int(16) subsample_count;
         {
            unsigned int(16) BytesOfClearData;
            unsigned int(32) BytesOfProtectedData;
         } [subsample_count]
      }
   } [sample_count]
}

One embodiment is a new version of the ‘senc’ box that omits signaling of incorrect byte ranges and instead indicates indexes of encrypted subsamples, as follows:

aligned(8) class SampleEncryptionBox_Invention2
   extends FullBox('senc', version, flags)
{
   unsigned int(32) sample_count;
   {
      unsigned int(Per_Sample_IV_Size*8) InitializationVector;
      if (flags & 0x000002)
      {
         if (version == 0) {
            unsigned int(16) subsample_count;
            {
               unsigned int(16) BytesOfClearData;
               unsigned int(32) BytesOfProtectedData;
            } [subsample_count]
         } else if (version == 1) {
            unsigned int(32) EncryptedSubsampleIndex;
         }
      }
   } [sample_count]
}

Here, EncryptedSubsampleIndex points to the encrypted subsample44within the current sample42. The just described modification leads to embodiments which may be explained by referring toFIGS.7to11. The following description of such abstracted embodiments focuses on the amendments relative to the embodiments described so far with respect to these figures. In particular, not all sub-samples44of the downloaded stream40are encrypted within one sample42, but merely one sub-sample44. Which one may have been decided on the fly or before encryption specifically for the requested ROI22, or beforehand so that, for instance, the picture portions34of the corresponding picture frame30, which belong to any of the bitstreams12within one subset18, which corresponds to, for instance, the “interesting” scene content, are encrypted, thereby leading to a corresponding encrypted subsample44in the downloaded stream40.
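Under the simplification that each subsample44equals one length-prefixed NAL unit, a decryptor can resolve EncryptedSubsampleIndex by counting, as sketched below. The 4-byte length prefix and the sample data are assumptions for illustration, not values mandated by the described box.

```python
NALU_LEN_BYTES = 4  # assumed Part 15 length-prefix size

def encrypted_subsample_range(sample, encrypted_subsample_index):
    """Locate the encrypted subsample 44 within a sample 42 by counting.

    sample: concatenation of length-prefixed NAL units;
    encrypted_subsample_index: value from the version-1 'senc' box.
    Returns the (start, end) byte range of the NALU body to decrypt.
    """
    pos, index = 0, 0
    while pos < len(sample):
        nalu_len = int.from_bytes(sample[pos:pos + NALU_LEN_BYTES], "big")
        start = pos + NALU_LEN_BYTES
        if index == encrypted_subsample_index:
            return start, start + nalu_len
        pos, index = start + nalu_len, index + 1
    raise ValueError("EncryptedSubsampleIndex beyond last subsample")

def nalu(payload):  # helper to build a hypothetical length-prefixed NALU
    return len(payload).to_bytes(NALU_LEN_BYTES, "big") + payload

sample = nalu(b"tile0") + nalu(b"tile1-enc") + nalu(b"tile2")
start, end = encrypted_subsample_range(sample, 1)
assert sample[start:end] == b"tile1-enc"
```

Counting in this way replaces the per-byte SAI ranges, which cannot be stated correctly in the extractor track when preselections make the subsample sizes client-dependent.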
Having said this,FIG.7shows, in accordance with the latter alternative, a collection of data10for downloading an ROI-specific video stream by tile-based video streaming, which comprises bit streams12, each having encoded thereinto one of portions14of video picture area16, so that each portion14of the video picture area is encoded into a subset18of the bit streams12at different qualities, and at least one extractor20associated with an ROI22of the video picture area, as well as a manifest file24which identifies, for the predetermined ROI22, the ROI specific set32of bit streams12having encoded thereinto the portions14of the video picture area16in a manner focusing on the ROI in terms of, for instance, higher quality within the ROI22compared to outside thereof. The extractor20indicates the compilation of the compiled bitstream40out of the ROI specific set32in the manner described above. However, a predetermined subpicture portion44is identified out of the subpicture portions44of the compiled bitstream40. This may be done by identifying a predetermined subset of bitstreams out of the subsets18of bitstreams or, synonymously, a predetermined portion14, so that the picture portion34of the selected bitstream12of the predetermined subset18of bitstreams12, i.e. the one included in the ROI specific set32, becomes the predetermined subpicture portion44which is the one being encrypted and to be decrypted, in turn. The signaling may be contained in the extractor20as described above. It could, alternatively, be, however, that this signaling is comprised by the sub-picture portions44. The coding payload section of the picture portion34of the bitstreams12of the predetermined subset18of bitstreams, i.e.
the subset corresponding to the predetermined portion14, is encrypted for all bitstreams12in that subset18so that the downloaded stream40comprises the encrypted sub-picture portion or subsample44for the predetermined portion, irrespective of the quality chosen for that portion14according to the ROI specific set32. The data downloaded according to the latter embodiment represents a video stream, comprising the ROI specific set32of bit streams12and the extractor20, wherein the ROI specific set32of bitstreams12has encoded thereinto the portions14of the video picture area, and the extractor20indicates the compilation of the compiled bitstream40out of the ROI specific set32of bitstreams12in the manner outlined above. The predetermined subpicture portion44in this compiled bitstream is identified out of the subpicture portions44of the compiled bitstream40by signaling contained in at least one of the extractor20or the sub-picture portions44. The coding payload section of the predetermined subpicture portion is encrypted. In line with the above re-interpretation ofFIG.7,FIG.10may, according to a corresponding alternative embodiment, show an apparatus for downloading an ROI-specific video stream by tile-based video streaming, i.e. a client, differing from the above description with respect to the encryption of merely the identified sub-picture portion. That is, the DASH client inspects the manifest file24so as to, depending on the desired ROI22, identify and download the ROI specific set32of bit streams12along with the extractor20, i.e. the video stream outlined in the previous paragraph.
The file handler84compiles, using the extractor20, the compiled bitstream40out of the ROI specific set32of bitstreams12by extracting, from each of these bitstreams, the picture portion34relating to the current picture frame30by parsing the respective bitstream12and forming the compiled bitstream40out of the extracted picture portions34so that the compiled bitstream40comprises a sub-picture portion44for, and formed by, the picture portion34of each of the ROI specific set32of bitstreams12the compiled bitstream is formed of. The decryptor86identifies the predetermined subpicture portion44out of the subpicture portions44of the compiled bitstream40for the current picture frame30on the basis of the signaling which, as mentioned, may be in at least one of the extractor20with such a signaling being called EncryptedSubsampleIndex above, or the sub-picture portions. The decryptor86then decrypts the coding payload section48of the predetermined subpicture portion44by finding the border of the coding payload section48of the predetermined subpicture portion44to be decrypted by the alternating parsing-decryption process discussed inFIG.11. Likewise, the file handler84and decryptor86together form an apparatus for recovering a video stream from the ROI specific set32of bitstreams and the extractor20by performing the compiling using the extractor20and identifying the predetermined/encrypted subpicture portion44on the basis of signaling in at least one of the extractor file or the sub-picture portions. It then decrypts the coding payload section48of the encrypted subpicture portion by performing the border detection according toFIG.11. 
The decryptor86, in turn, represents an apparatus for recovering the video stream from the bitstream40, wherein the apparatus is configured to identify the encrypted subpicture portion44on the basis of signaling inbound from outside, namely from the file handler84which forwards this information as taken from signaling in the extractor20, or itself from signaling in the sub-picture portions44. It then performs the decryption of the coding payload section48of the encrypted subpicture portion44while performing the border detection ofFIG.11. The signaling may index or address the encrypted subsample44out of the subsamples of the current sample42of the compiled bitstream40in the form of its rank in the sample42, so that the decryptor86may count the subsamples44in the current sample42to detect the n-th subsample44in sample42, with n being the rank indicated by the signaling. The identification of the encrypted subpicture portion for several picture frames may be done in a manner so that the several picture frames contain picture frames30for which the encrypted subpicture portion44corresponds to different portions14, and/or the several picture frames contain first picture frames for which there is exactly one encrypted subpicture portion44and second picture frames, interspersed between the first picture frames, for which no subpicture portion is identified to be the encrypted subpicture portion. That is, for some frames, no encryption may take place with respect to any portion14. Again, it is noted that all details having initially been described above with respect toFIGS.7to11shall also apply to the embodiments having been described thereinafter with respect to the one-subsample encryption modification except for, accordingly, all details regarding having all or more subsamples encrypted.
Without having explicitly mentioned it with respect toFIG.11, it is noted that the decryptor86, in resuming106decryption after having encountered104a section's48trailing border or end, may parse the slice header46of the subsequent unit36to detect the beginning of the payload section48of this subsequent unit36. Next, modifications of the above described embodiments are described which do not need the alternating decryption/parsing procedure for detecting the encrypted ranges48. An extended SAI variant, described next, would allow this 'cbcs' all-subsample encryption with preselection, but without the need to parse the slice header. According to the next variants, an explicit signaling or straightforward derivation of clear and protected data ranges within the extractor track is allowed. First, a 'senc' box extension using NAL lengths (i.e. extracted bytes) for derivation of encrypted byte ranges is described. As described before, the individual subsamples' sizes in the composed bitstream40may vary depending on the extracted data when preselection is used. The video bitstream structure may be used to derive encrypted byte ranges, specifically the Part 15 NALU length headers. One embodiment would be to define a second version of the box as follows:

    aligned(8) class SampleEncryptionBox_Invention3.1
    extends FullBox('senc', version, flags)
    {
        unsigned int(32) sample_count;
        {
            unsigned int(Per_Sample_IV_Size*8) InitializationVector;
            if (flags & 0x000002)
            {
                unsigned int(16) subsample_count;
                {
                    if (version == 0) {
                        unsigned int(16) BytesOfClearData;
                        unsigned int(32) BytesOfProtectedData;
                    }
                    else if (version == 1) {
                        unsigned int(1)  WholeDataClear;
                        unsigned int(15) BytesOfClearData;
                    }
                }[ subsample_count ]
            }
        }[ sample_count ]
    }

In this embodiment, a simplification is assumed, which is that a subsample is to be equal to a NAL Unit. The size of the subsample is determined by the NALULength. This is found at the first position (e.g.
first 4 bytes) of the sample (this applies for the first subsample of the sample) and at position Pos_i = Sum{i=1 . . . N}(NALULength_i) (for the remaining subsamples in the sample). The length of the BytesOfProtectedData is derived as the length of the subsample minus BytesOfClearData if WholeDataClear is not 1. If WholeDataClear is equal to 1, BytesOfProtectedData is inferred to be equal to 0 and BytesOfClearData (although in this case mandated to be signalled as 0 in the box/syntax) is inferred to be equal to the subsample length derived from the Part 15 NALU length header. That is, in accordance with all embodiments for apparatuses described above with respect toFIG.10, the border detection using alternating decryption and parsing according toFIG.11may be rendered superfluous in the following manner: the bitstreams12of data10are generated so that all picture portions34of encrypted bitstreams12are merely composed of one unit36(NAL unit). That is, for each portion14whose subset18is encrypted, there is merely one NAL unit per picture portion of the current frame30. As each subsample44of the composed bitstream is formed by such a picture portion—namely if same is part of a bitstream12belonging to the ROI specific set32—each encrypted subsample44is one NAL unit long, too. Note the above alternatives: the encrypted subsamples per frame30may be all or merely one. The alternating decryption/parsing border detection is then replaceable by a simple derivation of the length of the coding payload section48of the encrypted subpicture portions44from a header within these subpicture portions44, namely the NAL unit header46.
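The derivation just described can be sketched in Python. This is a hedged illustration: a 4-byte big-endian NALU length header is assumed, as in the Part 15 example above:

```python
import struct

def subsample_ranges(sample):
    """Walk a sample in which each subsample is exactly one NAL unit
    preceded by a 4-byte length header and return each subsample's
    (start, end) byte range, header included."""
    ranges, pos = [], 0
    while pos < len(sample):
        (nalu_length,) = struct.unpack_from(">I", sample, pos)
        end = pos + 4 + nalu_length
        ranges.append((pos, end))
        pos = end
    return ranges

def bytes_of_protected_data(subsample_length, bytes_of_clear_data, whole_data_clear):
    """Inference rule from the box semantics: the protected length is the
    subsample length minus BytesOfClearData, or 0 if WholeDataClear == 1
    (BytesOfClearData then being inferred as the whole subsample length)."""
    if whole_data_clear == 1:
        return 0
    return subsample_length - bytes_of_clear_data
```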
The process of parsing the headers of encrypted subsamples44with respect to the length indication is easy to perform and may be made on the fly as, owing to the one-to-one correspondence between subsamples and NAL units, this information may be derived on the basis of the length indication in the corresponding only one NAL unit, which length indication lies at the very beginning of the encrypted subsamples. Another option to avoid the alternating decryption/parsing border detection may be called CENC: a "FF-'senc' inheritance box" is used to inherit subsample sizes from any sub-picture track or bitstream12into the extractor track or the composed bitstream40, respectively. The aim of this option is to define an inheritance box that derives the subsample values from the dependent tracks (bitstreams12of set32). The dependent tracks are signalled in the 'tref' box in the 'moov' box, i.e. the extractor20. This information is used to get the samples from the dependent tracks, thereby becoming subsamples44of the composed bitstream40. In a similar manner, the BytesOfProtectedData can be inherited from a box (e.g. the 'senc' box) of the dependent track with some hints (e.g. offsets indicating how to find it), while the BytesOfClearData is signalled in the inheritance box since it is the same size, independent of the representation used when using Preselections. Hence, inheritance of the 'senc' relevant information from information signalled in the dependent tracks carrying the subsamples is allowed. Hints for gathering this information are signaled in the extractor20. This is illustrated inFIG.12, which shows an MPD structure with one Adaptation set per Tile, each including 3 Representations with different bitrate versions, and one Adaptation set with an extractor track (right-most side). The so called "inherited 'senc'"-box inherits the byte ranges of protected data from the 'senc' boxes within each tile representation as selected on client side.
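The inheritance idea may be illustrated with a toy Python sketch (a hypothetical data model, not ISO-BMFF parsing): the extractor track only references its dependent tile tracks, and the subsample encryption entries are pulled from whichever representation the client actually selected per tile:

```python
def inherit_senc_entries(dependent_tracks, selection):
    """For each referenced tile track, take the (BytesOfClearData,
    BytesOfProtectedData) entry of the client-selected representation,
    so the composed bitstream's subsample info follows the selection."""
    return [
        track["senc"][selection[track["id"]]]
        for track in dependent_tracks
    ]

# Toy usage: clear-byte counts are the same per tile, while protected-byte
# counts vary with the selected bitrate representation.
tracks = [
    {"id": "tile1", "senc": {"rep_hi": (16, 240), "rep_lo": (16, 112)}},
    {"id": "tile2", "senc": {"rep_hi": (16, 304), "rep_lo": (16, 128)}},
]
entries = inherit_senc_entries(tracks, {"tile1": "rep_hi", "tile2": "rep_lo"})
```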
That is, in accordance with all embodiments for apparatuses described above with respect toFIG.10, the border detection using alternating decryption and parsing according toFIG.11may be rendered superfluous in the following manner: the bitstreams12of data10are generated so that all picture portions34of encrypted bitstreams12are accompanied with information, such as in the file format (FF) boxes, which indicates the payload sections of units of the respective picture portion. This is done in a manner so that the information may be referred to from the extractor20irrespective of which bitstream12of a subset18belonging to the ROI specific set32a subsample44of the composed bitstream stems from. For example, the information is collocated among the picture portions of the substreams which belong to the same subset18and to the same frame30. The alternating decryption/parsing border detection is then replaceable by a simple derivation of the coding payload sections'48location within the encrypted subpicture portions44by inheriting this information from the bitstreams12in set32. That is, a length or pointer indication, signaled within the bitstream12from which the encrypted picture portion34to which the respective subpicture portion44belongs is extracted, is used to detect the borders54and56therein. Note that, whatever border detection alternative is used, the client apparatus may disregard explicit border location information in the extractor20which may be wrong and be present merely for standard conformance reasons, or, differently speaking, which might be in there, for instance, because it is mandatory according to the standard, but is not correct owing to the preselection-inherent freedom in selecting among representations within each adaptation set. Next, possible extensions of the above described embodiments are presented. They may be referred to as 'ces2'-CTR based encryption with subsample initialization vector.
Here, a CTR based sub-picture encryption scheme is augmented with encryption metadata (i.e. means for allowing re-initialization of the encryption chain for each subsample with per-subsample initialization vector(s)) that allows independence of the encrypted data streams of each tile.FIG.13illustrates this in terms of a block operation diagram. Instead of an IV per sample, the encryption chain is restarted for each subsample (N, N+1, N+2 . . . ) of the sample using a per-subsample IV (IVA, IVB, IVC) and maintaining respective counters. A comparison approach which may be used for the CBC based 'cbcs' scheme is to use one IV for all subsamples of the sample. This has the disadvantage of resulting in similar ciphertext blocks at the beginning of each subsample when the plaintext blocks are similar. The presently discussed possibilities entail various modes for derivation of the varying per-subsample IVs on client side. First, the IVs can be explicitly signalled in a new version of the 'senc' box as given below.

    aligned(8) class SampleEncryptionBox_Invention4
    extends FullBox('senc', version, flags)
    {
        unsigned int(32) sample_count;
        {
            if (version == 0) {
                unsigned int(Per_Sample_IV_Size*8) InitializationVector;
                if (flags & 0x000002) {
                    unsigned int(16) subsample_count;
                    {
                        unsigned int(16) BytesOfClearData;
                        unsigned int(32) BytesOfProtectedData;
                    }[ subsample_count ]
                }
            } else if (version == 1) {
                if (flags & 0x000002) {
                    unsigned int(16) subsample_count;
                    {
                        unsigned int(Per_Sample_IV_Size*8) InitializationVectorPerSubsample;
                        unsigned int(16) BytesOfClearData;
                        unsigned int(32) BytesOfProtectedData;
                    }[ subsample_count ]
                }
            }
        }[ sample_count ]
    }

A further possibility is to derive the subsample IVs on client side based on a single signalled IV per sample as in the existing 'senc' box, but with an additional subsample dependent offset. The offset in this case can either be calculated via a numeric function (e.g.
offset = subsample_index·((2^(N*8)−1)/subsample_count) for an N-byte counter), or be derived from the subsample_index-th entry of a prearranged pseudo-random sequence. Summarizing, in the above described embodiments, described above with respect toFIG.7to11, and the modifications thereof described with respect toFIG.12, the re-initialization for each picture portion34within the current picture frame30may be based on mutually different initialization states. In other words, in case of encrypting the bitstreams12of more than one subset18, mutually different initialization states are used for these subsets, one for each subset18. Thereby, mutually different initialization states are derived for each of the encrypted subpicture portions44in the composed bitstream. The mutually different initialization states may be the result of applying mutually different modifications to a base initialization state for the current picture frame, called the single signalled IV per sample above. The apparatuses described above with respect toFIG.8are, thus, able to derive the mutually different initialization states for the encrypted subset of subpicture portions44per access unit or current frame30by applying mutually different modifications to a base initialization state for the current picture frame30. The mutually different modifications for each subpicture portion44or subset18, respectively, may be derived depending on the portion14of the video picture area16which the respective subpicture portion44or subset18relates to, or depending on an index of the respective subpicture portion44or subset18or portion14. A calculation or table look-up may be used to this end as described above. The index has been called subsample index above. The extractor20may comprise an initialization state list signaling an initialization state for each picture portion34within the current picture frame30.
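The numeric-function variant of the per-subsample IV derivation can be sketched as follows (a hedged Python illustration; deriving the counter width from the IV length and the big-endian, modular arithmetic are assumptions):

```python
def per_subsample_iv(base_iv, subsample_index, subsample_count):
    """Derive a per-subsample IV from a single signalled per-sample IV:
    offset = subsample_index * ((2**(n*8) - 1) // subsample_count)
    for an n-byte counter, added to the base IV modulo 2**(n*8)."""
    n = len(base_iv)
    space = 2 ** (n * 8)
    offset = subsample_index * ((space - 1) // subsample_count)
    return ((int.from_bytes(base_iv, "big") + offset) % space).to_bytes(n, "big")

# The offsets spread the IVs evenly over the counter space, so the
# per-subsample IVs of one sample are pairwise distinct.
ivs = [per_subsample_iv(bytes(8), i, 4) for i in range(4)]
```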
The initialization state may additionally be signalled in the bitstream the respective picture portion belongs to or stems from. The following description focuses on another aspect of the present application. In particular, here, the embodiments seek to overcome a problem associated with the usage of preselection adaptation sets, namely the problem that the combinational options, which such preselection adaptation sets offer to the client by selecting one representation out of each picture-portion specific adaptation set assigned by this preselection adaptation set to each of the regions of an output picture area, are difficult to understand in terms of the quality ranking between these combinational options as well as in terms of the overall location of the ROI within the circumference of the output picture area they correspond to. The following embodiments seek to overcome this problem. As done previously with respect to the encryption/decryption related embodiments, the following description starts with resuming the description set out in the introductory portion of the specification of the present application by way of presenting possible modifications of the techniques set out in the introductory portion. Later on, the embodiments represented by these modifications are then broadened by broadening embodiments. In particular, to cope with the just-outlined problem, one of the following solutions might be used: First embodiment: Add max_quality_ranking and min_quality_ranking attributes to the region-wise quality descriptor as shown inFIG.14. Second embodiment: Add a flag indicating that the scope of the quality values is only within the adaptation set, as shown inFIG.15. It would be undesirable to have regions defined in the RWQR descriptor for which local_quality_ranking has different values, since it would be difficult to interpret the meaning of the qualities of different regions across representations.
Therefore, it can be mandated that all RWQR descriptors within an adaptation set shall have the same value for local_quality_ranking. Alternatively, the signaling could be done outside the RWQR descriptor and added at the MPD level (e.g. at Adaptation Set level). Third embodiment: Add the RWQR as a delta to a qualityRanking indicated for a representation. It would be desirable to group all representations with the same viewport as focus within an AdaptationSet. Therefore, it is helpful to indicate for a given AdaptationSet which region is emphasized and to describe the quality relationships for each region. Such an indication can be used as a grouping mechanism. E.g. inFIG.16, 3 representations with 2 regions and a quality difference of 1 are specified, while each representation is encoded at a different bitrate and therefore has different qualities (Rep1=3, 4; Rep2=2, 3; Rep3=1, 2). In this example we assume that the region of RWQR1 has a better quality than RWQR2, and the region-wise quality descriptors are used on the AdaptationSet level to signal that. The RWQR is therefore used to group the representations and indicate the quality relationship of the regions. This is done as a delta/offset to a quality ranking indicated for the representations themselves. Thus, the @qualityRanking attributes from all representations within the same AdaptationSet are used to compute the real quality values of the regions together with the region-wise quality ranking descriptors (RWQR1 and RWQR2). An option could be to apply the described descriptor to tile-based streaming, in which case the dependencyIds would be used in such a way that, within the AdaptationSet where the region-wise quality ranking descriptors are located, all combinations of Representations and their @qualityRanking attributes have the same relationship (signalled delta in the proposed RWQR). For example, if the RWQR1 and RWQR2 values define a delta/offset value of 1, the qualityRanking attributes shall have the same relationship.
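The third embodiment's arithmetic can be sketched as follows (toy Python; the convention that a region's real quality value is the representation's @qualityRanking plus the region's RWQR delta is an illustrative assumption matching the example values):

```python
def region_quality_values(quality_ranking, rwqr_deltas):
    """Combine a representation's @qualityRanking with the per-region
    RWQR delta/offset to obtain the 'real' quality value per region."""
    return {region: quality_ranking + delta
            for region, delta in rwqr_deltas.items()}

# Example from the text: two regions with a delta of 1 between them, and
# three representations with @qualityRanking values 3, 2 and 1.
deltas = {"RWQR1": 0, "RWQR2": 1}
reps = {rep: region_quality_values(q, deltas)
        for rep, q in {"Rep1": 3, "Rep2": 2, "Rep3": 1}.items()}
```

This reproduces the per-representation region qualities of the example (Rep1=3, 4; Rep2=2, 3; Rep3=1, 2).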
Obviously, the same approach can be used for other viewport dependent solutions. The viewport dependency may, for instance, be achieved using a certain projection method, like in case of the Truncated Square Pyramid Projection (TSP) (see the example for the projection inFIG.17), where a part of the 360 video is emphasized by mapping it to the base of the pyramid, which has a higher resolution than the other faces of the pyramid. For such a case, the region-wise quality ranking descriptors are used to signal the relationship in quality of the regions of that projection. For example, inFIG.17, the region of the front face (represented with the RWQR1 descriptor) has a better quality with respect to all remaining faces (RWQR2). In order to describe certain broadening embodiments with respect to the just-outlined modification embodiments, reference is made toFIG.18, which shows the general environment the following embodiments deal with. Partially, reference signs having been used with respect to the description ofFIGS.1to17are re-used with respect to the description ofFIG.18. The re-usage is meant to assist in an easier understanding of the following description, but the re-usage shall, naturally, not mean that details set out above with respect to, for instance, en/decryption should be transferable onto the subsequently explained embodiments. FIG.18shows a download apparatus or client80for downloading, using tile-based streaming, video content from a server or the like. The internal structure of the download apparatus80does not necessarily correspond to the one shown inFIG.10. The download apparatus80may, however, comprise a DASH client82as shown inFIG.10, for instance, and may optionally also comprise a file handler84and, optionally, a decoder88and even, optionally, a decryptor86. The download apparatus80has, via a network90, access to data10including a plurality of bitstreams12and a manifest file24.
The bitstreams12have a video picture area16encoded thereinto in a tile- or portion-based manner. To this end, the bitstreams12are partitioned into subsets18, with each subset18being associated with a certain portion or tile14into which the video picture area is partitioned, so that the bitstreams12of one subset18have the same associated portion/tile14encoded thereinto, but at different qualities. As described above, the qualities may mutually differ in one or more of various aspects, such as in terms of SNR, spatial resolution and so forth. For ease of understanding, merely two portions/tiles14are illustrated inFIG.18, thereby corresponding to the case depicted inFIG.6. In further compliance withFIG.6,FIG.18shows the exemplary case where each subset18contains six different bitstreams12. By way of the manifest file24, each bitstream12is indicated to the client80as a representation within at least one of adaptation sets200, so-called scene-portion or picture-portion specific adaptation sets. InFIG.18, two such portion-specific adaptation sets200exist for each tile14, corresponding to adaptation sets 1 to 4 shown inFIG.6, but it should be clear that the number of adaptation sets per tile14is not restricted to be 2 and may even vary among portions14. It should also be noted that the physical bitstreams12may partially be assigned to more than one adaptation set200or, differently speaking, may represent a representation co-owned or shared by more than one adaptation set200. Frankly speaking, the grouping of representations12belonging to one subset18and, accordingly, referring to the same scene portion14, is done in a manner so that representations belonging to one adaptation set200are, at least on average, higher in quality than the representations of the same subset18belonging to another adaptation set.
The grouping of representations12of a subset18into adaptation sets200may even be done in a manner so that any representation of one adaptation set200of that subset18is higher in quality than any representation in the other adaptation set. This is, however, not mandatory and will become clearer from the description brought forward below. The manifest file24at least comprises first parameter sets202, namely one for each adaptation set200. Each parameter set #i,202, defines the corresponding scene-portion specific adaptation set #i,200, by associating with this adaptation set #i a certain sub-group of representations12within one subset18so that the representations12within each such adaptation set200have encoded thereinto the same scene portion14, but at different qualities. Each of these parameter sets202comprises a quality level, or a syntax element204indicating a quality level, for each representation12within the adaptation set which the respective parameter set defines. To this end, the parameter set202defining adaptation set #i has a quality level Qi(j) for each representation #j within that adaptation set #i. This has also been depicted inFIG.6at the corresponding adaptation sets 1, 2, 4 and 5, where adaptation set 1 corresponds to portion/tile 1 and adaptation sets 2 and 5 correspond to portion/tile 2. Here, Q corresponds to the quality level indicated by each parameter set202. Besides, the manifest file24comprises parameter sets206which define preselection adaptation sets. Each preselection adaptation set208assigns to each of the regions of an output picture area one of the tile-specific adaptation sets200. The preselection adaptation sets208, thus defined, differ in the assignment of tile-specific adaptation sets200to the regions.
Frankly speaking, preselection adaptation sets are ROI specific in that they, for instance, assign adaptation sets200of representations12of higher quality to a region or regions corresponding to the ROI, compared to qualities of representations12of adaptation sets assigned to regions farther away from the ROI, or in that, for instance, they only collect adaptation sets200relating to regions at and around the ROI while leaving out regions farther away from the ROI. A problem exists in that, however, the client has to ascertain by itself, in a manner further outlined below, as to which ROI a specific preselection adaptation set relates. The qualities204are not suitable to this end by themselves alone, as they are merely ordinally scaled within the same set202they are comprised by. Generally, the mentioned regions and output picture area may correspond to a partitioning of the picture or scene area16into portions14using which the bitstreams12might have been obtained by tile-based encoding, but the output picture area might alternatively rearrange and/or scale and/or rotate portions14to result in an output picture area, with this rearrangement and/or scaling and/or rotation possibly being indicated in the manifest file24as well, or the output picture area may be composed of only a proper subset of the portions14. In order to ease the description of the main topics of the following embodiments, it shall preliminarily be assumed that the output picture area looks like the scene area16and that the portions14represent the regions14for which each preselection adaptation set208assigns one of the corresponding adaptation sets200.FIG.18illustrates, for instance, that adaptation set 6 has an output picture area216associated therewith which is, in turn, subdivided or partitioned into regions214.
An extractor or extractor file/track, which is comprised by data10and which is indicated by reference sign20, composes a corresponding video data stream, for instance showing the output picture area216, by using a representation chosen by the client80out of adaptation set No. 5 for encoding one region, and the representation chosen by the client80out of adaptation set 4 for encoding the other region214. However, as just mentioned, the output picture area216may differ from any composition of picture areas14on the basis of which bitstreams12might have been generated using tile-based encoding at different qualities. Adaptation set No. 3 might have an extractor file20associated therewith, too, and might coincide with adaptation set 6 in shape, size and number of regions of its output picture area compared to the output picture area216of adaptation set 6. With respect toFIG.18it shall be noted that the existence of the extractor20, for instance, is not needed in that the origin of the representations12might be of such nature that their picture portions14individually coded into these representations are not defined on a common video picture area16, but on individual ones, so that just by their composition by way of the preselection adaptation sets206, the picture content thereof, i.e. their picture portions14, is put together to result in the regions214and, accordingly, the output picture area216. Summarizing the description brought forward so far with respect toFIG.18, each preselection adaptation set206leaves some decision up to the client device80with respect to the representations12chosen for each region214of the output picture area216. Each adaptation set206merely associates picture-portion specific adaptation sets200with regions214, with the client device80having the freedom to select, for each region214, one of the representations12assigned to that region214by the respective preselection adaptation set206.
Here, in this example ofFIG.18, this would mean that, theoretically, there are nine options to choose among for each preselection adaptation set206. Unfortunately, the qualities204provided in the parameter sets do not allow for an assessment of where in the output picture area216of a preselection adaptation set208the ROI lies, as the qualities are, without any other means, merely guaranteed to be ordinally scaled within each portion-specific adaptation set. Even further, the client may not even reasonably rank the various combinational options of a certain preselection adaptation set in terms of quality. The above-described embodiments make it possible to provide the client device80with efficient guidance to assess the ROI location of a certain preselection adaptation set and/or assess the ranking among the combinational options for a certain preselection adaptation set206in terms of quality, and maybe even the meaningfulness of the options considering the ROI specificness of the preselection adaptation set. To this end, each preselection adaptation set206comprises certain additional quality guidance data218, namely guidance data218which enables defining a ranking among the picture-portion specific adaptation sets200assigned by the respective preselection adaptation set206to regions214mutually in terms of quality, and optionally may enable an even finer assessment of the mutual relationship, in terms of quality, between the representations12comprised by the picture-portion specific adaptation sets200assigned by a certain preselection adaptation set206. A first embodiment, conveyed by the above description of modifications of the technique set out in the introductory portion of the specification of the present application, is described with respect toFIG.19.
According toFIG.19, each preselection parameter set206comprises one or more parameters for each region214, which indicate a quality level range220covering the quality levels204of the representations12of the picture-portion specific adaptation set200assigned to the respective region214by the preselection adaptation set defined by this parameter set206.FIG.19, for instance, shows that the additional quality guidance information218comprises—as indicated by reference sign219—a quality maximum level parameter and a quality minimum level parameter Qi,max and Qi,min for each region i in order to indicate the ranges220within which the qualities Q1 . . . 3(i) lie, i.e. the qualities of the representations j comprised by the picture-portion specific adaptation set200assigned to the respective region i by the parameter set206which the respective guidance information218is part of. The parameters of the guidance information218define the quality level ranges220on a common quality scale222so that the client device80is able to use the mutual location of the quality level ranges220indicated for the various regions on the common scale222to assess where the ROI of the preselection adaptation set208to which the parameter set206belongs lies, namely where the region(s) are located which are of higher quality. The client may, for instance, assume the ROI to be the collation of region(s)214for which the quality range220is highest, or the collation of region(s)214for which the quality range220is not minimum among the ranges220of all regions214within area216. The client may even derive from the quality level ranges220a ranking among the possible representation combinations offered by the corresponding preselection adaptation set defined by the corresponding parameter set206in terms of quality.
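The ROI heuristic just described can be sketched in toy Python. This is an illustration which, unlike the usual DASH @qualityRanking convention, assumes numerically larger values mean better quality:

```python
def infer_roi(quality_ranges):
    """Take as ROI the collation of regions whose quality range is not
    minimum among all regions (larger value = better quality here)."""
    lowest_max = min(q_max for _q_min, q_max in quality_ranges.values())
    return {region for region, (_q_min, q_max) in quality_ranges.items()
            if q_max > lowest_max}

# Toy usage: regions r1 and r3 sit above the worst range (r2), so they
# are taken as the ROI.
roi = infer_roi({"r1": (5, 9), "r2": (1, 3), "r3": (6, 8)})
```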
In particular, the pure presence of the range information219in the quality guidance information218may also represent a signal to the client that the portion-local quality levels are also defined on the common scale. That is, quality levels Qj(i) of a certain region i would lie in the range indicated for region i. In that case, the client may deduce from the pure presence of the range information in the guidance information218that the qualities are mutually comparable even across portion specific adaptation sets200. Alternatively, the presence of the range information in the quality guidance information218does not change the circumstance that the qualities204are merely scaled ordinally within one set202, i.e. within one adaptation set200. In the latter case, a client device80may, however, use the range information to map the quality levels204onto qualities defined on the common scale222. The client may, for instance, assume that the representations12within a picture-portion specific adaptation set200are, in terms of their qualities, uniformly distributed over the quality level range220indicated by the guidance information218for the corresponding region and, accordingly, by additionally using the mutual quality indications or ranking values204indicated by the corresponding parameter set202of the picture-portion specific adaptation set200, the client device80is able to determine the qualities of all bitstreams contributing to a certain preselection adaptation set on the common quality scale222. Let us resume the just outlined example: using Qi,max and Qi,min, the client may map Qj(i) onto Qj(i)→(j−1)·(Qi,max−Qi,min)+Qi,min or onto Qj(i)→(Qj(i)−min_j{Qj(i)})/(max_j{Qj(i)}−min_j{Qj(i)})·(Qi,max−Qi,min)+Qi,min. The resulting qualities are all ordinally scaled relative to each other for all j and i. Without the guidance information, the client may merely rank the representations j within each adaptation set i,200, individually.
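The min/max-normalised mapping just given can be sketched as follows (Python, implementing the second mapping variant under the uniform-distribution assumption):

```python
def map_to_common_scale(local_levels, q_min, q_max):
    """Map ordinal, per-adaptation-set quality levels Qj(i) onto the
    common-scale range [q_min, q_max] signalled for region i:
    (Qj - min_j) / (max_j - min_j) * (q_max - q_min) + q_min."""
    lo, hi = min(local_levels), max(local_levels)
    span = hi - lo
    return [q_min + ((q - lo) / span) * (q_max - q_min) if span else q_min
            for q in local_levels]

# Three representations with local levels 1..3, mapped onto a common
# range [10, 20] signalled for their region.
mapped = map_to_common_scale([1, 2, 3], 10, 20)
```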
In the above example ofFIG.14, the guidance information218involves the syntax elements max_quality_ranking and min_quality_ranking in the RWQR descriptor for each region. Among the possible combinational options of bitstreams offered by a preselection adaptation set, a client may thus exclude those which would be in conflict with the ROI specificness of the preselection adaptation set because of, for instance, the option leading to regions outside the ROI being of higher quality than the one(s) within the ROI. Additionally or alternatively, the client may use the guidance information so as to obtain a better understanding of the quality offset between the ROI related and ROI distinct regions of the various options in order to decide, based on a current situation such as user viewing speed, available network download rate and the like, for instance, which option to choose. And beyond all, the client may deduce where the ROI of a certain preselection adaptation set lies and may, accordingly, select among several available preselection adaptation sets one where the ROI coincides, for instance, with a current user's viewport. A further embodiment, which is derivable from the description ofFIG.15, pertains to the following specifics of the manifest file24. In particular, as explained again with respect toFIG.20, the quality guidance information218may in fact comprise an indication223indicating whether the quality levels Qi(j) as indicated within the parameter sets202for the picture-portion specific adaptation sets200are defined on a common ordinal scale222, as depicted inFIG.20at the lower half, or whether the quality levels Qi(j) indicated by these parameter sets202are defined on separate ordinal scales224.
When defined on the common ordinal scale222, the quality levels204indicated for the representations within a certain picture-portion specific adaptation set by way of a certain parameter set202, such as those for tile 1 inFIG.20, may be compared, in the ordinal sense, with the quality levels indicated by another parameter set202for another picture-portion specific adaptation set200assigned to another region by the same preselection adaptation set206which the indication223belongs to. In so far, the indication223is a kind of "globality indication". In the other case of being defined on separate ordinal scales, the quality levels204indicate the mutual ordinal relationship between the qualities of the representations within the picture-portion specific adaptation set200which the parameter set202comprising these quality levels204belongs to, but the quality levels204of different picture-portion specific adaptation sets200assigned to different regions214by the preselection adaptation set which the globality indication223belongs to are not comparable with each other, i.e. it cannot be determined which bitstream's quality is better on the basis of the corresponding quality levels204. That is, if globality applies, the client may compare all Qj(i) for all j and i. They are ordinally scaled relative to each other globally. Without globality, the client may merely rank the representations j within each adaptation set i,200, individually. The client may then, for instance, determine that the ROI for the preselection adaptation set is the collation of region(s)214for which the quality level204is highest, or the collation of region(s)214for which the quality level204is not minimum among the quality levels204of all regions214within area216.
FIG. 19 illustrates that the second parameter set 206 of a preselection adaptation set 208 may comprise one or more parameters indicating, for each region 214 of the output picture area 216, a quality level hint for the respective region 214, here exemplified by a quality level Q′(i) representative of region i and the referenced adaptation set 200, respectively. As they are defined in one parameter set, namely 206, they are mutually defined on a common scale. However, the guidance information 218 may comprise an indication for each region i—which may coincide with indication 223, which would insofar control both indications concurrently, or which may be used alternatively to indication 223—of whether the quality level hint for the respective region 214, i, and the quality levels 204 defined by the first parameter set 202 of the picture-portion specific adaptation set 200 assigned to the respective region 214 are defined on a mutually common ordinal scale so as to be ordinally scaled thereacross, or whether the quality level hint Q′(i) and the quality levels 204 defined by the first parameter set 202 of the picture-portion specific adaptation set 200 assigned to the respective region i are defined on separate ordinal scales 224. In the former case, all quality levels Q′(i) and Qj(i) might in fact be defined on the common ordinal scale 222, as the quality levels Q′(i) are mutually ordinally scaled anyway owing to their definition in the same set 206. Again, the client may derive based on the Q′(i)'s where the ROI of a certain adaptation set 208 lies, and if the indication 223 applies, the client may even gain an understanding of the individual combination options in terms of quality. In accordance with an even further embodiment, the guidance information 218 merely comprises the Q′(i)'s without 223 or 218. Even here, the client is able to determine the ROI of a certain preselection adaptation set 206 and, accordingly, to select a matching preselection adaptation set for a wanted viewport.
In particular, a mere ranking between the assigned picture-portion specific adaptation sets 200, as realized by such a quality_ranking parameter Q′(i), enables the client device 80 at least to correctly assess the general quality gradient across the area 216 to find the ROI. It should be noted that the indication 223 could be interpreted to signal the common ordinal scale 222 for all quality levels 204 of all picture-portion specific adaptation sets 200 coinciding in viewpoint, i.e. coinciding in the viewpoint from which the respective portion 14 of the video picture area 16 is captured and which is indicated, for instance, in the respective parameter set 202. This renders the following clear: as described above with respect to FIG. 15, the globality indication 223 would not have to reside within the parameter sets 206 concerning preselection adaptation sets. The globality indication 223 could be positioned in the manifest file 24 or elsewhere. The latter aspect, namely that the quality guidance information 223 may alternatively be positioned in the manifest file 24 outside parameter sets 206, is indicated in FIG. 18 by dashed lines. As an alternative to the description of FIG. 19, it should be noted that the indication of quality level ranges 220 for each region 214 a certain parameter set 206 relates to could be replaced by the indication of a mere quality level offset between quality levels indicated within the picture-portion specific adaptation set related parameter sets 202, i.e. the quality levels 204. Thus, the additional quality guidance 218 would then indicate a relative offset to be applied to the quality levels 204 in order to make them comparable to each other. For instance, the quality guidance 218 could indicate that the quality levels of tile 1 have to be increased by a certain value before being compared to the quality levels 204 of the other tile so as to be defined on the common ordinal scale 222.
Using such information 218 on the offsets ΔQmn between the qualities Qj(i) indicated by the sets 202, the client may map Qj(i) of a certain set i 200 onto Qj(i)→Qj(i)−ΔQik to compare them with Qj(k) of a certain set k 200. The resulting qualities are all ordinally scaled relative to each other for all j and i. Without the guidance information, the client may merely rank the representations j within each adaptation set i 200 individually. As already stated above, the existence of an extractor 20 is not mandatory for achieving the advantages described with respect to FIG. 18 to 20. If present, however, a file format descriptor/box such as the SphereRegionQualityRankingBox may be used to convey information as described above with respect to the manifest file. In particular, while the extractor indicates a compilation of a compiled bitstream, such as 40, out of subsets of bitstreams, each associated with a different one of portions 214 of the video picture area 216, while leaving freedom to select for each portion one bitstream of the associated subset of bitstreams, the file format descriptor would comprise one or more parameters for each portion 214 of the video picture area 216 indicating a quality level range 220 covering quality levels signaled in the representations 12 (here: tracks) of the subset of representations assigned to the respective portion 214, or quality offsets between the quality levels of the representations 12 of different ones of the subsets of representations, and/or comprise an indication whether quality levels indicated in the representations are defined on a common ordinal scale so as to be ordinally scaled across different ones of the representations of different subsets, or whether the quality levels indicated by the representations are defined on separate ordinal scales 224, individual for the subsets. In other words, all bitstreams 12 in one set 200 in FIG. 18 may have a quality value in one of their boxes.
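The offset-based remapping Qj(i)→Qj(i)−ΔQik can be sketched as below. The offset table, region indices and function name are illustrative assumptions standing in for the offsets signaled by the guidance information 218, not a normative implementation.

```python
def to_common_scale(quality_levels, offsets, ref_region):
    """Map per-region quality levels Qj(i) onto a common ordinal scale by
    applying Qj(i) -> Qj(i) - delta_ik relative to a reference region k.

    quality_levels: region i -> list of quality levels Qj(i).
    offsets: (i, k) -> signaled offset delta_ik (hypothetical values).
    """
    mapped = {}
    for region, levels in quality_levels.items():
        delta = 0 if region == ref_region else offsets[(region, ref_region)]
        mapped[region] = [q - delta for q in levels]
    return mapped

# With delta_21 = -3, region 2's levels are shifted by +3 before being
# compared against region 1's levels; afterwards all values are ordinally
# comparable across both regions.
common = to_common_scale({1: [5, 7], 2: [2, 4]}, {(2, 1): -3}, ref_region=1)
```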
Likewise, the file format descriptor may additionally or alternatively comprise one or more parameters indicating, for each portion 214 of the output picture area 216, a quality level hint for the respective portion and an indication whether the quality level hint for the respective portion and the quality levels indicated in the representations comprised by the subset associated with the respective portion are defined on a common ordinal scale so as to be ordinally scaled thereacross, or whether the quality level hint and the quality levels 204 indicated in the representations comprised by the subset associated with the respective portion are defined on separate ordinal scales 224, and/or comprise one or more parameters indicating, for the portions 214 of the output picture area 216, a quality ranking among the portions 214. Upon these being put together and referenced by a certain extractor 20, the question may arise as to how the qualities in the bitstreams relate to each other and/or where the ROI for such a downloaded video stream is. To this end, a file format box or descriptor may be provided which is ready for download by the client which wishes to present the corresponding ROI to which the extractor belongs. The mentioned file format box carries information similar to that taught by 218 for the MPD: it indicates how the qualities in the bitstreams of the various subsets 200 relate to each other and where those portions 214 within area 216 are which have higher quality, thereby indicating where the ROI is. In even other words, an extractor 20 associated with a certain ROI collects, by referencing, one subset 200 of representations per region 214. Later on, at the time of actual download, the extractor forms a file along with those representations which have been selected, one for each subset 200 and associated region, out of the respective subset 200. The latter referenced bitstreams 12 form tracks of the file. They form set 32. Each has a quality value in it, just as quality 204 in the MPD.
The mentioned FF descriptor would come in addition and would indicate, e.g., whether all these quality values, residing in the different tracks stemming from different subsets 200 relating to different regions 214, are defined on the common scale 222 or on separate scales 224, or would indicate the ranges 220 on the common scale 222. The FF descriptor might be part of an initialization segment of the composed video stream downloaded by the client which is interested in the ROI associated with the extractor 20 to which the FF descriptor indicating the quality globality belongs: the file has, as mentioned, the referenced tracks 12 of set 32 in there, and the extractor 20. Each referenced track has its quality value in a local FF box/descriptor, for instance, and the FF descriptor/box outlined herein may be part of the initialization segment downloaded first by the client to obtain the settings of the file. For the sake of completeness, it shall be mentioned that for each picture-portion specific adaptation set 200, the corresponding first parameter set 202 may define field of view information with respect to the picture portion 14 encoded into the representations of the respective picture-portion specific adaptation set. The second parameter set 206, in turn, may define field of view information with respect to a collation of the regions 214, i.e. the field of view resulting from the overlay of all regions 214. If there are two or more second parameter sets 206 of respective preselection adaptation sets 208, as depicted in FIG. 18, each one may define field of view information with respect to a collation of its regions 214, wherein the collation coincides between said at least two second parameter sets. That is, the circumference of the output picture area 216 may coincide for these sets 208.
The preselection adaptation sets 208 may, however, differ in that their parameter sets 206 define a region of highest quality among the regions 214, the location of which within the collation varies over the parameter sets 206. The region of highest quality would thus correspond to the ROI with which the various adaptation sets 208 are associated. The client device may, as described, inspect the manifest file 24 and change, based on the quality level range and/or the indication, a streaming strategy in adaptively streaming a video from a server. It may use the quality levels, quality level ranges, the quality level hints and/or the indication in order to rank the preselection adaptation sets with respect to a desired viewport. As explained with respect to FIG. 17, the collections of bitstreams defining the options of preselection adaptation sets may alternatively be defined as different representations grouped into one adaptation set in a manifest file. This yields a manifest file comprising a parameter set for a region-wise compiled adaptation set defining a set of representations coinciding in a subdivision of a video picture area 216 into regions 214, the representations having encoded thereinto the regions 214 of the video picture area at different quality level tuples, each tuple assigning a region-specific quality level to each region. The representations would, accordingly, all cover the area 216 individually. They would differ in the association of qualities assigned to the various regions. The parameter set would then comprise an adaptation set quality level indication for all regions, illustrated by RWQRi in FIG. 17, and, for each representation, a representation-specific quality level indication, indicated by @qualityRanking.
For each representation, the quality level tuple of the respective representation, indicated in the parentheses in FIG. 17, is then derivable from a combination of the adaptation set quality level indication and the representation-specific quality level indication for the respective representation, such as by adding same. The client device may inspect the manifest file and use the quality level tuples of the representations in a streaming strategy for adaptively streaming a video from a server. It may use the quality level tuples of the representations in order to rank the representations with respect to a desired viewport. Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus. The inventive data signals, such as data collections, video streams, manifest files, descriptors and the like, can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software.
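Assuming addition is the combination rule ("by adding same"), deriving a representation's quality level tuple from the adaptation-set indication RWQRi and the representation-specific @qualityRanking might look like the following sketch; the function name and the example values are illustrative, not taken from the manifest syntax.

```python
def quality_tuple(adaptation_set_levels, representation_ranking):
    """Combine the adaptation-set quality level indication (RWQRi, one
    value per region 214) with the representation-specific @qualityRanking
    by adding them, yielding the per-region quality level tuple."""
    return tuple(level + representation_ranking
                 for level in adaptation_set_levels)

# Illustrative values: RWQRi = (1, 3) over two regions; a representation
# with @qualityRanking = 2 yields the per-region tuple (3, 5), analogous
# to the parenthesized tuples in FIG. 17.
tuple_for_rep = quality_tuple((1, 3), 2)
```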
The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet. A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus. The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The apparatus described herein, or any components of the apparatus described herein, may be implemented at least partially in hardware and/or in software. The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The methods described herein, or any components of the apparatus described herein, may be performed at least partially by hardware and/or by software. 
While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
11943275 DETAILED DESCRIPTION In the following, various example systems and methods will be described herein to provide example embodiment(s). It will be understood that no embodiment described below is intended to limit any claimed invention. The claims are not limited to systems, apparatuses or methods having all of the features of any one embodiment or to features common to multiple or all of the embodiments described herein. A claim may include features taken from any embodiment as would be understood by one of skill in the art. The applicants, inventors or owners reserve all rights that they may have in any invention disclosed herein, for example the right to claim such an invention in a continuing or divisional application, and do not intend to abandon, disclaim or dedicate to the public any such invention by its disclosure in this document. Generally, the present disclosure provides a method and system for managing adaptive bitrate video streaming. Embodiments of the system and method are configured to identify a video data chunk in order to determine various measurements associated with the video data chunk. Embodiments may be configured to then determine a bitrate, a duty cycle and a bitrate factor from the determined measurements of the video data chunk. From this data, a score associated with the video may be determined, and policies may be applied to streaming video to improve a QoE for a subscriber based on the score associated with the video. A challenge for operators is to maintain a good streaming video QoE in an environment where streaming video traffic is increasing while operating budgets are decreasing. Understanding the streaming video QoE is important to achieve the best possible experience under various circumstances. If a network operator does not understand the type of video being watched by subscribers, the network operator may not be able to adopt policies that will improve the QoE for its subscribers.
Generally, encryption is making analysis of streaming video and determining QoE using Deep Packet Inspection (DPI) difficult. Due to encryption, it may not be possible to analyze the content of the video stream and model a client state and buffer health associated with a video stream. A sudden decrease in data volume for a streaming video session could be associated with, for example, an end user pausing the video playback, a video player downshifting the quality due to network conditions, or the like. Changes in network conditions may be difficult to detect, especially in the case of Quick UDP Internet Connections (QUIC), which does not tend to allow any measurement of latency and/or packet loss. While it may be possible to try to predict behavior based on assumptions, the variation in streaming services, clients, and devices makes this process difficult and may yield inaccurate results. The results may be inaccurate even when applying machine learning techniques on broad data sets, or using other techniques to try to improve accuracy. Some solutions for video QoE analytics may take a more encompassing view, considering various qualities and factors, for example:
Video Service;
Device type, for example, tablet, mobile phone, TV, or the like;
Resolution and Bitrate;
Initial Buffering Delay;
Buffer stalls; and
the like.
In general, this type of solution relies on unencrypted traffic to get a detailed understanding of the video flow, the video flow's adaptive streaming segments and any shifts in quality. As video streaming traffic flows have become more encrypted, there is now less information in the traffic flow from which video QoE can be derived. There are some conventional methods that may do heuristic analysis on the encrypted video in conjunction with TCP parameters from TLS/HTTPS, but these conventional methods may fall apart in the case of QUIC, which is a delivery method of YouTube™ and Facebook™.
Embodiments of the disclosed method and system are intended to measure the quality of the network delivery of adaptive bitrate video streaming by analyzing the network activity level of a video stream traffic flow. From an analytical point of view, the video stream traffic flow may be broken into chunks should the subscriber traverse multiple locations in a mobile network, allowing the quality of each chunk to be measured so that a QoE per location can be derived. A benefit of chunking the video and deriving a QoE per location is that a derived score can be rolled up and/or aggregated for a location from multiple sessions even though these sessions may have traversed multiple locations; this makes the resulting analytical data more versatile. Each chunk is intended to be unique per location. As such, embodiments of the system and method detailed herein are configured to collect chunks from a specific location, from multiple subscribers, and analyze the QoE for the specific location. The specific location may be defined based in part on the type of network being used. In some cases, a mobile network location may be based on a cell; in a fixed or Fiber to the x (FTTX) network, a location may be based on an access concentrator, and the like. The system may gain awareness of various dimensions by reviewing a signaling flow from the network, for example an S11 feed in a mobile network, a RADIUS/DHCP feed in fixed/FTTX networks, or the like. In some cases, the dimensions may be a service plan, device type, a location, a network, a site, access point name (APN), access node, access technology, interface, gateway, upstream channels, downstream channels, 5G network slice and other additional custom dimensions. For the activity level to be measured, granular data may be preferred.
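The per-location roll-up described above can be sketched as follows; the location labels and scores are illustrative, and the aggregation (a plain mean) is one possible choice, not the patented scoring method itself.

```python
from collections import defaultdict
from statistics import mean

def qoe_per_location(chunk_scores):
    """Aggregate per-chunk QoE scores by location.

    chunk_scores: iterable of (location, score) pairs gathered from many
    subscribers; a session traversing several cells contributes one chunk
    per cell, so each location can be scored independently even though
    the sessions spanned multiple locations.
    """
    by_location = defaultdict(list)
    for location, score in chunk_scores:
        by_location[location].append(score)
    return {loc: mean(scores) for loc, scores in by_location.items()}

# Two chunks observed at cell-A (one subscriber moved through it twice or
# two subscribers contributed) and one at cell-B.
scores = qoe_per_location([("cell-A", 4.0), ("cell-A", 2.0), ("cell-B", 5.0)])
```

A low aggregate for one location across many subscribers is the kind of signal the text mentions for detecting a congested cell.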
Through embodiments of the system and method described herein, raw data may provide records on a per-application basis for predetermined time intervals, for example every 200 milliseconds (ms), 250 ms, 500 ms and the like. In the examples provided herein, 250 ms is used, but it will be understood that records may be for a shorter or longer duration. Embodiments of the method and system disclosed herein are intended to measure the quality of each of the chunks, which may be estimated in near-real time by continuously computing the chunk bitrate, the duty cycle, and the bitrate factor. The chunk bitrate depends on the chunk volume and the chunk duration, which can be measured through inline monitoring. The bitrate factor may then subsequently be used to estimate the QoE. The conversion of the bitrate factor to QoE is detailed herein. Depending on the system implementation, traffic action may be taken directly on the traffic to improve the QoE, or, in the case of a passive deployment where direct action may not be possible, the result could be fed to third party systems, like radio optimization modules or the like, to enable more capacity for QoE improvements. In some cases, the system and method may include traffic management capabilities, so the system and method may take action directly on the traffic to optimize for QoE. The traffic management capabilities may include, for example, shaping traffic, providing updated shaping instructions to one or more shapers to shape traffic, reprioritizing other traffic, or the like, to have a positive impact on video QoE. If the system is inline, the system may provide for traffic management capabilities; if the system is intended to be passive, the system may provide instructions to other network devices in order to accomplish traffic management. FIG. 1 illustrates an embodiment of a system for managing adaptive bitrate video streaming in a computer network environment 10.
Subscribers 12 of an operator may request video content from a content provider via the Internet 14. Subscribers 12 access the operator's access network 16, which flows requests through to a core network 18. The system 100 may reside in the core network and may be inline with a core router 20. The core router 20 may receive the requested content from the video content provider and forward the content to the subscriber via the access network 16. FIG. 2 illustrates an embodiment of the system 100 for managing adaptive bitrate video streaming according to an embodiment. The system includes an input module 105, an analysis module 110, a QoE score module 115, an output module 120, a processor 125 and memory 130. In some cases, the system 100 may include a plurality of processors, for example, including at least one processor per module or per engine. In some cases, the system 100 may be distributed and may be housed in a plurality of network devices. In other cases, the system may reside in a single network device. In some cases, the memory 130 may be included as an internal component of the system. In other cases, the memory component 130 may be housed externally or in a cloud and may be operatively connected to the system and the components of the system. In some cases, the memory component 130 may store instructions that, when executed by the processor 125, cause the modules to perform the configured tasks. The system is intended to reside on the core network. The input module 105 is configured to receive or retrieve real-time data records via third party network devices, for example a Deep Packet Inspection network device or a Packet Processing network device. The real-time data records are intended to include data associated with active bitrate video streaming traffic flows. From the data records, it is intended that the input module 105 may determine the duration, volume, and activity of the video stream traffic flow. The input module 105 may determine the video chunking as detailed herein.
The analysis module 110 is configured to review the data records to gain insight into the dimensions associated with the content and the associated QoE of the subscriber viewing the content, as detailed herein. In some cases, dimensions may include, for example, service plan, device type, a location, a network, a site, access point name (APN), and the like. The analysis module 110 may determine video stream parameters, for example, bitrate, duty cycle, bitrate factor and the like, associated with the video stream traffic flow. The analysis module 110 may receive measured data from the input module 105, or may retrieve the data from the input module 105, to determine the associated dimensions and parameters. The QoE score module 115 is configured to calculate the QoE score based on the analysis of the analysis module 110. The QoE score may be determined for a localized region, for a set of subscribers, or the like. In some cases, the QoE score module may be configured to aggregate QoE scores for a particular dimension or over a plurality of dimensions, based on the parameters and dimensions provided by the analysis module 110. In a particular example, the QoE module 115 may aggregate all of the QoE scores for subscribers within a time interval in a particular location to determine, for example, whether a location appears to be congested. It is intended that providing an aggregate QoE score for a particular dimension or over a plurality of dimensions may aid a network operator in capacity planning and network understanding. The output module 120 is configured to either take action/implement a change based on the QoE score, export the analytics and QoE score to an operator or other third party, produce a report or visualization of the analytics, or the like. In some cases, the system may be operatively connected with at least one shaper configured to shape the traffic based on, for example, the output from the output module 120.
In this way, the system can provide an output that implements a change in the operation of the shaper. The system may alternatively be operatively connected to various traffic management modules/devices such that the output can implement changes in those traffic management modules/devices. FIG. 3 illustrates an embodiment of a method 200 for managing adaptive bitrate streaming. The input module 105 collects or receives raw video data at 205. The input module 105 may then determine the video chunking, at 210, as detailed herein. The input module 105 may measure at least one of the duration at 215, the volume at 220 and the activity of the video stream at 225. It will be understood that these measurements may be calculated serially or in parallel/simultaneously. The analysis module 110 may further calculate characteristics/parameters such as a bitrate at 230, a duty cycle at 235 and a bitrate factor at 240 from the measured data received from or retrieved from the input module. These calculations may also be done serially or in parallel. The QoE module 115 may determine a QoE score based on the bitrate factor, at 245. The output module 120 may then store a result or may suggest or implement action based on the score, at 250. It will be understood that the method may run continuously, or may be performed at predetermined intervals. In some cases, the method may be configured to run when there is a change to the video stream, for example, when the subscriber has changed location, channel or the like. In some cases, the QoE scores of a plurality of subscribers may be aggregated by the QoE module over a particular dimension or a plurality of dimensions of interest for a network operator. Embodiments of the system and method may use predetermined high frequency data, which may contain records of incoming and outgoing data volume per application with highly granular dimensions, reflecting various aspects of the data associated with the video chunk.
These records may be received by or retrieved by the input module 105, for example, every 250 ms or the like. In some cases, the dimensions may be a service plan, a device type, a location, a network, a site, an access point name (APN), an access node, an access technology, an interface, a gateway, upstream channels, downstream channels, and other additional dimensions, which may be customized or configured in various ways. In some cases, only a subset of the dimensions may be used, and the dimensions may depend on, for example, the type of network. A benefit of the dimensional granularity is that data can be analyzed on any given dimension. For example, Video QoE can be compared for different devices using a device column, or different service plans using a service plan column, and may also be compared in various combinations, for example, comparing device makers, device types and device models within a specific service plan. Custom dimensions may be determined by a network operator and may be supported by embodiments of the system and method for dimensional data. Dimensions are intended to be used when analyzing the results as detailed herein. The analysis module 110 is configured to analyze the patterns in the traffic. In particular, a plurality of records will be grouped together into chunks, where a chunk is unique per dimension (for example, a location), separated by a silence period, and may contain one or multiple video loading events, where the video player is downloading a segment of adaptive bitrate video. A silence period is intended to be a period of no data for a predetermined time, such as a number of milliseconds, seconds or the like. In some cases, the predetermined number of milliseconds or seconds may be 1, 2, 3 or the like. In some cases, a network operator may configure the silence period to a predetermined number of milliseconds or seconds that is intended to reflect an appropriate silence period for the network in question.
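The grouping of high frequency records into chunks separated by silence periods might be sketched as below. The 250 ms sampling interval is taken from the description; the 2 second silence period, the record layout (timestamp, downstream bytes), and the function name are illustrative assumptions.

```python
SAMPLE_INTERVAL_MS = 250   # records arrive roughly every 250 ms
SILENCE_PERIOD_MS = 2000   # configurable; 1, 2 or 3 s suggested in the text

def split_into_chunks(samples):
    """Group (timestamp_ms, downstream_bytes) samples into chunks.

    A new chunk starts whenever the gap between intervals carrying
    downstream data exceeds the configured silence period.
    """
    chunks = []
    current = []
    last_active_ms = None
    for ts, volume in samples:
        if volume > 0:
            if last_active_ms is not None and ts - last_active_ms > SILENCE_PERIOD_MS:
                chunks.append(current)   # silence period ended the chunk
                current = []
            current.append((ts, volume))
            last_active_ms = ts
    if current:
        chunks.append(current)
    return chunks
```

For example, two bursts of records separated by more than 2 s of no data would yield two chunks, each of which could then be analyzed and scored independently.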
An example of video chunks is shown in FIG. 4. The analysis module 110 may further determine the start and end of a video chunk and the total duration for the chunk. In some cases, the analysis module 110 may determine or calculate the load or loading events of the video chunk. Further, the system may sum the total downstream volume of all records for the load events in the chunk, which allows the total chunk volume to be calculated. An example of video chunk duration can be seen in FIG. 5. By counting the number of samples that have had a downstream transfer volume greater than 0 bytes, the activity rate of the chunk can be determined by the analysis module. It will be understood that the records may be completed, for example, every 250 ms, which would allow the system to determine the length of time or chunk duration. By multiplying the chunk volume by 8 bits and dividing by the chunk duration, a bitrate for the chunk can be calculated by the analysis module 110. The bitrate can be used to estimate the chunk resolution, based on assumed bitrate to resolution ratios for a given streaming video service. In some cases, this estimation can be determined by a table or other mapping. In a specific example, a table of bitrate to resolution estimation is shown below.

TABLE 1
Example of bitrate to resolution estimation

Bitrate Range           Estimated Resolution
0-80 Kbps               Low resolution video/Audio only
81-130 Kbps             144p
131-350 Kbps            240p
351-625 Kbps            360p
626-1100 Kbps           480p
1101-2250 Kbps          720p
2251-4500 Kbps          1080p
4501-9000 Kbps          1440p
9001-20000 Kbps         2160p
20000 Kbps and above    Above 2160p

By dividing the chunk activity rate by the total number of intervals within the chunk duration, a duty cycle can be calculated by the analysis module 110. The duty cycle is indicative of how fast the chunk is sent over the network: a low duty cycle indicates there is plenty of capacity available, while a high duty cycle indicates there is congestion on the network.
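The per-chunk calculations above (duration, volume, activity rate, bitrate, and a resolution estimate per Table 1) could be sketched as follows. The function names and the sample layout (one downstream byte count per 250 ms interval) are illustrative assumptions; the table values are taken from Table 1.

```python
SAMPLE_INTERVAL_MS = 250

# (upper bound of the bitrate range in Kbps, estimated resolution) per Table 1
RESOLUTION_TABLE = [
    (80, "Low resolution video/Audio only"),
    (130, "144p"), (350, "240p"), (625, "360p"), (1100, "480p"),
    (2250, "720p"), (4500, "1080p"), (9000, "1440p"), (20000, "2160p"),
]

def chunk_metrics(samples):
    """samples: downstream byte counts, one per 250 ms interval of the chunk."""
    duration_s = len(samples) * SAMPLE_INTERVAL_MS / 1000.0
    volume_bytes = sum(samples)
    active_intervals = sum(1 for v in samples if v > 0)   # activity rate
    # chunk volume x 8 bits, divided by chunk duration, expressed in Kbps
    bitrate_kbps = (volume_bytes * 8 / duration_s) / 1000.0
    return duration_s, volume_bytes, active_intervals, bitrate_kbps

def estimate_resolution(bitrate_kbps):
    """Estimate chunk resolution from its bitrate using Table 1."""
    for upper_kbps, resolution in RESOLUTION_TABLE:
        if bitrate_kbps <= upper_kbps:
            return resolution
    return "Above 2160p"
```

For instance, a one-second chunk of four intervals totalling 500000 downstream bytes yields a bitrate of 4000 Kbps, which Table 1 maps to an estimated 1080p.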
As an example, 0 to 0.25 may typically be considered a low duty cycle, while above 0.5 may be considered high. It will also be understood that the ranges of high and low duty cycle may depend on the video application. As a specific example, it has been observed that Netflix™ pushes higher bitrates more aggressively than YouTube™, and therefore the duty cycle is typically higher for Netflix™ than it would be for YouTube™. An example of chunk activity rate can be seen in FIG. 6. By dividing the chunk bitrate by the duty cycle, the bitrate factor for the chunk can be established by the system. The bitrate factor indicates the maximum attainable bandwidth if the streaming video had been loading constantly. In some cases, it will be understood that having a low bitrate with a low duty cycle would indicate that the subscriber is likely experiencing an appropriate QoE. This analysis gives a measure that is intended to remove video supply (available resolution) and consumer demand (screen size) from the QoE determination. This is intended to provide a more accurate determination of the network capability, rather than including the capability of the video service or video client. In a specific example, if a subscriber receives a chunk at 1700 Kbps (estimated 720p) with a duty cycle of 0.1, this would imply a bitrate factor of 1700/0.1 = 17000 Kbps; if there were demand (for example, a larger screen size) and supply (for example, 2160p encoded video), the subscriber could have received a much higher resolution without risking a buffer stall. On the other hand, if a subscriber receives a chunk at 1700 Kbps (estimated 720p) with a duty cycle of 0.8, this would imply a bitrate factor of 1700/0.8 = 2125 Kbps; the network may not be capable of delivering a higher resolution even if the device would benefit from a higher resolution and the video was available in a higher resolution. A QoE score can be derived from various variables.
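The duty cycle and bitrate factor calculations, reproducing the 1700 Kbps worked example above, might be sketched as follows; the function and variable names are illustrative assumptions.

```python
def duty_cycle(active_intervals, total_intervals):
    """Fraction of the chunk's intervals that carried downstream data."""
    return active_intervals / total_intervals

def bitrate_factor(bitrate_kbps, duty):
    """Maximum attainable bandwidth (Kbps) had the video loaded constantly."""
    return bitrate_kbps / duty

# Reproducing the worked example: a chunk received at 1700 Kbps (estimated 720p).
bf_uncongested = bitrate_factor(1700, 0.1)  # low duty cycle: ample headroom
bf_congested = bitrate_factor(1700, 0.8)    # high duty cycle: near the limit
```

The first case yields a bitrate factor of about 17000 Kbps (the network could deliver far more than 720p), while the second yields about 2125 Kbps (720p is close to what the network can sustain).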
In particular, as an example, a QoE score can be derived from the bitrate factor. In some cases, the QoE score may be considered an absolute score, which does not take service plans, devices, limitations of access technology or the like into account, or may be considered a relative score that factors in limitations of a price plan, device, access technology or fair usage policy. A table showing examples of bitrate factor to QoE score is shown below.

TABLE 2
Bitrate Factor and Score

Bitrate Factor        Score
0 bps                 0
450000 bps            1
900000 bps            2
1800000 bps           3
3600000 bps           4
7200000 bps and up    5

Embodiments of the system and method for managing Adaptive Bitrate Video Streaming Quality of Experience derived from high frequency sampling provide a general QoE score for streaming video, on a per video chunk basis, focusing on the network's ability to deliver the video, while using the duty cycle to weigh in the supply and demand factors. Embodiments of the system and method are intended to not be dependent on modeling specific video services and/or consumer devices for analytics, and are intended to adapt to changing conditions in video streaming offerings. The system and method detailed herein are also not dependent on specific transport protocols, such as HTTP, TLS/HTTPS and/or QUIC. As each chunk has a QoE score, the resulting data can be used to score any dimension available in the input data, for example: per video service, per subscriber, per location, per device, or the like, or combinations of the dimensions. It is intended that the system and method provide responses to network operators on a variety of network capacity questions, for example, "How is video delivered to Apple™ devices on a low end plan?" It is understood that the system may aggregate scores on a plurality of dimensions, and the example above includes two dimensions, device manufacturer as well as service plan.
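Table 2's mapping from bitrate factor to QoE score might be implemented as a simple threshold lookup; the thresholds are taken directly from Table 2, while the interpretation of each row as a minimum bound is an assumption about how the table is read.

```python
# (minimum bitrate factor in bps, score) per Table 2, highest first
QOE_THRESHOLDS = [
    (7200000, 5),
    (3600000, 4),
    (1800000, 3),
    (900000, 2),
    (450000, 1),
]

def qoe_score(bitrate_factor_bps):
    """Map a bitrate factor (in bps) to a 0-5 QoE score per Table 2."""
    for minimum_bps, score in QOE_THRESHOLDS:
        if bitrate_factor_bps >= minimum_bps:
            return score
    return 0
```

Applied to the earlier worked examples, a bitrate factor of 17000 Kbps (17,000,000 bps) would score 5, while 2125 Kbps (2,125,000 bps) would score 3.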
It is intended that this type of result would allow the operator to make informed traffic policy decisions based on the output from the system. As video streaming traffic flows become more encrypted, the embodiments of the system and method provided herein are intended to provide for analytics of video streaming traffic flow from the state of the flow, without requiring the decryption of the data within the traffic flow. Chunks may also be grouped to analyze a complete video session traversing a plurality of chunks and/or locations. FIG. 7 illustrates an example of multiple chunks analyzed together with a score, providing a holistic view of a video experience across multiple locations. The X-axis in FIG. 7 is time. FIG. 7 is intended to illustrate the mobility journey of a user; the size represents the time spent at a given location, which becomes a chunk that is reviewed, analyzed and scored for that location. In some cases, where the location is fixed, there may only be a single chunk per video session. Near real-time measurement of Video QoE may be useful for Intent Based Traffic Management, where traffic policies may be adapted based on measured quality, with the overall objective of increasing quality for all subscribers. Embodiments of the method and system provided herein are intended to provide value in analytical use-cases, providing insights into how video is delivered over the network: through data on a per subscriber basis to Customer Experience Management (CEM) systems, allowing customer care to better advise subscribers facing QoE problems, or through data on a per location basis feeding into network planning systems to help determine whether Video QoE can be improved through traffic management or whether capacity expansion is needed for a given location. In the preceding description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the embodiments.
However, it will be apparent to one skilled in the art that these specific details may not be required. In other instances, well-known structures may be shown in block diagram form in order not to obscure the understanding. For example, specific details are not provided as to whether the embodiments or elements thereof described herein are implemented as a software routine, hardware circuit, firmware, or a combination thereof. Embodiments of the disclosure or elements thereof can be represented as a computer program product stored in a machine-readable medium (also referred to as a computer-readable medium, a processor-readable medium, or a computer usable medium having a computer-readable program code embodied therein). The machine-readable medium can be any suitable tangible, non-transitory medium, including magnetic, optical, or electrical storage medium including a diskette, compact disk read only memory (CD-ROM), memory device (volatile or non-volatile), or similar storage mechanism. The machine-readable medium can contain various sets of instructions, code sequences, configuration information, or other data, which, when executed, cause a processor to perform steps in a method according to an embodiment of the disclosure. Those of ordinary skill in the art will appreciate that other instructions and operations necessary to implement the described implementations can also be stored on the machine-readable medium. The instructions stored on the machine-readable medium can be executed by a processor or other suitable processing device, and can interface with circuitry to perform the described tasks. The above-described embodiments are intended to be examples only. Alterations, modifications and variations can be effected to the particular embodiments by those of skill in the art without departing from the scope, which is defined solely by the claims appended hereto. | 25,131 |
11943276 | DETAILED DESCRIPTION Detailed embodiments of the claimed structures and methods are disclosed herein; however, it can be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of this invention to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention.
In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The following described exemplary embodiments provide a system, method and program product for web conference optimization. As such, the present embodiment has the capacity to improve the technical field of web conferencing by transmitting line art drawings of one or more participants of a scheduled web conference based on the network bandwidth. More specifically, the present invention may include receiving data for an organization, the organization being comprised of a plurality of participants. The present invention may include receiving a scheduled web conference. The present invention may include determining a network bandwidth threshold for each of the plurality of participants of the scheduled web conference based on at least the data received for the organization and data associated with the scheduled web conference. The present invention may include monitoring a network bandwidth of the scheduled web conference. 
The present invention may include determining whether to transmit a line art drawing for one or more participants based on the network bandwidth of the scheduled web conference. As described previously, web conferencing may be a term used for various types of online conferencing and collaborative services including at least webinars, webcasts, and web meetings. Web conferencing may be used in social gatherings, live discussions, professional meetings, training events, lectures, and/or presentations, amongst other things. Depending on the technology being used, presenters may utilize both audio and/or video in communicating with web conference participants. In web conferences, presenters and/or participants may frequently deal with bandwidth issues which may compromise both audio and/or video communication. Presenters and/or participants of the web conference may often solve bandwidth issues by ceasing video communication, which may lead to less effective communication. Therefore, it may be advantageous to, among other things, receive data for an organization, the organization being comprised of a plurality of participants, receive a scheduled web conference, determine a network bandwidth threshold for each of the plurality of participants of the scheduled web conference, monitor the network bandwidth of the scheduled web conference, and determine whether to transmit a line art drawing for one or more participants based on the network bandwidth of the scheduled web conference. According to at least one embodiment, the present invention may improve web conference interactions between participants by utilizing line art drawings to maintain visual communication between participants in low network bandwidth scenarios.
According to at least one embodiment, the present invention may improve network bandwidth utilization by converting video images of one or more participants to line drawings based on a determination that the network bandwidth of the scheduled web conference is less than the network bandwidth threshold for the one or more participants. According to at least one embodiment, the present invention may improve network bandwidth utilization by prioritizing participants of a web conference more likely to be active, have a speaking role, and/or be higher in an organization hierarchal structure. According to at least one embodiment, the present invention may improve energy consumption by optimizing a limited amount of central processing units (CPUs) on each of the plurality of participants' devices by only rendering vectors and/or color information in the form of a line art drawing for each of the one or more participants below a network bandwidth threshold. Referring to FIG. 1, an exemplary networked computer environment 100 in accordance with one embodiment is depicted. The networked computer environment 100 may include a computer 102 with a processor 104 and a data storage device 106 that is enabled to run a software program 108 and a web conference program 110a. The networked computer environment 100 may also include a server 112 that is enabled to run a web conference program 110b that may interact with a database 114 and a communication network 116. The networked computer environment 100 may include a plurality of computers 102 and servers 112, only one of which is shown. The communication network 116 may include various types of communication networks, such as a wide area network (WAN), local area network (LAN), a telecommunication network, a wireless network, a public switched network and/or a satellite network.
It should be appreciated that FIG. 1 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements. The client computer 102 may communicate with the server computer 112 via the communications network 116. The communications network 116 may include connections, such as wire, wireless communication links, or fiber optic cables. As will be discussed with reference to FIG. 3, server computer 112 may include internal components 902a and external components 904a, respectively, and client computer 102 may include internal components 902b and external components 904b, respectively. Server computer 112 may also operate in a cloud computing service model, such as Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS). Server 112 may also be located in a cloud computing deployment model, such as a private cloud, community cloud, public cloud, or hybrid cloud. Client computer 102 may be, for example, a mobile device, a telephone, a personal digital assistant, a netbook, a laptop computer, a tablet computer, a desktop computer, or any type of computing device capable of running a program, accessing a network, and accessing a database 114. According to various implementations of the present embodiment, the web conference program 110a, 110b may interact with a database 114 that may be embedded in various storage devices, such as, but not limited to, a computer/mobile device 102, a networked server 112, or a cloud storage service. According to the present embodiment, a user using a client computer 102 or a server computer 112 may use the web conferencing program 110a, 110b (respectively) to maintain visual communication in a web conference for participants with low bandwidth. The web conferencing method is explained in more detail below with respect to FIG. 2.
Referring now to FIG. 2, an operational flowchart illustrating the exemplary web conferencing process 200 used by the web conferencing program 110a and 110b according to at least one embodiment is depicted. At 202, the web conferencing program 110 receives data for an organization. The organization may be a business entity, non-profit organization, educational institution, or any other organization comprised of a plurality of participants (e.g., employees, volunteers, lecturers, students, presenters) which may utilize web conferencing in communicating between participants. Upon consent of a user, the web conferencing program 110 may receive and/or access data from one or more web conferencing scheduling tools utilized by the organization and/or organizational information. Consent may be obtained in real time or through a prior waiver or other process that informs a subject that their data may be captured by appropriate devices or that other sensitive personal data may be gathered through any means, and that this data may be analyzed by any of the many algorithms that may be implemented herein. Data accessed and/or received from the one or more web conferencing scheduling tools may include, but is not limited to, emails, calendars, prior web conference minutes, web conference transcripts, attendance records, web conference participation records, web conference subjects and/or descriptions, amongst other data which may be accessed from the one or more web conferencing scheduling tools by which participants of the organization may receive, accept, and/or schedule web conferences. The web conferencing program 110 may also receive and/or access organizational information, such as, but not limited to, internal documentation, an organizational directory, management chain, job and/or role descriptions, participant titles, amongst other organizational information.
The data received and/or accessed by the web conferencing program 110 may be stored in a knowledge corpus (e.g., database 114). Data received and/or accessed by the web conferencing program 110 shall not be construed to violate or encourage the violation of any local, state, federal, or international laws with respect to privacy protection. As will be explained in more detail below with respect to step 204, the data received and/or accessed by the web conferencing program 110 may be utilized in at least determining which of the participants of a web conference may be prioritized with respect to network bandwidth. The web conferencing program 110 may utilize a hierarchal analysis and/or one or more linguistic analysis techniques in analyzing at least the one or more web conferencing scheduling tools and/or organizational information. The one or more hierarchal analysis techniques utilized by the web conferencing program 110 may include, but are not limited to, the Galton-Watson branching process, the Lightweight Directory Access Protocol (LDAP), amongst other hierarchal analysis techniques. The web conferencing program 110 may utilize the one or more hierarchal analysis techniques in generating a hierarchal organizational structure comprising the plurality of participants of the organization. The organizational structure may be a directory information tree (DIT) illustrating a position of each of the plurality of participants relative to one another within the organization.
The one or more linguistic analysis tools utilized by the web conferencing program 110 may include, but are not limited to, a machine learning model with Natural Language Processing (NLP), Latent Dirichlet Allocation (LDA), speech-to-text, Hidden Markov models (HMM), N-grams, Speaker Diarization (SD), Semantic Textual Similarity (STS), Keyword Extraction, amongst other analysis techniques, such as those implemented in IBM Watson® (IBM Watson and all Watson-based trademarks are trademarks or registered trademarks of International Business Machines Corporation in the United States, and/or other countries), IBM Watson® Speech to Text, IBM Watson® Tone Analyzer, IBM Watson® Natural Language Understanding, IBM Watson® Natural Language Classifier, amongst other linguistic analysis techniques. As will be explained in more detail below, the one or more linguistic analysis tools may be utilized in analyzing the data accessed and/or received from the one or more web conferencing scheduling tools in understanding which of the plurality of participants may be most likely to be active in a web conference. At 204, the web conferencing program 110 receives data associated with a scheduled web conference. The web conferencing program 110 may receive the data associated with the scheduled web conference utilizing access to the one or more web conference scheduling tools utilized by the organization. The scheduled web conference may include data such as, but not limited to, a subject line, a meeting agenda, whether the meeting is a recurring or one-time meeting, the time of the meeting, scheduled presenters, participants invited, whether the scheduled web conference is mandatory or optional, amongst other data which may be included for the scheduled web conference. The web conferencing program 110 may utilize the one or more linguistic analysis techniques detailed above in analyzing the data associated with the scheduled web conference.
As will be explained in more detail below, the analysis of the data associated with the scheduled web conference may enable the web conferencing program 110 to determine which of the plurality of participants are most likely to be active in the scheduled web conference. The web conferencing program 110 may prioritize the participants most likely to be active with respect to network bandwidth. At 206, the web conferencing program 110 determines a network bandwidth threshold for each of the one or more participants of the scheduled web conference. The web conferencing program 110 may utilize at least one or more of the organizational structure, the data accessed and/or received from the one or more web conference scheduling tools, and/or the data associated with the scheduled web conference in determining the network bandwidth threshold for each of the participants of the scheduled web conference. The network bandwidth threshold may be determined for each of the plurality of participants of the scheduled web conference to prioritize the video image of participants more likely to be active and/or higher within the organizational structure. For example, the web conferencing program 110 may receive a scheduled web conference for an organization. The scheduled web conference may be a recurring event with 100 participants. Additionally, the data associated with the scheduled web conference may include 5 scheduled speakers. Utilizing the linguistic analysis tools discussed above, the web conferencing program 110 may determine that, based on an available network bandwidth, 80 participants will likely remain inactive and 15 participants will likely be active, based on web conference transcripts and web conference participation records of the previous web conferences for the current recurring event.
Accordingly, the web conferencing program110may determine three different thresholds for network bandwidth: one threshold for the 80 participants likely to remain inactive, a second threshold for the 15 participants likely to be active, and a third threshold for the 5 scheduled speakers. In this example, the web conferencing program110may prioritize the video image of the 5 scheduled speakers and/or the 15 participants likely to be active based on the available network bandwidth over the 80 participants likely to remain inactive during the scheduled web conference. In an embodiment, the web conferencing program110may dynamically adjust the network bandwidth threshold for each of the one or more participants of the scheduled web conference based on the number of participants joining and/or leaving the scheduled web conference. The web conferencing program110may also consider features to be utilized in the scheduled web conference in determining the network bandwidth threshold for each of the one or more participants. Features which may be utilized in the scheduled web conference may include, but are not limited to including, polling, screen sharing, downloadable content, file share upload, web links, question and/or answer features, amongst other features which may be utilized in the scheduled web conference. At208, the web conferencing program110monitors the scheduled web conference. The web conferencing program110may monitor at least the network bandwidth and/or a video image of one or more participants. The web conferencing program110may monitor bandwidth utilizing one or more bandwidth monitoring tools to determine at least an amount of data being transmitted and/or a latency indicating how fast data is being transmitted in the web conference. The web conferencing program110may monitor the network bandwidth of the web conference relative to the network bandwidth threshold of each of the one or more participants of the scheduled web conference.
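The three-tier example above can be made concrete with some illustrative arithmetic. The per-tier weights, the proportional-split rule, and the total bandwidth figure below are assumptions chosen for this sketch; the patent does not specify how the thresholds are computed:

```python
def tier_thresholds(available_kbps, groups, weights):
    """Split the available bandwidth across participant tiers.
    `groups` maps tier name -> participant count;
    `weights` maps tier name -> per-participant priority weight."""
    total = sum(groups[t] * weights[t] for t in groups)
    unit = available_kbps / total
    # Each participant's threshold is proportional to their tier weight.
    return {t: round(weights[t] * unit) for t in groups}

# 5 scheduled speakers, 15 likely-active, 80 likely-inactive participants.
groups = {"speaker": 5, "active": 15, "inactive": 80}
weights = {"speaker": 8.0, "active": 4.0, "inactive": 1.0}
print(tier_thresholds(100_000, groups, weights))
# -> {'speaker': 4444, 'active': 2222, 'inactive': 556}
```

Rebalancing as participants join or leave, as the embodiment describes, amounts to recomputing this split with updated counts.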
As will be explained in more detail below, the web conferencing program110may display one or more prompts to each of the one or more participants as the network bandwidth approaches the predetermined bandwidth threshold for those participants. The one or more prompts may be utilized by the web conferencing program110in at least managing the available network bandwidth of the scheduled web conference, enabling one or more participants to manage video image transmission, providing one or more recommendations to one or more participants, and/or receiving permission to transmit a line art drawing from one or more participants. As will be explained in more detail below, the one or more prompts may be displayed by the web conferencing program110to the one or more participants in a web conference user interface118. The web conferencing program110may only monitor the video image of participants for which the web conferencing program110receives permission. The web conferencing program110may receive permission from a participant in one or more ways, including, but not limited to including, prompts displayed to the participant in the web conference user interface118, permission granted by a participant within user preferences of the web conference user interface118, amongst other ways the web conferencing program110may receive permission to access the video image of a participant. The web conference user interface118may be displayed by the web conferencing program110in at least an internet browser, a dedicated software application, or as an integration with a third party software application, amongst other mediums. The web conferencing program110may never access any video image of any participant of the web conference if their video sharing is disabled.
The access of any video images and/or other data from the participants of the web conference may not be construed as to violate or encourage the violation of any local, state, federal, or international law with respect to privacy protection. In an embodiment, the web conferencing program110may display a prompt to one or more participants of the scheduled web conference. The prompt may include one or more options by which the one or more participants may decrease their bandwidth usage based on the network bandwidth associated with the scheduled web conference, as described above. The web conferencing program110may display the prompt to the one or more participants prior to the network bandwidth falling below the predetermined network bandwidth threshold for the one or more participants. Continuing with the example above, the web conferencing program110may display a prompt to the 80 participants likely to remain inactive as the network bandwidth approaches the predetermined bandwidth threshold for those participants. The prompt may enable the participants to enable video image access for the web conferencing program and/or disable all video image transmission for the web conference. As will be explained in more detail below, the web conferencing program110may disable the video image transmission for one or more participants when the network bandwidth is less than the predetermined bandwidth threshold of the one or more participants. The web conferencing program110may transmit a line art drawing of the one or more participants who enabled the video image access, the line art drawing being derived from the video image of the participant. In an embodiment, a participant may preemptively enable transmission of their line art drawing and/or line art drawings of other participants regardless of the network bandwidth for a scheduled web conference.
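A minimal sketch of the per-participant decision implied by this flow, assuming a simple 10% warning margin above each threshold (the margin value and the function name are assumptions for illustration):

```python
def participant_action(measured_kbps, threshold_kbps, margin=0.10):
    """Decide what happens for one participant as bandwidth degrades:
    - 'ok'      : bandwidth is comfortably above this participant's threshold
    - 'prompt'  : bandwidth is approaching the threshold, so display a prompt
                  (e.g., offering line art transmission or disabling video)
    - 'disable' : bandwidth fell below the threshold, so the video image
                  transmission for this participant is disabled
    """
    if measured_kbps < threshold_kbps:
        return "disable"
    if measured_kbps <= threshold_kbps * (1 + margin):
        return "prompt"
    return "ok"

print(participant_action(1200, 556))  # -> 'ok'
print(participant_action(600, 556))   # -> 'prompt' (within 10% of threshold)
print(participant_action(500, 556))   # -> 'disable' (below threshold)
```

Running this check per tier against the measured bandwidth reproduces the behavior above: the 80 likely-inactive participants, holding the lowest threshold, are prompted first as bandwidth degrades.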
The participant may preemptively enable the transmission of their line art drawing and/or line art drawings of other participants in the web conference user interface118. For example, a participant of an organization may be located in a remote location and may frequently experience connection issues in web conferences. This participant may choose to transmit their line art drawing and/or receive line art drawings of the other participants preemptively to avoid connection issues while still retaining the personability and/or visual communications of the web conference. In an embodiment, the web conferencing program110may provide one or more recommendations to one or more of the plurality of participants. The web conferencing program110may provide the one or more recommendations to the participants in the web conferencing user interface118. The one or more recommendations may include steps by which each participant may minimize their bandwidth consumption. The one or more recommendations may be provided to the plurality of participants prior to the scheduled web conference based on factors such as the number of participants attending and/or in real time as the scheduled web conference may be taking place. In this embodiment, the web conferencing program110may continue to learn which features may be utilized for each scheduled conference and may provide recommendations accordingly. For example, based on the data associated with the scheduled web conference, the web conferencing program110may determine the scheduled web conference may be utilizing polling, screen sharing, and an active chat. The scheduled web conference may also have 100 confirmed participants. Accordingly, the web conferencing program110may recommend one or more features be disabled to one or more participants and/or preemptively enable one or more participants to transmit a line art drawing. At210, the web conferencing program110determines whether to transmit a line art drawing.
The web conferencing program110may determine whether to transmit the line art drawing for one or more participants based on the network bandwidth of the scheduled web conference and/or based on a participant indication in response to the one or more prompts. The web conferencing program110may determine to transmit the line art drawing based on the network bandwidth of the scheduled web conference being below the network bandwidth threshold of at least one or more participants. The web conferencing program110may disable the video image transmission for each of the one or more participants whose network bandwidth threshold is not met and transmit the line art drawing for each of the one or more participants who enabled the transmission of the line art drawing. The web conferencing program110may generate the line art drawings for each of the one or more participants based on the video image of the one or more participants for which access was granted at step208. The web conferencing program110may utilize one or more edge detection methods in converting facial elements of the participant into vector lines and converting those vector lines into pencil lines. The one or more edge detection methods may include, but are not limited to including, Prewitt edge detection, Sobel edge detection, Laplacian edge detection, Canny edge detection, amongst other edge detection methods. The pencil lines may be filled in by the web conferencing program110using colors identified in the video image of the participant. The web conferencing program110may also utilize additional visual analysis tools such as, but not limited to, image mapping technology, object segmentation techniques, object-based image analysis, a Convolutional Neural Network (CNN), supervised/unsupervised image classification techniques, OpenCV™ (OpenCV and all OpenCV-based trademarks are trademarks or registered trademarks of Open Source Computer Vision Library in the United
States and/or other countries), ImageJ/FIJI, amongst other visual analysis tools. The web conferencing program110may utilize at least pre-trained classifiers in conjunction with the visual analysis tools in classifying objects segmented from the video image. The pre-trained classifiers may be stored in the knowledge corpus (e.g., database114). The web conferencing program110may additionally utilize image mapping technology in transmitting the line art drawing for each of the one or more participants to the other participants of the web conference. The image mapping technology may be utilized in transmitting the line art drawing in real time such that the participant's expressions, hand gestures, mannerisms, and/or other forms of non-verbal communication may be animated through the line art drawing of the participant for the other participants of the web conference. In an embodiment, the web conferencing program110may enable a participant to switch back to a transmission of their video image prior to participating and/or may automatically switch the line art drawing of the participant to the video image based on the actions of the participant, such as, but not limited to, unmuting, posting a message in the web conference chat, speaking, and/or other activities that may indicate a participant's desire to engage. In this embodiment, the web conferencing program110may dynamically convert participants to and from line art drawings based on activity such that the network bandwidth may be sustained above various threshold levels. It may be appreciated thatFIG.2provides only an illustration of one embodiment and does not imply any limitations with regard to how different embodiments may be implemented. Many modifications to the depicted embodiment(s) may be made based on design and implementation requirements. FIG.3is a block diagram900of internal and external components of computers depicted inFIG.1in accordance with an illustrative embodiment of the present invention.
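Returning to the line art generation at step210, the Sobel edge detection named above can be sketched in a few lines. This is a generic, self-contained illustration on a toy grayscale image (plain lists of 0-255 values), not the program's actual pipeline; the gradient-magnitude threshold is an assumption:

```python
def sobel_edges(img, thresh=200):
    """Apply the Sobel operator to a grayscale image (a list of rows of
    0-255 ints) and return a binary mask marking 'pencil line' pixels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            # Mark an edge where the gradient magnitude exceeds the threshold.
            edges[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 >= thresh else 0
    return edges

# A dark square on a light background: edges appear around the boundary.
img = [[255] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 0
mask = sobel_edges(img)
```

In practice a library implementation such as OpenCV's Sobel or Canny operators would be used, and the resulting lines could then be vectorized and filled with colors sampled from the video image, as the passage above describes.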
It should be appreciated thatFIG.3provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements. Data processing system902,904is representative of any electronic device capable of executing machine-readable program instructions. Data processing system902,904may be representative of a smart phone, a computer system, PDA, or other electronic devices. Examples of computing systems, environments, and/or configurations that may be represented by data processing system902,904include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, network PCs, minicomputer systems, and distributed cloud computing environments that include any of the above systems or devices. User client computer102and network server112may include respective sets of internal components902a, b and external components904a, b illustrated inFIG.3. Each of the sets of internal components902a, b includes one or more processors906, one or more computer-readable RAMs908and one or more computer-readable ROMs910on one or more buses912, and one or more operating systems914and one or more computer-readable tangible storage devices916. The one or more operating systems914, the software program108, and the web conferencing program110ain client computer102, and the web conferencing program110bin network server112, may be stored on one or more computer-readable tangible storage devices916for execution by one or more processors906via one or more RAMs908(which typically include cache memory). In the embodiment illustrated inFIG.3, each of the computer-readable tangible storage devices916is a magnetic disk storage device of an internal hard drive.
Alternatively, each of the computer-readable tangible storage devices916is a semiconductor storage device such as ROM910, EPROM, flash memory or any other computer-readable tangible storage device that can store a computer program and digital information. Each set of internal components902a, b also includes a R/W drive or interface918to read from and write to one or more portable computer-readable tangible storage devices920such as a CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disk or semiconductor storage device. A software program, such as the software program108and the web conferencing program110aand110b, can be stored on one or more of the respective portable computer-readable tangible storage devices920, read via the respective R/W drive or interface918and loaded into the respective hard drive916. Each set of internal components902a, b may also include network adapters (or switch port cards) or interfaces922such as TCP/IP adapter cards, wireless wi-fi interface cards, or 3G or 4G wireless interface cards or other wired or wireless communication links. The software program108and the web conferencing program110ain client computer102and the web conferencing program110bin network server computer112can be downloaded from an external computer (e.g., server) via a network (for example, the Internet, a local area network, or other wide area network) and respective network adapters or interfaces922. From the network adapters (or switch port adaptors) or interfaces922, the software program108and the web conferencing program110ain client computer102and the web conferencing program110bin network server computer112are loaded into the respective hard drive916. The network may comprise copper wires, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. Each of the sets of external components904a, b can include a computer display monitor924, a keyboard926, and a computer mouse928.
External components904a, b can also include touch screens, virtual keyboards, touch pads, pointing devices, and other human interface devices. Each of the sets of internal components902a, b also includes device drivers930to interface to computer display monitor924, keyboard926and computer mouse928. The device drivers930, R/W drive or interface918and network adapter or interface922comprise hardware and software (stored in storage device916and/or ROM910). It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:

On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.

Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.

Service Models are as follows:

Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail).
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models are as follows:

Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.

Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
It may be managed by the organizations or a third party and may exist on-premises or off-premises.

Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).

A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.4, illustrative cloud computing environment1000is depicted. As shown, cloud computing environment1000comprises one or more cloud computing nodes100with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone1000A, desktop computer1000B, laptop computer1000C, and/or automobile computer system1000N may communicate. Nodes100may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment1000to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices1000A-N shown inFIG.4are intended to be illustrative only and that computing nodes100and cloud computing environment1000can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser).
Referring now toFIG.5, a set of functional abstraction layers1100provided by cloud computing environment1000is shown. It should be understood in advance that the components, layers, and functions shown inFIG.5are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer1102includes hardware and software components. Examples of hardware components include: mainframes1104; RISC (Reduced Instruction Set Computer) architecture based servers1106; servers1108; blade servers1110; storage devices1112; and networks and networking components1114. In some embodiments, software components include network application server software1116and database software1118. Virtualization layer1120provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers1122; virtual storage1124; virtual networks1126, including virtual private networks; virtual applications and operating systems1128; and virtual clients1130. In one example, management layer1132may provide the functions described below. Resource provisioning1134provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing1136provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal1138provides access to the cloud computing environment for consumers and system administrators. Service level management1140provides cloud computing resource allocation and management such that required service levels are met. 
Service Level Agreement (SLA) planning and fulfillment1142provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer1144provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation1146; software development and lifecycle management1148; virtual classroom education delivery1150; data analytics processing1152; transaction processing1154; and web conferencing program1156. A web conferencing program110a,110bprovides a way to include determining a network bandwidth threshold for each of the plurality of participants of the scheduled web conference based on at least the data received for the organization and data associated with the scheduled web conference. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The present disclosure shall not be construed as to violate or encourage the violation of any local, state, federal, or international law with respect to privacy protection.
11943277

DESCRIPTION OF EMBODIMENTS An embodiment of a system and the like according to the present invention will now be described in detail with reference to the accompanying drawings. In the following embodiments, the term “prediction processing” may be used. As will be apparent to those skilled in the art, the term “prediction processing” refers to forward arithmetic processing of a trained model and can, therefore, for example, be replaced with terms such as simply conversion processing or inference processing. 1. First Embodiment <1.1 System Configuration> First, the configuration of a system10of this embodiment will be described with reference toFIGS.1to5. FIG.1is an overall configuration diagram of the system10according to this embodiment. As is clear from the drawing, a server1having a communication function and a plurality (N) of robots3having a communication function constitute a client-server system, and are mutually connected via a Wide Area Network (WAN) and Local Area Network (LAN). Note that the WAN is, for example, the Internet, and the LAN is installed, for example, in a factory. FIG.2is a diagram showing the hardware configuration of the server1. As is clear from the drawing, the server1includes a control unit11, a storage unit12, an I/O unit13, a communication unit14, a display unit15, and an input unit16, which are connected to each other via a system bus or the like. The control unit11consists of a processor such as a CPU or GPU and performs execution processing for various programs. The storage unit12is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like. The I/O unit13performs input and output or the like with external devices. The communication unit14is, for example, a communication unit that communicates based on a prescribed communication standard, and communicates with the robots3that are client devices in this embodiment.
The display unit15is connected to a display or the like to present a prescribed display. The input unit16receives input from the administrator through, for example, a keyboard or a mouse. FIG.3is a diagram showing the hardware configuration of a robot3. The robot3is, for example, an industrial robot located in a factory or the like. As is clear from the drawing, the robot3includes a control unit31, a storage unit32, an I/O unit33, a communication unit34, a display unit35, a detection unit36, and a drive unit37, which are connected to each other via a system bus or the like. The control unit31consists of a processor such as a CPU or GPU, and performs execution processing for various programs. The storage unit32is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like. The I/O unit33performs input and output or the like with external devices. The communication unit34is, for example, a communication unit that communicates based on a prescribed communication standard and, in this embodiment, communicates with the server1. The display unit35is connected to a display or the like to present a prescribed display. The detection unit36is connected to a sensor and detects sensor information as digital data. The drive unit37drives a connected motor or the like (not shown), in response to a command from the control unit. FIG.4is a functional block diagram of the control unit31of a robot3. As is clear from the drawing, the control unit31includes a sensor information acquisition unit311, a prediction processing unit312, an encryption processing unit319, a hashing processing unit313, an information acquisition necessity determination unit314, a cache information acquisition processing unit315, a server information acquisition processing unit316, a decryption unit317, and a drive command unit318. The sensor information acquisition unit311acquires the sensor information acquired by the detection unit36. 
The prediction processing unit312reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The encryption processing unit319performs processing for encrypting the input data with a public key or the like. The hashing processing unit313generates corresponding hash values by hashing input information, that is, it generates irregular fixed-length values. The information acquisition necessity determination unit314determines whether or not the data corresponding to the prescribed data is already stored in a prescribed table. When the information acquisition necessity determination unit314determines that the data corresponding to the prescribed data exists, the cache information acquisition processing unit315acquires the corresponding data. The server information acquisition processing unit316transmits prescribed data to the server1and receives the data corresponding to that data. The decryption unit317performs decryption processing, with a private key, of the data encrypted with a public key or the like. The drive command unit318drives, for example, a motor according to the output data. FIG.5is a functional block diagram related to the control unit11of the server1. As is clear from the drawing, the control unit11includes an input data receiving unit111, a decryption processing unit112, a prediction processing unit113, an encryption processing unit114, and a data transmitting unit115. The input data receiving unit111receives input data from the robots3. The decryption processing unit112decrypts, with a private key, for example, the data encrypted with a public key or the like.
The prediction processing unit113reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The encryption processing unit114encrypts the input data with a public key or the like. The data transmitting unit115performs processing of transmitting transmission-target data to the robots3. <1.2 System Operation> The operation of the system10will now be described with reference toFIGS.6to9. The prediction processing operation in the robot3in this embodiment will be described with reference toFIGS.6and7. In this embodiment, the robot3performs prescribed prediction processing based on the acquired sensor information to drive an operating unit such as a motor. When the prediction processing is started in the robot3, processing of acquiring sensor information (I) via the sensor information acquisition unit311is performed (S1). Subsequently, the sensor information (I) is input to the prediction processing unit312to perform prediction processing from the input stage to the first middle layer, thereby generating input-side middle layer data (X1) (S3). The generated input-side middle layer data (X1) is encrypted by the encryption processing unit319with a public key, whereby encrypted input-side middle layer data (X1′) is generated (S5). The encrypted input-side middle layer data (X1′) is then hashed by the hashing processing unit313to generate a hash value (Y1) (S7). The information acquisition necessity determination unit314then reads a hash table, and determines whether or not encrypted output-side middle layer data (Z1′) corresponding to the generated hash value (Y1) exists in the hash table (S9).
The output-side middle layer data (Z1) represents, as will be explained later, the second middle layer output closer to the output layer than the first middle layer, and the encrypted output-side middle layer data (Z1′) represents the output of the second middle layer that was encrypted with the public key in the server1. If, according to the determination (S9), the encrypted output-side middle layer data (Z1′) corresponding to the hash value (Y1) exists in the hash table (S11YES), the cache information acquisition processing unit315performs processing of acquiring the encrypted output-side middle layer data (Z1′) as cache information (S13). In contrast, if, according to the determination, the encrypted output-side middle layer data (Z1′) corresponding to the hash value (Y1) does not exist in the hash table (S11NO), the server information acquisition processing unit316transmits the encrypted input-side middle layer data (X1′) to the server1(S15), and then goes into a prescribed waiting mode (S17NO). Upon reception of the encrypted output-side middle layer data (Z1′) from the server1in this waiting mode, the waiting mode is cleared (S17YES), and processing of associating the received encrypted output-side middle layer data (Z1′) with the hash value (Y1) and saving it is performed (S19). The operation of the server1during this period will be explained in detail inFIG.8. The decryption unit317generates the output-side middle layer data (Z1) by decrypting the acquired encrypted output-side middle layer data (Z1′) with a private key (S21). After that, the prediction processing unit312performs prediction processing based on the generated output-side middle layer data (Z1) from the second middle layer to the output layer, thereby generating a final output (O) (S23). The drive command unit318then issues a drive command to a drive unit, such as a motor, based on the final output (O) (S25). 
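The robot-side flow of steps S1 to S25 can be sketched schematically as follows. All the callables are hypothetical stand-ins for the corresponding processing units (312, 319, 317, 316), and the byte-prefix "encryption" is a placeholder, not the public-key processing of the embodiment.

```python
import hashlib

cache = {}  # hash value (Y1) -> encrypted output-side middle layer data (Z1')

def robot_step(sensor_info, predict_head, predict_tail, encrypt, decrypt,
               query_server):
    # One pass of the robot-side flow (S1-S25).
    x1 = predict_head(sensor_info)            # input stage -> first middle layer (S3)
    x1_enc = encrypt(x1)                      # S5
    y1 = hashlib.sha256(x1_enc).hexdigest()   # S7
    if y1 in cache:                           # S9/S11 YES
        z1_enc = cache[y1]                    # S13: cache hit, no server inquiry
    else:
        z1_enc = query_server(x1_enc)         # S15-S17: round-trip to the server
        cache[y1] = z1_enc                    # S19: save for future inputs
    z1 = decrypt(z1_enc)                      # S21
    return predict_tail(z1)                   # second middle layer -> output (S23)

# Toy run: the second identical sensor reading is served from the cache,
# so the simulated server is contacted only once.
server_calls = []
def fake_server(x_enc):
    server_calls.append(x_enc)
    return b"Z:" + x_enc

for _ in range(2):
    out = robot_step(b"sensor", lambda i: i, lambda z: z,
                     lambda x: b"E:" + x, lambda z: z, fake_server)
```

The toy run illustrates the cost argument made below: repeated inputs never leave the client once their hash value is in the table.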
Upon completion of this drive processing, sensor information acquisition processing is performed again (S1), and a series of processing (S1to S25) is then repeated. The prediction processing operation in the server1will now be explained with reference toFIG.8. When the prediction processing is started in the server1, the server1goes into a prescribed waiting mode through the input data receiving unit111(S31NO). Upon reception of the encrypted input-side middle layer data (X1′) from the robot3in this state, the waiting mode is cleared (S31YES), and the decryption processing unit112performs processing to decrypt the received encrypted input-side middle layer data (X1′) with the private key, thereby generating input-side middle layer data (X1) (S33). The prediction processing unit113then performs prediction processing from the first middle layer to the second middle layer by using the input-side middle layer data (X1) as an input, thereby generating output-side middle layer data (Z1) (S35). The encryption processing unit114encrypts the output-side middle layer data (Z1) with a public key to generate the encrypted output-side middle layer data (Z1′) (S37). The data transmitting unit115then transmits the encrypted output-side middle layer data (Z1′) to the robot3(S39). Upon completion of this transmission processing, the server1returns again to the reception waiting mode (S31), and a series of processing (S31to S39) is then repeated. FIG.9is a conceptual diagram of the prediction processing implemented with the system10according to this embodiment. In the drawing, the upper part is a conceptual diagram of the prediction processing performed in the robot3, and the lower part is a conceptual diagram of the prediction processing performed in the server1. The left side of the drawing shows the input side, and the right side shows the output side.
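The body of the server-side loop (S33 to S37) can be sketched as follows; decrypt, predict_middle, and encrypt are hypothetical stand-ins for units 112, 113, and 114, and the reversible byte-prefix "encryption" is a placeholder.

```python
def server_handle(x1_enc, decrypt, predict_middle, encrypt):
    # Body of the server-side loop (S33-S37): the server never sees the
    # raw sensor data, only the (encrypted) input-side middle layer output.
    x1 = decrypt(x1_enc)             # S33: recover the input-side data (X1)
    z1 = predict_middle(x1)          # S35: first -> second middle layer (Z1)
    return encrypt(z1)               # S37: (Z1'), transmitted back in S39

# Toy run with reversible placeholder "encryption".
reply = server_handle(b"E:x1",
                      lambda m: m[2:],                # strip the "E:" prefix
                      lambda x: b"mid(" + x + b")",   # toy middle layers
                      lambda z: b"E:" + z)
```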
As is clear from the drawing, when the sensor information (I) is input to the robot3, the prediction processing unit312performs prediction processing from the input stage to the first middle layer, thereby generating input-side middle layer data (X1). The input-side middle layer data (X1) is then encrypted and transmitted to the server1, and is decrypted in the server1. In the server1, the prediction processing unit113performs prediction processing from the first middle layer to the second middle layer by using the input-side middle layer data (X1) as an input, thereby generating output-side middle layer data (Z1). The output-side middle layer data (Z1) is then encrypted and transmitted to the robot3, and is decrypted in the robot3. In the robot3, the prediction processing unit312performs prediction processing between the second middle layer and the output layer to generate the final output (O). With such a configuration, in performing prediction processing using machine learning, only the abstracted intermediate output is transmitted and received between the client device (robot3) and the server, with no need for transmitting and receiving raw data. Therefore, the user of the client device can ensure protection of information such as personal information and trade secrets. Besides, the provider of the prediction model does not need to provide the entire prediction model to the client device side. Therefore, it is possible to reduce the risk of leakage, and the like, of the algorithm or the program implementing the algorithm. In other words, it is possible to provide a secure prediction system capable of satisfying the requirements of both the user side and the provider side of the prediction model. Besides, since an inquiry to the server for the data stored in the hash table is unnecessary, the cost of the use of the server can be reduced, and the prediction processing can be speeded up. 
Also, if the system is continuously used so that adequate information is accumulated in the hash table, the client device can be operated almost autonomously. Moreover, the intermediate outputs communicated between the client device and the server are encrypted, which contributes to excellent data security. In addition, hashing processing is performed in the aforementioned embodiment. This improves data security and, by speeding up the search processing in the hash table, also speeds up the determination processing. 2. Second Embodiment In this embodiment, servers are arranged in multiple stages in a system20. <2.1 System Configuration> The configuration of the system20according to this embodiment will be described with reference toFIGS.10to12. In this embodiment, servers5and6are configured in multiple stages. FIG.10is an overall configuration diagram of the system20according to this embodiment. As is clear from the drawing, the system20according to this embodiment is the same as the first embodiment in that the server5and multiple robots7(7-1to7-N) as client devices are connected by communication via a network. However, this embodiment differs from the first embodiment in that an intermediate server6is interposed between the robots7and the final server5. The intermediate server6is operated by, for example, a machine learning technology vendor (AI vendor). FIG.11is a diagram showing the hardware configuration of the intermediate server6interposed between the robots7and the final server5. As is clear from the drawing, the intermediate server6includes a control unit61, a storage unit62, an I/O unit63, a communication unit64, a display unit65, and an input unit66, which are connected to each other via a system bus or the like. The control unit61consists of a processor such as a CPU or GPU and performs execution processing for various programs.
The storage unit62is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like. The I/O unit63performs input and output or the like with external devices. The communication unit64is, for example, a communication unit that communicates based on a prescribed communication standard, and communicates with the final server5and the robots7that are client devices. The display unit65is connected to a display or the like to present a prescribed display. The input unit66receives inputs from the administrator through, for example, a keyboard or a mouse. FIG.12is a functional block diagram related to the control unit61of the intermediate server6. As is clear from the drawing, the control unit61includes an input data receiving unit611, a decryption processing unit612, a prediction processing unit613, an encryption processing unit614, a hashing processing unit615, an information acquisition necessity determination unit616, a cache information acquisition processing unit617, a server information acquisition processing unit618, and a data transmitting unit619. The input data receiving unit611receives input data from the robots7or the final server5. The decryption processing unit612decrypts, with a private key, for example, the data encrypted by a public key or the like. The prediction processing unit613reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The encryption processing unit614encrypts the input data with a public key or the like. The hashing processing unit615generates corresponding hash values by hashing input information, that is, it generates irregular fixed-length values.
The information acquisition necessity determination unit616determines whether or not the data corresponding to the prescribed data is already stored in a prescribed table. When the information acquisition necessity determination unit616determines that the data corresponding to the prescribed data exists, the cache information acquisition processing unit617acquires the corresponding data. The server information acquisition processing unit618transmits prescribed data to the final server5and receives the data corresponding to that data. The data transmitting unit619performs processing of transmitting transmission-target data to the robots7or the final server5. Since the hardware configurations of the final server5and the robots7are substantially the same as the configurations of the server1and the robots3of the first embodiment, their description will be omitted here. <2.2 System Operation> The operation of the system20according to this embodiment will now be described with reference toFIGS.13to16. The operation of the robot7is substantially the same as that in the first embodiment. In other words, as shown inFIGS.6and7, if, according to the determination (S9) in the information acquisition necessity determination processing unit314, the encrypted output-side middle layer data (Z1′) corresponding to the hash value (Y1) does not exist in the hash table (S11NO), the server information acquisition processing unit316transmits the first encrypted input-side middle layer data (X1′) to the intermediate server6(S15) and then goes into a prescribed waiting mode (S17NO). Upon reception of the first encrypted output-side middle layer data (Z1′) from the intermediate server6in this waiting mode, the waiting mode is cleared (S17YES), and processing of associating the received first encrypted output-side middle layer data (Z1′) with the hash value (Y1) and saving it is performed (S19). FIGS.13and14are flowcharts related to the prediction processing operation in the intermediate server6.
When the prediction processing starts, the intermediate server6goes into a prescribed waiting mode with the input data receiving unit611(S51NO). After that, upon reception of the first encrypted input-side middle layer data (X1′) from the robot7(S51YES), the waiting mode is cleared. After that, the decryption processing unit612performs decryption processing of the received first encrypted input-side middle layer data (X1′) with a private key, and generates the first input-side middle layer data (X1) (S53). The prediction processing unit613performs prediction processing from the first middle layer to the third middle layer based on the decrypted first input-side middle layer data (X1), thereby generating the second input-side middle layer data (X2) (S55). The encryption processing unit614encrypts the second input-side middle layer data (X2) with a public key to generate the second encrypted input-side middle layer data (X2′) (S57). The hashing processing unit615performs hashing processing of the second encrypted input-side middle layer data (X2′) and generates the second hash value (Y2) (S59). The information acquisition necessity determination unit616then reads the second hash table stored in the intermediate server6, and determines whether or not the second encrypted output-side middle layer data (Z2′) corresponding to the generated second hash value (Y2) exists in the second hash table (S61). If, according to this determination (S61), the second encrypted output-side middle layer data (Z2′) corresponding to the second hash value (Y2) exists in the second hash table (S63YES), the cache information acquisition processing unit617performs processing of acquiring the second encrypted output-side middle layer data (Z2′) as cache information (S65).
In contrast, if, according to the determination, the second encrypted output-side middle layer data (Z2′) corresponding to the second hash value (Y2) does not exist in the second hash table (S63NO), the server information acquisition processing unit618transmits the second encrypted input-side middle layer data (X2′) to the final server5(S67) and then goes into a prescribed waiting mode (S69NO). Upon reception of the second encrypted output-side middle layer data (Z2′) from the final server5in this waiting mode, the waiting mode is cleared (S69YES), and processing of associating the received second encrypted output-side middle layer data (Z2′) with the second hash value (Y2) and saving it is performed (S71). The operation of the final server5during this period will be explained later inFIG.15. The decryption processing unit612generates the second output-side middle layer data (Z2) by decrypting the acquired second encrypted output-side middle layer data (Z2′) with a private key (S73). The prediction processing unit613then performs prediction processing from the fourth middle layer to the second middle layer based on the generated second output-side middle layer data (Z2), thereby generating the first output-side middle layer data (Z1) (S75). The encryption processing unit614performs encryption processing of the first output-side middle layer data (Z1) to generate the first encrypted output-side middle layer data (Z1′) (S77). The data transmitting unit619then transmits the first encrypted output-side middle layer data (Z1′) to the robot7(S79). Upon completion of this transmission processing, the intermediate server6returns again to the reception waiting mode (S51NO), and a series of processing (S51to S79) is then repeated. FIG.15is a flowchart related to the prediction processing operation in the final server5. When the prediction processing is started, the final server5goes into a prescribed waiting mode with the input data receiving unit111(S81NO).
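The intermediate-server flow (S51 to S79) mirrors the robot-side flow but maintains its own second hash table and falls back to the final server on a miss. A hedged sketch, with all callables as hypothetical stand-ins for units 612 to 618 and a placeholder "encryption":

```python
import hashlib

second_cache = {}  # second hash value (Y2) -> encrypted data (Z2')

def intermediate_step(x1_enc, decrypt, encrypt, head, tail, query_final):
    # One pass of the intermediate-server flow (S51-S79).
    x1 = decrypt(x1_enc)                      # S53
    x2 = head(x1)                             # first -> third middle layer (S55)
    x2_enc = encrypt(x2)                      # S57
    y2 = hashlib.sha256(x2_enc).hexdigest()   # S59
    if y2 in second_cache:                    # S61/S63 YES
        z2_enc = second_cache[y2]             # S65: served from the cache
    else:
        z2_enc = query_final(x2_enc)          # S67-S69: ask the final server
        second_cache[y2] = z2_enc             # S71
    z2 = decrypt(z2_enc)                      # S73
    z1 = tail(z2)                             # fourth -> second middle layer (S75)
    return encrypt(z1)                        # S77, sent to the robot (S79)

# Toy run: the repeated request is answered from the second hash table,
# so the simulated final server is contacted only once.
final_calls = []
def fake_final(x2_enc):
    final_calls.append(x2_enc)
    return b"Z2:" + x2_enc

for _ in range(2):
    z1_enc = intermediate_step(b"X1", lambda m: m, lambda m: m,
                               lambda x: b"h(" + x + b")",
                               lambda z: b"t(" + z + b")", fake_final)
```

This illustrates why, as noted below, adding stages does not necessarily slow the system: each stage short-circuits with its own cache.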
Upon reception of the second encrypted input-side middle layer data (X2′) from the intermediate server6in this state, the waiting mode is cleared (S81YES). The decryption processing unit112performs decryption processing of the received second encrypted input-side middle layer data (X2′) with a private key, and generates the second input-side middle layer data (X2) (S83). The prediction processing unit113then performs prediction processing from the third middle layer to the fourth middle layer by using this second input-side middle layer data (X2) as an input, thereby generating the second output-side middle layer data (Z2) (S85). The encryption processing unit114encrypts this second output-side middle layer data (Z2) with a public key to generate the second encrypted output-side middle layer data (Z2′) (S87). The data transmitting unit115then transmits the second encrypted output-side middle layer data (Z2′) to the intermediate server6(S89). Upon completion of this transmission processing, the final server5returns again to the reception waiting mode (S81), and a series of processing (S81to S89) is then repeated. FIG.16is a conceptual diagram of the prediction processing implemented with the system20according to this embodiment. In the drawing, the upper part is a conceptual diagram of the prediction processing performed in the robot7, the middle part is a conceptual diagram of the prediction processing performed in the intermediate server6, and the lower part is a conceptual diagram of the prediction processing performed in the final server5. The left side of the drawing shows the input side, and the right side shows the output side. As is clear from the drawing, when the sensor information (I) is input to the robot7, the prediction processing unit312performs prediction processing from the input stage to the first middle layer, thereby generating the first input-side middle layer data (X1).
The first input-side middle layer data (X1) is then encrypted and transmitted to the intermediate server6and is decrypted in the intermediate server6. In the intermediate server6, the prediction processing unit613performs prediction processing between the first middle layer and the third middle layer to generate the second input-side middle layer data (X2). The second input-side middle layer data (X2) is then encrypted and transmitted to the final server5and is decrypted in the final server5. In the final server5, the prediction processing unit113performs prediction processing from the third middle layer to the fourth middle layer by using the second input-side middle layer data (X2) as an input, thereby generating the second output-side middle layer data (Z2). The second output-side middle layer data (Z2) is then encrypted and transmitted to the intermediate server6and is decrypted in the intermediate server6. In the intermediate server6, the prediction processing unit613performs prediction processing between the fourth middle layer and the second middle layer to generate the first output-side middle layer data (Z1). The first output-side middle layer data (Z1) is then encrypted and transmitted to the robot7and is decrypted in the robot7. In the robot7, the prediction processing unit312performs prediction processing between the second middle layer and the output layer to generate the final output (O). With such a configuration, which has servers provided in multiple stages, the processing load in each device in the client device and each server can be reduced, and at the same time, the economies of scale given by providing multiple stages can be expected to enhance the prediction performance of the client device. Besides, even if multiple stages are provided in this way, the processing speed is unlikely to drop because each server also performs prediction processing based on the cache information. 
Since the prediction models are distributed, the safety of the system, for example, is expected to be further improved, and management of each of the servers can be shared by multiple administrators. 3. Third Embodiment In this embodiment, a system30performs learning processing in addition to prediction processing. <3.1 System Configuration> The configuration of the system30according to this embodiment is substantially the same as that shown in the second embodiment. However, it differs in that each control unit of the robots7, the intermediate server6, and the final server5has a functional block for learning processing besides prediction processing. FIG.17is a functional block diagram of the control unit710of a robot7. In the drawing, the features of the prediction processing unit7101are substantially the same as the configuration shown inFIG.4, and thus its detailed description will be omitted. Note that the prediction processing unit7101is different in that it further includes a cache table addition processing unit7109. The cache table addition processing unit7109performs the decryption processing (S21) shown inFIG.7to generate the output-side middle layer data (Z1) and then performs processing of additionally storing the output-side middle layer data (Z1) in the cache table together with the corresponding input-side middle layer data (X1). This cache table is used for the learning processing described later. The control unit710further includes a learning processing unit7102. The learning processing unit7102includes a data reading unit7115, an approximation function generation processing unit7116, a prediction processing unit7117, an error backpropagation processing unit7118, a parameter update processing unit7119, an encryption processing unit7120, and a data transmission processing unit7121. The data reading unit7115performs processing of reading various data stored in the robot7.
The approximation function generation processing unit7116generates an approximation function by a method, which will be described later, based on a cache table related to a prescribed input and output correspondence relationship. The prediction processing unit7117reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The error backpropagation processing unit7118performs processing of propagating an error obtained by comparing the output of the prediction model with the teacher data, from the output side to the input side of the model (backpropagation). The parameter update processing unit7119performs processing of updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data. The encryption processing unit7120performs processing of encrypting prescribed target data with a public key or the like. The data transmission processing unit7121performs processing of transmitting prescribed target data to the intermediate server6. FIG.18is a functional block diagram of the control unit610of the intermediate server6. In the drawing, the features of the prediction processing unit6101are substantially the same as the configuration shown inFIG.12, and its detailed description will therefore be omitted. Note that the prediction processing unit6101is different in that it further includes a cache table addition processing unit6112. The cache table addition processing unit6112performs the decryption processing (S73) shown inFIG.14to generate the second output-side middle layer data (Z2), and then performs processing to additionally store the second output-side middle layer data (Z2) in the cache table together with the corresponding second input-side middle layer data (X2).
This cache table is used for the learning processing described later. The control unit610further includes a learning processing unit6102. The learning processing unit6102includes an input data receiving unit6123, a data reading unit6115, a sampling processing unit6116, an approximation function generation processing unit6117, a prediction processing unit6118, an error backpropagation processing unit6119, a parameter update processing unit6120, an encryption processing unit6121, and a data transmission processing unit6122. The input data receiving unit6123performs processing of receiving, decrypting, and storing various data such as a first cache table received from the robot7. The data reading unit6115performs processing of reading various data stored in the intermediate server6. The sampling processing unit6116performs processing of selecting a data set to be a learning target from the cache table. The approximation function generation processing unit6117generates an approximation function by a method, which will be described later, based on a cache table related to a prescribed input and output correspondence relationship. The prediction processing unit6118reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The error backpropagation processing unit6119performs processing of propagating an error obtained by comparing the output of the prediction model with the teacher data, from the output side to the input side of the model (Backpropagation). The parameter update processing unit6120performs processing of updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data. The encryption processing unit6121performs processing of encrypting prescribed target data with a public key or the like. 
The data transmission processing unit6122performs processing of transmitting prescribed target data to the robot7or final server5. FIG.19is a functional block diagram of the control unit510of the final server5. In the drawing, the features of the prediction processing unit5101are substantially the same as the configuration shown inFIG.5, and its detailed description will therefore be omitted. The control unit510further includes a learning processing unit5102. The learning processing unit5102includes an input data receiving unit5115, a data reading unit5110, a sampling processing unit5111, a prediction processing unit5112, an error backpropagation processing unit5113, and a parameter update processing unit5114. The input data receiving unit5115performs processing of receiving, decrypting, and storing various data such as a second cache table received from the intermediate server6. The data reading unit5110performs processing of reading various data stored in the final server5. The sampling processing unit5111performs processing of selecting a data set to be a learning target from the second cache table. The prediction processing unit5112reads basic information, weight information, and the like on, for example, the configuration of a prediction model (trained model) generated by supervised learning of a neural network, and generates a prescribed prediction output based on the input data. The error backpropagation processing unit5113performs processing of propagating an error obtained by comparing the output of the prediction model with the teacher data, from the output side to the input side of the model (Backpropagation). The parameter update processing unit5114performs processing of updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data. <3.2 System Operation> The operation of the system30will now be described with reference toFIGS.20to26. 
Note that the prediction processing operation is substantially the same as in the second embodiment, and its description will therefore be omitted here. FIG.20is a flowchart of the learning processing operation in a robot7. As is clear from the drawing, when the learning processing operation starts, the data reading unit7115reads an input and output pair (X0, Z0), corresponding to the teacher data, from the input and output data table stored in the robot7(S101). Upon this reading, the prediction processing unit7117performs prediction processing in the section extending from the input layer of the prediction model to the first middle layer based on the input data (X0), thereby generating the input-side middle layer data (X1-s1) (S103). Meanwhile, concurrently with these steps (S101to S103), the data reading unit7115performs processing of reading the first cache table including the correspondence between the first input-side middle layer data (X1) and the first output-side middle layer data (Z1) accumulated in the robot7during prediction processing (S105). After reading of the first cache table, processing of generating an approximation function is performed based on the first cache table (S107).

The processing of generating the approximation function will now be explained in detail. The data conversion (cache conversion) that generates the data (Z1) of the first output-side middle layer (temporarily referred to as the Z layer for convenience of explanation), using the data (X1) of the first input-side middle layer (temporarily referred to as the X layer) as an input, can be expressed as follows.

z⃗ = s(x⃗)  (1)

Here, the vector representing the data of the X layer, composed of n neurons, can be expressed as follows.

x⃗ = (x_1, ..., x_m, ..., x_n)  (2)

Similarly, the vector representing the data of the Z layer, composed of N neurons, can be expressed as follows.

z⃗ = (z_1, ..., z_k, ..., z_N)  (3)

The k-th value z_k of the Z layer, which can be calculated independently of the other N−1 values from formula (1), can be expressed as follows.

z_k = s_k(x⃗)  (4)

At this time, due to the nature of the cache conversion, the conversion function s_k cannot yield the k-th value of the corresponding Z layer if the given combination of component values of the X-layer data vector does not exist in the first cache table. Therefore, an approximation is made by formula (5), which is a linear equation, as follows.

z_k = (Σ_{m=1}^{n} w_{km} x_m) + b  (5)

Note that formula (5) has the following (n+1) unknowns.

w_{k1}, ..., w_{kn}, b  (6)

Therefore, in order to obtain the solution of formula (5), (n+1) pieces of data should be extracted from formula (4) and the following simultaneous linear equations in (n+1) unknowns should be solved.

z_{k,1} = w_{k1}x_{1,1} + ... + w_{km}x_{m,1} + ... + w_{kn}x_{n,1} + b
...
z_{k,j} = w_{k1}x_{1,j} + ... + w_{km}x_{m,j} + ... + w_{kn}x_{n,j} + b
...
z_{k,n+1} = w_{k1}x_{1,n+1} + ... + w_{km}x_{m,n+1} + ... + w_{kn}x_{n,n+1} + b  (7)

For extraction of the (n+1) pieces of data, cache data near the target point for which an approximate value is desired is preferably selected, because fluctuations in the approximation error can be suppressed by extracting as much cache data as possible near that target point. FIG.21shows a conceptual diagram related to the extraction of such cache data. Here, the following definition can be made.

β_k = (z_{k,1}, ..., z_{k,p}, ..., z_{k,n+1})  (8)

Then, formula (7) can be simply expressed as follows.

A =
( x_{1,1} ... x_{m,1} ... x_{n,1} 1 )
( ⋮ ⋱ ⋮ ⋱ ⋮ ⋮ )
( x_{1,j} ... x_{m,j} ... x_{n,j} 1 )
( ⋮ ⋱ ⋮ ⋱ ⋮ ⋮ )
( x_{1,n+1} ... x_{m,n+1} ... x_{n,n+1} 1 )

v_k = (w_{k1}, ..., w_{km}, ..., w_{kn}, b)

A v_k = β_k  (9)

If A, which is a square matrix of order (n+1), is a regular matrix, formula (9) uniquely has the following solution v_k.

v_k = v̂_k = (ŵ_{k1}, ..., ŵ_{km}, ..., ŵ_{kn}, b̂)  (10)

In other words, the solution v_k of formula (10) can be obtained by computing formula (9) with a computer according to an algorithm such as Gaussian elimination. By substituting this solution v_k, formula (5) can be expressed as follows.

z_k = (Σ_{m=1}^{n} ŵ_{km} x_m) + b̂  (11)

This formula (11) is the approximation expression. As is clear from this formula, since approximate partial differentiation with respect to each component of the X-layer data vector is possible, errors can easily be backpropagated from the Z layer to the X layer, for example. In other words, even for the sections of the learning model before and after the part to which the cache table corresponds, that is, for each of the input-side and output-side multilayer neural network models, learning processing can be performed at high speed using the error backpropagation method.

Returning to the flowchart ofFIG.20, upon completion of the processing (S103) of generating the first input-side middle layer data (X1-s1) and the processing (S107) of generating the approximation function, the prediction processing unit7117performs, based on the first input-side middle layer data (X1-s1) and the approximation function, prediction processing for the section extending from the first middle layer to the second middle layer and generates output-side middle layer data (Z1-s1) (S109). After that, the prediction processing unit7117performs prediction processing, using the output-side middle layer data (Z1-s1) as an input, for the section extending from the second middle layer to the output layer, thereby generating a final output (Z0-s1) (S111).
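The solution of the simultaneous linear equations (7)/(9) by Gaussian elimination can be sketched as follows in pure Python; the function names and the toy data are illustrative assumptions, not part of the embodiment.

```python
def solve(A, beta):
    # Solve A v = beta by Gaussian elimination with partial pivoting,
    # as suggested for formula (9); A is the (n+1)x(n+1) matrix built from
    # cached X-layer vectors augmented with a constant-1 column.
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, beta)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

def fit_cache_approximation(xs, zs_k):
    # Fit z_k = sum_m w_km x_m + b (formula (5)) from (n+1) cached pairs;
    # returns (w_k1, ..., w_kn, b), the solution vector v_k of formula (9).
    A = [list(x) + [1.0] for x in xs]
    return solve(A, zs_k)

# Toy check against a known linear map z = 2*x1 - x2 + 3 (n = 2, 3 pairs).
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
zs = [3.0, 5.0, 2.0]
w1, w2, b = fit_cache_approximation(xs, zs)
```

As the text notes, the (n+1) cached pairs should be taken near the target point; the fit is exact at those points and is only a local approximation elsewhere, and it fails if A is singular (for example, if the chosen X-layer vectors are affinely dependent).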
The error backpropagation processing unit 7118 generates an error between the teacher output (Z0) according to the teacher data and the final output (Z0-s1), and the error or a prescribed value (for example, root mean square error) based on the error is propagated from the output side to the input side by methods such as the steepest descent method (S113). After that, the parameter update processing unit 7119 performs, based on the back propagated error and the like, processing of updating the parameters, such as the weights of the section extending from the input layer to the first middle layer and of the section extending from the second middle layer to the output layer of the learning model, excluding the approximation function part (S115). After that, the robot 7 checks, from prescribed settings information, whether or not transmission of the first cache table is permitted (S117). As a result, if no transmission permission is granted, learning ending determination (S121) is made, and if it is determined not to end (S121: NO), all the processing steps (S101 to S121) are repeated again. In contrast, if it is determined to end (S121: YES), the learning processing ends. If, on the other hand, permission to transmit the cache table is granted (S117: YES), the data transmission processing unit 7121 performs processing of transmitting the first cache table, encrypted by the encryption processing unit 7120, to the intermediate server 6 (S119). This is followed by the learning ending determination (S121). The learning processing operation in the intermediate server 6 will now be explained. FIG. 22 is a flowchart related to processing of reception and storage of the first cache table transmitted from the robot 7. As is clear from the drawing, when the learning processing is started in the intermediate server 6, the input data receiving unit 6123 goes into the data reception waiting mode (S131).
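The robot-side prediction and update steps (S109 to S115) described above can be sketched as follows. For illustration only, each section is reduced to a single linear layer and the squared error is back propagated analytically; the gradient passes through the approximation function, but its parameters are held fixed, per S115. All names and dimensions here are assumptions of the sketch, not details of the embodiment:

```python
import numpy as np

def learning_step(W1, Wa, ba, W2, x0, z0_teacher, lr=0.01):
    """One robot-side learning step (S109 to S115), with each section
    reduced to a single linear layer for illustration. The gradient is
    propagated through the approximation (Wa, ba), but those parameters
    are not updated."""
    # Forward pass (S109, S111).
    x1 = W1 @ x0                      # input layer -> first middle layer
    z1 = Wa @ x1 + ba                 # approximation function, formula (11)
    z0 = W2 @ z1                      # second middle layer -> output layer
    # Error against the teacher output (S113).
    err = z0 - z0_teacher
    # Back propagation; through the affine approximation, dz1/dx1 = Wa.
    g_z1 = W2.T @ err
    g_x1 = Wa.T @ g_z1
    # Update only the sections outside the approximation part (S115).
    W2 -= lr * np.outer(err, z1)
    W1 -= lr * np.outer(g_x1, x0)
    return 0.5 * float(err @ err)     # squared-error loss for monitoring
```

Repeating this step drives the loss down while the approximation-function parameters (Wa, ba) stay untouched, mirroring S115.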
If, in this state, the data corresponding to the encrypted first cache table is received (S131: YES), the data reception waiting mode is cleared, the received first cache data is decrypted with a private key or the like (S133), and processing of storing it in the storage unit is performed (S135). Upon completion of this storage processing, the intermediate server 6 again goes into the reception waiting mode (S131: NO). FIG. 23 is a flowchart related to the learning processing operation in the intermediate server 6 executed concurrently with the processing of receiving the first cache table shown in FIG. 22. As is clear from the drawing, when the learning processing operation starts, the data reading unit 6115 reads an input and output pair (X1-s1, Z1-s1) from the input and output data table stored in the intermediate server 6 and corresponding to the teacher data (S141). Upon reading of the input and output pair, the sampling processing unit 6116 extracts the input and output pair to be used for learning (S143). After this extraction processing, the prediction processing unit 6118 performs prediction processing in the section extending from the first middle layer to the third middle layer of the prediction model according to the input data (X1-s1), thereby generating the second input-side middle layer data (X2-s2) (S145). Meanwhile, concurrently with these steps (S141 to S145), the data reading unit 6115 performs processing of reading the second cache table (X2 and Z2) including the correspondence between the second input-side middle layer data (X2) and the second output-side middle layer data (Z2) accumulated in the intermediate server 6 during prediction processing (S147). After reading of the second cache table, processing of generating, based on the second cache table, an approximation function in such a way that the second output-side middle layer data (Z2) is generated based on the second input-side middle layer data (X2) is performed (S149).
The approximation function generating processing is the same as the approximation function generation in the robot 7. Upon completion of the processing (S145) of generating the second input-side middle layer data (X2-s2) and the processing (S149) of generating an approximation function, the prediction processing unit 6118 performs, based on the second input-side middle layer data (X2-s2) and the approximation function, prediction processing for the section extending from the third middle layer to the fourth middle layer and generates the second output-side middle layer data (Z2-s2) (S151). After that, the prediction processing unit 6118 performs prediction processing using the second output-side middle layer data (Z2-s2) as an input, for the section extending from the fourth middle layer to the second middle layer, thereby generating the second output-side prediction output (Z1-s2) (S153). The error backpropagation processing unit 6119 generates an error between the teacher data (Z1-s1) and the second output-side prediction output (Z1-s2), and the error or a prescribed value (for example, root mean square error) based on the error is propagated from the output side to the input side by methods such as the steepest descent method (S155). After that, the parameter update processing unit 6120 performs, based on the back propagated error and the like, processing of updating the parameters, such as the weights of the section extending from the first middle layer to the third middle layer and of the section extending from the fourth middle layer to the second middle layer of the learning model, excluding the approximation function part (S157). After that, the intermediate server 6 checks, from prescribed settings information, whether or not transmission of the second cache table (X2-s2 and Z2-s2) is permitted (S159).
As a result, if no transmission permission is granted, learning ending determination (S163) is made, and if it is determined not to end (S163: NO), all the processing steps (S141 to S163) are repeated again. In contrast, if it is determined to end (S163: YES), the learning processing ends. If, on the other hand, permission to transmit the cache table is granted (S159: YES), the data transmission processing unit 6122 performs processing of transmitting the second cache table, encrypted by the encryption processing unit 6121, to the final server 5 (S161). This is followed by the learning ending determination (S163). The learning processing operation in the final server 5 will now be explained. FIG. 24 is a flowchart related to processing of reception and storage of the second cache table (X2-s2, Z2-s2) transmitted from the intermediate server 6. As is clear from the drawing, when the learning processing is started in the final server 5, the input data receiving unit 5115 goes into the data reception waiting mode (S171). If, in this state, the data corresponding to the encrypted second cache table is received (S171: YES), the data reception waiting mode is cleared, the received second cache data is decrypted with a private key or the like (S173), and processing of storing it in the storage unit is performed (S175). Upon completion of this storage processing, the final server 5 again goes into the reception waiting mode (S171: NO). FIG. 25 is a flowchart related to the learning processing operation in the final server 5 executed concurrently with the processing of receiving the second cache table shown in FIG. 24. When the learning processing starts, the data reading unit 5110 performs processing of reading a cache table (S181). The sampling processing unit 5111 then extracts an input and output pair to be a learning target from the cache table (S183).
The prediction processing unit 5112 performs prediction processing from the third middle layer to the fourth middle layer based on the read second input-side middle layer data (X2-s2), thereby generating the second output-side middle layer data (Z2-s3) (S185). The error backpropagation processing unit 5113 generates an error between the second output-side middle layer data (Z2-s3) and the teacher data (Z2-s2), and the error or a prescribed value (for example, root mean square error) based on the error is propagated from the output side to the input side by methods such as the steepest descent method (S187). After that, the parameter update processing unit 5114 performs processing of updating the parameters such as weights of the learning model based on the back propagated error and the like (S189). If parameter updating processing is performed, learning ending determination is made, and if a prescribed end condition is not satisfied (S191: NO), the series of processing (S181 to S189) is performed again. In contrast, if the prescribed end condition is satisfied (S191: YES), the learning processing ends. FIG. 26 is a conceptual diagram of the learning processing implemented with the system 30 according to this embodiment. In the drawing, the upper part is a conceptual diagram of the learning processing performed in the robot 7, the middle part is a conceptual diagram of the learning processing performed in the intermediate server 6, and the lower part is a conceptual diagram of the learning processing performed in the final server 5. The left side of the drawing shows the input side, and the right side shows the output side. As is clear from the drawing, when the input information (X0) is input to the robot 7, the prediction processing unit 7117 performs prediction processing from the input stage to the first middle layer, thereby generating the first input-side middle layer data (X1-s1).
Meanwhile, the approximation function generation processing unit7116generates an approximation function (F(x)) based on the first cache table (X1and Z1). The prediction processing unit7117generates the first output-side middle layer data (Z1-s1) based on the first input-side middle layer data (X1-s1) and the approximation function (F(x)). Further, the final output data (Z0-s1) is generated based on the first output-side middle layer data (Z1-s1). The error backpropagation processing unit7118back propagates the error between the final output data (Z0-s1) and the teacher data (Z0) from the final output stage to the input stage via an approximation function. After that, the parameter update processing unit7119updates the parameters including the weights from the final output stage to the second middle layer, and from the first middle layer to the input stage. Further, the first cache table (X1-s1, Z1-s1) generated at this time is provided to the intermediate server6under prescribed conditions. As is clear from the drawing, when the first input-side middle layer data (X1-s1) is input to the intermediate server6, the prediction processing unit6118performs prediction processing from the first middle layer to the third middle layer, thereby generating the second input-side middle layer data (X2-s2). Meanwhile, the approximation function generation processing unit6117generates an approximation function (G(x)) based on the first cache table (X1-s1, Z1-s1). The prediction processing unit6118generates the second output-side middle layer data (Z2-s2) based on the second input-side middle layer data (X2-s2) and the approximation function (G(x)). Further, the first output-side middle layer data (Z1-s2) is generated based on the second output-side middle layer data (Z2-s2). 
The error backpropagation processing unit 6119 back propagates the error between the final output data (Z1-s2) and the teacher data (Z1-s1) from the second middle layer to the first middle layer via an approximation function. After that, the parameter update processing unit 6120 updates the parameters including the weights from the second middle layer to the fourth middle layer, and from the third middle layer to the first middle layer. Further, the second cache table (X2-s2, Z2-s2) generated at this time is provided to the final server 5 under prescribed conditions. Moreover, as is clear from the drawing, when the second input-side middle layer data (X2-s2) is input to the final server 5, the prediction processing unit 5112 performs prediction processing from the third middle layer to the fourth middle layer, thereby generating the second output-side middle layer data (Z2-s3). The error backpropagation processing unit 5113 back propagates the error between the second output-side middle layer data (Z2-s3) and the teacher data (Z2-s2) from the fourth middle layer to the third middle layer. After that, the parameter update processing unit 5114 updates the parameters including the weights from the third middle layer to the fourth middle layer. 4. Modification The present invention is not limited to the configuration and operation of the aforementioned embodiment, and can be modified in various ways. In the third embodiment, the approximation function generated from the cache table is described as used only in learning processing. However, the present invention is not limited to such a configuration.
For instance, an approximation function may be generated, based on the cache table obtained so far, for the purpose of prediction processing, and prediction processing may be performed for the section extending from the first middle layer to the second middle layer based on the first input-side middle layer data (X1) and the approximation function, thereby generating the output-side middle layer data (Z1). With such a configuration, for example, after a certain amount of data is accumulated in the hash table, prediction processing can be performed while significantly reducing the frequency of inquiries to the server side, or without making inquiries at all. In the aforementioned embodiments, the input-side middle layer data (X) (for example, X1 or X2) is encrypted and hashed, and hash table search processing is then performed using the hash value as a key (for example, S11 in FIG. 6 and S55 in FIG. 13). However, the present invention is not limited to such a configuration. Therefore, for example, the input-side middle layer data (X) may be subjected to rounding processing, then encrypted and/or hashed, and searched from the hash table. Rounding processing is processing in which, where the group to which the input-side middle layer data (X) belongs is U, each piece of input-side middle layer data belonging to the group U is regarded as having the equal value (X_u) (representative value). For example, some node values (neuron firing values) of the input-side middle layer data (X) may be discretized into integer values by, for example, rounding up or down the numerical values, thereby forming a group of multiple integer values. With such a configuration, correspondence with hash values that were obtained in the past can be improved, which leads to speedup of processing, for example. In the aforementioned embodiments, the robot 7 as a client device is configured to directly communicate with the intermediate server 6 or the server 1.
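The rounding-then-hashing idea above can be sketched as follows; the digest choice (SHA-256) and the rounding precision are assumptions of this sketch, not specified in the text:

```python
import hashlib

def cache_key(x, decimals=0):
    """Map input-side middle layer data (X) to a representative value
    X_u by rounding each node value, then hash the representative value
    to form the hash-table lookup key. Nearby activations thus share
    one entry, improving correspondence with past hash values.
    """
    representative = tuple(round(v, decimals) for v in x)
    return hashlib.sha256(repr(representative).encode()).hexdigest()
```

Two input vectors that round to the same representative value produce the same key, so a cached prediction for one can be reused for the other.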
However, the present invention is not limited to such a configuration.FIG.27is an overall configuration diagram of a system40according to a Modification. In the configuration, the system40consists of a server2that performs prediction processing, an intermediary server8that is connected to the server2via a WAN and is connected to a LAN, and a robot9as a client device connected to the LAN. In this modification, information is exchanged between the server2and the client device9via the intermediary server8. In the aforementioned embodiments, supervised learning using a neural network (or deep learning) was illustrated as a machine learning algorithm. However, the present invention is not limited to such a configuration. Therefore, for example, other machine learning algorithms that are divisible and can handle intermediate values in a similar format may be used. Moreover, not only supervised learning but also unsupervised learning such as Generative Adversarial Networks (GAN), Variational Auto Encoder (VAE), and Self-Organizing Map (SOM), or reinforcement learning may be used. In the case where reinforcement learning is performed, for example, prediction processing or the like on a simulator may be used. In the learning processing in the aforementioned embodiments, the approximation function is generated by approximation by the linear equation shown in Formula 5. However, the approximation method is not limited to such an example, and other methods may be used for the approximation. For instance, a bypass function may be used as the approximation function.FIG.28is a conceptual diagram related to an example of use of a bypass function. In the drawing, H(x) represents an approximation function based on the linear equation shown in Formula 5 or the like, and J(x) represents a bypass function, forming an approximation function as a whole. 
As is clear from the drawing, the bypass function J(x) is disposed in parallel so as to go around (bypass) the approximation function H(x) based on the linear equation. Note that the error backpropagation method can be applied to both functions. FIG. 29 is a conceptual diagram of a bypass function J(x). In the example shown in the drawing, a case is shown where the number of nodes in the input-side middle layer is larger than the number of nodes in the output-side middle layer. When data is input from the input-side middle layer, the bypass function J(x) compresses the data using a pooling layer having fewer nodes (for example, about half the number of nodes in the input-side middle layer). The node output of the pooling layer is then provided to the output-side middle layer. Here, zero (0) is provided to the nodes to which no connection is established from the pooling layer (zero padding). For instance, when the number of nodes n_x in the input-side middle layer is 32 and the number of nodes n_z in the output-side middle layer is 20, the number of nodes in the pooling layer is 16, which is half the number of nodes n_x in the input-side middle layer. Here, the pooling method may be average pooling or the like that takes the average of adjacent node values. The 16 outputs from the pooling layer are then provided to the output-side middle layer. Here, zero (0) is provided to the four output-side middle layer nodes that are not associated with pooling layer nodes. Although the pooling layer is used in this modification, the pooling layer need not necessarily be used: for instance, a bypass route that allows data to pass directly may be formed. With such a configuration, error backpropagation is promoted by bypassing the approximation function generated based on the cache table, and as a result, learning efficiency can be increased.
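The bypass route with the sizes from the example (32-node input-side middle layer, 16-node average-pooling layer, 20-node output-side middle layer) can be sketched as follows; the pairing of adjacent nodes for pooling is an assumed detail:

```python
import numpy as np

def bypass_function(x, n_z):
    """Bypass function J(x): average-pool the input-side middle layer
    over adjacent node pairs down to half its size, pass the pooled
    values to the first nodes of the output-side middle layer, and
    zero-pad the remaining unconnected nodes.
    """
    n_pool = len(x) // 2                                     # pooling layer size
    pooled = x[:2 * n_pool].reshape(n_pool, 2).mean(axis=1)  # average pooling
    out = np.zeros(n_z)
    k = min(n_pool, n_z)
    out[:k] = pooled[:k]              # nodes connected from the pooling layer
    return out                        # remaining nodes receive zero padding
```

The whole segment would then output H(x) + J(x), with gradients flowing through both branches during error backpropagation.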
Also, for example, the sum of multiple subapproximation functions may be used as the approximation function. FIG. 30 is a conceptual diagram of approximation using the sum of subapproximation functions. As is clear from the drawing, the output of the approximation function is the total sum of the values (weighted sum) obtained by multiplying multiple different approximation functions K_1(x), K_2(x), K_3(x), . . . , K_n(x) (these functions will hereinafter be referred to as subapproximation functions for convenience) by the contribution coefficients a_1, a_2, a_3, . . . , a_n, respectively. Here, each of the contribution coefficients a_i (i = 1, 2, . . . , n) is a value of 0 or more and 1 or less, and the total sum of the a_i is 1, that is, a_1 + a_2 + . . . + a_n = 1. Each contribution coefficient may be a fixed value, or may be varied in such a manner that a different value is given in each forward calculation or error backpropagation. Each subapproximation function is, for example, an approximation function generated based on a cache table, a neural network, or an approximation function based on a linear equation as used in the aforementioned embodiments. All subapproximation functions are configured such that the error backpropagation method can be applied. With such a configuration, the approximation accuracy is expected to be improved by an ensemble effect together with the layers before and after the approximation function, whereby even if the data accumulated in the cache table is inadequate, the approximation accuracy can be expected to be maintained or improved. In the aforementioned embodiments, the robots, intermediate servers, final servers, and the like were all illustrated as single devices. However, the present invention is not limited to such a configuration. Therefore, for example, a part of a device configuration may be separately provided as an external device. For instance, an external large-capacity storage may be installed and connected to a server or other devices.
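The weighted sum of subapproximation functions described above can be sketched as follows; the concrete subapproximation functions passed in are placeholders:

```python
import numpy as np

def ensemble_approximation(x, sub_functions, contributions):
    """Weighted sum a_1*K_1(x) + ... + a_n*K_n(x) of subapproximation
    functions, with each contribution coefficient a_i in [0, 1] and
    their total sum equal to 1, as described in the text.
    """
    a = np.asarray(contributions, dtype=float)
    assert np.all((a >= 0.0) & (a <= 1.0)) and np.isclose(a.sum(), 1.0)
    return sum(a_i * K_i(x) for a_i, K_i in zip(a, sub_functions))
```

Since each subapproximation function supports error backpropagation and the sum is linear in their outputs, gradients distribute across the ensemble in proportion to the contribution coefficients.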
Alternatively, instead of a single device, multiple devices may be used for distributed processing or the like. Alternatively, virtualization technology or the like may be used. Although one client device holds one hash table in the aforementioned embodiments, the present invention is not limited to such a configuration. Therefore, the hash table may be shared among multiple client devices, for example. Consequently, the cache of the prediction processing performed in each client device is accumulated to be shared, thereby more rapidly reducing the server usage cost, increasing the processing speed, and allowing the client devices to operate autonomously, and the like. Note that the hash table may be shared, for example, using the intermediary server8in the system shown inFIG.27, or using a distributed hash table or other techniques to allow each of the client devices to directly obtain information from each other without a server or the like. Although an example in which learning processing is sequentially performed is shown in the aforementioned embodiments, the present invention is not limited to such a configuration. Therefore, for example, a configuration can be such that the parameters are updated in batch after accumulation of a certain amount of errors corresponding to multiple input and output pairs. Alternatively, so-called online learning in which learning processing is performed concurrently with prediction processing may be performed. In the aforementioned embodiments, robots were illustrated as the client devices. However, the present invention is not limited to such a configuration. The client devices should be construed as including any devices with or without physical operation. Note that examples of the client devices include all information processing devices such as smartphones, tablet terminals, personal computers, smart speakers, and wearable terminals. 
Although robot operation information (sensor signals or motor signals) is expressed as the learning target in the aforementioned embodiments, the present invention is not limited to such a configuration. Therefore, for example, learning target data may include all kinds of information such as imaging signals, voice signals, image signals, video signals, language information, and character information, and may undergo processing for various purposes such as voice recognition processing, image signal processing, and natural language processing. Although the client devices are configured to cause the server side to perform arithmetic operations between the input-side middle layer (X) and the output-side middle layer (Z) in the aforementioned embodiments, the present invention is not limited to such a configuration. Therefore, for example, client devices may also perform prediction processing by holding a part of a prescribed divided middle layer, and transmitting and receiving a part of the prediction results to and from the server more than once. In the aforementioned embodiments, processing of updating parameters such as weights for the portions of the learning models excluding the approximation functions, based on the error back propagated by the error backpropagation method, is performed (for example, S115 and S157). However, the present invention is not limited to such a configuration. Therefore, for example, processing of updating the parameters in the approximation functions may also be performed. INDUSTRIAL APPLICABILITY The present invention is available in any industry that utilizes machine learning technology. REFERENCE SIGNS LIST
1 Server
3 Robot
5 Final server
6 Intermediate server
7 Robot
8 Intermediary server
10 System
DETAILED DESCRIPTION FIG. 1 is a schematic view of an example of a method 1 for loading a web page at a user equipment in a telecommunication system, according to the present invention. The User Equipment, UE, is indicated with reference numeral 2 and the web server hosting the web page is indicated with reference numeral 6. The User Equipment 2 may be any device suitable for internet access, such as a mobile phone, laptop, tablet, desktop personal computer, etc. Mobile User Equipment is arranged to obtain Internet Protocol, IP, connectivity through the establishment of a Packet Data Protocol Context, PDPc, in telecommunication networks such as General Packet Radio Service/third Generation, GPRS/3G, networks. In Long Term Evolution, LTE, networks combined with an Evolved Packet Core, i.e. Evolved Packet System, EPS, networks, IP connectivity is established through the use of a Data Bearer. In any case, the IP access point server 8 is the first server in the telecommunication network to which the User Equipment 2 communicates. The IP access point server 8 may be a Gateway General Packet Radio Service, GPRS, Support Node for GPRS/3G telecommunication networks or a Packet Data Network Gateway for EPS telecommunication networks. User Equipment 2 usually has a single IP access point server 8 configured for internet access and Multimedia Messaging Service, MMS. This has the implication that data traffic for generic internet services and data traffic for MMS is handled in the same PDPc or data bearer. An IP packet that is related to the transfer of an MMS can, however, be treated differently from an IP message that contains another IP protocol, such as Hypertext Transfer Protocol, HTTP. This differentiation may e.g. be done based on destination or origination web addresses of the IP packets sent by the User Equipment 2. IP packets related to MMS may e.g.
be charged differently, or may be exempt from charging when MMS charging is performed at another server in the telecommunication network. In a first step for loading a web page at the UE 2, the IP access point server 8 receives a request, sent from the UE 2, for loading the web page. Usually, for the loading of a web page, a number of HTTP GET requests are initiated by the UE 2. For example, an initial HTTP GET request is received by the IP access point server 8, for retrieving web page markup data of a web page, such as a "home.html" file. Once the "home.html" file is provided to the UE 2, the UE may generate subsequent HTTP GET requests directed to web addresses comprised in the "home.html" file for also retrieving linked content. Hence, there is generally a large amount of signaling between a UE 2 and a web server 6, such as an HTTP server, for the loading of a HyperText Markup Language, HTML, web page. If a certain web page qualifies for web page loading policy handling, for example when access to the web page is to be exempt from charging, it is not sufficient to filter on requests comprising a web address of the web page as the destination address, i.e. the Request URI of an HTTP GET request, in order to determine that the web page qualifies for web page loading policy handling. The reason is that the loading of this web page may comprise subsequent HTTP GET transactions directed to web addresses of other web pages, which web pages should also qualify for the same web page loading policy handling. A solution to the above sketched problem is presented in that the UE 2 should be provided with the web page markup data received from the web server 6, accompanied with, or comprising, policy handling information. The policy handling information may be a prefix to web addresses comprised in the web page markup data or a parameter indicating that the web addresses qualify for web page loading policy handling.
The policy handling information may either be added to the web page markup data by the web server 6 hosting the web page or by the IP access point server 8. In case the web server 6 adds the policy handling information to the web page markup data, the IP access point server forwards 5 the received request 3 for loading the web page to the web server 6 along with an indication that the web server should provide the policy handling information to the IP access point server 8. One principle for the web server 6 to generate the policy handling information is to parse the requested web page, i.e. the HTML page, through an "HTML parser". Such parsing may entail that all URL's embedded in the HTML page are appended with a prefix, i.e. the policy handling information, in case the above mentioned indication is present in the forwarded request 5, resulting in an adapted HTML page. In case no indication is present, the web server 6 will not amend the URL's embedded in the HTML page, resulting in a non-adapted HTML page. The parsing of the HTML web page is a process that may be done either online or off-line, and by or on behalf of the web server 6 that hosts the web page. The IP access point server will then receive the web page markup data 9 from the web server 6, which data comprises the policy handling information in the form of prefixes to URL's embedded in the HTML page. It is regarded that the above-mentioned principle may require that a secure connection, for example a Virtual Private Network tunnel, is established between the telecommunication network and the web server 6. Such a secure connection is advantageous to guarantee that the web server 6 can authenticate the trustworthiness of the forwarded request 5, i.e. an HTTP GET transaction, comprising the indication for policy handling information.
The IP access point server 8 will then provide, to the UE 2, the web page markup data accompanied with, or comprising, the policy handling information relating to the web page, such that subsequent requests from the UE for retrieving content for loading the web page are in accordance with the policy handling information. Subsequent requests from the UE 2 are directed to web pages having the amended URL, such that the IP access point server is able to determine, based on the prefix of the URL, that the intended web page also qualifies for web page loading policy handling. In such a case, for subsequent requests related to and following an initial request for loading a web page, the IP access point server 8 will process the requests 10 by removing the prefix from the URL, marking the request as being qualified for web page loading policy handling, and continuing with the request in the normal fashion, i.e. forwarding the request towards the web server 6. It may not always be practical to generate an adapted HTML web page as disclosed above, as in such a case the web server needs to be equipped with additional functionality. Therefore, the policy handling information may, in another embodiment, be included in, or accompanied with, the web page markup data provided to the UE 2 by the IP access point server 8. The IP access point server 8 may, for example, automatically amend URL's embedded in the web page markup data received 9 from the web server 6, before the web page markup data is transferred to the UE 2. Normally, deep packet inspection (DPI) is performed on IP packets which are sent from a UE 2 towards any server in the telecommunication network. DPI may also be performed on IP packets sent from any server in the telecommunication network, such as the web server 6, towards the UE 2. A received request 3 for loading a web page, i.e. an HTTP GET request, received by the IP access point server 8, will eventually be followed by receipt of the web page markup data 9, i.e.
an HTTP response message, by the IP access point server8. The IP access point server8may be arranged to correlate incoming HTTP response messages with HTTP GET requests. In case of a match, the IP access point server8may decide, for example, to amend URL's embedded in the web page markup data, i.e. comprised in the HTTP response message, with a prefix if a matched HTTP GET request was directed to a web page qualified for a web page loading policy handling. Another option for the IP access point server8is to provide the web page markup data as well as an indication that the URL's embedded in the web page markup data should be amended, to the UE2. Subsequent requests, triggered by the received web page markup data, need to be amended, by the UE2, such that the requests are directed to amended URL's, for example URL's with a certain prefix. FIG.2is a schematic view of an Internet Protocol access point server21for operation in a telecommunication network. The IP access point server21comprises a memory30, provider31, retriever24, a policy handling table28, handler27, determiner26and receiver, all connected to a control unit29, comprising, for example a processor, Field Programmable Gate Array or a micro controller. The receiver23is arranged to receive requests for loading a web page from a User Equipment via its input terminal22. The determiner26is arranged to determine that the web page qualifies for a web page loading policy handling by matching a web address comprised in the request with entries of a policy handling table28. Once a match is found, the determiner26determines that the web page, or the corresponding request to the web page, qualifies for a web page loading policy handling. Further, the retriever24is arranged to retrieve, via output terminal25, at a web server hosting the web page, in response to request received from the UE, amended web page markup data relating to the web page instead of the regular web page markup data. 
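The two halves of this scheme, appending the prefix to URL's embedded in the markup and stripping it from subsequent requests at the access point, can be sketched as follows. The prefix value and the regular-expression approach are assumptions of this sketch, not taken from the text:

```python
import re

POLICY_PREFIX = "zero-rated."   # hypothetical prefix form

def append_policy_prefix(html, prefix=POLICY_PREFIX):
    """Web-server or access-point side: insert the policy handling
    information as a prefix on every URL host embedded in the page,
    producing the adapted HTML page."""
    return re.sub(r"(https?://)", r"\1" + prefix, html)

def strip_policy_prefix(url, prefix=POLICY_PREFIX):
    """Access-point side: remove the prefix from a subsequent request
    URL and report whether the request qualifies for web page loading
    policy handling."""
    stripped = url.replace("://" + prefix, "://", 1)
    return stripped, stripped != url
```

After stripping, the access point would mark the request as qualifying and forward it towards the web server in the normal fashion, as described above.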
The amended web page markup data may be an amended parsed “home.html” file of the web page, wherein the amendments are made to URLs, for example web addresses, comprised in the parsed “home.html” file. Generally, the above-mentioned request is an HTTP GET request and the response received from the web server, by the receiver 23, is an HTTP response message comprising the web page markup data. The HTTP response message comprising, for example, the amended parsed “home.html” is then provided to the UE, by the provider 31 via the output terminal 25, such that corresponding, subsequent requests for loading content of a web page, initiated by web addresses comprised in the parsed “home.html”, qualify for a web page loading policy handling. FIG. 3 is a schematic view illustrating an aspect of the method 41 according to the present invention, wherein it is illustrated that the loading of a web page comprises several requests being sent from the UE 42 to the IP access point server, for example a Gateway GPRS Support Node, GGSN, 44. Here, the UE 42 initiates a request for loading a web page via its associated GGSN 44 towards the web server, for example an HTTP server 46. As a first step in the process for obtaining IP connectivity for the UE 42, a Packet Data Protocol Context 43, PDPc, is established in the telecommunication network, such as a GPRS/3G network, between the UE 42 and the serving access point for the UE, i.e. the GGSN 44. Initially, the UE 42 sends an initial HTTP GET request message 47 for obtaining a parsed HTML page of the website www.provider.nl towards the HTTP server 46, hosting the web page, through the PDPc 43. The request message 47 is received by the GGSN 44, as the GGSN 44 is the first server in the telecommunication network to which the UE 42 communicates, and comprises a Request URI, R-URI, indicating the destination server of the request message 47. The GGSN 44 is arranged to determine that the received request message 47 qualifies for a web page loading policy handling.
To do so, it checks whether the R-URI comprised in the request message 47 meets the requirement for the web page loading policy handling. Such a check may be performed by determining whether the R-URI is present as an entry in a lookup table, which lookup table is stored and maintained by the GGSN 44. An operator of the telecommunication network may decide to update the lookup table in case web addresses of web pages should be added to the lookup table, or should be deleted from the lookup table. Once it has been determined, by the GGSN 44, that the received request message 47 qualifies for a web page loading policy handling, the GGSN 44 forwards the request message 53 to the HTTP server 46, along with an indication that the HTTP server is to provide the adapted parsed HTML page instead of a regular parsed HTML page. The HTTP server 46 provides a response message 54 to the received request message 53, along with the adapted parsed HTML page of the requested web page, i.e. the web page markup data along with the policy handling information relating to the web page. The GGSN 44, receiving such a response message 54, forwards the message 48 to the UE, trusting that the UE will use information embedded in the response for requests derived from the initial HTTP GET request message 47. In case the adapted parsed HTML page comprises URLs to content on other web pages, which content is to be loaded with the requested web page, the UE 42 initiates a further HTTP GET request message 49 for obtaining that content towards the HTTP server 46 hosting that content. In the present example, the HTTP server 46 is the same for hosting the initially requested web page as for the web pages corresponding to subsequent request messages, i.e. those following the initial request message 47.
The GGSN 44, receiving the subsequent HTTP GET request message 49, determines that this message 49 also qualifies for a web page loading policy handling, as the R-URI comprised in that request message 49 is modified according to the policy handling information. As such, the GGSN 44 forwards the HTTP GET request message 55 towards the HTTP server 46, along with an indication that the HTTP server 46 is to provide the adapted HTML page instead of a regular HTML page. The response to the HTTP GET request message 55, i.e. the response message 56, is received by the GGSN 44 and forwarded to the UE 42, such that even further subsequent requests initiated by the UE 42 will be treated as qualified for a web page loading policy handling. In a very similar fashion, the steps indicated with reference numerals 51, 57, 58 and 52 perform the same procedure as explained above, and are included to illustrate that the loading of a web page may comprise several HTTP GET request messages, which are initiated subsequently. FIG. 4 is a schematic view of a method for parsing 71 an HTML web page 72 such that an adapted HTML page 74 is provided. The HTML parsing process renders an HTML file, i.e. an adapted HTML page 76, which comprises URLs pointing to a single or a limited set of normalized URLs. The HTTP client, i.e. the UE, may build the requested web page as usual, including the initiation of subsequent HTTP GET transactions for obtaining linked content. These subsequent HTTP GET transactions are sent towards the normalized URL, or to one of the set of normalized URLs, as applicable. The HTML page 72 is provided 73 to an HTML parser 74. The HTML parser 74 may either parse the HTML page 72 in a normal fashion, i.e. leaving the content of the web page intact, or may parse the HTML page including amending the URLs comprised in the web page with, for example, a prefix. In any case, the parsed HTML page is delivered 75 to the HTTP server, such that the HTTP server is able to provide the HTML page when requested.
FIG. 5 is a schematic view of a Gateway General Packet Radio Service, GPRS, Support Node 85 handling communication to and from a web server 88, illustrating the functionality whereby the GPRS Support Node 85 receives web page markup data from the web server 88, correlates the data with the request applied at the web server 88 and, in case of a match, includes the policy handling information in the web page markup data to be provided to the UE 82. When the GGSN 85 has determined that an HTTP GET request is sent towards such a URL for which adapted handling is needed, the GGSN marks this transaction as “requiring processing”. In case the GGSN receives a response for this HTTP GET transaction, in the present TCP socket 84, for the present PDPc 83, it applies the ad hoc URL adaptation. In this manner, HTTP access to regular web pages is not affected. Reference numeral 89 indicates that the GGSN receives an HTTP GET request, and that it determines that the HTTP GET request comprises a URL for which a web page loading policy handling is to be applied, i.e. a differentiated handling is needed. The URL may be “the original URL”, such as “www.provider.nl”, or may be a URL that is adapted due to the differentiated handling of a previous HTTP GET transaction, such as “www.webserver.nl” (not displayed). Such a differentiated handling comprises, for example, exempting an IP packet from charging and adapting the URLs in the GET response message. For the purpose of the second aspect, the GGSN marks this GET transaction, for this PDP Context 83, as ‘requiring special processing’. Reference numeral 90 indicates that the GGSN 85 receives the response for the HTTP GET transaction. The GGSN 85 processes the HTTP GET response(s) as described above. The UE 82 may send additional HTTP transactions over the TCP socket 84 through which this GET transaction is run.
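The per-transaction marking just described can be sketched as a small correlation table. This is a minimal illustration under assumptions: the key structure (PDP context plus TCP socket) follows the description above, but the class and method names are invented for the example. When the GGSN sees a GET towards a URL needing differentiated handling, it records the transaction as requiring processing; when the matching response arrives on the same socket, the ad hoc URL adaptation is applied.

```python
class TransactionTracker:
    """Tracks GET transactions marked as 'requiring processing'."""

    def __init__(self):
        self._pending = set()  # (pdp_context_id, tcp_socket_id) pairs

    def mark_get(self, pdp_ctx: int, socket: int) -> None:
        # GET towards a URL needing adapted handling: mark the transaction.
        self._pending.add((pdp_ctx, socket))

    def on_response(self, pdp_ctx: int, socket: int) -> bool:
        """Return True if this response needs the URL adaptation."""
        if (pdp_ctx, socket) in self._pending:
            self._pending.discard((pdp_ctx, socket))
            return True
        return False

tracker = TransactionTracker()
tracker.mark_get(pdp_ctx=83, socket=84)
print(tracker.on_response(83, 84))  # -> True: adapt URLs in this response
print(tracker.on_response(83, 84))  # -> False: regular responses unaffected
```

Responses that were never marked fall through unchanged, which is why HTTP access to regular web pages is not affected.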
FIG. 6 is a schematic view of a User Equipment, UE, 101 arranged for operating in a telecommunication network and arranged for generating requests for loading a web page. In a first step, a generator 104 comprised in the UE 101 is arranged to generate an initial request for loading a web page at the UE 101. The initial request may be, for example, an HTTP GET request directed to a web server hosting the web page. The transmitter 102 is arranged to transmit the request, via output terminal 102, towards an Internet Protocol access point server, which access point server serves as the primary access point for connecting the UE 101 to the telecommunication network. After performing several subsequent steps, the access point server is arranged to provide web page markup data of the web page, along with policy handling information, to the UE 101. The policy handling information is, for example, an indication to the UE 101 that the subsequent requests, triggered by the received web page markup data, need to be amended. These amendments are performed to make sure that the IP access point server is able to recognize that the subsequent request is initiated, i.e. triggered, by the initial request. The receiver 106 is arranged to receive the web page markup data and the policy handling information from the IP access point server via input terminal 105. Once these are received, the generator 104 will generate a subsequent request, triggered by the received web page markup data, in accordance with the policy handling information. The subsequent request is then sent by the transmitter 102, using output terminal 102, to the IP access point server for loading linked content of the web page. An advantage of an embodiment according to the invention is that no constraints are imposed on the design of a web page, such as the number of embedded links, whether or not the web page may be dynamically altered, etc.
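The UE-side behaviour described above can be sketched as the mirror image of the server-side prefix handling: the UE receives policy handling information with the web page markup data and amends its subsequent requests accordingly, here by prepending a prefix to the host so that the access point server can recognize that they were triggered by the initial request. The prefix mechanism and function name are assumptions for illustration.

```python
def amend_subsequent_request(url: str, policy_prefix: str) -> str:
    """Amend a subsequent request URL per the policy handling information."""
    scheme, sep, rest = url.partition("://")
    if sep and not rest.startswith(policy_prefix):
        # Prepend the prefix so the access point server can recognize
        # this request as triggered by the qualified initial request.
        return scheme + sep + policy_prefix + rest
    return url  # already amended, or not an absolute URL

print(amend_subsequent_request("http://www.provider.nl/style.css", "policy."))
# -> http://policy.www.provider.nl/style.css
```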
As long as the policy handling information is provided to the UE, the IP access point server is able to determine that subsequent requests qualify for the same policy handling, irrespective of the design of the web page. Another advantage, according to another embodiment of the invention, is that the functionality of subsequent servers or nodes in the chain, i.e. between the IP access point server and the web server, does not need to be altered, in the case that the IP access point server amends the subsequent request by removing the policy handling information. Another advantage is that the web server does not need to be modified to enable a method according to the present invention, at least in the case that the characterizing aspects of the invention are performed by the IP access point server. The present invention is not limited to the embodiments as disclosed above, and can be modified and enhanced by those skilled in the art within the scope of the present invention as disclosed in the appended claims, without having to apply inventive skills.
11943279

DETAILED DESCRIPTION OF THE INVENTION

The invention relates to media listening amongst different users. For example, methods, systems or computer program code can enable users to have a remote listening experience in real time. Advantageously, a remote user at a remote client device can in effect listen to a particular digital media asset that is being played at a local client device of a local user. According to one aspect of the invention, one remote user can listen to media content being played by another user. In one embodiment, a media playback and management application is provided with remote listening capabilities. As a result, different users utilizing media playback and management applications can be presented with information that other designated users have provided. In one implementation, the information being presented can indicate the media item being played at another of the other media playback and management applications. Users can also authorize sharing of playback status using user settings or preferences. According to another aspect, users can provide profiles about themselves. The user profiles can also be viewed by other users. A profile for a particular user can be associated with media playback information for the particular user. In one embodiment, a first client device (e.g., first user computer) can inform a central media listening server of its playback status. Playback status represents data indicating at least a particular digital media asset being played back at the first client device. For example, the first client device can inform the central listening server that a particular digital media asset is being played at the first client device. Other client devices, that have been previously authorized, can access the stored playback status via the central media listening server. The users of the other client devices can thus opt to hear the same digital media asset as is being played by the first client device.
The stored playback status can motivate the users of the other client devices to play back or purchase the particular digital media asset. Embodiments of the invention are discussed below with reference to FIGS. 1-7. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes, as the invention extends beyond these limited embodiments. FIG. 1 is a block diagram of a media system 100 according to one embodiment of the invention. The media system 100 supports a plurality of client devices, represented in FIG. 1 by client devices 102, 104 and 106. Each of the client devices can support operation of a media application. As illustrated in FIG. 1, the client device 102 supports a media application 108, the client device 104 supports a media application 110, and the client device 106 supports a media application 112. The media applications can permit users of the respective client devices to navigate, select, acquire (e.g., rent or purchase) and/or play back media items. The media applications can also assist users in managing storage, categorization, grouping (e.g., playlists), rating, etc. of media items. The media system 100 can also include or utilize a data network 114. The data network 114 can pertain to a global data network, a regional data network or a local area network. The data network 114 can include one or more wired or wireless networks. Typically, however, the data network 114 represents a global data network, such as the Internet. The data network 114 allows the client devices 102, 104 and 106 to communicate with other remotely located computing devices that provide processing and/or data for the client devices 102, 104 and 106. The media system 100 can further include a media listening server 116 and a central media repository 118. The media listening server 116 can manage the storage of media sharing information from the various client devices 102, 104 and 106 supported by the media system 100.
The media listening server 116 can also manage the delivery of media sharing information to the various client devices 102, 104 and 106 supported by the media system 100. The media sharing information facilitates sharing of media amongst the client devices 102, 104 and 106. For example, if the media application 108 operating on the client device 102 is playing a particular media item, media sharing information provided to the media listening server 116 by the client device 102 informs the media listening server 116 that the particular media item is being played by the client device 102. Thereafter, on request or when appropriate, the media listening server 116 can inform other media applications, such as the media application 110 operating on the client device 104, that the media application 108 is playing the particular media item. Consequently, the user of the client device 104 can be informed via the media application 110 that the media application 108 is playing the particular media item. Furthermore, if the user of the client device 104 desires to also have the particular media item played at the client device 104, the media application 110 can request to play the corresponding media content, and it can be listened to on the client device 104. In one embodiment, the media content can be obtained from the central media repository 118 and delivered to the client device 104 via the data network 114. As an example, the media content can be played (e.g., streamed) from the central media repository 118 to the client device 104. In an alternative embodiment, the media content can be obtained from another of the client devices. For example, a peer-to-peer connection can be established between the client device 102 and the client device 104 so that the media content can be listened to at the client device 104. In still another embodiment, if the client device 104 already stores the media content for the particular media item, such as in its local media library, such content could be played locally.
Regardless of how the media content for the particular media item is played, the media content can be played by the media application 110 operating on the client device 104. In one embodiment, the media listening server 116 can also receive and utilize playback position. As an example, the media sharing information that the client device 102 provides to the media listening server 116 can specify the current media playback position of the particular media item. Hence, by receiving the media playback position, the media application 110 operating on the client device 104 can, in one embodiment, be substantially synchronized with the playback position of the playback at the media application 108 operating on the client device 102. In one implementation, the synchronization can be managed by the media listening server 116, such as by altering the playback start of the media content. In another implementation, the synchronization can be managed by the recipient client device 104. For example, if the client device 104 starts its use of remote media playback two minutes following the start of playback of a particular media item at the client device 102, the media playback position can be used to start playback at substantially the same position as at the client device 102. As another example, if the user of the client device 102 alters playback position (e.g., fast forward, rewind), the media playback position can inform the client device 104 of the change in playback position. The media application 110 operating on the client device 104 can then potentially also alter its playback position in a similar manner. In one embodiment, users can choose whether to operate in a synchronized or non-synchronized manner. The media system 100 can also include an online media store 120. The online media store 120 can provide a network-based destination for browsing, searching, purchasing, or renting media items.
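The synchronization idea above can be sketched as a single calculation: the listening server (or the recipient client) derives the remote user's current playback position from the last reported position and the time at which it was reported, so that a late joiner starts at substantially the same position. The function name and the wall-clock approach are assumptions for illustration.

```python
def synced_position(reported_pos_s: float, reported_at: float,
                    now: float) -> float:
    """Estimate the remote playback position at time `now` (seconds)."""
    # Position advances in real time since it was last reported.
    return reported_pos_s + (now - reported_at)

# A listener joining 120 s after the remote user reported position 0
# starts playback roughly two minutes in, matching the example above.
print(synced_position(0.0, reported_at=1000.0, now=1120.0))  # -> 120.0
```

A seek (fast forward, rewind) on the remote side simply produces a new reported position, and the same calculation keeps the listener aligned.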
Hence, in the event that any of the media applications 108, 110 or 112 access the online media store 120 via the data network 114, their users are able to browse, search, purchase or rent media items. In the event that media items are purchased or rented, the associated media content can be delivered to the client device associated with the purchaser. In one embodiment, the media content for such media items can be stored in the central media repository 118. In other embodiments, the media content for such media items can be stored in the online media store 120 or some other accessible server or data storage device. It should be understood that the media listening server 116, the central media repository 118 and the online media store 120 can reside on the same computing device or on different computing devices. Similarly, the online media store 120 can reside on one or more separate computing devices, or can reside on the one or more computing devices providing the media listening server 116 or the central media repository 118. The client devices 102, 104 and 106 can interact with the data network 114 by a network data link, which can be a wired link and/or a wireless link. For example, in FIG. 1, the dashed lines that connect the client device 104 to the data network 114 indicate that the client device 104 can interact with the data network 114 by a wireless link. Also, the dashed lines connecting the media listening server 116, the central media repository 118 and the online media store 120 indicate that these computing resources can be interconnected in a private network fashion, as well as in a public network fashion through the data network 114. FIG. 2 is a block diagram of a media application 200 according to one embodiment of the invention. The media application 200 represents one embodiment of a media application, such as any of the media applications 108, 110 and 112 illustrated in FIG. 1. The media application 200 includes a media playback module 202 that couples to an output interface 204.
The output interface 204 can drive an output device, such as a display, a speaker and the like. The media application 200 can also include a media playback monitor 206. The media playback monitor 206 is coupled to the media playback module 202. The media playback monitor 206 monitors media playback by the media playback module 202 such that media playback information can be provided by the media application 200 to external computing devices, such as the media listening server 116 illustrated in FIG. 1. In addition, the media application 200 can include a now playing module 208. The now playing module 208 can receive media information from the media listening server 116. Using the media information, the now playing module 208 can understand the particular media item that a particular other media application is playing. The now playing information obtained by the now playing module 208 can also be provided to the output interface 204 so that now playing information can be presented on an output device. The media application 200 can further include a user profile module 210. The user profile module 210 can store and/or manage a user profile associated with a user of the media application 200. The user profile can take various forms and contain various different information as directed by the user. The user profile, or user settings or preferences associated therewith, can also indicate whether or not the user agrees to participate in media information sharing with other media applications associated with different users. According to one embodiment of the invention, user profiles can be provided and shared, as well as sharing of information about media. The user profiles can be viewed by others to identify and provide information about users. A user can determine whether playback status can be monitored based on the user's profile. A user can authorize remote media listening by others based on profiles, user categorizations (e.g., friends group), user preferences, etc.
FIG. 3 is a flow diagram of a profile generation process 300 according to one embodiment of the invention. The profile generation process 300 is used to produce a profile for a user of a media application. For example, the profile generation process 300 can be implemented by the user profile module 210 of the media application 200 illustrated in FIG. 2. The profile generation process 300 can begin with a decision 302 that determines whether a user request for profile generation has been received. When the decision 302 determines that there has not been a user request for profile generation, the profile generation process 300 can await such a user request. Once the decision 302 determines that a user request for profile generation has been received, a default profile can be generated 304. In one embodiment, a default profile can be initially generated 304. By producing a default profile, a user is able to have a profile with little or no effort. The profile generation process 300 thus makes profile generation very user friendly so as to facilitate profile generation. However, instead of utilizing the default profile, a user can decide to produce a customized profile. Hence, the profile generation process 300 can also include a decision 306 that determines whether a customized profile is desired. When the decision 306 determines that a user desires to provide a customized profile, the profile generation process 300 permits the user to produce 308 a customized profile. The user can produce a customized profile in a variety of different ways. For example, the user can choose one or more features to provide in their profile. These features can pertain to text, images, audio, video and/or machine-readable code. The features can be static or dynamic. Examples of dynamic features are small application programs, such as widgets, which are typically dedicated to particular purposes. Hence, a user can select one or more application programs (e.g., widgets) to include within their profile.
Following the block 308, or directly following the decision 306 when a customized profile is not to be produced, a Universal Resource Locator (URL) for the profile can be created 310. The URL can be provided to others so that they can easily access the profile. In addition, a privacy level can be set 312 for the profile. Following the block 312, the profile generation process 300 can end. FIG. 4A is an illustration of an exemplary user profile 400 according to one embodiment of the invention. The exemplary user profile 400 pertains to a particular user of a media system, such as the media system 100 illustrated in FIG. 1. The exemplary user profile 400 can, for example, be produced by the profile generation process 300 and can thus represent a default profile or a customized profile. The exemplary user profile 400 can include a static portion 402 that contains static content, and a dynamic portion 404 that contains dynamic content. The static portion 402 can include information such as user name 406 and user location 408. The static portion 402 can also include an image 414 (e.g., a photo), which is typically chosen by the user. The dynamic portion 404 can include one or more dynamic components. These components can be automatically chosen or user-chosen. In one implementation, one or more of the dynamic components can be small application programs, such as widgets. Widgets are small specialized GUI applications that provide some visual information and/or easy access to frequently used functions. Such widgets can be referred to as desktop widgets or applets. For example, with respect to the exemplary user profile 400 illustrated in FIG. 4A, the dynamic portion 404 of the exemplary user profile 400 includes a My Collection widget 416, a My Favorite Songs widget 418 and a My Now Playing widget 420. Although the static portion 402 and the dynamic portion 404 are separated in FIG. 4A, these regions can be intermingled.
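The profile generation flow of FIG. 3 can be sketched as follows: a default profile is produced first, optionally customized, then a URL for the profile is created and a privacy level is set. All field names, default values and the URL scheme here are assumptions for illustration, not details from the specification.

```python
def generate_profile(user_name, customizations=None, privacy="private"):
    """Sketch of FIG. 3: default profile, optional customization, URL, privacy."""
    # Default profile generated with little or no user effort (block 304).
    profile = {"name": user_name, "location": "", "widgets": []}
    if customizations:
        # User-chosen features, e.g. widgets (block 308).
        profile.update(customizations)
    # Create a URL so others can easily access the profile (block 310).
    profile["url"] = "https://profiles.example.com/" + user_name.lower()
    # Set a privacy level for the profile (block 312).
    profile["privacy"] = privacy
    return profile

p = generate_profile("JohnDoe", {"widgets": ["My Now Playing"]})
print(p["url"])  # -> https://profiles.example.com/johndoe
```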
FIG. 4B is an illustration of an exemplary media playback interface 440 according to one embodiment of the invention. The exemplary media playback interface 440 is provided by a media application when playing back a media item of interest to its user. The exemplary media playback interface 440 includes a media source region 442, a track listing region 444 and a remote listening region 446. The media source region 442 can list one or more selectable media sources. As examples, the selectable media sources can be a Library (locally stored media), a Media Store (remotely available media), or removable media storage (e.g., compact disc, DVD, memory card). A user can also organize media using playlists including media from any of the different media sources. The track listing region 444 can display a list of the media items associated with the selected media source from the media source region 442. However, in one embodiment, selection of the Media Store source causes the track listing region 444 to be replaced with a media store browser for displaying data provided by a remotely located online media store. The exemplary media playback interface 440 also supports remote listening. Remote listening is the ability to listen to the same media item that some other user is listening to at another location. In this regard, the remote listening region 446 is provided in the exemplary media playback interface 440. The remote listening region 446 displays information concerning media being played by one or more other remote users. As shown in FIG. 4B, for a given remote user, the remote listening region 446 can provide now playing information 448 pertaining to the given remote user. In the exemplary embodiment shown in FIG. 4B, the now playing information 448 can include a user identifier 450 for the given remote user and a media identifier 452. The user identifier 450 can be a name of the given remote user (e.g., “John Doe”).
The media identifier 452 is an indication of the media item being played on the client device associated with the given remote user. As an example, the media identifier 452 can pertain to metadata for the media item. In the example shown in FIG. 4B, the media identifier 452 includes title and artist for the media item (“Rock With U” by Janet Jackson). Additionally, the now playing information 448 can also include a control 454 that can link to a profile for the given remote user. In one embodiment, to exchange now playing information 448 or profile information, it can be required that the local user have established a relationship and/or given permission for such exchange. For example, in one implementation those users deemed “friends” are able to exchange now playing information 448 or profile information. The now playing information 448 can further include a Tune-In control 456 and a Buy control 458. The Tune-In control 456 and the Buy control 458 are typically virtual buttons within (or proximate to) the now playing information 448. The Tune-In control 456, when selected, can initiate playing of the media item indicated in the now playing information 448. In other words, on selection of the Tune-In control 456, remote listening is activated and the local user can listen to the same media item as being played by the given remote user. The system can also support locally listening to a series of media items that correspond to those being played by the given remote user. The Buy control 458, when selected, can initiate purchase of the media item indicated in the now playing information 448. For example, when the Buy control 458 is selected, the media application can interact with an online media store (e.g., the online media store 120) to purchase the media item indicated in the now playing information. Once purchased, media content for the media item can be delivered to the client device associated with the user (purchaser).
Although the remote listening region 446 illustrated in FIG. 4B displays now playing information 448 for a single remote user, it should be understood that the remote listening region 446 could similarly present multiple instances of the now playing information 448, one instance for each of a plurality of different remote users. In one embodiment, the one or more remote users that are candidates for remote listening can be specified or influenced by the user. Also, a control 460 can be selected by the user to disable remote listening when the user is not interested in remotely listening to any of the media items that might be played by others. The control 460, when selected, can remove or minimize the remote listening region 446. As noted above, the now playing information 448 can include the control 454 that can link to a profile for the given remote user. In one embodiment, selection of the control 454 can cause a profile for the given remote user to be presented. FIG. 4C is an illustration of an exemplary remote user profile 460 according to one embodiment of the invention. The exemplary remote user profile 460 pertains to a given remote user. Here, the profile for the given remote user is presented to the local user so that the local user can view the profile of the given remote user. For example, in this example, the given remote user is “John Doe” and the profile for such user was previously generated as discussed above with reference to FIG. 4A. The widgets associated with the profile can present updated information (e.g., media collections, favorite media items, now playing information). In other embodiments, the remote listening region 446 illustrated in FIG. 4B can further or alternatively display user interface elements that facilitate user actions (e.g., gifting, adding to a wish list, telling a friend, or adding to one's user profile) with respect to the one or more media items indicated in the now playing information 448.
FIG. 5 is a flow diagram of a media playback monitoring process 500 according to one embodiment of the invention. The media playback monitoring process 500 can, for example, be performed by a media application operating on a client device. More particularly, the media playback monitoring process 500 can be performed by the media playback monitor 206 illustrated in FIG. 2. The media playback monitoring process 500 can begin with a decision 502. The decision 502 can determine whether media playback is active. When the decision 502 determines that media playback is not active, the media playback monitoring process 500 can await media playback activation. When the decision 502 determines that media playback is active, a decision 504 can determine whether media listening is enabled with respect to the application program performing the media playback monitoring process 500. In one embodiment, the determination of whether media listening is enabled can depend upon user settings or preferences. When the decision 504 determines that media listening is not enabled, the media playback monitoring process 500 can return to repeat the decision 502 and subsequent blocks. On the other hand, when the decision 504 determines that media listening is enabled, a media playback active message can be sent 506 to a media listening server (e.g., the media listening server 116). As an example, the media playback active message can inform the media listening server that the application program operating on the client device is playing back media. In addition, current media playback information can be sent 508 to the media listening server. For example, the current media playback information provides media playback status to the media listening server. A decision 510 can then determine whether a media playback update is needed.
When the decision 510 determines that a media playback update is needed, the media playback monitoring process 500 can return to repeat the block 508 so that current media playback information can again be sent 508 to the media listening server. Alternatively, when the decision 510 determines that a media playback update is not needed, a decision 512 can determine whether media playback is inactive. When the decision 512 determines that media playback is still active, the media playback monitoring process 500 can return to repeat the decision 510. On the other hand, when the decision 512 determines that media playback is inactive, a media playback inactive message can be sent 514 to the media listening server. As an example, the media playback inactive message can inform the media listening server that the application program operating on the client device is no longer playing back media. Following the block 514, the media playback monitoring process 500 can end. As noted above, the decision 504 can determine whether media listening is enabled for remote media listening. As an example, the remote media listening can be enabled for all “friends” or selected “friends”. Also, even if settings or preferences are generally set to permit remote media listening, the user can operate to exclude (e.g., hide) certain media items from being remotely listened to. Still further, the user can decide to “blacklist” certain artists, songs or albums such that they are never available for remote listening. FIGS. 6A and 6B are flow diagrams of a remote listening process 600 according to one embodiment of the invention. The remote listening process 600 can be performed by a media application, such as any of the media applications 108, 110 or 112 illustrated in FIG. 1. More particularly, the remote listening process 600 can be performed by the now playing module 208 illustrated in FIG. 2. The remote listening process 600 can request 602 available remote listening targets and now playing information from a media listening server.
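For illustration, the monitoring flow of FIG. 5 (decisions and blocks 502 through 514) can be sketched as a simple client-side loop. The sketch below is not part of the described embodiment: the MediaPlaybackMonitor class, its player/server interfaces, and the polling-based update check are hypothetical names and assumptions introduced only to make the sequence of messages concrete.

```python
import time

class MediaPlaybackMonitor:
    """Illustrative sketch of the monitoring flow of FIG. 5 (process 500).

    The player/server interfaces are hypothetical: the description above
    specifies the decisions and messages (502-514), not a concrete API.
    """

    def __init__(self, player, server, poll_interval=1.0):
        self.player = player          # assumed: is_playing(), status(), listening_enabled
        self.server = server          # assumed: send(message, payload=None)
        self.poll_interval = poll_interval

    def run_once(self):
        # Decision 502: is media playback active?
        if not self.player.is_playing():
            return
        # Decision 504: is remote media listening enabled (user settings)?
        if not self.player.listening_enabled:
            return
        # Block 506: inform the media listening server that playback is active.
        self.server.send("playback_active")
        # Blocks 508/510/512: send current playback info until playback stops.
        while self.player.is_playing():
            self.server.send("now_playing", self.player.status())
            time.sleep(self.poll_interval)
        # Block 514: playback is no longer active.
        self.server.send("playback_inactive")
```

In this reading, the "media playback update" of decision 510 is approximated by a fixed polling interval; an event-driven client could equally send an update only when the playback status actually changes.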
The media listening server is, for example, the media listening server 116 illustrated in FIG. 1. The available remote listening targets are those other media applications operating on other client devices that are currently playing media and have authorized the requesting user to obtain the now playing information. Next, a decision 604 determines whether a response to the request has been received. When the decision 604 determines that a response to the request has not yet been received, the remote listening process 600 can await such a response. Once the decision 604 determines that a response has been received, available remote listening targets and now playing information can be presented 606. For example, a display device associated with the client device that is performing the remote listening process 600 can present the available remote listening targets and the now playing information. Next, a decision 608 determines whether one of the available remote listening targets has been selected by the user. When the decision 608 determines that a particular available remote listening target has not been selected, the remote listening process 600 awaits such a selection. On the other hand, when the decision 608 determines that a particular available remote listening target has been selected, now playing information can be requested 610 from a media listening server for the selected target. The now playing information can further include information regarding media items currently playing, previously played or soon to be played. For example, the now playing information can be provided as a playlist, of which some of the media items have played and others are scheduled to be played. The now playing information can then be presented 612. Media content for the now playing media item can be requested 614. Next, a decision 616 can determine whether media content has been received.
For example, the media content can be provided by the central media repository 118 illustrated in FIG. 1. In such case, the media content is centrally located and able to be delivered via the data network 114 to any of a large number of different client devices. In one implementation, the delivery mechanism operates to stream the media content to the requesting client devices via the data network 114. As a result, the media content for the media items is not persistently stored on the recipient client devices. When the decision 616 determines that the media content has been received, the media content can be presented 618. For example, the media content can be played at the recipient client device. For instance, if the media content pertains to an audio recording, the audio recording can be played at the client device. Next, a decision 620 determines whether an update should be performed. An update can be performed periodically or intelligently. For example, the update 620 can be periodically performed so that the now playing information for the client device is relatively up to date. Alternatively, the update can be intelligently performed, such as when the media item currently playing has completed or when a next media item starts playing. In any case, when the decision 620 determines that an update is to be requested, the remote listening process 600 can return to repeat the block 610 so that now playing information can again be requested. Alternatively, when the decision 620 determines that an update is not yet to be requested, a decision 622 determines whether the remote listening process 600 is done. When the decision 622 determines that the remote listening process 600 is not done, the remote listening process 600 can return to repeat the decision 620 so that when it is an appropriate time to update the now playing information, the appropriate processing can be carried out.
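For illustration, the remote listening flow of blocks 602 through 622 described above can be sketched as a client loop. The RemoteListeningClient class and its server/UI interfaces below are hypothetical, introduced only to show the ordering of the requests; the described embodiment defines the flow diagram, not a wire protocol or API.

```python
class RemoteListeningClient:
    """Illustrative sketch of the remote listening flow of FIGS. 6A and 6B.

    The server/ui collaborators are assumed interfaces, not part of the
    described embodiment.
    """

    def __init__(self, server, ui):
        self.server = server   # assumed facade over the media listening server/repository
        self.ui = ui           # assumed: presents targets, now playing info, and media

    def listen(self, pick_target):
        # Blocks 602-606: request and present the available remote targets.
        targets, now_playing = self.server.get_targets()
        self.ui.show_targets(targets, now_playing)
        # Decision 608: the user selects one of the available targets.
        target = pick_target(targets)
        done = False
        while not done:
            # Blocks 610-612: refresh and present now playing info for the target.
            info = self.server.get_now_playing(target)
            self.ui.show_now_playing(info)
            # Blocks 614-618: fetch (e.g., stream) and present the media content.
            content = self.server.get_content(info["current_item"])
            self.ui.play(content)
            # Decisions 620/622: loop for updates until the user is done.
            done = self.ui.is_done()
```

Because the content is fetched anew on each pass, nothing is persistently stored at the recipient client, consistent with the streaming delivery described above.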
Alternatively, when the decision 622 determines that the remote listening process 600 is done, the remote listening process 600 can end. FIG. 7 shows an exemplary computer system 700 suitable for use with the invention. The methods, processes and/or graphical user interfaces discussed above can be provided by a computer system. The computer system 700 includes a display monitor 702 having a single or multi-screen display 704 (or multiple displays), a cabinet 706, a keyboard 708, and a mouse 710. The cabinet 706 houses a processing unit (or processor), system memory and a hard drive (not shown). The cabinet 706 also houses a drive 712, such as a DVD, CD-ROM or floppy drive. The drive 712 can also be a removable hard drive, a Flash or EEPROM device, etc. Regardless, the drive 712 may be utilized to store and retrieve software programs incorporating computer code that implements some or all aspects of the invention, data for use with the invention, and the like. Although CD-ROM 714 is shown as an exemplary computer readable storage medium, other computer readable storage media, including floppy disk, tape, Flash or EEPROM memory, memory card, system memory, and hard drive, may be utilized. In one implementation, a software program for the computer system 700 is provided in the system memory, the hard drive, the drive 712, the CD-ROM 714 or other computer readable storage medium and serves to incorporate the computer code that implements some or all aspects of the invention. The various aspects, embodiments, implementations or features of the invention can be used separately or in any combination. Digital media assets (i.e., media items) can pertain to audio (e.g., songs, audio books, podcasts), videos (e.g., movies, music videos) or images (e.g., photos), as different types of media assets. Digital media assets also include any combinations of these different types of media assets with other data. The invention can be implemented by software, hardware, or a combination of hardware and software.
The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium generally include read-only memory and random-access memory. More specific examples of computer readable medium are tangible and include Flash memory, EEPROM memory, memory card, CD-ROM, DVD, hard drive, magnetic tape, and optical data storage device. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. The advantages of the invention are numerous. Different embodiments or implementations may, but need not, yield one or more of the following advantages. One advantage of certain embodiments of the invention is that remote users can remotely listen to digital media assets being listened to by a local user. Another advantage of certain embodiments of the invention is that other users receive recommendations of digital media assets through remote media listening. These recommendations can serve to encourage purchase of such recommended digital media assets. The many features and advantages of the present invention are apparent from the written description. Further, since numerous modifications and changes will readily occur to those skilled in the art, the invention should not be limited to the exact construction and operation as illustrated and described. Hence, all suitable modifications and equivalents may be resorted to as falling within the scope of the invention. | 32,637 |
11943280

DETAILED DESCRIPTION

In the following description, methods, configurations, and related apparatuses are disclosed for network edge and core service dimensioning utilizing artificial intelligence (AI) techniques and data processing. As an overview, the technological solutions disclosed herein integrate MEC with various types of IoT or Fog/Edge Computing networking implementations with specific forms of dynamic network slicing and resource utilization management. These may benefit a variety of use cases, such as fifth generation (5G) network communications among automotive devices, including those use cases termed as vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X). As is understood, MEC architectures offer application developers and content providers cloud-computing capabilities and an IT service environment at the edge of the network. This environment offers ultra-low latency and high bandwidth throughput as well as real-time access to radio network information that may be leveraged by applications. MEC technology permits flexible and rapid deployments of innovative applications and services towards mobile subscribers, enterprises, or vertical segments. The following configurations provide an enhanced network architecture that allows Data Center Cloud/Core Mobile Services and Applications to be processed closer to the end-user, thereby reducing latencies, allocating computing cycles, and reducing network congestion for 5G subscribers. The process of allocating and distributing applications (or portions of an application, e.g., threads, services, microservices, lambdas, etc.) across an entire network slice (end-to-end, premise to core) is known as dimensioning. Dimensioning may also include distributing applications or portions of applications across a layer in a network.
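As a toy illustration of dimensioning, and not a definitive implementation, the sketch below places portions of an application onto tiers of an end-to-end slice according to per-service latency budgets. The tier names, their latency figures, the example service names, and the first-fit placement policy are all invented for illustration.

```python
# Hypothetical per-tier latencies (ms) from the end user, for illustration only.
TIER_LATENCY_MS = {"premise": 1, "edge": 5, "core": 20}

def dimension(services):
    """Place each service (a portion of an application) on the deepest tier
    that still meets its latency budget -- one naive reading of
    'dimensioning' an end-to-end (premise to core) network slice.

    services: dict of service name -> latency budget in ms.
    """
    placement = {}
    for name, budget_ms in services.items():
        # Prefer the core (most aggregated resources), fall back toward the edge.
        for tier in ("core", "edge", "premise"):
            if TIER_LATENCY_MS[tier] <= budget_ms:
                placement[name] = tier
                break
        else:
            raise ValueError(f"no tier can meet {name}'s {budget_ms} ms budget")
    return placement
```

For example, under these assumed numbers, a 2 ms collision-warning service lands on the premise tier while a 50 ms map-update service lands in the core; a real dimensioning decision would also weigh congestion, power, and SLA constraints as discussed below.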
Although the following examples are provided with specific reference to a MEC installation, the following systems and techniques may be implemented in, or augment, virtualized environments which may be implemented within various types of MEC, network function virtualization (NFV), or fully virtualized 5G network environments. As with most MEC installations, the goal with the present configurations is to bring the application endpoints as close to the end user (e.g., vehicular) environment, or endpoints, as possible and to dynamically adjust compute resources as well as resources used by one or more network (e.g., 5G) slices. The following configurations resolve issues related to dimensioning 5G Services across a 5G Network slice dynamically to meet latency, congestion, power, and service provider service level agreement (SLA) requirements. Also, in specific examples, the following configurations use AI/machine learning (ML) based inferences and learning algorithms based on real-time hardware usage heuristics, culminating in a heat map that indicates the optimum “phased” approach to dimensioning workloads across a particular 5G network slice. The present techniques and configurations may be utilized in connection with many aspects of current networking systems, but are provided with reference to IoT, MEC, and NFV deployments. The present techniques and configurations specifically may be (but are not required to be) relevant to the standards and approaches published in ETSI GS MEC-003 “Mobile Edge Computing (MEC); Framework and Reference Architecture” (e.g., V2.0.3); ETSI GS NFV-SEC 013 “Network Functions Virtualization (NFV) Release 3; Security; Security Management and Monitoring” (e.g., v. 3.1.1) and related MEC, NFV, or networked operational implementations.
However, while the present techniques and configurations may provide significant benefits to MEC architectures and other IoT device network architectures, the applicability of the present techniques and configurations may be extended to any number of edge computing devices or fog computing platforms. The following provides a detailed discussion of these techniques within specific systems and services, but which are applicable to the larger context of IoT, Fog/interconnected networks, and Edge computing deployments. Further, the disclosed MEC architectures and service deployment examples provide one illustrative example of a Fog device or Fog system, but many other combinations and layouts of devices and systems located at the edge of a network may be provided. Further, the techniques disclosed herein may relate to other IoT and network communication standards and configurations, and other intermediate processing entities and architectures. FIG. 1 illustrates a MEC communication infrastructure 100A with a common core network, the MEC infrastructure including slice management, resource management, and traceability functions, according to an example. The connections represented by some form of a dashed line (as noted in the legend in FIG. 1) may be defined according to a specification from an ETSI MEC standards family. The MEC communication infrastructure 100A can include entities from a MEC-based architecture as well as entities from a third-generation partnership project (3GPP) based architecture. For example, the MEC communication infrastructure 100A can include a plurality of MEC hosts such as MEC hosts 102 and 104, a MEC platform manager 106, and a MEC orchestrator 108. The 3GPP based entities can include a centralized core network (CN) 110 coupled to an application server 114 via the network 112 (e.g., the Internet), as well as radio access networks (RANs) represented by base stations 148 and 150 coupled to corresponding user equipments (UEs) 152 and 154.
The base stations 148 and 150 can include evolved Node-Bs (eNBs), Next Generation Node-Bs (gNBs), or other types of base stations operating in connection with a 3GPP wireless family of standards or another type of wireless standard. In some aspects, the MEC communication infrastructure 100A can be implemented by different network operators in the same country and/or in different countries, using different network traffic types. For example, the radio access network associated with base station 148 (with a coverage area 149) can be within a first public land mobile network (PLMN) (i.e., associated with a first mobile services provider or operator and a first network traffic type), and base station 150 (with a coverage area 151) can be within a second public land mobile network (PLMN) (i.e., associated with a second mobile services provider or operator and a second network traffic type). As used herein, the terms “mobile services provider” and “mobile services operator” are interchangeable. In this regard, the MEC communication infrastructure 100A can be associated with a multi-operator scenario composed of two coverage areas 149 and 151 where communication services (e.g., V2X services) can be provided, with each coverage area being operated by a mobile services operator. Additionally, each of the UEs 152 and 154 can be configured for network slice operation, where each UE can use one or more types of network slices configured by, e.g., the core network 110 using the slice management functionality 164. Techniques disclosed herein can be used to provide resource management and resource usage traceability (e.g., via management modules 160 and 162) in connection with computing and communication resources used by the UEs and/or the core network in connection with configuring and using network slices (e.g., 5G slices).
In some aspects, techniques disclosed herein can be used to dynamically manage resources used for communication slices (e.g., deploy new slices, re-assign resources from one slice to another, close one or more slices, and so forth). The solid line connections in FIG. 1 represent non-MEC connections, such as utilizing 3GPP cellular network connections S1, S1-AP, etc. Other connection techniques (e.g., protocols) and connections may also be used. Accordingly, in the scenario of FIG. 1, the system entities (e.g., MEC orchestrator 108, MEC platform manager 106, MEC hosts 102, 104) are connected by MEC (or NFV) logical links (indicated with dashed lines), in addition to network infrastructure links (e.g., a 5G Long Term Evolution (LTE) network, such as provided among UEs 152, 154, eNBs 148, 150, a CN site 110, etc.) (indicated with solid lines). A further connection to cloud services (e.g., an application server 114 accessed via the network 112) may also be connected via backhaul network infrastructure links. Techniques disclosed herein apply to 2G/3G/4G/LTE/LTE-A (LTE Advanced) and 5G networks, with the examples and aspects disclosed using 4G/LTE networks. In aspects, the CN 110 may be an evolved packet core (EPC) network, a NextGen Packet Core (NPC) network (e.g., a 5G network), or some other type of CN (e.g., as illustrated in reference to FIGS. 2A-3E). In EPC (Evolved Packet Core), which is associated with 4G/LTE, the CN 110 can include a serving gateway (S-GW or SGW) 138, a packet data network (PDN) gateway (P-GW or PGW) 140, a mobility management entity (MME) 142, and a home subscriber server (HSS) 144 coupled to a V2X control function 146. In 5G, the Core Network is referred to as the NextGen Packet Network (NPC). In NPC (and as illustrated in FIGS. 3A-3D), the S/P-GW is replaced with a user plane function (UPF), and the MME is replaced with two individual functional components, the Access Management Function (AMF) and the Session Management Function (SMF).
The 4G HSS is split into different entities in 5G: the Authentication Server Function (AUSF) and the Universal Data Management (UDM), with the subscription data being managed via the Universal Data Management (UDM) function. In EPC, the S1 interface can be split into two parts: the S1-U (user plane) interface, which carries traffic data between the eNBs 148, 150 and the S-GW 138 via the MEC hosts 102, 104, and the S1-AP (control plane) interface, which is a signaling interface between the eNBs 148, 150 and the MME 142. The MME 142 may be similar in function to the control plane of legacy Serving General Packet Radio Service (GPRS) Support Nodes (SGSN). The MME 142 may manage mobility aspects in access such as gateway selection and tracking area list management. The HSS 144 may comprise a database for network users, including subscription-related information to support the network entities' handling of communication sessions, including subscription information associated with V2X communications. The CN 110 may comprise one or several HSSs 144, depending on the number of mobile subscribers, on the capacity of the equipment, on the organization of the network, etc. For example, the HSS 144 can provide support for routing/roaming, authentication, authorization (e.g., V2X communication authorization), naming/addressing resolution, location dependencies, etc. The S-GW 138 may terminate the S1 interface 413 towards the RANs of eNBs 148, 150, and route data packets between the RANs and the CN 110. In addition, the S-GW 138 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include charging and some policy enforcement. The P-GW 140 may terminate an SGi interface toward a PDN.
The P-GW 140 may route data packets between the RANs and external networks, such as a network including the application server (AS) 114 (alternatively referred to as application function (AF)), via an Internet Protocol (IP) interface (e.g., an interface to the network 112 coupled to the AS 114). The P-GW 140 can also communicate data to other external networks, which can include the Internet, IP multimedia subsystem (IPS) network, and other networks. Generally, the application server 114 may be an element offering applications that use IP bearer resources with the core network (e.g., UMTS Packet Services (PS) domain, LTE PS data services, etc.). The application server 114 can also be configured to support one or more communication services (e.g., Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UEs 152, 154 via the CN 110 and one or more of the MEC hosts 102, 104. The P-GW 140 may further include a node for policy enforcement and charging data collection. A Policy and Charging Rules Function (PCRF) (not illustrated in FIG. 1) can be the policy and charging control element of the CN 110. In a non-roaming scenario, there may be a single PCRF in the Home Public Land Mobile Network (HPLMN) associated with a UE's Internet Protocol Connectivity Access Network (IP-CAN) session. In a roaming scenario with a local breakout of traffic, there may be two PCRFs associated with a UE's IP-CAN session: a Home PCRF (H-PCRF) within an HPLMN and a Visited PCRF (V-PCRF) within a Visited Public Land Mobile Network (VPLMN). The PCRF may be communicatively coupled to the application server 114 via the P-GW 140. The application server 114 may signal the PCRF to indicate a new service flow and select the appropriate Quality of Service (QoS) and charging parameters.
The V2X control function 146 is used in connection with authorizing UEs to use V2X services based on HSS information (e.g., subscription information managed by the HSS 144), assisting one or more UEs in obtaining the network address of an application server (e.g., 114) or a V2X application server, as well as providing V2X configuration parameters for direct communication (i.e., device-to-device communications). The interface for direct device-to-device communication is referred to as PC5. The PC5 parameters may be provided by the V2X control function 146 to one or more UEs for purposes of configuring V2X communication between the UEs. The slice management function can be used for configuring one or more network slices (e.g., 5G slices) for use by UEs or other devices within the communication architecture 100A. In some aspects, the communication architecture further includes an artificial intelligence (AI)-based resource management (AIBRM) module 160 and a blockchain traceability management (BCTM) module 162, which modules can provide functionalities in connection with dynamic slice configuration, dynamic resource management, and resource traceability within the architecture 100A. The AIBRM module 160 may comprise suitable circuitry, logic, interfaces and/or code and can be configured to provide resource management functions. More specifically, the AIBRM module 160 can use AI-based (e.g., machine learning) techniques to dynamically assess resource usage within the architecture 100A and provide a resource allocation recommendation (e.g., to the CN 110 or the MEC platform manager 106) for dynamic allocation (or re-allocation) of computing and communication resources based on current resource usage, past resource usage, or intended (future) resource usage (e.g., based on previous dynamic slice allocations or current slice allocation requests).
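A minimal sketch of such a recommendation function is shown below. It uses an exponential moving average of past usage as a stand-in for the machine-learning techniques attributed to the AIBRM module 160; the function name, its parameters, and the headroom factor are illustrative assumptions, not part of the described architecture.

```python
def recommend_allocation(usage_history, current_request, alpha=0.5, headroom=1.2):
    """Toy stand-in for an AI-based resource recommendation.

    Smooths past resource usage with an exponential moving average (EMA),
    takes the larger of the prediction and the current slice request, and
    adds a headroom factor. A real AIBRM module 160 is described only as
    using machine-learning techniques; this EMA is merely a placeholder.
    """
    ema = usage_history[0]
    for sample in usage_history[1:]:
        ema = alpha * sample + (1 - alpha) * ema  # weight recent usage more
    predicted = max(ema, current_request)
    return predicted * headroom
```

The shape of the inputs mirrors the text above: past resource usage (the history), current usage or a current slice allocation request, and an output recommendation that an orchestration entity (e.g., the CN 110 or MEC platform manager 106) could act on.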
The BCTM module 162 may comprise suitable circuitry, logic, interfaces and/or code and can be configured to provide resource usage traceability using blockchain techniques. Blockchain technology offers a way to record transactions or any digital interaction that is designed to be secure, transparent, resistant to outages, auditable, and efficient. A blockchain is a digital, distributed transaction ledger that is stored and maintained on multiple systems belonging to multiple entities sharing identical information. This creates a web that shares the responsibility of storing, maintaining, and validating the information present on the blockchain. Any authorized participant can review entries, and users can update information stored on the blockchain only if the network consensus algorithm validates it. Information stored in a blockchain can never be deleted and serves as a verifiable and accurate record of every transaction made within the ledger. In this regard, blockchain technology offers the following key functionalities that can be used within the architecture 100A for resource traceability and resource usage traceability: fast transaction settlement (transactions are processed directly from peer to peer with fewer intermediaries; ledgers are automatically updated; and both sides of the transaction are executed simultaneously); low cost (resources used for validating transactions are computing power, which can be inexpensive; little to no reconciliation work is required and little to no use of intermediaries is required); transparent and auditable ledger (all transactions are visible to authorized participants and are traceable within the ledger); and reliability (transactions processed within the blockchain do not have a point of failure and are irrevocable).
In some aspects, the BCTM module 162 can use blockchain technology to provide traceability of user equipment slice requests, current resource usage by one or more slices, dynamic slice allocations and reallocations, as well as slice resource usage changes due to the dynamic slice allocations and reallocations. In some aspects, resource management and traceability functions provided by the AIBRM module 160 and the BCTM module 162 can be incorporated within one or more MEC hosts (e.g., as MEC AIBRM-BCTM module 121 within MEC host 102 or module 131 within MEC host 104). In some aspects, the MEC AIBRM-BCTM module can be incorporated within the MEC platform or can be incorporated as a MEC app instantiated by the MEC platform (e.g., MEC app 116A instantiated by the MEC platform using MEC hardware 123 and 133). In some aspects, resource management and traceability functions provided by the AIBRM module 160 and the BCTM module 162 can be provided by the MEC platform manager 106, the MEC orchestrator 108, and/or other modules within the MEC communication architecture 100A. The MEC hosts 102, . . . , 104 can be configured in accordance with the ETSI GS MEC-003 specification. The MEC host 102 can include a MEC platform 118, which can be coupled to one or more MEC applications (apps) such as MEC apps 116A, . . . , 116N (collectively, MEC app 116) and to MEC data plane 122. The MEC host 104 can include a MEC platform 126, which can be coupled to a MEC app 116 and MEC data plane 130. The MEC platform manager 106 can include a MEC platform element management module 132, MEC application rules and requirements management module 134, and MEC application lifecycle management module 136. The MEC host 102 also includes MEC hardware 123, such as network interfaces (e.g., network interface cards or NICs) 125A, . . . , 125N, one or more CPUs 127, and memory 129. Additional description of the MEC related entities 102, 104, 106, and 108 is provided hereinbelow in connection with FIG. 4. In some aspects, the MEC apps 116A, . . .
, 116N can each provide an NFV instance configured to process network connections associated with a specific network traffic type (e.g., 2G, 3G, 4G, 5G or another network traffic type). In this regard, the terms “MEC app” and “NFV” (or “MEC NFV”) are used interchangeably. Additionally, the terms “NFV” and “NFV instance” are used interchangeably. The MEC platform 118 can further include one or more schedulers 120A, . . . , 120N (collectively, a scheduler 120). Each of the schedulers 120A, . . . , 120N may comprise suitable circuitry, logic, interfaces, and/or code and is configured to manage instantiation of NFVs 116A, . . . , 116N (collectively, an NFV 116). More specifically, a scheduler 120 can select a CPU (e.g., one of the CPUs 127) and/or other network resources for executing/instantiating the NFV 116. Additionally, since each of the NFVs 116A, . . . , 116N is associated with processing a different network traffic type, the scheduler 120 can further select a NIC (e.g., from the available NICs 125A, . . . , 125N) for use by the NFV 116. Each of the schedulers 120A, . . . , 120N can have a different type of SLA and QoS requirements, based on the network traffic type handled by the associated NFV. For example, each traffic type (e.g., 2G, 3G, 4G, 5G, or any other type of wireless connection to the MEC host) has an associated class of service (CloS) (e.g., 2G_low, 2G_mid, 2G_high, etc.), which can be preconfigured in the MEC host, defining CloS-specific resource requirements (i.e., I/O, memory, processing power, etc.) for different loads of that particular traffic type. FIG. 1 further illustrates MEC host 104 including MEC hardware 133, MEC QoS manager 131, and schedulers 128A, . . . , 128N, which can have the same functionality as MEC hardware 123, MEC AIBRM-BCTM module 121, and schedulers 120A, . . . , 120N described in connection with MEC host 102.
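For illustration, a scheduler decision of this kind might be sketched as follows, assuming a preconfigured class-of-service table and a first-fit selection policy. The table entries, function interface, and policy are hypothetical; the description above only requires that CloS-specific resource requirements be preconfigured in the MEC host.

```python
# Hypothetical preconfigured class-of-service table, in the style of the
# "2G_low, 2G_mid, 2G_high" entries mentioned above (values invented).
CLOS_REQUIREMENTS = {
    "5G_high": {"cpus": 4, "nic_gbps": 25},
    "5G_low":  {"cpus": 1, "nic_gbps": 10},
    "4G_mid":  {"cpus": 2, "nic_gbps": 10},
}

def schedule_nfv(clos, free_cpus, nics):
    """Sketch of a scheduler 120 decision: pick CPUs and a NIC that satisfy
    the CloS-specific requirements for an NFV's traffic type.

    free_cpus: list of idle CPU ids (e.g., among CPUs 127).
    nics: dict of NIC id -> line rate in Gbps (e.g., among NICs 125A-125N).
    The first-fit selection policy is an assumption for illustration.
    """
    need = CLOS_REQUIREMENTS[clos]
    if len(free_cpus) < need["cpus"]:
        raise RuntimeError("not enough CPUs for " + clos)
    nic = next((n for n, rate in nics.items() if rate >= need["nic_gbps"]), None)
    if nic is None:
        raise RuntimeError("no NIC fast enough for " + clos)
    return {"cpus": free_cpus[:need["cpus"]], "nic": nic}
```

A production scheduler would also weigh the per-scheduler SLA and QoS requirements noted above rather than taking the first resources that fit.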
Even though MEC AIBRM-BCTM module 121 is illustrated as being implemented within the MEC platform 118, the present disclosure is not limited in this regard and one or more components of the MEC AIBRM-BCTM module 121 can be implemented within other modules of the MEC host 102, the MEC orchestrator 108, or the MEC platform manager 106. FIG. 2A illustrates an example Cellular Internet-of-Things (CIoT) network architecture with a MEC host using a MEC QoS manager, according to an example. Referring to FIG. 2A, the CIoT architecture 200A can include the UE 202 and the RAN 204 coupled to a plurality of core network entities. In some aspects, the UE 202 can be a machine-type communication (MTC) UE. The CIoT network architecture 200A can further include a mobile services switching center (MSC) 206, MME 208, a serving GPRS support node (SGSN) 210, a S-GW 212, an IP-Short-Message-Gateway (IP-SM-GW) 214, a Short Message Service-Service Center (SMS-SC)/gateway mobile service center (GMSC)/Interworking MSC (IWMSC) 216, MTC interworking function (MTC-IWF) 222, a Service Capability Exposure Function (SCEF) 220, a gateway GPRS support node (GGSN)/Packet-GW (P-GW) 218, a charging data function (CDF)/charging gateway function (CGF) 224, a home subscriber server (HSS)/a home location register (HLR) 226, short message entities (SME) 228, MTC authorization, authentication, and accounting (MTC AAA) server 230, a service capability server (SCS) 232, and application servers (AS) 234 and 236. In some aspects, the SCEF 220 can be configured to securely expose services and capabilities provided by various 3GPP network interfaces. The SCEF 220 can also provide means for the discovery of the exposed services and capabilities, as well as access to network capabilities through various network application programming interfaces (e.g., API interfaces to the SCS 232). FIG. 2A further illustrates various reference points between different servers, functions, or communication nodes of the CIoT network architecture 200A.
Some example reference points related to MTC-IWF 222 and SCEF 220 include the following: Tsms (a reference point used by an entity outside the 3GPP network to communicate with UEs used for MTC via SMS), Tsp (a reference point used by an SCS to communicate with the MTC-IWF for related control plane signaling), T4 (a reference point used between MTC-IWF 222 and the SMS-SC 216 in the HPLMN), T6a (a reference point used between SCEF 220 and serving MME 208), T6b (a reference point used between SCEF 220 and serving SGSN 210), T8 (a reference point used between the SCEF 220 and the SCS/AS 234, 236), S6m (a reference point used by MTC-IWF 222 to interrogate HSS/HLR 226), S6n (a reference point used by MTC-AAA server 230 to interrogate HSS/HLR 226), and S6t (a reference point used between SCEF 220 and HSS/HLR 226). In some aspects, the UE 202 can be configured to communicate with one or more entities within the CIoT architecture 200A via the RAN 204 (e.g., CIoT RAN) according to a Non-Access Stratum (NAS) protocol, and using one or more radio access configurations, such as a narrowband air interface, for example, based on one or more communication technologies, such as Orthogonal Frequency-Division Multiplexing (OFDM) technology. As used herein, the term “CIoT UE” refers to a UE capable of CIoT optimizations, as part of a CIoT communications architecture. In some aspects, the NAS protocol can support a set of NAS messages for communication between the UE 202 and an Evolved Packet System (EPS) Mobile Management Entity (MME) 208 and SGSN 210. In some aspects, the CIoT network architecture 200A can include a packet data network, an operator network, or a cloud service network, having, for example, among other things, servers such as the Service Capability Server (SCS) 232, the AS 234, or one or more other external servers or network components.
The RAN204can be coupled to the HSS/HLR servers226and the AAA servers230using one or more reference points including, for example, an air interface based on an S6a reference point, and configured to authenticate/authorize CIoT UE202to access the CIoT network. The RAN204can be coupled to the CIoT network architecture200A using one or more other reference points including, for example, an air interface corresponding to an SGi/Gi interface for 3GPP accesses. The RAN204can be coupled to the SCEF220using, for example, an air interface based on a T6a/T6b reference point, for service capability exposure. In some aspects, the SCEF220may act as an API GW towards a third-party application server such as server234. The SCEF220can be coupled to the HSS/HLR226and MTC AAA230servers using an S6t reference point and can further expose an Application Programming Interface to network capabilities. In certain examples, one or more of the CIoT devices disclosed herein, such as the UE202, the RAN204, etc., can include one or more other non-CIoT devices, or non-CIoT devices acting as CIoT devices, or having functions of a CIoT device. For example, the UE202can include a smartphone, a tablet computer, or one or more other electronic devices acting as a CIoT device for a specific function, while having other additional functionality. In some aspects, the RAN204can include a CIoT enhanced Node B (CIoT eNB) communicatively coupled to a CIoT Access Network Gateway (CIoT GW). In certain examples, the RAN204can include multiple base stations (e.g., CIoT eNBs or other types of base stations) connected to the CIoT GW, which can include MSC206, MME208, SGSN210, or S-GW212. In certain examples, the internal architecture of RAN204and the CIoT GW may be left to the implementation and need not be standardized. In some aspects, the CIoT architecture200A can include one or more MEC hosts that can provide a communication link between different components of the CIoT architecture.
For example, MEC host102can be coupled between the RAN204and the S-GW212. In this case, the MEC host102can use one or more NFV instances to process wireless connections with the RAN204and the S-GW212. The MEC host102can also be coupled between the P-GW218and the application server236. In this case, the MEC host102can use the one or more NFV instances to process wireless connections originating from or terminating at the P-GW218and the application server236. In some aspects, the MEC host102includes a MEC AIBRM-BCTM module121, which is configured according to techniques disclosed herein to perform resource management and traceability functions. FIG.2Billustrates an example Service Capability Exposure Function (SCEF) used by the CIoT network architecture ofFIG.2A, according to an example. Referring toFIG.2B, the SCEF220can be configured to expose services and capabilities provided by 3GPP network interfaces to external third-party service provider servers hosting various applications. In some aspects, a 3GPP network, such as the CIoT architecture200A, can expose the following services and capabilities: a home subscriber server (HSS)256A, a policy and charging rules function (PCRF)256B, a packet flow description function (PFDF)256C, a MME/SGSN256D, a broadcast multicast service center (BM-SC)256E, a serving call server control function (S-CSCF)256F, a RAN congestion awareness function (RCAF)256G, and one or more other network entities256H. The above-mentioned services and capabilities of a 3GPP network can communicate with the SCEF220via one or more interfaces as illustrated inFIG.2B. The SCEF220can be configured to expose the 3GPP network services and capabilities to one or more applications running on one or more service capability server (SCS)/application server (AS), such as SCS/AS254A,254B, . . . ,254N. Each of the SCS/AS254A-254N can communicate with the SCEF220via application programming interfaces (APIs)252A,252B,252C, . . . ,252N, as seen inFIG.2B.
FIG.3Ais a simplified diagram of an exemplary Next-Generation (NG) system architecture with a MEC host using a MEC QoS manager, according to an example. Referring toFIG.3A, the NG system architecture300A includes NG-RAN304and a 5G network core (5GC)306. The NG-RAN304can include a plurality of NG-RAN nodes, for example, gNBs308and310, and NG-eNBs312and314. The gNBs308/310and the NG-eNBs312/314can be communicatively coupled to the UE302via a wireless connection. The core network306(e.g., a 5G core network or 5GC) can include an access and mobility management function (AMF)316or a user plane function (UPF)318. The AMF316and the UPF318can be communicatively coupled to the gNBs308/310and the NG-eNBs312/314via NG interfaces. More specifically, in some aspects, the gNBs308/310and the NG-eNBs312/314can be connected to the AMF316by the N2 interface, and to the UPF318by the N3 interface. The gNBs308/310and the NG-eNBs312/314can be coupled to each other via Xn interfaces. In some aspects, a gNB308can include a node providing New Radio (NR) user plane and control plane protocol termination towards the UE and can be connected via the NG interface to the 5GC306. In some aspects, an NG-eNB312/314can include a node providing evolved universal terrestrial radio access (E-UTRA) user plane and control plane protocol terminations towards the UE and can be connected via the NG interface to the 5GC306. In some aspects, any of the gNBs308/310and the NG-eNBs312/314can be implemented as a base station (BS), a mobile edge server, a small cell, or a home eNB, although aspects are not so limited. In some aspects, the NG system architecture300A can include one or more MEC hosts that can provide a communication link between different components of the NG architecture. For example, MEC host102can provide an interface between the AMF316(or UPF318) in the 5GC306and the application server114.
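The NG interface relationships described above (RAN nodes reaching the AMF over N2, the UPF over N3, and one another over Xn) can be sketched as a small lookup; the node names and the function below are illustrative assumptions, not specification code:

```python
# Hypothetical sketch of the NG system interface topology described above:
# RAN nodes reach the AMF over N2, the UPF over N3, and one another over Xn.
RAN_NODES = {"gNB-308", "gNB-310", "NG-eNB-312", "NG-eNB-314"}

def interface_between(a: str, b: str) -> str:
    """Return the interface name connecting two nodes, per the text above."""
    if a in RAN_NODES and b in RAN_NODES:
        return "Xn"  # RAN nodes are coupled to each other via Xn
    pair = {a, b}
    if pair & RAN_NODES:
        if "AMF" in pair:
            return "N2"  # RAN node to AMF
        if "UPF" in pair:
            return "N3"  # RAN node to UPF
    raise ValueError(f"no interface described between {a} and {b}")
```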
The MEC host102can use one or more NFV instances to process wireless connections with the 5GC306and the application server114. The MEC host102can also be coupled between one or more of the gNBs (e.g., gNB308) and the AMF/UPF in the 5GC306. In this case, the MEC host102can use the one or more NFV instances to process wireless connections originating from or terminating at the gNB308and the 5GC306. In some aspects, the MEC host102includes a MEC AIBRM-BCTM module121, which is configured according to techniques disclosed herein to provide resource management and traceability functions. In some aspects, the MEC AIBRM-BCTM module121can be incorporated as a standalone server or an application running on a virtual machine, which is accessible to the 5G core306as well as the MEC host102. In some aspects, the 5G core306can provide slice management functionalities performed by the slice management module164, as disclosed herein. In some aspects, the system architecture300A (which can be the same as100A) can be a 5G-NR system architecture providing network slicing and supporting policy configuration and enforcement between network slices as per service level agreements (SLAs) within the RAN304(or204). Additionally, and as illustrated in greater detail inFIG.3E, the RAN304can provide separation of central unit control plane (CU-CP) and central unit user plane (CU-UP) functionalities while supporting network slicing (e.g., using resource availability and latency information communicated via different RAN interfaces, such as E1, F1-C, and F1-U interfaces). In some aspects, the UE302(or152) can communicate RRC signaling to the gNB308for establishing a connection with an entity (e.g., UPF318) of the 5GC306. The gNB308can include separate distributed units (DUs), CU-CP, and CU-UP entities (as illustrated inFIG.3E).
The CU-CP entity can obtain resource utilization and latency information from the DU and CU-UP entities and select a DU/CU-UP pair based on such information for purposes of configuring the network slice. Network slice configuration information associated with the configured network slice (including resources for use while communicating via the slice) can be provided to the UE302for purposes of initiating data communication with the 5GC UPF entity318using the network slice. FIG.3Billustrates an exemplary functional split between next generation radio access network (NG-RAN) and the 5G Core network (5GC) in connection with the NG system architecture ofFIG.3A, according to an example.FIG.3Billustrates some of the functionalities the gNBs308/310and the NG-eNBs312/314can perform within the NG-RAN304, as well as the AMF316, the UPF318, and a Session Management Function (SMF)326(not illustrated inFIG.3A) within the 5GC306. In some aspects, the 5GC306can provide access to a network330(e.g., the Internet) to one or more devices via the NG-RAN304. 
In some aspects, the gNBs308/310and the NG-eNBs312/314can be configured to host the following functions: functions for Radio Resource Management (e.g., inter-cell radio resource management320A, radio bearer control320B, connection mobility control320C, radio admission control320D, measurement and measurement reporting configuration for mobility and scheduling320E, and dynamic allocation of resources to UEs in both uplink and downlink (scheduling)320F); IP header compression; encryption and integrity protection of data; selection of an AMF at UE attachment when no routing to an AMF can be determined from the information provided by the UE; routing of User Plane data towards UPF(s); routing of Control Plane information towards AMF; connection setup and release; scheduling and transmission of paging messages (originated from the AMF); scheduling and transmission of system broadcast information (originated from the AMF or Operation and Maintenance); transport level packet marking in the uplink; session management; support of network slicing; QoS flow management and mapping to data radio bearers; support of UEs in RRC INACTIVE state; distribution function for non-access stratum (NAS) messages; radio access network sharing; dual connectivity; and tight interworking between NR and E-UTRA, to name a few. 
In some aspects, the AMF316can be configured to host the following functions, for example: NAS signaling termination; NAS signaling security322A; access stratum (AS) security control; inter-core network (CN) node signaling for mobility between 3GPP access networks; idle state/mode mobility handling322B, including mobile device (such as a UE) reachability (e.g., control and execution of paging retransmission); registration area management; support of intra-system and inter-system mobility; access authentication; access authorization including check of roaming rights; mobility management control (subscription and policies); support of network slicing; or SMF selection, among other functions. The UPF318can be configured to host the following functions, for example: mobility anchoring324A (e.g., anchor point for Intra-/Inter-RAT mobility); packet data unit (PDU) handling324B (e.g., external PDU session point of interconnect to data network); packet routing and forwarding; packet inspection and user plane part of policy rule enforcement; traffic usage reporting; uplink classifier to support routing traffic flows to a data network; branching point to support multi-homed PDU session; QoS handling for user plane, e.g., packet filtering, gating, UL/DL rate enforcement; uplink traffic verification (SDF to QoS flow mapping); or downlink packet buffering and downlink data notification triggering, among other functions. The Session Management Function (SMF)326can be configured to host the following functions, for example: session management; UE IP address allocation and management328A; selection and control of user plane function (UPF); PDU session control328B, including configuring traffic steering at UPF318to route traffic to proper destination; control part of policy enforcement and QoS; or downlink data notification, among other functions. FIG.3CandFIG.3Dillustrate exemplary non-roaming 5G system architectures with a MEC host using a MEC QoS manager, according to an example.
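As a summary aid, the division of responsibilities among the AMF, UPF, and SMF described above can be captured in a lookup table. The following Python sketch uses paraphrased labels (not 3GPP identifiers) and is illustrative only:

```python
# Condensed, illustrative mapping of 5GC network functions to a few of the
# responsibilities listed above (paraphrased labels, not 3GPP identifiers).
NF_FUNCTIONS = {
    "AMF": ["NAS signaling termination", "NAS signaling security",
            "idle-mode mobility handling", "registration area management",
            "access authentication", "SMF selection"],
    "UPF": ["mobility anchoring", "PDU handling",
            "packet routing and forwarding", "QoS handling for user plane",
            "traffic usage reporting"],
    "SMF": ["session management", "UE IP address allocation",
            "UPF selection and control", "PDU session control",
            "downlink data notification"],
}

def which_nf(function: str) -> str:
    """Return the network function hosting the named responsibility."""
    for nf, funcs in NF_FUNCTIONS.items():
        if function in funcs:
            return nf
    raise KeyError(function)
```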
Referring toFIG.3C, an exemplary 5G system architecture300C is illustrated in a reference point representation. More specifically, UE302can be in communication with RAN304as well as one or more other 5G core (5GC) network entities. The 5G system architecture300C includes a plurality of network functions (NFs), such as access and mobility management function (AMF)316, session management function (SMF)326, policy control function (PCF)332, application function (AF)352, user plane function (UPF)318, network slice selection function (NSSF)334, authentication server function (AUSF)336, and unified data management (UDM)338. The UPF318can provide a connection to a data network (DN)354, which can include, for example, operator services, Internet access, or third-party services. The AMF316can be used to manage access control and mobility and can also include network slice selection functionality. The SMF326can be configured to set up and manage various sessions according to a network policy. The UPF318can be deployed in one or more configurations according to the desired service type. The PCF332can be configured to provide a policy framework using network slicing, mobility management, and roaming (similar to PCRF in a 4G communication system). The UDM338can be configured to store subscriber profiles and data (similar to an HSS in a 4G communication system), such as V2X subscription information or another type of subscription information for services available within the architecture300C. In some aspects, the 5G system architecture300C includes an IP multimedia subsystem (IMS)342as well as a plurality of IP multimedia core network subsystem entities, such as call session control functions (CSCFs). More specifically, the IMS342includes a CSCF, which can act as a proxy CSCF (P-CSCF)344, a serving CSCF (S-CSCF)346, an emergency CSCF (E-CSCF) (not illustrated inFIG.3C), or an interrogating CSCF (I-CSCF)348.
The P-CSCF344can be configured to be the first contact point for the UE302within the IMS342. The S-CSCF346can be configured to handle the session states in the network, and the E-CSCF can be configured to handle certain aspects of emergency sessions such as routing an emergency request to the correct emergency center or public safety answering point (PSAP). The I-CSCF348can be configured to function as the contact point within an operator's network for all IMS connections destined to a subscriber of that network operator, or a roaming subscriber currently located within that network operator's service area. In some aspects, the I-CSCF348can be connected to another IP multimedia network350, e.g., an IMS operated by a different network operator. In some aspects, the UDM338can be coupled to an application server340, which can include a telephony application server (TAS) or another application server (AS) including a MEC host. The AS340can be coupled to the IMS342via the S-CSCF346or the I-CSCF348. In some aspects, the 5G system architecture300C can use one or more MEC hosts to provide an interface and offload processing of wireless communication traffic. For example, and as illustrated inFIG.3C, the MEC host102can provide a connection between the RAN304and UPF318in the core network. The MEC host102can use one or more NFV instances instantiated on virtualization infrastructure within the host to process wireless connections to and from the RAN304and the UPF318. Additionally, the MEC host102can use the MEC AIBRM-BCTM module121and techniques disclosed herein to provide resource management and traceability functions. FIG.3Dillustrates an exemplary 5G system architecture300D in a service-based representation. System architecture300D can be substantially similar to (or the same as) system architecture300C.
In addition to the network entities illustrated inFIG.3C, system architecture300D can also include a network exposure function (NEF)356and a network repository function (NRF)358. In some aspects, 5G system architectures can be service-based and interaction between network functions can be represented by corresponding point-to-point reference points Ni (as illustrated inFIG.3C) or as service-based interfaces (as illustrated inFIG.3D). A reference point representation shows that an interaction can exist between corresponding NF services. For example,FIG.3Cillustrates the following reference points: N1 (between the UE302and the AMF316), N2 (between the RAN304and the AMF316), N3 (between the RAN304and the UPF318), N4 (between the SMF326and the UPF318), N5 (between the PCF332and the AF352), N6 (between the UPF318and the DN354), N7 (between the SMF326and the PCF332), N8 (between the UDM338and the AMF316), N9 (between two UPFs318), N10 (between the UDM338and the SMF326), N11 (between the AMF316and the SMF326), N12 (between the AUSF336and the AMF316), N13 (between the AUSF336and the UDM338), N14 (between two AMFs316), N15 (between the PCF332and the AMF316in case of a non-roaming scenario, or between the PCF332and a visited network and AMF316in case of a roaming scenario), N16 (between two SMFs; not shown), and N22 (between AMF316and NSSF334). Other reference point representations not shown inFIG.3Ccan also be used. In some aspects, as illustrated inFIG.3D, service-based representations can be used to represent network functions within the control plane that enable other authorized network functions to access their services. 
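The point-to-point reference points listed above lend themselves to a tabular summary. The following Python sketch (an illustrative aid, not specification code) maps each Ni reference point to the pair of network functions it connects:

```python
# Illustrative table of the point-to-point reference points listed above,
# mapping each Ni name to the pair of network functions it connects.
REFERENCE_POINTS_5G = {
    "N1": ("UE", "AMF"), "N2": ("RAN", "AMF"), "N3": ("RAN", "UPF"),
    "N4": ("SMF", "UPF"), "N5": ("PCF", "AF"), "N6": ("UPF", "DN"),
    "N7": ("SMF", "PCF"), "N8": ("UDM", "AMF"), "N9": ("UPF", "UPF"),
    "N10": ("UDM", "SMF"), "N11": ("AMF", "SMF"), "N12": ("AUSF", "AMF"),
    "N13": ("AUSF", "UDM"), "N14": ("AMF", "AMF"), "N15": ("PCF", "AMF"),
    "N16": ("SMF", "SMF"), "N22": ("AMF", "NSSF"),
}

def connected_functions(nf: str) -> set:
    """Set of NFs that share a reference point with the given NF."""
    peers = set()
    for a, b in REFERENCE_POINTS_5G.values():
        if nf == a:
            peers.add(b)
        elif nf == b:
            peers.add(a)
    return peers
```

For example, querying the table for the AUSF recovers its two peers (the AMF over N12 and the UDM over N13) directly from the listing above.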
In this regard, 5G system architecture300D can include the following service-based interfaces: Namf364A (a service-based interface exhibited by the AMF316), Nsmf364B (a service-based interface exhibited by the SMF326), Nnef364C (a service-based interface exhibited by the NEF356), Npcf364D (a service-based interface exhibited by the PCF332), Nudm364E (a service-based interface exhibited by the UDM338), Naf364F (a service-based interface exhibited by the AF352), Nnrf364G (a service-based interface exhibited by the NRF358), Nnssf364H (a service-based interface exhibited by the NSSF334), and Nausf364I (a service-based interface exhibited by the AUSF336). Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) not shown inFIG.3Dcan also be used. In some aspects, the NEF356can provide an interface to a MEC host such as MEC host102, which can be used to process wireless connections with the RAN304. FIG.3Eillustrates components of an exemplary 5G-NR architecture with control unit control plane (CU-CP)-control unit user plane (CU-UP) separation, according to an example. Referring toFIG.3E, the 5G-NR architecture300E can include a 5G core (5GC)306and NG-RAN304. The NG-RAN304can include one or more gNBs such as gNB308and310. In some aspects, network elements of the NG-RAN304may be split into central and distributed units, and different central and distributed units, or components of the central and distributed units, may be configured for performing different protocol functions (e.g., different protocol functions of the protocol layers). In some aspects, the gNB308can comprise or be split into one or more of a gNB Central Unit (gNB-CU)322E and gNB Distributed Unit(s) (gNB-DU)324E,326E. Additionally, the gNB308can comprise or be split into one or more of a gNB-CU-Control Plane (gNB-CU-CP)328E and a gNB-CU-User Plane (gNB-CU-UP)330E.
The gNB-CU322E is a logical node configured to host the radio resource control (RRC), service data adaptation protocol (SDAP), and packet data convergence protocol (PDCP) protocols of the gNB, or the RRC and PDCP protocols of the E-UTRA-NR gNB (en-gNB), and controls the operation of one or more gNB-DUs. The gNB-DU (e.g.,324E or326E) is a logical node configured to host the radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the gNB128A,128B or en-gNB, and its operation is at least partly controlled by gNB-CU322E. In some aspects, one gNB-DU (e.g.,324E) can support one or multiple cells. The gNB-CU322E comprises a gNB-CU-Control Plane (gNB-CU-CP) entity328E and a gNB-CU-User Plane entity (gNB-CU-UP)330E. The gNB-CU-CP328E is a logical node configured to host the RRC and the control plane part of the PDCP protocol of the gNB-CU322E for an en-gNB or a gNB. The gNB-CU-UP330E is a logical (or physical) node configured to host the user plane part of the PDCP protocol of the gNB-CU322E for an en-gNB, and the user plane part of the PDCP protocol and the SDAP protocol of the gNB-CU322E for a gNB. The gNB-CU322E and the gNB-DUs324E,326E can communicate via the F1 interface, and the gNB308can communicate with the gNB-CU322E via the Xn-C interface. The gNB-CU-CP328E and the gNB-CU-UP330E can communicate via the E1 interface(s). Additionally, the gNB-CU-CP328E and the gNB-DUs324E,326E can communicate via the F1-C interface, and the gNB-DUs324E,326E and the gNB-CU-UP330E can communicate via the F1-U interface. In some aspects, the gNB-CU322E terminates the F1 interface connected with the gNB-DUs324E,326E, and in other aspects, the gNB-DUs324E,326E terminate the F1 interface connected with the gNB-CU322E. In some aspects, the gNB-CU-CP328E terminates the E1 interface connected with the gNB-CU-UP330E and the F1-C interface connected with the gNB-DUs324E,326E.
In some aspects, the gNB-CU-UP330E terminates the E1 interface connected with the gNB-CU-CP328E and the F1-U interface connected with the gNB-DUs324E,326E. In some aspects, the F1 interface is a point-to-point interface between endpoints and supports the exchange of signaling information between endpoints and data transmission to the respective endpoints. The F1 interface can support control plane and user plane separation and separate the Radio Network Layer and the Transport Network Layer. In some aspects, the E1 interface is a point-to-point interface between a gNB-CU-CP and a gNB-CU-UP and supports the exchange of signaling information between endpoints. The E1 interface can separate the Radio Network Layer and the Transport Network Layer, and in some aspects, the E1 interface may be a control interface not used for user data forwarding. Referring to the NG-RAN304, the gNBs308,310of the NG-RAN304may communicate with the 5GC306via the NG interfaces, and can be interconnected to other gNBs via the Xn interface. In some aspects, the gNBs308,310can be configured to support FDD mode, TDD mode, or dual mode operation. In certain aspects, for EN-DC, the S1-U interface and an X2 interface (e.g., X2-C interface) for a gNB, consisting of a gNB-CU and gNB-DUs, can terminate in the gNB-CU. In some aspects, gNB310, supporting CP/UP separation, includes a single CU-CP entity328E, multiple CU-UP entities330E, and multiple DU entities324E, . . . ,326E, with all entities being configured for network slice operation. As illustrated inFIG.3E, each DU entity324E, . . . ,326E can have a single connection with the CU-CP328E via an F1-C interface. Each DU entity324E, . . . ,326E can be connected to multiple CU-UP entities330E using F1-U interfaces. The CU-CP entity328E can be connected to multiple CU-UP entities330E via E1 interfaces. Each DU entity324E, . . .
,326E can be connected to one or more UEs, and the CU-UP entities330E can be connected to a user plane function (UPF) and the 5G core306. In some aspects, entities within the gNB310can perform one or more procedures associated with interfaces or radio bearers within the NG-RAN304with the separation of CP/UP. For example, NG-RAN304can support the following procedures associated with network slice configuration:
E1 interface setup: this procedure allows setting up the E1 interface and includes the exchange of the parameters needed for interface operation. The E1 setup is initiated by the CU-CP328E;
E1 interface reset: this procedure allows resetting the E1 interface, including changes in the configuration parameters. The E1 interface reset is initiated by either the CU-CP328E or the CU-UP330E;
E1 error indication: this procedure allows reporting detected errors in one incoming message. The E1 error indication is initiated by either the CU-CP328E or the CU-UP330E;
E1 load information: this procedure allows the CU-UP330E to inform the CU-CP328E of the prevailing load condition periodically. The same procedure can also be used to indicate an overload of the CU-UP330E with an overload status (Start/Stop);
E1 configuration update: this procedure supports updates in the CU-UP330E configuration, such as capacity changes;
Data Radio Bearer (DRB) setup: this procedure allows the CU-CP328E to set up DRBs in the CU-UP, including the security key configuration and the quality of service (QoS) flow to DRB mapping configuration;
DRB modification: this procedure allows the CU-CP328E to modify DRBs in the CU-UP, including the modification of the security key configuration and the modification of the QoS flow to DRB mapping configuration;
DRB release: this procedure allows the CU-CP328E to release DRBs in the CU-UP; and
Downlink Data Notification (DDN): this procedure allows the CU-UP330E to request the CU-CP328E to trigger a paging procedure to support the RRC Inactive state.
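As an illustrative summary, the E1 and DRB procedures above can be tabulated together with the entity or entities permitted to initiate each one. The sketch below paraphrases the procedure names; the initiator of the E1 configuration update is not stated in the text and is assumed here to be either entity:

```python
# Illustrative summary of the E1/DRB procedures above and the entity (or
# entities) that may initiate each one, per the text (names paraphrased).
E1_PROCEDURE_INITIATORS = {
    "E1 interface setup": {"CU-CP"},
    "E1 interface reset": {"CU-CP", "CU-UP"},
    "E1 error indication": {"CU-CP", "CU-UP"},
    "E1 load information": {"CU-UP"},
    "E1 configuration update": {"CU-CP", "CU-UP"},  # initiator assumed
    "DRB setup": {"CU-CP"},
    "DRB modification": {"CU-CP"},
    "DRB release": {"CU-CP"},
    "Downlink Data Notification": {"CU-UP"},
}

def may_initiate(entity: str, procedure: str) -> bool:
    """Check whether the given entity may initiate the named procedure."""
    return entity in E1_PROCEDURE_INITIATORS[procedure]
```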
In some aspects, the NG-RAN304can be configured to support E1 interface management procedures for network slicing including resource availability indication from the CU-UP330E, resource management in CU-UP330E, and latency indication from the CU-UP330E. In some aspects, the NG-RAN304can be configured to support F1-C interface management procedures for network slicing including resource availability indication from the DU entities324E, . . . ,326E, resource management in the DU entities324E, . . . ,326E, and latency indication from the DU entities324E, . . . ,326E. In some aspects, the NG-RAN304can be configured to support latency measurements over the F1-U interface so that the UP elements including DU entities (324E, . . . ,326E) and CU-UP entities330E are able to communicate latency information to other neighboring UP elements. In this regard, network slicing can be supported in the NG-RAN304with the separation of CP/UP. In some aspects, slice-level isolation and improved resource utilization can be provided by the central RRM in the CU-CP328E. In some aspects, procedures associated with network slicing include operations and communications over the E1 interface, the F1-C interface, and the F1-U interface. With these procedures, the CU-CP328E can select the appropriate DU and CU-UP entities to serve the specific network slice request associated with a certain service level agreement (SLA). In some aspects, the procedure over the E1 interface can include information collection from the CU-UP entities330E and resource management in the CU-CP328E. Specifically, the information collection can include resource availability indication and latency indication, while the resource management can include resource allocation and resource release. The CU-CP328E can be configured to collect the information from the CU-UP entities330E periodically or issue an on-demand query based on a network slice request.
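The information collection and entity selection described above can be sketched as a scoring pass over DU and CU-UP reports. In the following Python sketch, the report fields, units, and the lowest-combined-latency selection rule are all assumptions for illustration:

```python
# Hypothetical sketch of CU-CP selection of a DU/CU-UP pair for a network
# slice, using resource-availability and latency reports collected over the
# F1-C and E1 interfaces (field names and the selection rule are assumed).
def select_du_cuup_pair(du_reports, cuup_reports, required_capacity, max_latency_ms):
    """Pick the (DU, CU-UP) pair with the lowest combined latency among
    candidates that can satisfy the requested capacity and latency SLA."""
    candidates = []
    for du in du_reports:
        for cuup in cuup_reports:
            latency = du["latency_ms"] + cuup["latency_ms"]
            if (du["free_capacity"] >= required_capacity
                    and cuup["free_capacity"] >= required_capacity
                    and latency <= max_latency_ms):
                candidates.append((latency, du["id"], cuup["id"]))
    if not candidates:
        return None  # no pair can meet the requested SLA
    _, du_id, cuup_id = min(candidates)
    return du_id, cuup_id
```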
In some aspects, a resource availability indication procedure can allow the CU-UP entities330E to inform the CU-CP328E of the availability of resources to process a network slicing request. For example, the indication of the available resource can assist the CU-CP328E in determining whether the specific CU-UP can serve the specific network slice request associated with a certain SLA. In some aspects, a resource allocation procedure can allow the CU-CP328E to allocate the resource in the CU-UP330E that is associated with a specific slice. Upon the reception of a request for a network slice creation, the CU-CP328E can select the CU-UP330E (e.g., one of the CU-UP entities) following the indicated SLA and allocate the resource in the selected CU-UP to the network slice. In some aspects, a resource release procedure can allow the CU-CP328E to release the resource in the CU-UP that is assigned to an established network slice. Upon the removal of the slice, the CU-CP328E can notify the corresponding CU-UP to release the resource used by the removed network slice. FIG.4illustrates a MEC network architecture400modified for supporting slice management, resource management, and traceability functions, according to an example.FIG.4specifically illustrates a MEC architecture400with MEC hosts402and404providing functionalities in accordance with the ETSI GS MEC-003 specification, with the shaded blocks used to indicate processing aspects for the MEC architecture configuration described herein in connection with slice management, resource management, and traceability functions. Specifically, enhancements to the MEC platform432and the MEC platform manager406may be used for providing slice management, resource management, and traceability functions within the MEC architecture400. This may include provisioning of one or more network slices, dynamic management of resources used by the network slices, as well as resource traceability functions within the MEC architecture.
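The allocation and release procedures described above amount to capacity bookkeeping in the CU-CP. The following sketch assumes, purely for illustration, a single scalar capacity figure per CU-UP and a first-fit selection policy:

```python
# Hypothetical CU-CP bookkeeping for the resource allocation and release
# procedures described above (single scalar capacity per CU-UP assumed).
class CuCpResourceManager:
    def __init__(self, cuup_capacity):
        # cuup_capacity: {cuup_id: available capacity units}
        self.free = dict(cuup_capacity)
        self.slices = {}  # slice_id -> (cuup_id, allocated units)

    def allocate(self, slice_id, demand):
        """Select a CU-UP that can serve the slice and reserve resources."""
        for cuup_id, free in self.free.items():
            if free >= demand:
                self.free[cuup_id] -= demand
                self.slices[slice_id] = (cuup_id, demand)
                return cuup_id
        return None  # no CU-UP can meet the requested demand

    def release(self, slice_id):
        """Return the resources of a removed slice to its CU-UP."""
        cuup_id, demand = self.slices.pop(slice_id)
        self.free[cuup_id] += demand
```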
Referring toFIG.4, the MEC network architecture400can include MEC hosts402and404, a virtualization infrastructure manager (VIM)408, an MEC platform manager406, an MEC orchestrator410, an operations support system412, a user app proxy414, a UE app418running on UE420, and CFS portal416. The MEC host402can include a MEC platform432with filtering rules control module440, a DNS handling module442, service registry438, and MEC services436. The MEC services436can include at least one scheduler437, which can be used to select resources for instantiating MEC apps (or NFVs)426and428upon virtualization infrastructure422. The MEC apps426and428can be configured to provide services430/431, which can include processing network communications traffic of different types associated with one or more wireless connections (e.g., connections to one or more RAN or core network entities as illustrated inFIGS.1-3D). The MEC hardware433and the at least one scheduler437can be similar to the MEC hardware123and the scheduler120discussed in connection withFIG.1. The MEC platform manager406can include MEC platform element management module444, MEC app rules and requirements management module446, and MEC app lifecycle management module448. The various entities within the MEC architecture400can perform functionalities as disclosed by the ETSI GS MEC-003 specification. In some aspects, UE420can be configured to communicate with one or more of the core networks482via one or more of the network slices480. In some aspects, the core networks482can use slice management functions (e.g., as provided by slice management module164) to dynamically configure slices480, including dynamically assigning a slice to a UE, reassigning a slice to a UE, dynamically allocating or reallocating resources used by one or more of the slices480, or performing other slice-related management functions.
One or more of the functions performed in connection with slice management can be initiated based on user requests (e.g., via a UE) or a request by a service provider. In some aspects, the slice management functions in connection with network slices480can be facilitated by AIBRM and BCTM resource management and traceability-related functions (provided by, e.g., MEC AIBRM-BCTM module434within the MEC host402or the MEC platform manager406). Additional aspects of network dimensioning/segmenting/slicing and resource management use cases are illustrated in connection withFIG.11,FIG.12,FIG.13, andFIG.14. FIG.5illustrates a MEC and FOG network topology500, according to an example. Referring toFIG.5, the network topology500can include a number of conventional networking layers that can be extended through the use of a MEC QoS manager discussed herein. Specifically, the relationships between endpoints (at endpoints/things network layer550), gateways (at gateway layer540), access or edge computing nodes (e.g., at neighborhood nodes layer530), core network or routers (e.g., at regional or central office layer520), may be represented through the use of data communicated via MEC hosts that use MEC QoS managers that can be located at various nodes within the topology500. A FOG network (e.g., established at gateway layer540) may represent a dense geographical distribution of near-user edge devices (e.g., FOG nodes), equipped with storage capabilities (e.g., to avoid the need to store data in cloud data centers), communication capabilities (e.g., rather than routed over the internet backbone), control capabilities, configuration capabilities, measurement and management capabilities (rather than controlled primarily by network gateways such as those in the LTE core network), among others.
In this context,FIG.5illustrates a general architecture that integrates a number of MEC and FOG nodes—categorized in different layers (based on their position, connectivity and processing capabilities, etc.), with each node implementing a MEC V2X API that can enable a MEC app or other entity of a MEC enabled node to communicate with other nodes. It will be understood, however, that such FOG nodes may be replaced or augmented by edge computing processing nodes. FOG nodes may be categorized depending on the topology and the layer where they are located. In contrast, from a MEC standard perspective, each FOG node may be considered as a MEC host, or a simple entity hosting a MEC app and a lightweight MEC platform. In an example, a MEC or FOG node may be defined as an application instance, connected to or running on a device (MEC host) that is hosting a MEC platform. Here, the application consumes MEC services and is associated with a MEC host in the system. The nodes may be migrated, associated with different MEC hosts, or consume MEC services from other (e.g., local or remote) MEC platforms. In contrast to this approach, traditional V2V applications rely on remote cloud data storage and processing to exchange and coordinate information. A cloud data arrangement allows for long-term data collection and storage but is not optimal for highly time-varying data, such as a collision, traffic light change, etc., and may fail in attempting to meet latency challenges, such as stopping a vehicle when a child runs into the street. In some aspects, the MEC or FOG facilities can be used to locally create, maintain, and destroy MEC or FOG nodes to host data exchanged via NFVs and using resources managed by a MEC QoS manager, based upon need. Depending on the real-time requirements in a vehicular communications context, a hierarchical structure of data processing and storage nodes can be defined.
Such a hierarchy can include, for example, local ultra-low-latency processing, regional storage and processing, as well as remote cloud data-center-based storage and processing. Key Performance Indicators (KPIs) may be used to identify where sensor data is best transferred and where it is processed or stored. This typically depends on the ISO layer dependency of the data. For example, lower layer (PHY, MAC, routing, etc.) data typically changes quickly and is better handled locally in order to meet latency requirements. Higher layer data, such as Application Layer data, is typically less time critical and may be stored and processed in a remote cloud data-center. In some aspects, the KPIs are metrics or operational parameters that can include spatial proximity to a V2X-related target event (e.g., accident, etc.); physical proximity to other objects (e.g., how much time is required to transfer data from one data or application object to another object); available processing power; or current load of the target (network) node and corresponding processing latency. In some aspects, the KPIs can be used to facilitate automated location and relocation of data in a MEC architecture. FIG.6illustrates the processing and storage layers in a MEC and FOG network600, according to an example. The illustrated data storage or processing hierarchy610relative to the cloud and fog/edge networks allows dynamic reconfiguration of elements to meet latency and data processing parameters. The lowest hierarchy level is on a vehicle-level. This level stores data on past observations or data obtained from other vehicles. The second hierarchy level is distributed storage across a number of vehicles. This distributed storage may change on short notice depending on vehicle proximity to each other or a target location (e.g., near an accident). The third hierarchy level is in a local anchor point, such as a MEC component, carried by a vehicle in order to coordinate vehicles in a pool of cars.
The fourth level of the hierarchy is storage shared across MEC components. For example, data is shared between distinct pools of vehicles that are in the range of each other. The fifth level of the hierarchy is fixed infrastructure storage, such as in road side units (RSUs). This level may aggregate data from entities in hierarchy levels 1-4. The sixth level of the hierarchy is storage across the fixed infrastructure. This level may, for example, be located in the Core Network of a telecommunications network, or an enterprise cloud. Other types of layers and layer processing may follow from this example. Even though techniques disclosed herein for network slicing, service dimensioning, and resource management are discussed in connection with MEC-related architectures where at least one MEC entity is present, the disclosure is not limited in this regard and the disclosed techniques may be used in architectures that do not use MEC entities. Thus, techniques associated with network slicing, service dimensioning, and resource management can be performed in non-MEC architectures as well. Likewise, although techniques disclosed herein are described in connection with a MEC architecture and 5G architecture, the disclosure is not limited in this regard and the disclosed techniques can be used with other types of wireless architectures (e.g., 2G, 3G, 4G, etc.) that use one or more MEC entities. 
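The KPI-driven placement across the six hierarchy levels described above can be sketched as follows. This is an illustrative sketch only: the latency budgets, level names, and load threshold are assumptions for the example, not values from any specification.

```python
# Illustrative sketch (not from any specification) of KPI-driven data
# placement across the six-level storage/processing hierarchy described
# above: lower levels serve tighter latency bounds, higher levels serve
# long-term, less time-critical data.

HIERARCHY = [
    (1, "vehicle-local storage",        0.001),  # assumed max latency budget (s)
    (2, "distributed vehicle storage",  0.010),
    (3, "local anchor point (MEC)",     0.050),
    (4, "shared MEC storage",           0.100),
    (5, "road side unit (RSU)",         0.500),
    (6, "core network / cloud",         float("inf")),
]

def place(latency_budget_s, node_load=0.0):
    """Pick the lowest hierarchy level whose latency bound fits the
    KPI budget; a heavily loaded node pushes data one level up."""
    for level, name, bound in HIERARCHY:
        if latency_budget_s <= bound:
            if node_load > 0.9 and level < 6:
                return HIERARCHY[level][1]  # next level up (list is 0-indexed)
            return name
    return HIERARCHY[-1][1]

print(place(0.0005))  # collision avoidance -> vehicle-local storage
print(place(30.0))    # application-layer analytics -> core network / cloud
```

The same decision could equally weight the other KPIs mentioned above (spatial proximity to a V2X event, transfer time between objects, available processing power); a single latency budget is used here only to keep the sketch short.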
Any of the radio links described herein may operate according to any one or more of the following radio communication technologies and/or standards including but not limited to: a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology, for example Universal Mobile Telecommunications System (UMTS), Freedom of Mobile Multimedia Access (FOMA), 3GPP Long Term Evolution (LTE), 3GPP Long Term Evolution Advanced (LTE Advanced), Code division multiple access 2000 (CDMA2000), Cellular Digital Packet Data (CDPD), Mobitex, Third Generation (3G), Circuit Switched Data (CSD), High-Speed Circuit-Switched Data (HSCSD), Universal Mobile Telecommunications System (Third Generation) (UMTS (3G)), Wideband Code Division Multiple Access (Universal Mobile Telecommunications System) (W-CDMA (UMTS)), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), High Speed Packet Access Plus (HSPA+), Universal Mobile Telecommunications System-Time-Division Duplex (UMTS-TDD), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), 3rd Generation Partnership Project Release 8 (Pre-4th Generation) (3GPP Rel. 8 (Pre-4G)), 3GPP Rel. 9 (3rd Generation Partnership Project Release 9), 3GPP Rel. 10 (3rd Generation Partnership Project Release 10), 3GPP Rel. 11 (3rd Generation Partnership Project Release 11), 3GPP Rel. 12 (3rd Generation Partnership Project Release 12), 3GPP Rel. 13 (3rd Generation Partnership Project Release 13), 3GPP Rel. 14 (3rd Generation Partnership Project Release 14), 3GPP Rel. 15 (3rd Generation Partnership Project Release 15), 3GPP Rel. 16 (3rd Generation Partnership Project Release 16), 3GPP Rel.
17 (3rd Generation Partnership Project Release 17) and subsequent Releases (such as Rel. 18, Rel. 19, etc.), 3GPP 5G, 3GPP LTE Extra, LTE-Advanced Pro, LTE Licensed-Assisted Access (LAA), MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UMTS Terrestrial Radio Access (E-UTRA), Long Term Evolution Advanced (4th Generation) (LTE Advanced (4G)), cdmaOne (2G), Code division multiple access 2000 (Third generation) (CDMA2000 (3G)), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (1st Generation) (AMPS (1G)), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Digital AMPS (2nd Generation) (D-AMPS (2G)), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile telephony system D), Public Automated Land Mobile (Autotel/PALM), ARP (Finnish for Autoradiopuhelin, “car radio phone”), NMT (Nordic Mobile Telephony), High capacity version of NTT (Nippon Telegraph and Telephone) (Hicap), Cellular Digital Packet Data (CDPD), Mobitex, DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Circuit Switched Data (CSD), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as 3GPP Generic Access Network, or GAN standard), Zigbee, Bluetooth®, Wireless Gigabit Alliance (WiGig) standard, mmWave standards in general (wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), technologies operating above 300 GHz and THz bands, (3GPP/LTE based or IEEE 802.11p and other) Vehicle-to-Vehicle (V2V) and Vehicle-to-X (V2X) and Vehicle-to-Infrastructure (V2I) and Infrastructure-to-Vehicle (I2V) communication
technologies, 3GPP cellular V2X, DSRC (Dedicated Short Range Communications) communication systems such as Intelligent-Transport-Systems and others (typically operating in 5850 MHz to 5925 MHz), the European ITS-G5 system (i.e., the European flavor of IEEE 802.11p based DSRC, including ITS-G5A (i.e., Operation of ITS-G5 in European ITS frequency bands dedicated to ITS for safety related applications in the frequency range 5.875 GHz to 5.905 GHz), ITS-G5B (i.e., Operation in European ITS frequency bands dedicated to ITS non-safety applications in the frequency range 5.855 GHz to 5.875 GHz), ITS-G5C (i.e., Operation of ITS applications in the frequency range 5.470 GHz to 5.725 GHz)), DSRC in Japan in the 700 MHz band (including 715 MHz to 725 MHz), etc. Aspects described herein can be used in the context of any spectrum management scheme including a dedicated licensed spectrum, unlicensed spectrum, (licensed) shared spectrum (such as LSA=Licensed Shared Access in 2.3-2.4 GHz, 3.4-3.6 GHz, 3.6-3.8 GHz and further frequencies and SAS=Spectrum Access System/CBRS=Citizens Broadband Radio Service in 3.55-3.7 GHz and further frequencies).
Applicable spectrum bands include IMT (International Mobile Telecommunications) spectrum as well as other types of spectrum/bands, such as bands with national allocation (including 450-470 MHz, 902-928 MHz (note: allocated for example in US (FCC Part 15)), 863-868.6 MHz (note: allocated for example in European Union (ETSI EN 300 220)), 915.9-929.7 MHz (note: allocated for example in Japan), 917-923.5 MHz (note: allocated for example in South Korea), 755-779 MHz and 779-787 MHz (note: allocated for example in China), 790-960 MHz, 1710-2025 MHz, 2110-2200 MHz, 2300-2400 MHz, 2.4-2.4835 GHz (note: it is an ISM band with global availability and it is used by the Wi-Fi technology family (11b/g/n/ax) and also by Bluetooth), 2500-2690 MHz, 698-790 MHz, 610-790 MHz, 3400-3600 MHz, 3400-3800 MHz, 3.55-3.7 GHz (note: allocated for example in the US for Citizens Broadband Radio Service), 5.15-5.25 GHz and 5.25-5.35 GHz and 5.47-5.725 GHz and 5.725-5.85 GHz bands (note: allocated for example in the US (FCC part 15), consists of four U-NII bands totaling 500 MHz of spectrum), 5.725-5.875 GHz (note: allocated for example in EU (ETSI EN 301 893)), 5.47-5.65 GHz (note: allocated for example in South Korea), 5925-7125 MHz and 5925-6425 MHz band (note: under consideration in US and EU, respectively), IMT-advanced spectrum, IMT-2020 spectrum (expected to include 3600-3800 MHz, 3.5 GHz bands, 700 MHz bands, bands within the 24.25-86 GHz range, etc.), spectrum made available under FCC's “Spectrum Frontier” 5G initiative (including 27.5-28.35 GHz, 29.1-29.25 GHz, 31-31.3 GHz, 37-38.6 GHz, 38.6-40 GHz, 42-42.5 GHz, 57-64 GHz, 71-76 GHz, 81-86 GHz and 92-94 GHz, etc.), the ITS (Intelligent Transport Systems) band of 5.9 GHz (typically 5.85-5.925 GHz) and 63-64 GHz, bands currently allocated to WiGig such as WiGig Band 1 (57.24-59.40 GHz), WiGig Band 2 (59.40-61.56 GHz) and WiGig Band 3 (61.56-63.72 GHz) and WiGig Band 4 (63.72-65.88 GHz), 57-64/66 GHz (e.g., having near-global designation for
Multi-Gigabit Wireless Systems (MGWS)/WiGig; in the US (FCC part 15) allocated as a total of 14 GHz of spectrum, while the EU (ETSI EN 302 567 and ETSI EN 301 217-2 for fixed P2P) allocates a total of 9 GHz of spectrum), the 70.2 GHz-71 GHz band, any band between 65.88 GHz and 71 GHz, bands currently allocated to automotive radar applications such as 76-81 GHz, and future bands including 94-300 GHz and above. Furthermore, the scheme can be used on a secondary basis on bands such as the TV White Space bands (typically below 790 MHz), where particularly the 400 MHz and 700 MHz bands are promising candidates. Besides cellular applications, specific applications for vertical markets may be addressed, such as PMSE (Program Making and Special Events), medical, health, surgery, automotive, low-latency, and drone applications. Aspects described herein can also implement a hierarchical application of the scheme by, e.g., introducing a hierarchical prioritization of usage for different types of users (e.g., low/medium/high priority, etc.), based on a prioritized access to the spectrum, e.g., with the highest priority given to tier-1 users, followed by tier-2, then tier-3 users, and so forth. Aspects described herein can also be applied to different Single Carrier or OFDM flavors (CP-OFDM, SC-FDMA, SC-OFDM, filter bank-based multicarrier (FBMC), OFDMA, etc.) and in particular 3GPP NR (New Radio) by allocating the OFDM carrier data bit vectors to the corresponding symbol resources. Some of the features in this document are defined for the network side, such as Access Points, eNodeBs, New Radio (NR) or next generation Node-Bs (gNodeB or gNB), such as used in the context of 3GPP fifth generation (5G) communication systems, etc. Still, a User Equipment (UE) may take this role as well and act as an Access Point, eNodeB, gNodeB, etc. Accordingly, some or all features defined for network equipment may be implemented by a UE or a mobile computing device.
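The hierarchical spectrum prioritization described above (tier-1 served before tier-2, tier-2 before tier-3, and so forth) can be sketched with a simple priority queue. The request model and class names here are hypothetical, used only to make the tiering concrete:

```python
import heapq

# Illustrative sketch of the tiered spectrum-access prioritization
# described above: lower tier number means higher priority, and arrival
# order is preserved within a tier. Not based on any specific standard.

class SpectrumScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserving arrival order within a tier

    def request(self, user, tier):
        """Record a pending spectrum-access request for a user at a tier."""
        heapq.heappush(self._queue, (tier, self._seq, user))
        self._seq += 1

    def grant_next(self):
        """Grant the channel to the highest-priority pending user."""
        if not self._queue:
            return None
        _tier, _seq, user = heapq.heappop(self._queue)
        return user

sched = SpectrumScheduler()
sched.request("iot-sensor", tier=3)
sched.request("public-safety", tier=1)
sched.request("mobile-broadband", tier=2)
print(sched.grant_next())  # -> public-safety
```

A production scheme (e.g., SAS/CBRS) also enforces interference protection and geographic constraints; the sketch isolates only the tier-ordering aspect.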
In further examples, the preceding examples of network communications and operations may be integrated with IoT and like device-based network architectures.FIG.7illustrates an example domain topology for respective IoT networks coupled through links to respective gateways. The IoT is a concept in which a large number of computing devices are interconnected to each other and to the Internet to provide functionality and data acquisition at very low levels. Thus, as used herein, an Edge/IoT processing device may include a semi-autonomous device performing a function, such as sensing or control, among others, in communication with other Edge/IoT processing devices and a wider network, such as the Internet. MEC use cases have been envisioned to integrate into a number of network and application settings, including those to support network arrangements of IoT deployments. Edge/IoT processing devices are physical or virtualized objects that may communicate on a network (typically at the edge or endpoint of a network) and may include sensors, actuators, and other input/output components, such as to collect data or perform actions from a real-world environment. For example, Edge/IoT processing devices may include low-powered devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide sensing, data, or processing functionality. Recently, Edge/IoT processing devices have become more popular and thus applications and use cases using these devices have proliferated. Often, Edge/IoT processing devices are limited in memory, size, or functionality, enabling larger numbers to be deployed for a similar cost to smaller numbers of larger devices. However, an Edge/IoT processing device may be a smartphone, laptop, tablet, PC, or other larger device. Further, an Edge/IoT processing device may be a virtual device, such as an application on a smartphone or another computing device. 
Edge/IoT processing devices may include IoT gateways, used to couple Edge/IoT processing devices to other Edge/IoT processing devices and to cloud applications, for data storage, process control, and the like. Networks of Edge/IoT processing devices may include commercial and home automation devices, such as water distribution systems, electric power distribution systems, pipeline control systems, plant control systems, light switches, thermostats, locks, cameras, alarms, motion sensors, and the like. The Edge/IoT processing devices may be accessible through remote computers, servers, and other systems, for example, to control systems or access data. The future growth of the Internet and like networks may involve very large numbers of Edge/IoT processing devices. Accordingly, in the context of the techniques discussed herein, a number of innovations for such future networking will address the need for all these layers to grow unhindered, to discover and make accessible connected resources, and to support the ability to hide and compartmentalize connected resources. Any number of network protocols and communications standards may be used, wherein each protocol and standard is designed to address specific objectives. Further, the protocols are part of the fabric supporting human accessible services that operate regardless of location, time, or space. The innovations include service delivery and associated infrastructure, such as hardware and software; security enhancements; and the provision of services based on Quality of Service (QoS) terms specified in service level and service delivery agreements. As will be understood, the use of Edge/IoT processing devices and networks presents a number of new challenges in a heterogeneous network of connectivity comprising a combination of wired and wireless technologies.
FIG.7specifically provides a simplified drawing of a domain topology that may be used for a number of IoT networks comprising Edge/IoT processing devices704, with the IoT networks756,758,760,762, coupled through backbone links702to respective gateways754. For example, a number of Edge/IoT processing devices704may communicate with a gateway754, and with each other through the gateway754. To simplify the drawing, not every Edge/IoT processing device704, or communications link (e.g., link716,722,728, or732) is labeled. The backbone links702may include any number of wired or wireless technologies, including optical networks, and may be part of a local area network (LAN), a wide area network (WAN), or the Internet. Additionally, such communication links facilitate optical signal paths among both Edge/IoT processing devices704and gateways754, including the use of MUXing/deMUXing components that facilitate interconnection of the various devices. The network topology may include any number of types of IoT networks, such as a mesh network provided with the network756using Bluetooth low energy (BLE) links722. Other types of IoT networks that may be present include a wireless local area network (WLAN) network758used to communicate with Edge/IoT processing devices704through IEEE 802.11 (Wi-Fi®) links728, a cellular network760used to communicate with Edge/IoT processing devices704through an LTE/LTE-A (4G) or 5G cellular network, and a low-power wide area (LPWA) network762, for example, an LPWA network compatible with the LoRaWAN specification promulgated by the LoRa Alliance, or an IPv6 over Low Power Wide-Area Networks (LPWAN) network compatible with a specification promulgated by the Internet Engineering Task Force (IETF).
Further, the respective IoT networks may communicate with an outside network provider (e.g., a tier 2 or tier 3 provider) using any number of communications links, such as an LTE cellular link, an LPWA link, or a link based on the IEEE 802.15.4 standard, such as Zigbee®. The respective IoT networks may also operate with the use of a variety of network and internet application protocols, such as Constrained Application Protocol (CoAP). The respective IoT networks may also be integrated with coordinator devices that provide a chain of links that form the cluster tree of linked devices and networks. Each of these IoT networks may provide opportunities for new technical features, such as those described herein. The improved technologies and networks may enable the exponential growth of devices and networks, including the integration of IoT networks into fog devices or systems. As the use of such improved technologies grows, the IoT networks may be developed for self-management, functional evolution, and collaboration, without needing direct human intervention. The improved technologies may even enable IoT networks to function without centrally controlled systems. Accordingly, the improved technologies described herein may be used to automate and enhance network management and operation functions far beyond current implementations. In an example, communications between Edge/IoT processing devices704, such as over the backbone links702, may be protected by a decentralized system for authentication, authorization, and accounting (AAA). In a decentralized AAA system, distributed payment, credit, audit, authorization, and authentication systems may be implemented across the interconnected heterogeneous network infrastructure. This enables systems and networks to move towards autonomous operations. In these types of autonomous operations, machines may even contract for human resources and negotiate partnerships with other machine networks.
This may enable the achievement of mutual objectives and balanced service delivery against outlined, planned service level agreements, as well as solutions that provide metering, measurements, traceability, and trackability. The creation of new supply chain structures and methods may enable a multitude of services to be created, mined for value, and collapsed without any human involvement. Such IoT networks may be further enhanced by the integration of sensing technologies, such as sound, light, electronic traffic, facial and pattern recognition, smell, and vibration, into the autonomous organizations among the Edge/IoT processing devices. The integration of sensory systems may enable systematic and autonomous communication and coordination of service delivery against contractual service objectives, orchestration, and QoS-based swarming and fusion of resources. Some of the individual examples of network-based resource processing include the following. The mesh network756, for instance, may be enhanced by systems that perform inline data-to-information transforms. For example, self-forming chains of processing resources comprising a multi-link network may distribute the transformation of raw data to information in an efficient manner, and may provide the ability to differentiate between assets and resources and the associated management of each. Furthermore, the proper components of infrastructure and resource-based trust and service indices may be inserted to improve data integrity, quality, and assurance, and deliver a metric of data confidence. The WLAN network758, for instance, may use systems that perform standards conversion to provide multi-standard connectivity, enabling Edge/IoT processing devices704using different protocols to communicate. Further systems may provide seamless interconnectivity across a multi-standard infrastructure comprising visible Internet resources and hidden Internet resources.
Communications in the cellular network760, for instance, may be enhanced by systems that offload data, extend communications to more remote devices, or both. The LPWA network762may include systems that perform non-Internet protocol (IP) to IP interconnections, addressing, and routing. Further, each of the Edge/IoT processing devices704may include the appropriate transceiver for wide area communications with that device. Further, each Edge/IoT processing device704may include other transceivers for communications using additional protocols and frequencies. This is discussed further with respect to the communication environment and hardware of an IoT processing device depicted inFIG.9andFIG.10. Finally, clusters of Edge/IoT processing devices may be equipped to communicate with other Edge/IoT processing devices as well as with a cloud network. This may enable the Edge/IoT processing devices to form an ad-hoc network between the devices, enabling them to function as a single device, which may be termed a fog device, fog platform, or fog network. This configuration is discussed further with respect toFIG.8below. FIG.8illustrates a cloud-computing network in communication with a mesh network of Edge/IoT processing devices (devices802) operating as fog devices at the edge of the cloud computing network, according to an example. The mesh network of Edge/IoT processing devices may be termed a fog network820, established from a network of devices operating at the edge of the cloud800. To simplify the diagram, not every Edge/IoT processing device802is labeled. The fog network820may be considered to be a massively interconnected network wherein a number of Edge/IoT processing devices802are in communications with each other, for example, by radio links822. The fog network820may establish a horizontal, physical, or virtual resource platform that can be considered to reside between IoT edge devices and cloud or data centers. 
A fog network, in some examples, may support vertically-isolated, latency-sensitive applications through layered, federated, or distributed computing, storage, and network connectivity operations. However, a fog network may also be used to distribute resources and services at and among the edge and the cloud. Thus, references in the present document to the “edge”, “fog”, and “cloud” are not necessarily discrete or exclusive of one another. As an example, the fog network820may be facilitated using an interconnect specification released by the Open Connectivity Foundation™ (OCF). This standard enables devices to discover each other and establish communications for interconnects. Other interconnection protocols may also be used, including, for example, the optimized link state routing (OLSR) Protocol, the better approach to mobile ad-hoc networking (B.A.T.M.A.N.) routing protocol, or the OMA Lightweight M2M (LWM2M) protocol, among others. Three types of Edge/IoT processing devices802are shown in this example: gateways804, data aggregators826, and sensors828, although any combinations of Edge/IoT processing devices802and functionality may be used. The gateways804may be edge devices that provide communications between the cloud800and the fog820and may also provide the back-end processing function for data obtained from sensors828, such as motion data, flow data, temperature data, and the like. The data aggregators826may collect data from any number of the sensors828and perform the back-end processing function for the analysis. The results, raw data, or both may be passed along to the cloud800through the gateways804. The sensors828may be full Edge/IoT processing devices802, for example, capable of both collecting data and processing the data. In some cases, the sensors828may be more limited in functionality, for example, collecting the data and enabling the data aggregators826or gateways804to process the data.
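The sensor-to-aggregator-to-gateway data flow just described can be sketched as follows. The class names and the averaging step are illustrative assumptions, not part of the OCF specification or any particular fog implementation:

```python
# Hypothetical sketch of the fog data flow described above: sensors
# produce raw readings, a data aggregator performs the back-end
# processing (here, a simple average), and a gateway passes the result
# toward the cloud. All names are illustrative.

class Sensor:
    def __init__(self, reading):
        self._reading = reading

    def read(self):
        return self._reading

class DataAggregator:
    def collect(self, sensors):
        """Back-end processing step: reduce raw readings to one result."""
        readings = [s.read() for s in sensors]
        return sum(readings) / len(readings)

class Gateway:
    def __init__(self):
        self.uplink = []  # stands in for the link to the cloud

    def forward(self, result):
        self.uplink.append(result)

sensors = [Sensor(21.0), Sensor(23.0), Sensor(22.0)]
aggregator = DataAggregator()
gateway = Gateway()
gateway.forward(aggregator.collect(sensors))
print(gateway.uplink)  # -> [22.0]
```

As the text notes, either the aggregator or the gateway may host this processing step, and raw data may be forwarded alongside or instead of the reduced result.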
Communications from any of the Edge/IoT processing devices802may be passed along a convenient path (e.g., a most convenient path) between any of the Edge/IoT processing devices802to reach the gateways804. In these networks, the number of interconnections provides substantial redundancy, enabling communications to be maintained even with the loss of a number of Edge/IoT processing devices802. Further, the use of a mesh network may enable Edge/IoT processing devices802that are very low power or located at a distance from infrastructure to be used, as the range to connect to another Edge/IoT processing device802may be much less than the range to connect to the gateways804. The fog820provided from these Edge/IoT processing devices802may be presented to devices in the cloud800, such as a server806, as a single device located at the edge of the cloud800, e.g., a fog device. In this example, the alerts coming from the fog device may be sent without being identified as coming from a specific Edge/IoT processing device802within the fog820. In this fashion, the fog820may be considered a distributed platform that provides computing and storage resources to perform processing or data-intensive tasks such as data analytics, data aggregation, and machine learning, among others. In some examples, the Edge/IoT processing devices802may be configured using an imperative programming style, e.g., with each Edge/IoT processing device802having a specific function and communication partners. However, the Edge/IoT processing devices802forming the fog device may be configured in a declarative programming style, enabling the Edge/IoT processing devices802to reconfigure their operations and communications, such as to determine needed resources in response to conditions, queries, and device failures.
As an example, a query from a user located at a server806about the operations of a subset of equipment monitored by the Edge/IoT processing devices802may result in the fog820device selecting the Edge/IoT processing devices802, such as particular sensors828, needed to answer the query. The data from these sensors828may then be aggregated and analyzed by any combination of the sensors828, data aggregators826, or gateways804, before being sent on by the fog820device to the server806to answer the query. In this example, Edge/IoT processing devices802in the fog820may select the sensors828used based on the query, such as adding data from flow sensors or temperature sensors. Further, if some of the Edge/IoT processing devices802are not operational, other Edge/IoT processing devices802in the fog820device may provide analogous data, if available. In other examples, the operations and functionality described above may be embodied by an Edge/IoT processing device machine in the example form of an electronic processing system, within which a set or sequence of instructions may be executed to cause the electronic processing system to perform any one of the methodologies discussed herein, according to an example embodiment. The machine may be an Edge/IoT processing device or an IoT gateway, including a machine embodied by aspects of a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile telephone or smartphone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, these and like examples to a processor-based system shall be taken to include any set of one or more machines that are controlled by or operated by a processor, set of processors, or processing circuitry (e.g., a machine in the form of a computer, UE, MEC processing device, IoT processing device, etc.) 
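The query handling described above, where the fog selects only the sensors needed to answer a query and substitutes analogous data when a selected device is not operational, can be sketched as follows. The device model and function names are hypothetical:

```python
# Illustrative sketch of the declarative query handling described above:
# the fog selects only the operational sensors relevant to a query and
# aggregates their readings; failed devices are skipped so that the
# remaining devices provide analogous data. Names are illustrative.

class FogSensor:
    def __init__(self, kind, value, operational=True):
        self.kind = kind            # e.g., "flow" or "temperature"
        self.value = value
        self.operational = operational

def answer_query(query_kind, devices):
    """Aggregate readings from operational sensors matching the query."""
    readings = [d.value for d in devices
                if d.kind == query_kind and d.operational]
    if not readings:
        return None  # no analogous data available
    return sum(readings) / len(readings)

devices = [
    FogSensor("flow", 5.0),
    FogSensor("flow", 7.0, operational=False),  # failed device: excluded
    FogSensor("flow", 9.0),
    FogSensor("temperature", 70.0),
]
print(answer_query("flow", devices))  # -> 7.0
```

A fuller declarative configuration would also let the fog add sensor kinds to the selection as the query demands (e.g., adding temperature data to a flow query), which this sketch would support by calling `answer_query` per kind.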
to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein. Accordingly, in various examples, applicable means for processing (e.g., processing, controlling, generating, evaluating, etc.) may be embodied by such processing circuitry. FIG.9illustrates a block diagram of a cloud computing network, or cloud900, in communication with a number of Edge/IoT processing devices, according to an example. The cloud computing network (or “cloud”)900may represent the Internet or may be a local area network (LAN), or a wide area network (WAN), such as a proprietary network for a company. The Edge/IoT processing devices may include any number of different types of devices, grouped in various combinations. For example, a traffic control group906may include Edge/IoT processing devices along streets in a city. These Edge/IoT processing devices may include stoplights, traffic flow monitors, cameras, weather sensors, and the like. The traffic control group906, or other subgroups, may be in communication with the cloud900through wired or wireless links908, such as LPWA links, optical links, and the like. Further, a wired or wireless sub-network912may allow the Edge/IoT processing devices to communicate with each other, such as through a local area network, a wireless local area network, and the like. The Edge/IoT processing devices may use another device, such as a gateway910or928to communicate with remote locations such as the cloud900; the Edge/IoT processing devices may also use one or more servers930to facilitate communication with the cloud900or with the gateway910. For example, the one or more servers930may operate as an intermediate network node to support a local edge cloud or fog implementation among a local area network. 
Further, the gateway928that is depicted may operate in a cloud-to-gateway-to-many edge devices configuration, such as with the various Edge/IoT processing devices914,920,924being constrained or dynamic to an assignment and use of resources in the cloud900. Other example groups of Edge/IoT processing devices may include remote weather stations914, local information terminals916, alarm systems918, automated teller machines920, alarm panels922, or moving vehicles, such as emergency vehicles924or other vehicles926, among many others. Each of these Edge/IoT processing devices may be in communication with other Edge/IoT processing devices, with servers904, with another IoT fog platform or system, or a combination thereof. The groups of Edge/IoT processing devices may be deployed in various residential, commercial, and industrial settings (including in both private and public environments). As may be seen fromFIG.9, a large number of Edge/IoT processing devices may be communicating through the cloud900. This may allow different Edge/IoT processing devices to request or provide information to other devices autonomously. For example, a group of Edge/IoT processing devices (e.g., the traffic control group906) may request a current weather forecast from a group of remote weather stations914, which may provide the forecast without human intervention. Further, an emergency vehicle924may be alerted by an automated teller machine920that a burglary is in progress. As the emergency vehicle924proceeds towards the automated teller machine920, it may access the traffic control group906to request clearance to the location, for example, by lights turning red to block cross traffic at an intersection in sufficient time for the emergency vehicle924to have unimpeded access to the intersection.
Clusters of Edge/IoT processing devices, such as the remote weather stations914or the traffic control group906, may be equipped to communicate with other Edge/IoT processing devices as well as with the cloud900. This may allow the Edge/IoT processing devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog platform or system (e.g., as described above with reference toFIG.8). FIG.10is a block diagram of an example of components that may be present in an Edge/IoT processing device1050for implementing the techniques (e.g., operations, processes, methods, and methodologies) described herein. The Edge/IoT processing device1050may include any combinations of the components shown in the example or referenced in the disclosure above, and it may include any device usable with an Edge/Fog/IoT communication network or a combination of such networks. The components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the Edge/IoT processing device1050, or as components otherwise incorporated within a chassis of a larger system. Additionally, the block diagram ofFIG.10is intended to depict a high-level view of components of the Edge/IoT processing device1050. However, some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur in other implementations. The Edge/IoT processing device1050may include processing circuitry in the form of a processor1052, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing elements. 
The processor1052may be a part of a system on a chip (SoC) in which the processor1052and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel. As an example, the processor1052may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, or another such processor available from Intel® Corporation, Santa Clara, California. However, any number of other processors may be used, such as those available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, Calif., a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, Calif., an ARM-based design licensed from ARM Holdings, Ltd. or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A12 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc. The processor1052may communicate with a system memory1054over an interconnect1056(e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In various implementations, the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP) or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector.
Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs. To provide for persistent storage of information such as data, applications, operating systems and so forth, a storage1058may also couple to the processor1052via the interconnect1056. In an example, the storage1058may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage1058include flash memory cards, such as SD cards, microSD cards, XD picture cards, and the like, and USB flash drives. In low power implementations, the storage1058may be on-die memory or registers associated with the processor1052. However, in some examples, the storage1058may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage1058in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others. The components may communicate over the interconnect1056. The interconnect1056may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect1056may be a proprietary bus, for example, used in a SoC based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point to point interfaces, and a power bus, among others. The interconnect1056may couple the processor1052to a mesh transceiver1062, for communications with other mesh devices1064.
The mesh transceiver1062may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices1064. For example, a WLAN unit may be used to implement Wi-Fi™ communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a WWAN unit. The mesh transceiver1062may communicate using multiple standards or radios for communications at different ranges. For example, the Edge/IoT processing device1050may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices1064, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee. A wireless network transceiver1066may be included to communicate with devices or services in the cloud1000via local or wide area network protocols. The wireless network transceiver1066may be an LPWA transceiver that follows the IEEE 802.15.4, or IEEE 802.15.4g standards, among others. The Edge/IoT processing device1050may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance.
The techniques described herein are not limited to these technologies but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used. Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver1062and wireless network transceiver1066, as described herein. For example, the radio transceivers1062and1066may include an LTE or another cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications. The radio transceivers1062and1066may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Long Term Evolution-Advanced Pro (LTE-A Pro). It may be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any Cellular Wide Area radio communication technology, which may include e.g. 
a 5th Generation (5G) communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, or a UMTS (Universal Mobile Telecommunications System) communication technology. In addition to the standards listed above, any number of satellite uplink technologies may be used for the wireless network transceiver1066, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union), or the ETSI (European Telecommunications Standards Institute), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated. A network interface controller (NIC)1068may be included to provide a wired communication to the cloud1000or to other devices, such as the mesh devices1064. The wired communication may provide an Ethernet connection or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC1068may be included to enable connecting to a second network, for example, a NIC1068providing communications to the cloud over Ethernet, and a second NIC1068providing communications to other devices over another type of network. Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components1062,1066,1068, or1070. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
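The range-based radio selection described above (BLE for close devices within about 10 meters, ZigBee or other intermediate power radios within about 50 meters, and LPWA links for the wide area) can be sketched minimally; the thresholds and the pick_radio function are illustrative assumptions only, not any real driver API.

```python
def pick_radio(distance_m):
    """Choose a radio technology by approximate link distance (a sketch;
    the thresholds mirror the approximate ranges discussed above)."""
    if distance_m <= 10:
        return "BLE"      # close devices: low-power local transceiver
    if distance_m <= 50:
        return "ZigBee"   # more distant mesh devices
    return "LPWA"         # wide-area links to the cloud

print(pick_radio(8), pick_radio(35), pick_radio(2000))
```

As noted above, such a policy could run over a single radio at different power levels or dispatch to separate transceivers.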
The interconnect1056may couple the processor1052to an external interface1070that is used to connect external devices or subsystems. The external devices may include sensors1072, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global positioning system (GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The external interface1070further may be used to connect the Edge/IoT processing device1050to actuators1074, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like. In some optional examples, various input/output (I/O) devices may be present within, or connected to, the Edge/IoT processing device1050. For example, a display or other output device1084may be included to show information, such as sensor readings or actuator position. An input device1086, such as a touch screen or keypad, may be included to accept input. An output device1084may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., LEDs) and multi-character visual outputs, or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the Edge/IoT processing device1050. A battery1076may power the Edge/IoT processing device1050, although, in examples in which the Edge/IoT processing device1050is mounted in a fixed location, it may have a power supply coupled to an electrical grid. The battery1076may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like. A battery monitor/charger1078may be included in the Edge/IoT processing device1050to track the state of charge (SoCh) of the battery1076.
The battery monitor/charger1078may be used to monitor other parameters of the battery1076to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery1076. The battery monitor/charger1078may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, Texas. The battery monitor/charger1078may communicate the information on the battery1076to the processor1052over the interconnect1056. The battery monitor/charger1078may also include an analog-to-digital converter (ADC) that enables the processor1052to directly monitor the voltage of the battery1076or the current flow from the battery1076. The battery parameters may be used to determine actions that the Edge/IoT processing device1050may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like. A power block1080, or other power supply coupled to a grid, may be coupled with the battery monitor/charger1078to charge the battery1076. In some examples, the power block1080may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the Edge/IoT processing device1050. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor/charger1078. The specific charging circuits may be selected based on the size of the battery1076, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard, promulgated by the Alliance for Wireless Power, among others.
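As one hedged illustration of how reported battery parameters might drive device behavior such as transmission frequency, the following sketch lengthens the telemetry reporting interval as the state of charge drops; the thresholds, intervals, and function name are invented for this example.

```python
def transmit_interval_s(state_of_charge_pct):
    """Map battery state of charge to a telemetry reporting interval."""
    if state_of_charge_pct > 60:
        return 10    # healthy battery: report every 10 seconds
    if state_of_charge_pct > 20:
        return 60    # conserve energy: report once a minute
    return 600       # critical: report every 10 minutes

print(transmit_interval_s(80), transmit_interval_s(50), transmit_interval_s(5))
```

Analogous policies could throttle sensing frequency or mesh forwarding as the battery depletes.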
The storage1058may include instructions1082in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions1082are shown as code blocks included in the memory1054and the storage1058, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application specific integrated circuit (ASIC). In an example, the instructions1082provided via the memory1054, the storage1058, or the processor1052may be embodied as a non-transitory, machine-readable medium1060including code to direct the processor1052to perform electronic operations in the Edge/IoT processing device1050. The processor1052may access the non-transitory, machine-readable medium1060over the interconnect1056. For instance, the non-transitory, machine-readable medium1060may be embodied by devices described for the storage1058or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium1060may include instructions to direct the processor1052to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted above. As used herein, the terms “machine-readable medium” and “computer-readable medium” are interchangeable. In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. A “machine-readable medium” thus may include but is not limited to, solid-state memories, and optical and magnetic media.
Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP). A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions. 
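As a concrete toy of deriving instructions from stored information, the snippet below keeps compressed source code as the "information" and then decompresses and compiles it into executable instructions; the stored add function is invented purely for illustration.

```python
import zlib

# Information on the medium: compressed source code (one derivable format).
stored = zlib.compress(b"def add(a, b):\n    return a + b\n")

# Deriving the instructions: decompress, compile, and load into a namespace.
source = zlib.decompress(stored).decode()
namespace = {}
exec(compile(source, "<derived>", "exec"), namespace)

print(namespace["add"](2, 3))
```

Encrypted, split, or linked packages would add corresponding decrypt, combine, or link steps before the compile stage.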
In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages may be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable, etc.) at a local machine, and executed by the local machine. In some aspects, the present disclosure provides methods and configurations for automatic segmenting of applications and/or software in order to place applications distributed on different devices, whether such devices are part of client, infrastructure, or cloud settings. Rather than an emphasis on orchestration, the following methods and configurations provide dimensioning of the application itself, automatic dimensioning which results in different granularities of modules. Such dimensioning will be extremely important for 5G and Edge computing use cases, because many applications do not (or cannot) take into account the network conditions or environment provided by 5G and Edge computing concepts. The following methods and configurations may enable an improved customer deployment for many types of legacy or new apps, while still supporting development and use of cloud applications. In particular, the use of dimensioning enables an appropriate placement of app development to be offered from end to end.
In prior approaches, an application owner or developer would need to change code, in a non-static fashion, which would not take into account different degrees of dimensioning or segmenting. Thus, such prior approaches did not fully take into account resource availability, cost structures, and the time required to change and validate for each different scenario. The following methods and configurations also support the end to end deployment of hardware products and platforms, due to the flexible use cases for processors and other components that is introduced by dimensioning. The methods and configurations also support the use of dimensioning as a service, which enables different cloud providers, software vendors, and network operators who are interested in some software to have the software written once and dimensioned based on time of day, cost of placement, and time of use. Technical benefits of these methods and configurations include, among others: power savings; efficiencies of code for different locations and times; cost savings; and movement of portions of an app at critical times (e.g., a power failure), enabling a full use of a mesh compute network (such as multiple clients, cloud, edge, network) or other types of networks. FIG.11illustrates an overview of services and use cases among edge and core networks. Referring toFIG.11, this diagram illustrates the different types of use cases that exist at the network edge (with devices and things), and how on-premise edge hardware is designed to provide connectivity with such devices and things. The present techniques for dimensioning and segmenting, in particular, are relevant to computing aspects of the edge and core networks, extending to a variety of data centers, MEC deployments, core and data center equipment, even extending into cloud data center equipment.
Several challenges exist today for managing services and applications in this type of environment, such as how to identify and collect data for latencies between nodes, connection types between clients and servers, an unknown amount of compute, I/O, memory, and like capabilities at various locations, and how to dimension applications to accommodate such capabilities with different placement within the network. Thus, given the many types of client, network, and data center locations for computing operations, the present techniques are designed to automatically dimension an application to be distributed across such locations. In an example, dimensioning techniques include capturing knowledge and training data representing latencies between computing devices and insight into the application, thus providing a learning of the application or a framework to enable application and service dimensioning. For instance, such insights may relate to what piece of the application can operate on a particular client, base station, central or regional office hardware, and how such an application operates. Once the learning of the application is performed, tags can be derived to enable placement of network dimensioning with fine-grained ability. Another aspect evaluated by the present techniques may include a length of time needed to execute a particular application task, service, or compute operation. Tracking this information may enable evaluation of how long (or what amount) the compute, memory, and I/O resource is needed to meet requirements of service level agreements (SLAs) and to maximize the ability to utilize operations with mobile devices. This dynamic dimensioning may also change based on events, such as emergencies or changes in network resources. As will be understood, MEC services may include a variety of use cases, such as (but not limited to): live broadcast processing, cloud gaming, industrial automation, smart retail, smart stadium, and the like.
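The SLA evaluation just described (tracking how long a task takes at a location, plus the latency to reach it) might be approximated as follows; the candidate locations, costs, latency figures, and function names are all hypothetical.

```python
def meets_sla(network_latency_ms, exec_time_ms, sla_budget_ms):
    """Round-trip latency plus execution time must fit within the SLA."""
    return 2 * network_latency_ms + exec_time_ms <= sla_budget_ms

def best_placement(candidates, sla_budget_ms):
    """Among SLA-feasible candidates, prefer the lowest-cost location."""
    feasible = [c for c in candidates
                if meets_sla(c["latency_ms"], c["exec_ms"], sla_budget_ms)]
    return min(feasible, key=lambda c: c["cost"])["name"] if feasible else None

candidates = [
    {"name": "cloud", "latency_ms": 40, "exec_ms": 5, "cost": 1},
    {"name": "edge", "latency_ms": 2, "exec_ms": 20, "cost": 3},
]
print(best_placement(candidates, sla_budget_ms=50))
```

With the tight 50 ms budget only the edge placement is feasible; relaxing the budget would let the cheaper cloud placement win, which is the kind of trade-off dynamic dimensioning revisits as conditions change.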
MEC applications may include applications developed for a variety of settings, such as: enterprise and cloud customer premise equipment; wireless access equipment; mobile access edge computing (MEC) systems; edge central offices; core network equipment; and the like. In this context, the proper operation of SLAs and resources within a MEC system may be improved through service dimensioning. FIG.12illustrates a flowchart1200of service dimensioning operations deployed in connection with network service slicing. Referring toFIG.12, this diagram illustrates the use of machine learning to dynamically dimension an application. Relevant system and operational data (e.g., telemetry data), such as CPU usage, memory usage, I/O event data, prefetches, branch detection, timing between function calls or workload execution phases, and other statistical, operational, or usage data is collected (operation1202). This data is analyzed to determine modularity (operation1204). Modularity encompasses the concepts of functional organization, memory access consolidation, independence of execution, and other aspects of modular programming. By separating functionality of an application or a suite of applications into independent, interchangeable modules (or workloads, or workload tasks), modular programming provides a way to scale processing and memory resources to meet the requirements of each module. Dimensioning refers to the concepts of modularizing an application and distributing the execution of the workload across one or more layers in a network to satisfy SLAs of an individual workload, an application, a function, or a service. Modularity of an application may be determined by analyzing how the application is used across many uses of the application. By tracking the functions used, the memory access patterns, network usage, and other aspects of operation, portions of an application may be identified as being separable into a module.
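One hedged way to turn such tracking into module candidates is to group functions whose observed pairwise interaction count exceeds a threshold, as in this union-find sketch; the telemetry counts, function names, and threshold are invented, and a real analysis would also weigh memory and network usage.

```python
def group_into_modules(call_counts, threshold):
    """Union functions whose pairwise call count exceeds a threshold."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), n in call_counts.items():
        find(a)
        find(b)
        if n >= threshold:          # tightly coupled: same module
            parent[find(a)] = find(b)

    modules = {}
    for f in parent:
        modules.setdefault(find(f), set()).add(f)
    return sorted(sorted(m) for m in modules.values())

# Hypothetical telemetry: (caller, callee) -> observed call count.
telemetry = {("parse", "validate"): 120, ("validate", "store"): 115,
             ("store", "report"): 3}
print(group_into_modules(telemetry, threshold=50))
```

Here parse, validate, and store form one candidate module while the rarely invoked report function remains separable.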
An application's execution may be viewed as a series of phases. Each phase of an application may include use of one or more workloads. An application that is executed across multiple instances (e.g., when called by multiple clients) may cause resource contention because of overlapping phases. Phase resource requirements and the identification of respective options for application and service dimensioning may be analyzed (operation1206). Timing characteristics and dependencies of application phases, both inter-application and intra-application, may be determined (operation1208). Phase timing and dependency information along with modularity determinations are used to create workloads (operation1210). Some functions may be modularized in a number of alternative ways. Based on phase requirements, timing, and dependencies, a more optimal modularization scheme may be created in operation1210. Workloads are not created based solely on CPU requirements, memory usage, I/O requirements, or the like, but instead are created more holistically by considering the entirety of the network resources, network and processing latencies, phase timing and dependencies, and other related information. This operation may be performed using an artificial intelligence engine, such as an engine that implements one or more machine learning models or algorithms. One or more modularization schemes are provided to an orchestrator (operation1212). The orchestrator may select a pathway or modification to the modularization scheme to implement. In existing approaches, limited work has considered determining how much compute and cache is needed, resulting in a fingerprint created to optimize utilization of CPUs. Other approaches have attempted the implementation of dimensioning based on OSI Layers.
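The phase analysis above (operations1206-1208) depends on knowing how likely each phase-to-phase transition is. A minimal sketch, assuming an invented observed phase sequence, counts transitions and normalizes each row into the per-phase probabilities behind a phase transition graph:

```python
from collections import Counter

def transition_matrix(phase_seq, n_phases):
    """Row r gives the probability of moving from phase r to each phase."""
    counts = Counter(zip(phase_seq, phase_seq[1:]))
    matrix = []
    for src in range(n_phases):
        row_total = sum(counts[(src, dst)] for dst in range(n_phases))
        matrix.append([counts[(src, dst)] / row_total if row_total else 0.0
                       for dst in range(n_phases)])
    return matrix

observed = [0, 1, 2, 0, 1, 2, 0, 2, 0]   # hypothetical phase trace
for row in transition_matrix(observed, 3):
    print(row)
```

Rows dominated by a single large entry indicate near-deterministic phase changes that overlapping instances are likely to make at about the same time, which is exactly the contention signal analyzed in operation1206.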
However, the present techniques may consider additional hardware and software element aspects relevant to dimensioning, such as keying, latency, data access, length of time in and out, branches, branch prediction, calls, etc. The consideration of these individual elements enables dimensioning of code to enable automatic distribution enhancements for network slicing, edge compute, and many other industry initiatives. As a result, it will be understood that the use of machine learning and other AI techniques may identify software enhancements or segmentation within libraries or a variety of modules for dimensioning or segmenting code operations. FIG.13illustrates heatmaps depicting phase transition graph1300and residency graph1302, according to an example. The phase transition graph1300indicates the probability of a workload transitioning from one phase (on the y-axis) to another phase (on the x-axis), with a transition having a higher or lower probability mapped to darker or lighter shades, respectively. A darker shaded section of the phase transition graph1300indicates that there are multiple co-located workloads that are likely to change phase at approximately the same time. For instance, the co-located workloads with darker shades in phase transition 2→0 may be analyzed for resource and contention requirements (e.g., resource analysis operation1206). Additionally, the residency graph1302includes shaded areas that are used to visualize timing requirements, which are considered when determining timing and dependencies in operation1208from above. In further examples, these heat maps are used as input to the AI models or algorithms discussed herein, such as in a scenario where a graphical representation of the phase transitions is used to model or modify application dimensioning scenarios. FIG.14illustrates an example edge computing use case invoking application dimensioning, according to an example.
Specifically, in this use case, data read from a device (in this case a smartwatch) needs to be analyzed and used for visualization and other computation. An application is used to analyze the data obtained from the device (operation1402). The data may include various environmental data sensed from one or more sensors built into the device, user activity data, contextual factors, or the like. Examples include but are not limited to GPS data, ambient temperature data, IMU data, time, date, day of week, user biometric data, user calendar data, user contacts data, or the like. The application is executed and performance data of how the application uses various functional components is collected (operation1402). A workload phase graph is constructed based on the analysis (operation1404). This may be performed using a machine learning (ML) platform that analyzes one or many executions of the application on the data or similar data. The workload phase graph may include both a workload phase transition graph (e.g., phase transition graph1300) and a workload residency graph (e.g., residency graph1302). The workload phase graph (or graphs) represents a workload fingerprint in that it is relatively unique to the application and the data set that the application acted upon. The workload phase graph is saved as a workload fingerprint (operation1406) so that it may be referenced again. The workload fingerprint and a service level objective (SLO) are sent to an orchestrator (operation1408). The orchestrator uses the workload fingerprint and SLO to determine which dimensions to activate (e.g., which functions are instantiated in the cloud, middle layer, or edge devices) (operation1410). Based on the SLO and workload fingerprint, separate dimensions are activated. As illustrated inFIG.14, one workload portion (e.g., module)1450is instantiated in a cloud provider1452and another workload portion (e.g., module)1460is instantiated in an edge device1462. 
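Operation1410's dimension activation might be approximated by a rule as simple as the following, where each module in an invented workload fingerprint carries a required response latency and the orchestrator places latency-critical modules at the edge; the 40 ms cloud round-trip figure and the module names are assumptions for illustration.

```python
def activate_dimensions(modules, cloud_rtt_ms=40):
    """Place each module at the edge if its latency requirement is tighter
    than the assumed cloud round trip; otherwise place it in the cloud."""
    return {name: ("edge" if required_ms < cloud_rtt_ms else "cloud")
            for name, required_ms in modules.items()}

# Hypothetical fingerprint: module name -> required response latency (ms).
fingerprint = {"visualization": 15, "batch_analytics": 500}
print(activate_dimensions(fingerprint))
```

Under this rule the visualization portion lands on an edge device while batch analytics runs with a cloud provider, mirroring the split shown inFIG.14.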
Data sources may be common among multiple dimensions and may be hosted on a local memory (e.g., storage class memory) or in ephemeral storage on top of non-volatile memory express (NVMe) drives. In other situations, each workload portion (e.g., workload portion1450and workload portion1460) maintains its own data storage and passes intermediate results as parameters from one workload portion to another. FIG.15is a flowchart illustrating a method1500for dimensioning an application, according to an example. Method1500may be performed by processing circuitry that is disposed on one or more compute nodes in a network. Compute nodes are processing devices capable of receiving data, processing data, storing data, and transmitting data to other compute nodes in the network. Example compute nodes include, but are not limited to, edge nodes, edge servers, MEC hosts, UEs, application servers, gateways, routers, cloud servers, data center servers, orchestrators, and the like. The method1500provides a way to determine which pieces of an application may be moved toward the edge to improve responsiveness and adhere to an SLA. The method1500may be extended to analyze and dimension multiple applications that may operate on the same network. The applications may be related (e.g., from the same vendor, provide inter-application functionality, or the like). Alternatively, the applications may not be related other than by sharing hardware or software resources on the network. The method1500may analyze multiple applications, construct phase transition graphs for each application based on the interactions with other applications, resources, or users on the network to dimension workloads across the network to improve several applications. In the single-application example, execution of an application is analyzed to obtain operational data (operation1502).
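Operation1502— analyzing execution of an application to obtain operational data — can be approximated with a lightweight profiling hook. This sketch is a simplified stand-in for the hardware or software monitors described below, not the disclosed monitor; the hook, the statistics gathered, and the sample workload are illustrative:

```python
import sys
import time
from collections import defaultdict

calls = defaultdict(int)      # per-function call counts
elapsed = defaultdict(float)  # per-function cumulative wall time
_starts = {}

def _hook(frame, event, arg):
    # Record entry/exit of each Python frame to accumulate usage statistics.
    name = frame.f_code.co_name
    if event == "call":
        calls[name] += 1
        _starts[id(frame)] = time.perf_counter()
    elif event == "return":
        start = _starts.pop(id(frame), None)
        if start is not None:
            elapsed[name] += time.perf_counter() - start

def work(n):
    return sum(i * i for i in range(n))

sys.setprofile(_hook)  # install the monitor
work(1000)
work(2000)
sys.setprofile(None)   # remove the monitor
```

The resulting counts and timings are one possible facet of the "operational data" that later operations consume; real monitors would also capture memory usage, network usage, call stacks, and trapped events as described below.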
The application may refer to a service, platform, executable application, thread, or other operational code that is able to execute on one or more nodes in a network. To analyze an application, one or more software or hardware monitors may be used. The monitors may gather usage statistics such as memory usage, CPU usage, network usage, or the like. Monitoring may also gather more detailed usage down to the individual call stacks, event trapping, input or output data, or the like. In an embodiment, analyzing execution of the application includes analyzing time complexity of the application. Time complexity refers to the computational complexity of a problem being solved by a computer program or function. In an embodiment, analyzing execution of the application includes analyzing memory usage of the application. Functions that have more memory usage or memory accesses may be differentiated from those functions or modules that have fewer memory accesses or use less memory. In an embodiment, analyzing execution of the application includes analyzing a plurality of simulated executions of the application. Simulating inputs and analyzing the operation of the application over a large number of executions is useful to determine trends during operation. In an embodiment, analyzing execution of the application includes analyzing source code of the application. For instance, the number of lines of code for a particular function or module may be indicative of the complexity, memory usage, or processor usage corresponding to execution of such code. In an embodiment, analyzing execution of the application includes analyzing call chains of the application. Call chains provide insight into how complex a call tree is, how many times a function or module is called in a time period, how much memory is used to service the call chain, etc. In an embodiment, analyzing execution of the application includes analyzing events trapped by the application.
Event-based programming relies on events being trapped by an operating system or other event monitor. When an event is trapped, certain event handler software is used. A greater number of events trapped may indicate a more complex or demanding function or module. At1504, functions of the application are modularized based on the operational data to construct modularized functions. In an embodiment, modularizing functions of the application includes identifying related functionality and constructing modularized functions based on the related functionality. At1506, a phase transition graph is constructed using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, wherein the phase transition graph is used to dimension the application by distributing the modularized functions across the network. In an embodiment, constructing the phase transition graph includes receiving a plurality of input vectors, each input vector representing a facet of the operational data. A machine learning model is used with the plurality of input vectors to identify phase transitions of higher probability and phase transitions of lower probability. The phase transition graph is constructed indicating the phase transitions of higher probability and the phase transitions of lower probability. In an embodiment, to dimension the application, the application is vertically dimensioned across a network. Vertical dimensioning refers to dimensioning among different network layers, such as among the core and edge layers. In a related embodiment, to dimension the application, the application is horizontally dimensioned across a network. Horizontal dimensioning refers to dimensioning the application at approximately the same layer in the network (e.g., dimensioning an application across several edge nodes).
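The final step of operation1506— indicating which phase transitions are of higher and which of lower probability — can be sketched as a threshold applied over a transition-probability matrix. The 0.5 threshold and the example matrix below are assumptions for illustration; the disclosure leaves the classification rule to the machine learning model:

```python
def label_transitions(matrix, threshold=0.5):
    """Split observed transitions (i, j, p) into higher- and
    lower-probability groups for rendering in the graph."""
    high, low = [], []
    for i, row in enumerate(matrix):
        for j, p in enumerate(row):
            if p == 0.0:
                continue  # unobserved transitions are omitted
            (high if p >= threshold else low).append((i, j, p))
    return high, low

# Assumed row-normalized phase transition probabilities.
matrix = [
    [0.0, 0.9, 0.1],
    [0.2, 0.0, 0.8],
    [0.7, 0.3, 0.0],
]
high, low = label_transitions(matrix)
```

The higher-probability group corresponds to the darker cells of the heatmap, which are the transitions most useful for deciding where modularized functions should be co-located.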
In further examples, other aspects of the operations1502,1504, and1506may be enhanced by the use of machine learning (or other artificial intelligence) models or algorithms, to assist the deployment, implementation, or use of network dimensioning. For example, AI analysis may be used to identify the characteristics of network resource latency, speed, and availability, on a real-time, historical, or as-needed basis, to help determine where functions may be dimensioned. Also for example, AI analysis may be used to verify that particular workloads or workload portions can be suitably dimensioned (or, rebalanced with dimensioning) to respective computer nodes or network locations, using horizontal or vertical dimensioning. Also for example, AI analysis may be utilized to determine the particular areas or phases of an application that are transitioning or trending (positively or negatively), which may assist in dividing or modularizing individual functions of an application. In all of these examples, the AI analysis works to assist the refactoring of the application to more effectively dimension and locate application modules, so that SLAs, latency requirements, backhaul and bandwidth, and other operational considerations can be met. It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. 
Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified component or module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module. Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center) than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions. Although an aspect has been described with reference to specific exemplary aspects, it will be evident that various modifications and changes may be made to these aspects without departing from the broader scope of the present disclosure. 
Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific aspects in which the subject matter may be practiced. The aspects illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other aspects may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various aspects is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is in fact disclosed. Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any and all adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. 
Example 1 is a compute node, comprising: processing circuitry; and a memory device including instructions embodied thereon, wherein the instructions, which when executed by the processing circuitry, configure the processing circuitry to perform operations to: analyze execution of an application to obtain operational data; modularize functions of the application based on the operational data to construct modularized functions; and construct a phase transition graph using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, wherein the phase transition graph is used to dimension the application by distributing the modularized functions across a network. In Example 2, the subject matter of Example 1 includes, wherein to analyze execution of the application, the processing circuitry is to analyze time complexity of the application. In Example 3, the subject matter of Examples 1-2 includes, wherein to analyze execution of the application, the processing circuitry is to analyze memory usage of the application. In Example 4, the subject matter of Examples 1-3 includes, wherein to analyze execution of the application, the processing circuitry is to analyze a plurality of simulated executions of the application. In Example 5, the subject matter of Examples 1-4 includes, wherein to analyze execution of the application, the processing circuitry is to analyze source code of the application. In Example 6, the subject matter of Examples 1-5 includes, wherein to analyze execution of the application, the processing circuitry is to analyze call chains of the application. In Example 7, the subject matter of Examples 1-6 includes, wherein to analyze execution of the application, the processing circuitry is to analyze events trapped by the application. 
In Example 8, the subject matter of Examples 1-7 includes, wherein to modularize functions of the application, the processing circuitry is to identify related functionality and construct modularized functions based on the related functionality. In Example 9, the subject matter of Examples 1-8 includes, wherein to construct the phase transition graph using a machine-learning based analysis, the processing circuitry is to: receive a plurality of input vectors, each input vector representing a facet of the operational data; use a machine learning model with the plurality of input vectors to identify phase transitions of higher probability and phase transitions of lower probability; and construct the phase transition graph indicating the phase transitions of higher probability and the phase transitions of lower probability. In Example 10, the subject matter of Examples 1-9 includes, wherein to dimension the application, the application is vertically dimensioned across the network. In Example 11, the subject matter of Examples 1-10 includes, wherein to dimension the application, the application is horizontally dimensioned across the network. Example 12 is at least one machine-readable storage medium comprising instructions stored thereupon, which when executed by processing circuitry of a computing system, cause the processing circuitry to perform operations comprising: analyzing execution of an application to obtain operational data; modularizing functions of the application based on the operational data to construct modularized functions; and constructing a phase transition graph using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, wherein the phase transition graph is used to dimension the application by distributing the modularized functions across a network. 
In Example 13, the subject matter of Example 12 includes, wherein analyzing execution of the application comprises analyzing time complexity of the application. In Example 14, the subject matter of Examples 12-13 includes, wherein analyzing execution of the application comprises analyzing memory usage of the application. In Example 15, the subject matter of Examples 12-14 includes, wherein analyzing execution of the application comprises analyzing a plurality of simulated executions of the application. In Example 16, the subject matter of Examples 12-15 includes, wherein analyzing execution of the application comprises analyzing source code of the application. In Example 17, the subject matter of Examples 12-16 includes, wherein analyzing execution of the application comprises analyzing call chains of the application. In Example 18, the subject matter of Examples 12-17 includes, wherein analyzing execution of the application comprises analyzing events trapped by the application. In Example 19, the subject matter of Examples 12-18 includes, wherein modularizing functions of the application comprises identifying related functionality and constructing modularized functions based on the related functionality. In Example 20, the subject matter of Examples 12-19 includes, wherein constructing the phase transition graph using a machine-learning based analysis comprises: receiving a plurality of input vectors, each input vector representing a facet of the operational data; using a machine learning model with the plurality of input vectors to identify phase transitions of higher probability and phase transitions of lower probability; and constructing the phase transition graph indicating the phase transitions of higher probability and the phase transitions of lower probability. In Example 21, the subject matter of Examples 12-20 includes, wherein dimensioning the application comprises vertically dimensioning across the network.
In Example 22, the subject matter of Examples 12-21 includes, wherein dimensioning the application comprises horizontally dimensioning across the network. Example 23 is a method comprising: analyzing execution of an application to obtain operational data; modularizing functions of the application based on the operational data to construct modularized functions; and constructing a phase transition graph using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, wherein the phase transition graph is used to dimension the application by distributing the modularized functions across a network. In Example 24, the subject matter of Example 23 includes, wherein analyzing execution of the application comprises analyzing time complexity of the application. In Example 25, the subject matter of Examples 23-24 includes, wherein analyzing execution of the application comprises analyzing memory usage of the application. In Example 26, the subject matter of Examples 23-25 includes, wherein analyzing execution of the application comprises analyzing a plurality of simulated executions of the application. In Example 27, the subject matter of Examples 23-26 includes, wherein analyzing execution of the application comprises analyzing source code of the application. In Example 28, the subject matter of Examples 23-27 includes, wherein analyzing execution of the application comprises analyzing call chains of the application. In Example 29, the subject matter of Examples 23-28 includes, wherein analyzing execution of the application comprises analyzing events trapped by the application. In Example 30, the subject matter of Examples 23-29 includes, wherein modularizing functions of the application comprises identifying related functionality and constructing modularized functions based on the related functionality.
In Example 31, the subject matter of Examples 23-30 includes, wherein constructing the phase transition graph using a machine-learning based analysis comprises: receiving a plurality of input vectors, each input vector representing a facet of the operational data; using a machine learning model with the plurality of input vectors to identify phase transitions of higher probability and phase transitions of lower probability; and constructing the phase transition graph indicating the phase transitions of higher probability and the phase transitions of lower probability. In Example 32, the subject matter of Examples 23-31 includes, wherein dimensioning the application comprises vertically dimensioning across the network. In Example 33, the subject matter of Examples 23-32 includes, wherein dimensioning the application comprises horizontally dimensioning across the network. Example 34 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-33. Example 35 is an apparatus comprising means to implement any of Examples 1-33. Example 36 is a system to implement any of Examples 1-33. Example 37 is at least one non-transitory machine-readable storage medium comprising instructions or stored data which may be configured into instructions, wherein the instructions, when configured and executed by processing circuitry of a computing device, cause the processing circuitry to perform any of the operations of Examples 1 to 33. Example 38 is one or more computer-readable storage media comprising data to cause an electronic device, upon loading, execution, configuration, or provisioning of the data by one or more processors or electronic circuitry of the electronic device, to perform one or more elements of operations described in or related to any of Examples 1 to 33, or any other method or process described herein.
Example 39 is an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of Examples 1 to 33, or any other method or process described herein. Example 40 is a method, technique, or process as described in or related to any of Examples 1 to 33, or portions or parts thereof. Example 41 is an apparatus comprising: one or more processors and one or more computer readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of Examples 1 to 33, or portions thereof. Example 42 is a device for processing communication as described in or related to any of Examples 1 to 33, or as otherwise shown and described herein. Example 43 is a network comprising respective devices and device communication mediums for performing any of the operations of Examples 1 to 33, or as otherwise shown and described herein. Example 44 is a device fog implementation comprising processing nodes and computing units adapted for performing any of the operations of Examples 1 to 33, or as otherwise shown and described herein. Example 45 is an Internet of Things (IoT) network configuration comprising respective communication links, communication circuitry, or processing circuitry for performing any of the operations of Examples 1 to 33, or as otherwise shown and described herein. Example 46 is an edge computing system implementation comprising processing nodes and computing units adapted for performing any of the operations of Examples 1 to 33, or as otherwise shown and described herein. Example 47 is an edge cloud computing device implementation comprising processing nodes and computing units adapted for performing any of the operations of Examples 1 to 33, or as otherwise shown and described herein. Example 48 is an apparatus comprising means to implement any of Examples 1 to 33.
Example 49 is a system to implement any of Examples 1 to 33. Example 50 is a method to implement any of Examples 1 to 33. In the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with a claim standing on its own as a separate embodiment.
11943281 | DETAILED DESCRIPTION Systems and methods for using a distributed game engine are described. It should be noted that various embodiments of the present disclosure are practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure various embodiments of the present disclosure. FIG.1is a diagram of an embodiment of the system100describing a distributed game engine102. The system100includes a plurality of client devices104A,104B, and104C. The system100further includes a switch system106and a plurality of nodes 1, 2, and 3. The system100includes a node assembly server108and a cloud gaming server110. A client device, as used herein, is a device that is operated by a user to gain access to a game that is executed using the distributed game engine102. Examples of a client device include a game console, a computer, a smart phone, a smart television, a head-mounted display (HMD), and a pad, etc. An HMD, as used herein, is a display device that is worn by a user to view a virtual scene, such as a virtual reality (VR) scene or an augmented reality (AR) scene. The VR scene or the AR scene is generated upon execution of the distributed game engine102. A node, as used herein, is a hardware server or a game console to execute the distributed game engine102. As an example, a node has a separate housing from a housing of another node. As another example, a node is placed on a different rack of a data center than a rack on which another node is placed within the data center. In an embodiment, multiple nodes are located within a single housing. For example, in case of PlayStation Now™ servers, a single housing is shared by multiple nodes. When multiple nodes are housed in the single housing, each node has its own network connectivity to a rack and further to a computer network.
However, as an alternative, the single housing includes a network device, such as a switch, and the nodes are coupled via the switch and a single cable to the rack for coupling to a computer network. The single housing having multiple nodes allows for better connectivity in terms of throughput and latency. In one embodiment, a node is executed using a virtual machine, which is an emulation of a computer system. In the virtual machine, a hypervisor is a computer software or hardware or a combination thereof that shares and manages hardware resources, such as processors and memory devices, to run the distributed game engine102. As an example, a virtual machine includes an operating system, one or more application computer programs that run on top of the operating system, and one or more hardware resources, e.g., central processing units, graphical processing units, video encoders, audio encoders, network communication devices, memory devices, internal communication devices, etc., that are accessed by the one or more application computer programs via the operating system and the hypervisor for performing the functions described herein as being performed by a node. A switch system, as used herein, includes one or more switches that facilitate a transfer of data between the node assembly server108and one or more of the nodes 1, 2, and 3. For example, a switch system is a switch fabric. The switch fabric has a large amount of bandwidth among nodes and is dynamically reconfigured often and allows for Quality of Service (QoS). To illustrate, the QoS facilitates reducing congestion on links when there is not enough capacity among the nodes and the QoS retries sending data. Some nodes, in time, start processing data received from nodes lacking capacity.
As another example, a switch system includes a multiplexer that selects among the nodes 1, 2, and 3 that are to execute the distributed game engine102and to which data is transferred from the node assembly server108and from which data is transferred via a computer network112to one or more of the client devices104A,104B and104C. As another example, a switch system includes one or more transistors that facilitate a transfer of data between the node assembly server108and one or more of the nodes 1, 2, and 3. As yet another example, a switch system includes one or more switches, each of which changes its position between an open position and a closed position. The open position of a switch decouples the node assembly server108from a node that is coupled to the switch. The closed position of the switch couples the node assembly server108to a node that is coupled to the switch. A computer network, as used herein, is used to transfer data between a client device and a server, or between a client device and the node, or between multiple client devices, etc., to facilitate an operation of the distributed game engine102. Examples of the computer network include a wide area network (WAN) such as the Internet, a local area network (LAN), or a combination thereof. The distributed game engine102includes a game code, e.g., a game computer program, a computer program for generating a VR scene, a computer program for generating an AR scene, etc., and other codes, e.g., a physics code, a rendering code, etc., which are further described below for generating the VR scene or the AR scene. As an example, a portion of the distributed game engine102is stored and executed by the node 1, another portion of the distributed game engine102is stored and executed by the node 2, and the remaining portion of the distributed game engine102is stored and executed by the node 3.
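The open and closed switch positions described above can be modeled in miniature. This toy model is an illustration only — the actual switch system106is hardware (e.g., a switch fabric, multiplexer, or transistors), and the class and method names are assumptions:

```python
class SwitchSystem:
    """Toy model: a closed switch couples the node assembly server to a
    node; an open switch decouples it."""

    def __init__(self, node_ids):
        self.closed = {n: False for n in node_ids}  # all switches open

    def select(self, nodes_to_couple):
        # Close switches for selected nodes, open all others.
        for n in self.closed:
            self.closed[n] = n in nodes_to_couple

    def coupled_nodes(self):
        return sorted(n for n, c in self.closed.items() if c)

switch = SwitchSystem([1, 2, 3])
switch.select({1, 2})  # couple nodes 1 and 2; node 3 stays decoupled
```

This mirrors the example below in which two switch positions are closed to connect nodes 1 and 2 while the switch to node 3 is opened.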
The client device104A generates and sends a game request202via the computer network112to the cloud gaming server110. The cloud gaming server110determines based on the game request202whether a user account that is accessed by a user 1 of the client device104A to generate the game request202is authorized to access the distributed game engine102. The user 1 of the client device104A provides login information, e.g., user name, password, etc., via an input device, e.g., hand-held controller, a camera, etc., of the client device104A or an external camera to access the user account. When the login information is authenticated by the cloud gaming server110, the user 1 of the client device104A is provided access to the user account. Upon determining that the user account is authorized to access the distributed game engine102, the cloud gaming server110sends a signal to the node assembly server108for enabling execution of the distributed game engine102. In one embodiment, in addition to the authentication of the login information, there are additional operations that are performed before enabling the client device104A to couple to the node assembly server108for execution of the distributed game engine102. For example, a network test server coupled to the computer network112receives the signal from the cloud gaming server110for enabling execution of the distributed game engine102and executes a bandwidth ping to multiple data centers. Results of the test are provided to a cloud resource manager server by the network test server. The cloud resource manager server is coupled to the computer network112. The cloud resource manager server determines which of the data centers the client device104A would connect to. This determination is based on the test results and other information, such as availability of a sufficient number of nodes and in which of the data centers the game is stored.
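The data center determination described above might be sketched as filtering candidates on node availability and game placement, then ranking the survivors by the bandwidth-test results. All field names, the data structures, and the ranking rule below are assumptions for illustration:

```python
def select_data_center(test_results, centers, nodes_needed, game_id):
    """Pick the reachable data center with the lowest measured ping
    among those that have enough free nodes and host the game."""
    candidates = [
        c for c in centers
        if c["free_nodes"] >= nodes_needed and game_id in c["games"]
    ]
    candidates.sort(key=lambda c: test_results[c["name"]])
    return candidates[0]["name"] if candidates else None

# Hypothetical inventory and bandwidth-ping results.
centers = [
    {"name": "dc-east", "free_nodes": 5, "games": {"g1"}},
    {"name": "dc-west", "free_nodes": 1, "games": {"g1"}},
]
pings = {"dc-east": 40, "dc-west": 12}
chosen = select_data_center(pings, centers, nodes_needed=2, game_id="g1")
```

Here the lower-latency data center is rejected for lacking sufficient free nodes, illustrating why the determination combines the test results with the other information rather than using ping alone.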
The cloud assembly server selects the data center having the nodes 1, 2, and 3, and sends a signal to the node assembly server108to select one or more of the nodes 1, 2, and 3. The node assembly server108upon receiving the signal from the cloud gaming server110or the cloud assembly server selects, via the switch system106, one or more of the nodes 1, 2, and 3 that will execute the distributed game engine102to initialize the one or more of the nodes 1, 2, and 3. For example, the node assembly server108sends a signal to a control input of the switch system106to couple to the nodes 1 and 2. Upon receiving the signal at the control input, the switch system106closes positions of two of the switches to connect the node assembly server108to the nodes 1 and 2, and opens a position of one of its switches to disconnect the node assembly server108from the node 3. The distributed game engine102is executed to transfer data, such as encoded frames, from one or more of the nodes 1, 2, and 3 via the computer network112to one or more of the client devices104A,104B, and104C. It should be noted that the system100includes a number of nodes other than that illustrated inFIG.1. For example, the system100includes 50 nodes, or 25 nodes, or 5 nodes, among which the game engine102is distributed. FIG.2is a diagram of an embodiment of a system200to illustrate details about a node. The node 1 includes a central processing unit (CPU)202A, a memory device204A, a graphical processing unit (GPU)206A, a network communication device208A, and an internal communication device210A. Similarly, the node 2 includes a CPU202B, a memory device204B, a GPU206B, a network communication device208B, and an internal communication device210B. A CPU, as used herein, of a node is used to process, e.g., analyze, examine, etc., data that is received from the node assembly server108or from one or more of the client devices104A through104C, and stored within a memory system that is coupled to the CPU.
Examples of the CPU include a processor, an application specific integrated circuit (ASIC), and a programmable logic device (PLD). A memory device is a device from which data is read or to which the data is written. Examples of a memory device include a read-only memory (ROM) device, or a random access memory (RAM) device, or a combination thereof. To illustrate, a memory device includes a flash memory or a redundant array of independent disks (RAID). A GPU, as used herein, executes a rendering computer program to generate a video frame, which includes texture and lighting information of an AR scene or a VR scene. Examples of the GPU include a processor, an ASIC, and a PLD. In one embodiment, the terms “video frame” and “image frame” are used interchangeably herein. An internal communication device is used to communicate data between one node and another node. The internal communication device applies an internal communication protocol, e.g., a direct memory access (DMA) protocol, a remote DMA (RDMA) protocol, RDMA over converged Ethernet, Infiniband, an Ethernet protocol, a customized protocol, a serial transfer protocol, a parallel transfer protocol, a universal serial bus (USB) protocol, a wireless protocol, a Bluetooth protocol, a wired protocol, a user datagram protocol (UDP), a UDP over Internet protocol, Transmission Control Protocol (TCP) over IP protocol, Ethernet over TCP/IP, etc., to communicate the data between two nodes. As an example of DMA, an internal communication chip, such as a PCI Express non-transparent switch chip, an RDMA chip, or an RDMA over converged Ethernet chip, or an Infiniband chip, of a node communicates via a peripheral component interconnect-express (PCIe) communication bus to directly write to a memory device in one or more other nodes or read from the memory device.
Moreover, in communication busses like PCIe, peripherals such as GPUs and other devices are memory based as each peripheral has an assigned memory address space on the bus. To illustrate, a GPU of one node applies the internal communication protocol to write to or read from a register or a buffer of a GPU of another node. In this manner, a node communicates with another node through shared mailbox registers. There is an interruption in applications running on a CPU of a node when another node reads from or writes to the node. The other node sends an interrupt signal before reading from or writing to the node. Examples of the internal communication device include a processor, an ASIC, and a PLD. To illustrate, the internal communication device is a PCI Express non-transparent switch chip or an RDMA chip, or an RDMA over converged Ethernet chip, or an Infiniband chip. As another illustration, the internal communication device is a network interface controller or a network interface card (NIC), a device that communicates using a serial transfer of data, a device that communicates using a parallel transfer of data, or a device that communicates using a universal serial bus (USB) protocol. It should be noted that PCI-Express and RDMA technologies have significantly lower latency and offer higher performance compared to the Ethernet protocol or TCP protocol or UDP protocol, because they eliminate protocol layers that incur overhead in operating systems, which are executed by a CPU. An application, such as a DMA engine executing the DMA protocol, executed within a node directly reads from or writes to memory in other nodes bypassing the operating system within the node when the node has been granted access to blocks of data within the other nodes. There is no network protocol, such as the Ethernet protocol or TCP protocol or UDP protocol, and the application of the node decides how it organizes memory and its internal structure.
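The direct, OS-bypassing access just described can be sketched as follows, with a plain byte buffer standing in for remote memory reachable over PCIe or RDMA. The access-grant bookkeeping and the function name are assumptions for illustration, not part of any real RDMA API.

```python
def dma_read(remote_memory, offset, length, granted_ranges):
    """Hypothetical DMA-style read of a block from another node's memory.

    remote_memory: bytearray standing in for the remote node's memory device.
    granted_ranges: [(start, end)] blocks this node has been granted access to,
    mirroring the text's point that access must be granted first.
    """
    # The DMA engine only touches blocks it has been granted access to;
    # no network protocol stack or remote CPU is involved in the copy.
    if not any(start <= offset and offset + length <= end
               for start, end in granted_ranges):
        raise PermissionError("block not granted")
    return bytes(remote_memory[offset:offset + length])
```

The key property the sketch illustrates is that the read is a direct memory copy gated only by a prior grant, which is what removes the per-packet protocol overhead mentioned above.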
The internal communication chip of a node has the DMA engine, and if a memory transfer operation between the node and other nodes is called for, the internal communication chip executes the DMA engine to read and write data from the other nodes without involving a CPU of the node. It should be noted that in one embodiment, the internal communication chip is used in conjunction with a switch fabric coupling multiple nodes within a single rack or even across multiple racks. A network communication device is used to transfer data packets between a node and a client device via the computer network112. For example, the network communication device applies an external communication protocol, e.g., TCP/IP, UDP/IP, etc., to receive and send data packets. Examples of a network communication device include a processor, an ASIC, and a PLD. To illustrate, the network communication device is a network interface controller or a NIC. The CPU202A, the memory device204A, the GPU206A, the network communication device208A, and the internal communication device210A are coupled to each other, e.g., via a bus. Similarly, the CPU202B, the memory device204B, the GPU206B, the network communication device208B, and the internal communication device210B are coupled to each other. The node assembly server108allocates two or more of the nodes 1, 2, and 3 based on criteria, such as, quality of service (QoS) between the nodes and a client, availability of the nodes, capacity of the nodes for transferring data to the client and receiving data from the client, bandwidth capacity of the computer network112between the nodes and the client, a subscription level assigned to a user account, or a combination thereof. In an embodiment, instead of one memory device, multiple memory devices are used within the node to store data that is stored within the memory device. In one embodiment, instead of one CPU, multiple CPUs are used within the node to perform functions that are performed by the CPU.
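One way the allocation criteria listed above could be applied is sketched below. The single `qos` score and the field names are assumptions standing in for the several criteria named in the description (QoS, availability, capacity, bandwidth, subscription level); the real node assembly server may weigh them differently.

```python
def allocate_nodes(nodes, count, min_qos):
    """Hypothetical allocation of `count` nodes by the node assembly server.

    nodes: list of {"id": ..., "available": bool, "qos": float in [0, 1]}.
    Returns the ids of the `count` best available nodes, or None if the
    criteria cannot be met.
    """
    # Filter on availability and a minimum quality-of-service score.
    eligible = [n for n in nodes if n["available"] and n["qos"] >= min_qos]
    # Prefer nodes with the best QoS toward the client.
    eligible.sort(key=lambda n: n["qos"], reverse=True)
    if len(eligible) < count:
        return None  # not enough nodes satisfy the criteria
    return [n["id"] for n in eligible[:count]]
```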
In an embodiment, instead of one GPU, multiple GPUs are used within the node to perform functions that are performed by the GPU. FIG.3Ais a diagram of an embodiment of a system300to illustrate a distribution of the game engine102between the nodes 1 and 2. The node assembly server108determines to select the nodes 1 and 2 for distribution of the game engine. Upon determining to select the nodes 1 and 2, the node assembly server108sends a control signal to the switch system106to couple the node assembly server108to the CPU202A of the node 1 and to the CPU202B of the node 2. In one embodiment, the cloud resource manager, instead of the node assembly server108, determines to select the nodes 1 and 2 for distribution of the game engine and sends a control signal to the switch system106to couple the node assembly server108to the CPU202A of the node 1 and to the CPU202B of the node 2. In one embodiment, the functions described herein as being performed by the node assembly server108are performed by the cloud resource manager. In an embodiment, the cloud resource manager delegates its functions to the node assembly server108to perform. Moreover, the node assembly server108sends information identifying the internal communication protocol to be used to communicate data between the internal communication device210A and the internal communication device210B. For example, the node assembly server108sends a software development kit (SDK) via the switch system106to the internal communication device210A and the internal communication device210B. The SDK is used by a programmer user to program an internal communication device to communicate with another internal communication device using the internal communication protocol. As another example, the node assembly server108sends the internal communication protocol via the switch system106to the internal communication device210A and the internal communication device210B. 
The internal communication devices210A and210B apply the internal communication protocol to communicate with each other. Moreover, the node assembly server108determines from a signal identifying a game to be executed which of the nodes 1 and 2 has the game engine for executing the game to generate a VR scene or an AR scene. The signal identifying the game is received from the cloud gaming server110. For example, the node assembly server108identifies from the signal identifying the game that a game engine A is stored within the node 1. In one embodiment, the node assembly server108sends a request signal to the nodes 1 and 2 via the switch system106to determine which of the nodes 1 and 2 has the game engine A. In one embodiment in which it is determined that none of the nodes has the game engine A, the node assembly server108sends a signal to a storage server that is coupled to the computer network112identifying the game for receiving the game engine A. The game engine A is stored in the storage server. The node assembly server108communicates with the storage server to send the game engine A to the nodes 1 and/or 2. When the node 1 has the game engine A, the node assembly server108sends a command signal to the CPU202A via the switch system106to distribute the game engine A from the memory device204A to the node 2. For example, the node assembly server108sends the command signal to the CPU202A to transfer a copy of the game engine A to the node 2. Upon receiving the command signal, the CPU202A accesses the game engine A from the memory device204A and sends the game engine A to the internal communication device210A. The internal communication device210A applies the internal communication protocol to the game engine A to generate transfer units, e.g., frames, packets, etc., and sends the transfer units to the internal communication device210B.
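The generation of transfer units from the game engine bytes, and their reassembly on the receiving side, might look like the following sketch. The fixed unit size and the simple concatenation framing are illustrative assumptions rather than details of the internal communication protocol.

```python
UNIT_SIZE = 8  # bytes per transfer unit (an assumed, illustrative size)

def to_transfer_units(engine_bytes):
    """Split the game engine bytes into fixed-size transfer units,
    as the sending internal communication device does."""
    return [engine_bytes[i:i + UNIT_SIZE]
            for i in range(0, len(engine_bytes), UNIT_SIZE)]

def from_transfer_units(units):
    """Reassemble the game engine from received transfer units,
    as the receiving internal communication device does."""
    return b"".join(units)
```

Round-tripping through the two functions returns the original bytes, which is the property the extract-and-store step on the receiving node relies on.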
The internal communication device210B applies the internal communication protocol to the transfer units that are received to extract the game engine A, and sends the game engine A to the memory device204B for storage. In one embodiment, the CPU202A has no role or a very limited role in transferring the game engine A from the node 1 to the node 2. For example, the internal communication device210B of node 2, after obtaining a memory location of the game engine A from the internal communication device210A of node 1, copies the game engine A from the memory device204A of node 1 to the memory device204B of node 2. A location of the game engine A within the memory device204A was reported earlier by the CPU202A or by the internal communication device210A to the internal communication device210B of node 2. For example, the location of the game engine A within the memory device204A was reported earlier by the CPU202A or by the internal communication device210A to the node assembly server108. As another example, the location of the game engine A within the memory device204A was reported earlier by the CPU202A or the internal communication device210A to the internal communication device210B of node 2. FIG.3Bis a diagram of an embodiment of the cloud gaming server110. The cloud gaming server110includes a processor310, a memory device312, and a communication device314. Examples of a processor include a CPU, an ASIC, and a PLD. Examples of the communication device314include a network interface controller, a NIC, a device that communicates using a serial transfer of data, a device that communicates using a parallel transfer of data, and a device that communicates using the USB protocol. To illustrate, the communication device314applies a network communication protocol, such as TCP/IP. The memory device312stores identifiers of multiple user accounts 1 through n, where n is an integer greater than 1.
For example, the memory device312stores a user identifier (ID) 1 of the user account 1 and a user ID 2 of the user account 2. A user ID includes one or more alphanumeric characters, or symbols, or a combination thereof, and is assigned to a user account by the processor310. Examples of a user ID include a username, a password, or a combination thereof. A user ID is unique in that the user ID that is assigned to a user account by the processor310is not assigned to any other user account by the processor310. The processor310determines whether a user ID that is received from a user via a client device and the computer network112is assigned to any other user account. Upon determining that the user ID is assigned to the other user account, the processor310indicates to the user via the client device to provide another user ID. The memory device312further stores a plurality of game titles of games 1 through N, where N is an integer greater than 1. Each game title identifies a different game. For example, a game title includes a description that describes a game, and the description is different from a description of another game. Each game is executed using a different game engine. For example, the game 1 is executed by running the distributed game engine102and the game 2 is executed by running another distributed game engine. The communication device314receives the game request202from the client device104A, illustrated inFIG.2, and sends the game request to the processor310. The game request202includes a request for playing the game 1. The game request202is received via the user account 1 after the user 1 logs into the user account 1 via the client device104A. The user 1 logs into the user account 1 by providing the user ID1 via the client device104A. The user ID1 is communicated from the client device104A via the computer network112and the communication device314to the processor310.
The processor310determines whether the user ID1 is authentic, e.g., matches a user ID stored in the memory device312, and upon determining so, the processor310allows the client device104A to log into the user account 1. Upon receiving the game request202, the processor310determines whether the user account 1 is authorized to access the game 1. The authorization is stored in the memory device312and is provided based on various factors, such as, games purchased via the user account 1, a demographic of the user 1 stored within the user account 1, a number of game points earned by the user 1 via the user account 1, etc. Upon determining that the user account is authorized to access the game 1, the processor310sends an instruction via the communication device314to the node assembly server108, illustrated inFIG.2, that the user account 1 is allowed access to play the game 1. In one embodiment, functions described herein as being performed by the processor310are instead performed by multiple processors. In an embodiment, data stored within the memory device312is instead stored within multiple memory devices. FIG.3Cis a diagram of an embodiment of the node assembly server108. The node assembly server108includes a processor350, a memory device352, and a communication device354. Examples of the communication device354include a network interface controller, a NIC, a device that communicates using a serial transfer of data, a device that communicates using a parallel transfer of data, and a device that communicates using the USB protocol. The memory device352stores a correspondence, e.g., a mapping, a listing, a one-to-one relationship, etc., between the user accounts 1 through n, multiple subscription levels 1, 2, and 3, and a number of nodes. 
For example, the memory device352has an entry that the user account 1 is assigned a subscription level 1 and the subscription level 1 is defined to enable use of two nodes for playing a game that is requested for access via the user account 1. As another example, the memory device352has another entry that the user account 2 is assigned a subscription level 3 and the subscription level 3 is defined to enable use of four nodes for playing a game that is requested for access via the user account 2. The greater the number of nodes, the higher the processing power, e.g., a number of GPUs and CPUs, used to execute the distributed game engine102. For example, the subscription level 3 corresponds to use of 4 nodes to execute the distributed game engine102and the subscription level 1 corresponds to use of 2 nodes to execute the distributed game engine102. When each node has one GPU and one CPU, the subscription level 3 corresponds to 4 nodes, e.g., 4 GPUs and 4 CPUs, and the subscription level 1 corresponds to 2 nodes, e.g., 2 GPUs and 2 CPUs. The subscription level 3 assigns the greater processing power compared to the subscription level 1. A subscription level is assigned to a user account based on a type of the user account. For example, when the user account is used to play one or more games in a regular fashion, e.g., periodically, weekly, daily, etc., the user account is assigned a higher subscription level than a user account that is used to play the one or more games in an irregular fashion. As another example, when a first user account is used to play one or more games and purchases a higher number of virtual items in the one or more games than that purchased in a second user account, the first user account is assigned the higher subscription level than the second user account. 
As yet another example, both the play of the one or more games in the regular fashion and the purchase of the higher number of virtual items are monitored to determine to assign the higher subscription level. The memory device352further stores a correspondence, e.g., a mapping, a listing, a one-to-one relationship, etc., between a graphics level of a game and a number of nodes executing a distributed game engine for allowing play of the game. For example, the memory device352includes an entry indicating that when a graphics level for the game is A, a number of nodes executing the distributed game engine for playing the game is 4. As another example, the memory device352includes an entry indicating that when a graphics level for the game is B, a number of nodes executing a distributed game engine for playing the game is 3. As yet another example, the memory device352includes an entry indicating that when a graphics level for the game is C, a number of nodes executing a distributed game engine for playing the game is 2. A graphics level is defined based on criteria, such as, a resolution of an image of a game, a number of colors used to generate frames of the game, a number of intensity levels used to generate the frames, a frame rate for playing the game, a number of virtual items whose positions change in the game, an amount of background that stays stationary within the game, or a combination thereof. As an example, there is an increase in a number of nodes used to execute a distributed game engine when there is an increase in a frame rate, or an increase in a resolution, or an increase in a number of intensity levels, or an increase in a number of virtual items, or an increase in the number of colors used in frames for playing a game. To illustrate, in case of a handover of streaming sessions between devices, there is an increase in the number of nodes. To illustrate further, a user is playing the game on his/her smartphone or tablet.
When the user reaches his/her home, the user wishes to transfer the game to his/her PlayStation™ game console. The smartphone or tablet displays the game in a low resolution and a low frame rate but the PlayStation™ game console in conjunction with a 4K or an 8K television applies a higher resolution and a higher frame rate. To support the higher resolution and the higher frame rate, there is an increase in the number of nodes. Such a handover is described in U.S. patent application Ser. No. 15/141,799, filed on Apr. 28, 2016, and titled “Cloud Gaming Device Handover” which is incorporated by reference herein in its entirety. As another example, there is a decrease in a number of nodes used to execute a distributed game engine when there is a decrease in a frame rate, or a decrease in a resolution, or a decrease in a number of intensity levels, or a decrease in a number of virtual items, or a decrease in the number of colors used in frames for playing a game. The communication device354receives the instruction that access to the game 1 is allowed from the communication device314of the cloud gaming server110, and sends the instruction to the processor350. The instruction includes the user ID1 that is assigned to the user account 1. The processor350accesses the memory device352to determine based on the user ID1, a corresponding subscription level that is assigned to the user account 1. For example, the processor350determines that the user account 1 is assigned the subscription level 1 and that two nodes are to be assigned to the user account 1 to execute the distributed game engine102. The processor350sends via the communication device354information, e.g., a library that includes the SDK, to the nodes 1 and 2 via the switch system106to enable the nodes 1 and 2 to communicate with each other using the internal communication protocol to facilitate a play of the game 1.
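The two correspondences described above, subscription level to node count and graphics level to node count, can be sketched as simple lookup tables. The entry for subscription level 2 and the rule of capping the graphics-level node count at the subscription allowance are assumptions added for illustration; the text only gives the level 1 and level 3 entries and does not say how the two tables are combined.

```python
# Entries for levels 1 and 3 mirror the examples in the text;
# level 2 -> 3 nodes is an assumed intermediate value.
SUBSCRIPTION_NODES = {1: 2, 2: 3, 3: 4}

# Graphics level A -> 4 nodes, B -> 3, C -> 2, as in the text's examples.
GRAPHICS_NODES = {"A": 4, "B": 3, "C": 2}

def nodes_for_session(subscription_level, graphics_level):
    """Number of nodes for a session (hypothetical combination rule):
    the graphics requirement, capped at what the subscription allows."""
    return min(SUBSCRIPTION_NODES[subscription_level],
               GRAPHICS_NODES[graphics_level])
```

Under this rule a subscription-level-1 account playing a graphics-level-A game still gets only two nodes, while a level-3 account gets the full four.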
In an embodiment, the processor350sends via the communication device354the information to both the nodes 1 and 2 via the computer network112to enable the nodes 1 and 2 to communicate with each other using the internal communication protocol to facilitate a play of the game 1. In one embodiment, the different graphics levels are for playing different games. For example, the graphics level C is for playing the game 1 and the graphics level B is for playing the game 2. FIG.3Dis a diagram to illustrate various portions of a game engine. The game engine A includes a game code A for playing the game 1, save data A for restoring a state of the game 1, a rendering code A for displaying the game 1, a physics engine A for performing physics operations to execute the game 1, and an audio engine A that applies audio data for playing the game 1. Similarly, the game engine B includes a game code B for playing the game 2, save data B for restoring a state of the game 2, a rendering code B for displaying the game 2, a physics engine B for performing physics operations to execute the game 2, and an audio engine B that applies audio data for playing the game 2. A game code, as used herein, is a computer program that is executed to determine a next state in a game based on a user input received from a client device via the computer network112. The user input is a selection made by the user on a hand-held controller or is audio data that is captured by a microphone of a client device, or is image data that is captured by a camera, e.g., a depth camera, an infrared camera, a digital camera, etc., located in a real-world environment, e.g., a room, a warehouse, a floor, a location, a park, etc., in which the user is located, or a combination thereof. The camera captures a gesture, e.g., one or more hand motions, one or more head motions, one or more leg movements, etc., made by a user to generate the image data.
A state of a game defines characteristics, e.g., positions, orientations, sizes, shapes, etc., of all portions of a virtual scene generated upon execution of the game. The portions of a virtual scene include a virtual object or a background. As an example, a virtual object moves from one video frame to another and the background remains stationary from one video frame to another. As another example, a virtual object moves from one video scene to another and the background remains stationary from one video scene to another. As yet another example, a virtual object moves from one video frame to another, and the background remains stationary from one video frame to another but moves after a pre-determined number of video frames. Examples of a virtual object include a vehicle, a building, an avatar of the user, a character, a supernatural hero, a weapon, an animal, etc. Examples of a background include a desert, mountains, ocean, trees, buildings, cities, audience of people, etc. Save data, as used herein, is state data of a game that is accessed when a user logs into his/her user account after logging out from the user account. For example, during a game session, a user logs out of his/her user account at a game state. When the user logs back into the user account, a client device displays the game state, e.g., same virtual reality scene, etc., that was displayed when the user logged out of the user account. A rendering code, as used herein, is a computer program that is used to generate an image from a two-dimensional (2D) or a three-dimensional (3D) model of one or more portions of a virtual scene. For example, the rendering code defines texturing and light intensities that are applied to one or more portions of a virtual scene. As another example, the rendering code defines colors, texturing, shading, and light intensities that apply to one or more portions of a virtual scene.
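The role of save data described above, restoring the game state that was displayed at logout, can be sketched as follows; the class, method, and field names are hypothetical.

```python
class GameSession:
    """Hypothetical sketch of save-data handling across logout/login."""

    def __init__(self):
        self.save_data = {}  # user_id -> game state recorded at logout

    def logout(self, user_id, state):
        # Record the game state (scene characteristics, scores, etc.)
        # at the moment the user logs out.
        self.save_data[user_id] = dict(state)

    def login(self, user_id):
        # Restore the state that was displayed when the user logged out;
        # a first-time user starts from an assumed initial state.
        return self.save_data.get(user_id, {"scene": "start"})
```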
A physics engine, as used herein, is a computer program that is executed to determine physical relationships between different portions in a virtual scene, e.g., a virtual reality scene, an augmented reality scene, etc., and between different virtual scenes. The physical relationships are determined based on laws of physics, such as, gravitational laws, motion laws, friction laws, etc. An audio engine, as used herein, is a computer program that determines and provides audio data to manage a corresponding virtual scene of a game. For example, when a portion of a virtual scene makes a sound, the audio engine determines audio data for outputting the sound and other variables of the sound, e.g., pitch, tone, amplitude, etc., and links the audio data with the portion of the virtual scene. FIG.4Ais a diagram of an embodiment of a system400to illustrate execution of the distributed game engine102ofFIG.1in which video frame information is sent from the node 2 to the node 1. The system400includes the nodes 1 and 2, the computer network112, and the client devices104A and104B. The node 1 includes an audio encoder402A and a video encoder404A. However, the node 2 does not include an audio encoder, does not include a video encoder, and does not include a GPU. An audio encoder, as used herein, is a hardware device, e.g., an integrated circuit, a processor, etc., or a software module, e.g., a computer program, etc., or a combination thereof, that compresses or decompresses audio data according to an audio file format or a streaming audio format. A video encoder, as used herein, is a hardware device, e.g., an integrated circuit, a processor, etc., or a software module, e.g., a computer program, etc., or a combination thereof, that compresses or decompresses video data according to a video file format or a streaming video format, e.g., H.264, H.265/MPEG-H, H.263/MPEG-4, H.262/MPEG-2, a customized protocol, etc.
The client device104A includes an audio/video (A/V) sorter418, an audio decoder420, a video decoder422, a display device424, a communication device426, and an audio output device430. An A/V sorter is a hardware device, e.g., an integrated circuit, a processor, etc., or a software module, e.g., a computer program, etc., that distinguishes audio data from video data. Moreover, an audio decoder decodes, e.g., decompresses, encoded audio frames according to an audio file format or a streaming audio format to output audio frames. The audio decoder also encodes audio frames. Similarly, a video decoder decodes, e.g., decompresses, encoded video frames according to a video file format or a streaming video format to output video frames. Moreover, the video decoder encodes video frames. Examples of the display device include a head-mounted display (HMD), or a liquid crystal display (LCD) device, or a light emitting diode (LED) display device, or a display screen of a television, or a monitor, or a display screen of a tablet, or a display screen of a smart phone. Examples of a communication device include a network interface controller or a NIC. An example of an audio output device includes a digital-to-analog converter that converts digital audio data to analog audio data, an amplifier, and one or more speakers. An input of the amplifier is coupled to the digital-to-analog converter and an output of the amplifier is coupled to the one or more speakers. The CPU202A executes the game engine A to generate video frame information406A. For example, the CPU202A executes the game code A and the CPU202A and/or the GPU206A execute the physics engine A to generate positions, sizes, shapes, and orientations of one or more virtual scenes of the game 1. Examples of video frame information, as used herein, include a position, a size, a shape, an orientation, or a combination thereof, of one or more virtual scenes.
Similarly, the CPU202B executes the game engine A to generate video frame information406B. For example, the CPU202B executes the game code A and the physics engine A to generate positions, sizes, shapes, and orientations of one or more virtual scenes of the game 1. Moreover, audio frames412B are generated by one or more processors, e.g., the CPU202B, of the node 2. The audio frames412B include audio data that is associated with the video frame information406B in that the audio data is for emission of sound simultaneous with display of one or more virtual scenes having the video frame information406B. The internal communication device210B of the node 2 applies the internal communication protocol to the video frame information406B and the audio frames412B to generate packaged information414, e.g., packets, transfer units, etc., and sends the packaged information414via a communication link416, e.g., a cable that transfers data packaged according to the internal communication protocol, to the internal communication device210A. An illustration of the link416is a PCIe communication bus. The internal communication device210A receives the packaged information414and applies the internal communication protocol to the packaged information414, e.g., depacketizes the transfer units, to extract the video frame information406B and the audio frames412B from the packaged information414. The video frame information406B and the audio frames412B are stored in the memory device204A. The GPU206A accesses the video frame information406A and the video frame information406B from the memory device204A and applies the rendering code A to the video frame information406A and the video frame information406B to generate a plurality of video frames408(A+B), which includes video frames408A generated from the video frame information406A and includes video frames408B generated from the video frame information406B.
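Node 1's rendering of both sets of frame information, its own and the set received from node 2, can be sketched as follows. The `render` stand-in and the data shapes are assumptions; they are not the actual rendering code A.

```python
def render(frame_info):
    """Stand-in for the GPU applying rendering code A (texture, lighting)
    to one unit of video frame information."""
    return {"rendered": True, **frame_info}

def render_combined(info_a, info_b):
    """Produce the video frames 408(A+B): frames rendered from node 1's
    frame information 406A followed by frames rendered from node 2's
    frame information 406B."""
    return [render(f) for f in info_a] + [render(f) for f in info_b]
```

The point the sketch captures is that a single GPU on node 1 renders frame information produced on two different nodes into one combined frame stream.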
As an example, a video frame includes lighting, intensity, color, texture, shading, or a combination thereof, of one or more virtual scenes. Moreover, a plurality of audio frames410A are generated by one or more processors, e.g., the CPU202A, etc., of the node 1. The audio frames410A are associated with the video frames408A in that the audio frames provide audio data for a portion of the scene within the video frames408A. For example, the audio frames410A include audio data for a sound to be emitted by a virtual object. As another example, the audio frames410A include audio data to generate a sound to be emitted by a background within one or more virtual scenes. The audio encoder402A encodes the audio frames410A and the audio frames412B to generate encoded audio frames. Similarly, the video encoder404A encodes the video frames408(A+B) to generate encoded video frames. The network communication device208A applies the external communication protocol to the encoded audio frames and the encoded video frames to generate a plurality of frame packets and sends the frame packets via the computer network112to the client device104A. The communication device426receives the frame packets and depacketizes the frame packets by applying the external communication protocol to output the encoded video frames and the encoded audio frames. The A/V sorter418distinguishes between the encoded video frames and the encoded audio frames, and sends the encoded video frames to the video decoder422and the encoded audio frames to the audio decoder420. The video decoder422decodes the encoded video frames to output the video frames408(A+B). Moreover, the audio decoder420decodes the encoded audio frames to output the audio frames410A and412B. The display device424displays one or more virtual scenes from the video frames408(A+B).
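The client-side sorting of encoded frames into the two decoder paths can be sketched as below; the tagged-tuple packet layout is an assumption made for illustration, not the actual framing used on the wire.

```python
def sort_av(frames):
    """Hypothetical A/V sorter: route each depacketized, still-encoded
    frame to the audio path or the video path based on a type tag.

    frames: iterable of ("audio" | "video", payload) tuples.
    """
    audio, video = [], []
    for kind, payload in frames:
        if kind == "audio":
            audio.append(payload)   # -> audio decoder 420
        elif kind == "video":
            video.append(payload)   # -> video decoder 422
    return audio, video
```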
For example, the display device424controls color elements and light intensity elements of the display device424to generate a virtual scene that includes the positions, orientations, colors, textures, light intensities, shading, or a combination thereof, of all portions of the virtual scene. Moreover, the audio output device430outputs sounds from the audio frames410A and412B. For example, the digital-to-analog converter of the audio output device430converts the audio frames410A and412B from a digital format into an analog format to generate analog audio data. The analog audio data is amplified by the amplifier of the audio output device430to generate amplified audio data. The amplified audio data is converted from electrical energy to sound energy by the one or more speakers of the audio output device430. In an embodiment, the node 1 acts as a master to delegate a task of generating the video frame information406B and the audio frames412B to the node 2. In one embodiment, in a multi-player game, the frame packets that are described inFIG.4Aand sent from the network communication device208A are broadcasted to multiple client devices, e.g., the client devices104A and104B, etc. For example, the frame packets are sent from the network communication device208A to the client devices104A and104B via the computer network112for game play. Such broadcasting saves power in that calculations for generating the frame packets are executed once by the node 1 and the node 2, and the frame packets are broadcasted from the node 1 to the multiple client devices. In an embodiment, the frame packets that include audio and video data and that are described inFIG.4Aand sent from the network communication device208A are sent to spectators of the game. For example, the frame packets are sent from the network communication device208A via the computer network112to a server hosting a spectator service, such as YouTube™ or Twitch™ or an e-sports service, for further viewing by spectators.
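The conversion-and-amplification chain of the audio output device can be modeled in a few lines. The 16-bit sample format and the gain value below are illustrative assumptions, not parameters stated in the description.

```python
# Illustrative sketch of the audio output chain: digital audio samples are
# converted to an analog-style waveform and then amplified before being
# driven to the speakers. Signed 16-bit input and a gain of 1.5 are assumed.

def to_analog(samples):
    # Map signed 16-bit samples into the range [-1.0, 1.0).
    return [s / 32768.0 for s in samples]

def amplify(wave, gain):
    # Scale the waveform, clipping to the valid range as a real amplifier would.
    return [max(-1.0, min(1.0, x * gain)) for x in wave]

digital = [0, 16384, -16384, 32767]
amplified = amplify(to_analog(digital), gain=1.5)
```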
In one embodiment, the node 1 includes an audio renderer, such as an integrated circuit, that renders audio data to generate an audio wave. The audio wave is then encoded by the audio encoder402A. The rendering of the audio data is performed based on a speaker configuration, such as stereo 5.1 or stereo 7.1, of the client device104A. In an embodiment in which the node 1 excludes an audio renderer, audio data is encoded by the audio encoder402A and streamed from the node 1 via the computer network112to the client device104A. The client device104A includes an audio renderer that renders the audio data to generate the audio wave. For example, the client device104A renders the audio wave by placing each virtual object in space and calculating the audio wave. The calculation depends on a number of speakers of the client device104A, such as stereo 5.1 or stereo 7.1. In one embodiment, rendering of audio data to generate an audio wave is not performed within the node 2. The node 2 does not include an audio renderer. Rather, the audio data associated with the video frame information406B is sent from the node 2 to the node 1 via the internal communication devices210B and210A, and rendering of the audio wave is performed by an audio renderer of the node 1. The audio wave is then encoded by the audio encoder402A. FIG.4Bis a diagram of an embodiment of a system450to illustrate another embodiment of the distributed game engine102ofFIG.1in which video frames are sent from a node 2A to the node 1. The system450includes the node 1, the node 2A, the computer network112, and the client devices104A and104B. The node 2A is the same as the node 2 ofFIG.4Aexcept that the node 2A includes a GPU206B. The node 2A does not include an audio encoder and a video encoder. The GPU206B applies the rendering code A to generate a plurality of video frames452B from the video frame information406B. 
For example, the GPU206B determines lighting intensity, textures, shading, and colors to be applied to one or more virtual scenes to generate the video frames452B. The internal communication device210B of the node 2A applies the internal communication protocol to the video frames452B and the audio frames412B to generate packaged information454, e.g., packets, transfer units, etc., and sends the packaged information454via the communication link416to the internal communication device210A. In one embodiment, other operations, such as sample rate conversion, amplification, and audio filtering on audio data are performed within the node 2A before generating the audio frames412B. The audio data is later converted to sound that is output simultaneously with a display of the video frames452B. The internal communication device210A receives the packaged information454and applies the internal communication protocol to the packaged information454, e.g., depacketizes the transfer units, etc., to extract the video frames452B and the audio frames412B from the packaged information454. The video frames452B and the audio frames412B are stored in the memory device204A. The GPU206A accesses the video frame information406A from the memory device204A and applies the rendering code A to the video frame information406A to generate a plurality of video frames452A. The video encoder404A encodes the video frames452A and the video frames452B to generate encoded video frames. The audio encoder402A encodes the audio frames410A and the audio frames412B to generate encoded audio frames. The network communication device208A applies the external communication protocol to the encoded audio frames and the encoded video frames to generate a plurality of frame packets and sends the frame packets via the computer network112to the client device104A. In an embodiment, the node 1 acts as a master to delegate a task of generating the video frames452B to the node 2A.
In one embodiment, in the multi-player game, the frame packets described inFIG.4Bthat are sent from the network communication device208A are broadcasted to multiple client devices, e.g., the client devices104A and104B, etc. For example, the frame packets are generated once by the nodes 1 and 2A and are sent from the network communication device208A to the client devices104A and104B via the computer network112for game play to save power. The frame packets do not need to be generated again for each different client device allowed to access the game engine A. FIG.5Ais a diagram of an embodiment of a system500to illustrate yet another embodiment of the distributed game engine102ofFIG.1. The system500includes the node 1, a node 2B, the computer network112, and a client device104A1. The node 2B is the same as the node 2A ofFIG.4Bexcept that the node 2B includes an audio encoder402B, a video encoder404B, and a network communication device208B. The client device104A1is the same as the client device104A ofFIG.4Bexcept that the client device104A1includes a video decoder 1, a video decoder 2, an audio decoder 1, an audio decoder 2, and a frame organizer502. The frame organizer502is a hardware device, e.g., a processor, integrated circuit, etc., or a software module, e.g., a computer program, or a combination thereof to organize the video frames 1 through 4 in a consecutive order for display on the display device424. Moreover, the frame organizer502organizes the audio frames 1 through 4 in a consecutive order for outputting sounds associated with the video frames 1 through 4 in a consecutive order. The video frame information406A includes video frame information 1 and video frame information 3. Similarly, the video frame information406B includes video frame information 2 and video frame information 4. 
The video frame information 1 is information regarding a video frame 1 of a virtual scene, the video frame information 2 is information regarding a video frame 2 of a virtual scene, the video frame information 3 is information regarding a video frame 3 of a virtual scene, and the video frame information 4 is information regarding a video frame 4 of a virtual scene. As an example, each video frame information 1 through 4 includes positions, orientations, sizes, and shapes of one or more virtual objects and/or background in a virtual scene. The video frames 1 through 4 are to be displayed in a consecutive order. For example, the video frame 2 is to be displayed on the client device104A1after the video frame 1 is displayed on the client device104A1. The video frame 3 is to be displayed on the client device104A1after the video frame 2 is displayed on the client device104A1. The video frame 4 is to be displayed on the client device104A1after the video frame 3 is displayed on the client device104A1. In one embodiment, a video frame, as used herein, is composed of lines of picture elements and has a resolution that is controlled by a number of the picture elements. Each picture element has a color and a light intensity to define a shape, a size, and a texture of a virtual object or a background. The video frames452A include the video frames 1 and 3. Moreover, the video frames452B include the video frames 2 and 4. Similarly, the audio frames410A include an audio frame 1 and an audio frame 3, and the audio frames410B include an audio frame 2 and an audio frame 4. The audio frame 1 has audio data that is to be emitted as sound by the client device104A1simultaneously with the display of the video frame 1.
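The alternating assignment described above, in which the node 1 produces the odd-numbered frames and the node 2B produces the even-numbered frames, can be sketched as a simple round-robin split. The function and node labels below are illustrative.

```python
# Minimal sketch of the alternating frame assignment: odd frame numbers are
# handled by node 1 and even frame numbers by node 2B, matching the text's
# frames 1 and 3 versus frames 2 and 4. The dispatch rule is an assumption
# generalized from that four-frame example.

def assign_frames(frame_numbers):
    assignment = {"node 1": [], "node 2B": []}
    for n in frame_numbers:
        key = "node 1" if n % 2 == 1 else "node 2B"
        assignment[key].append(n)
    return assignment

assignment = assign_frames([1, 2, 3, 4])
```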
Similarly, the audio frame 2 has audio data that is to be emitted as sound by the client device104A1simultaneously with the display of the video frame 2, the audio frame 3 has audio data that is to be emitted as sound by the client device104A1simultaneously with the display of the video frame 3, and the audio frame 4 has audio data that is to be emitted as sound by the client device104A1simultaneously with the display of the video frame 4. Before generating the video frames 1 and 3, the CPU202A and the GPU206A wait for receipt of the video frames 2 and 4 from the node 2B. In an embodiment, before generating the video frames 1 and 3, the CPU202A sends a request for the video frames 2 and 4 and for the audio frames 2 and 4 via the internal communication device210A, the communication link416, and the internal communication device210B to the CPU202B. Upon receipt of the video frames 2 and 4 and storage of the video frames 2 and 4 in the memory device204A, the CPU202A extracts the video frame information 4 from the video frame 4, extracts the video frame information 2 from the video frame 2, and applies the video frame information 4 and the video frame information 2 to generate the video frame information 3 and to generate the video frame information 1. For example, the video frame information 2 indicates that a virtual ball is at a position 2 and the video frame information 4 indicates that the virtual ball is at a position 4. The CPU202A determines from the positions 2 and 4 that the virtual ball is to be at a position 3 between the positions 2 and 4. The video frame information 3 includes the position 3 of the virtual ball. As another example, the video frame information 2 indicates that a virtual ball is at the position 2 and the physics engine code A indicates that the law of gravity is to be followed. The CPU202A determines from the position 2 and the law of gravity that the virtual ball is to be at the position 3, which is below the position 2.
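The interpolation step in the virtual-ball example above can be worked through concretely. The description only says the position 3 lies "between the positions 2 and 4"; the midpoint rule below is an assumption chosen for illustration.

```python
# Worked sketch of the interpolation described in the text: given the virtual
# ball's positions in video frames 2 and 4, the CPU derives a position for
# video frame 3 between them. Midpoint interpolation is an illustrative
# assumption; the embodiment does not specify the interpolation rule.

def interpolate_position(pos_2, pos_4):
    # Component-wise midpoint of the two known positions.
    return tuple((a + b) / 2 for a, b in zip(pos_2, pos_4))

position_2 = (10.0, 40.0)   # ball position in video frame 2 (assumed values)
position_4 = (10.0, 20.0)   # ball position in video frame 4, lower under gravity
position_3 = interpolate_position(position_2, position_4)
```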
The video frame information 1 is rendered by the GPU206A to generate a video frame 1 and the video frame information 3 is rendered by the GPU206A to generate the video frame 3. Similarly, the audio frames 1 and/or 3 are generated from audio data of the audio frames 2 and/or 4. Furthermore, the video encoder404A encodes the video frames 1 and 3 to generate encoded video frames and sends the encoded video frames to the network communication device208A. Moreover, the audio encoder402A encodes the audio frames 1 and 3 to generate encoded audio frames and sends the encoded audio frames to the network communication device208A. The network communication device208A applies the external communication protocol to the encoded video and encoded audio frames to generate multiple packets that include the encoded video and encoded audio frames. The packets are sent via the computer network112from the node 1 to the client device104A1. Similarly, before generating the video frames 2 and 4, the CPU202B and the GPU206B wait for receipt of the video frames 1 and 3 from the node 1. In an embodiment, before generating the video frames 2 and 4, the CPU202B sends a request for the video frames 1 and 3 and for the audio frames 1 and 3 via the internal communication device210B, the communication link416, and the internal communication device210A to the CPU202A. Upon receipt of the request, the CPU202A sends the video frames 1 and 3 and the audio frames 1 and 3 to the internal communication device210A. The internal communication device210A applies the internal communication protocol to the video frames 1 and 3 and the audio frames 1 and 3 to generate packaged information and sends the packaged information via the communication link416to the internal communication device210B. The internal communication device210B applies the internal communication protocol to extract the video frames 1 and 3 and the audio frames 1 and 3 from the packaged information.
The video frames 1 and 3 and the audio frames 1 and 3 are stored in the memory device204B. Upon receipt of the video frames 1 and 3 and storage of the frames 1 and 3 in the memory device204B, the CPU202B extracts the video frame information 1 from the video frame 1, extracts the video frame information 3 from the video frame 3, and applies the game engine A to video frame information 1 and the video frame information 3 to generate the video frame information 2 and to generate the video frame information 4. For example, the video frame information 1 indicates that the virtual ball is at a position 1 and the video frame information 3 indicates that the virtual ball is at the position 3. The CPU202B determines from the positions 1 and 3 that the virtual ball is to be at a position 2 between the positions 1 and 3. The video frame information 2 includes the position 2 of the virtual ball. As another example, the video frame information 3 indicates that a virtual ball is at the position 3 and the physics engine code A indicates that the law of gravity is to be followed. The CPU202B determines from the position 3 and the law of gravity that the virtual ball is to be at the position 4, which is below the position 3. The video frame information 2 is rendered by the GPU206B to generate the video frame 2 and the video frame information 4 is rendered by the GPU206B to generate the video frame 4. Similarly, the audio frames 2 and/or 4 are generated from audio data of the audio frames 1 and/or 3. Furthermore, the video encoder404B encodes the video frames 2 and 4 to generate encoded video frames and sends the encoded video frames to the network communication device208B. Moreover, the audio encoder402B encodes the audio frames 2 and 4 to generate encoded audio frames and sends the encoded audio frames to the network communication device208B.
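The gravity example above, in which the position 4 is extrapolated from the position 3 under the law of gravity, can be sketched as a single physics step. The time step, initial velocity, and gravitational constant are illustrative assumptions; the physics engine code A is not specified at this level of detail.

```python
# Sketch of the physics-engine extrapolation described in the text: the next
# position of the virtual ball is derived from the current position by
# applying gravity. One explicit Euler step is used here as an assumption.

GRAVITY = 9.8  # m/s^2, acting downward

def next_position(y, velocity_y, dt):
    # Advance velocity under gravity, then advance height by the new velocity.
    velocity_y -= GRAVITY * dt
    y += velocity_y * dt
    return y, velocity_y

# Ball at the position 3 (height 30 m, assumed), momentarily at rest,
# one 0.5 s step later yields a position 4 below the position 3:
position_4_y, _ = next_position(y=30.0, velocity_y=0.0, dt=0.5)
```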
The network communication device208B applies the external communication protocol to the encoded video and encoded audio frames to generate multiple packets that include the encoded video and encoded audio frames. The packets are sent via the computer network112from the node 2B to the client device104A1. The communication device426of the client device104A1receives the packets from the node 1 via the computer network112and applies the external communication protocol to the packets to obtain the encoded video frames and the encoded audio frames sent from the node 1. Similarly, the communication device426receives the packets from the node 2B via the computer network112and applies the external communication protocol to the packets to obtain the encoded video frames and the encoded audio frames sent from the node 2B. The A/V sorter418differentiates among the encoded audio frames received from the node 1, the encoded audio frames received from the node 2B, the encoded video frames received from the node 1, and the encoded video frames received from the node 2B. The A/V sorter418sends the encoded audio frames received from the node 1 to the audio decoder 1, sends the encoded audio frames received from the node 2B to the audio decoder 2, sends the encoded video frames received from the node 1 to the video decoder 1, and sends the encoded video frames received from the node 2B to the video decoder 2. The audio decoder 1 decodes the encoded audio frames received from the node 1 and the video decoder 1 decodes the encoded video frames received from the node 1. Similarly, the video decoder 2 decodes the encoded video frames received from the node 2B and the audio decoder 2 decodes the encoded audio frames received from the node 2B. The frame organizer502receives the video frames from the video decoders 1 and 2, and organizes the video frames 1 through 4 in a sequential consecutive order. 
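The routing rule the A/V sorter418applies, sending each encoded frame to a decoder according to its source node and media type, can be sketched as a small dispatch table. The dictionary form is an illustrative assumption.

```python
# Minimal sketch of the A/V sorter's dispatch: encoded frames from node 1 go
# to video decoder 1 / audio decoder 1, and encoded frames from node 2B go to
# video decoder 2 / audio decoder 2, as the text describes. The table-based
# lookup is illustrative only.

ROUTES = {
    ("node 1", "video"): "video decoder 1",
    ("node 1", "audio"): "audio decoder 1",
    ("node 2B", "video"): "video decoder 2",
    ("node 2B", "audio"): "audio decoder 2",
}

def route(source_node, frame_kind):
    return ROUTES[(source_node, frame_kind)]

decoder = route("node 2B", "video")
```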
Moreover, the frame organizer502receives the audio frames from the audio decoders 1 and 2, and organizes the audio frames 1 through 4 in a sequential consecutive order. For example, the video frames 1 and 3 are stored in a buffer in the client device104A1until the frames 2 and 4 arrive at the client device104A1. The frames 2 and 4 may arrive late due to network latency variations or may be lost. In case the frames 2 and 4 are lost, a forward error correction is applied. When the forward error correction is applied, the client device104A1notifies the video encoders404A and404B and the audio encoders402A and402B of the loss. After receiving the notification, the video encoders404A and404B and the audio encoders402A and402B do not use any data from the lost frames to encode newer frames. Instead, the video encoders404A and404B and the audio encoders402A and402B use other frames, either generated before the lost frames or generated after the lost frames, to encode the newer frames. The audio frames 1 through 4 are sent from the frame organizer502to the audio output device430to output sound associated with the video frames 1 through 4 and the video frames 1 through 4 are sent from the frame organizer502to the display device424to display the video frames 1 through 4. For example, the video frame 1 is displayed simultaneous with an output of a sound from the audio frame 1, the video frame 2 is displayed simultaneous with an output of a sound from the audio frame 2, the video frame 3 is displayed simultaneous with an output of a sound from the audio frame 3, and the video frame 4 is displayed simultaneous with an output of a sound from the audio frame 4. In one embodiment, the video decoder 1 applies a different video file format or a streaming video format, e.g., H.264, than that applied by the video decoder 2, e.g., customized format, to decode encoded video frames.
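The buffering behavior of the frame organizer502, holding early frames until the missing ones arrive and then releasing them in consecutive order, can be modeled as a small reorder buffer. This is an illustrative model, not the embodiment's implementation.

```python
# Sketch of the frame organizer's reorder buffer: frames from the two
# decoders may arrive out of order, so each is held until every frame with a
# lower number has been released. The class structure is an assumption.

class FrameOrganizer:
    def __init__(self):
        self.buffer = {}
        self.next_number = 1

    def receive(self, number, frame):
        # Store the frame, then release every frame whose turn has come.
        self.buffer[number] = frame
        released = []
        while self.next_number in self.buffer:
            released.append(self.buffer.pop(self.next_number))
            self.next_number += 1
        return released

organizer = FrameOrganizer()
first = organizer.receive(2, "frame 2")   # frame 1 not yet here: held back
second = organizer.receive(1, "frame 1")  # releases frames 1 and 2 in order
```

Handling of lost frames (forward error correction, and notifying the encoders not to reference lost frames) would sit on top of this buffer and is omitted here.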
In an embodiment, the audio decoder 1 applies a different audio file format or a streaming audio format than that applied by the audio decoder 2 to decode encoded audio frames. In an embodiment, instead of using two different video decoders 1 and 2, a single video decoder is used to decode the encoded video frames received from the nodes 1 and 2B. Moreover, in one embodiment, instead of using two different audio decoders 1 and 2, a single audio decoder is used to decode the encoded audio frames received from the nodes 1 and 2B. In one embodiment, the video frames 1 and 3 are generated without using any information from the video frames 2 and 4 and the audio frames 1 and 3 are generated without using any information from the audio frames 2 and 4. In this case, there is no need to send the video frames 2 and 4 and the audio frames 2 and 4 from the node 2B to the node 1. Similarly, in an embodiment, the video frames 2 and 4 are generated without using any information from the video frames 1 and 3 and the audio frames 2 and 4 are generated without using any information from the audio frames 1 and 3. In this case, there is no need to send the video frames 1 and 3 and the audio frames 1 and 3 from the node 1 to the node 2B. In one embodiment, the exchange of the video frames 1, 2, 3, and 4 and/or the audio frames 1, 2, 3, and 4 between the nodes 1 and 2B is performed continuously, e.g., without exceeding a pre-determined delay, at a pre-determined frequency, etc., to achieve a frame rate of display of the video frames 1, 2, 3, and 4 on the display device424of the client device104A1. It appears to the client device104A1as if the nodes 1 and 2B are an aggregated node, e.g., one and the same node. In an embodiment, the video encoder404A decodes encoded video data that is received from the client device104A1via the computer network112and the network communication device208A.
For example, the video data is captured by a camera, such as a depth camera or a web camera or an infrared camera, that is coupled to the client device104A1or is a part of the client device104A1. Moreover, the audio encoder402A decodes encoded audio data that is received from the client device104A1via the computer network112and the network communication device208A. For example, the audio data is captured by a microphone, which is coupled to the client device104A1or is a part of the client device104A1. Similarly, the video encoder404B decodes encoded video data that is received from the client device104A1via the computer network112and the network communication device208B. Moreover, the audio encoder402B decodes encoded audio data that is received from the client device104A1via the computer network112and the network communication device208B. In an embodiment in which each node 1 and node 2B excludes an audio renderer, audio data for a virtual object that is to be represented by the video frames 1 and 3 is encoded by the audio encoder402A and streamed from the node 1 via the computer network112to the client device104A1. The client device104A1includes an audio renderer that renders the audio data to generate the audio wave. For example, the client device104A1renders the audio wave by placing the virtual object in space and calculating the audio wave. The calculation depends on a number of speakers of the client device104A1, such as stereo 5.1 or stereo 7.1. Similarly, audio data for another virtual object or a background that is to be represented by the video frames 2 and 4 is encoded by the audio encoder402B and streamed from the node 2B via the computer network112to the client device104A1. The client device104A1uses the same audio renderer used to render the audio data received from the node 1 or another audio renderer to render the audio data received from the node 2B to generate another audio wave for the other virtual object or background.
In one embodiment, in the multi-player game, the frame packets described inFIG.5Athat are sent from the network communication devices208A and208B are broadcasted to multiple client devices, e.g., the client device104A1and another client device, etc. For example, the frame packets are generated once by the nodes 1 and 2B and are sent from the network communication devices208A and208B to the multiple client devices via the computer network112for game play to save power. The frame packets sent by the nodes 1 and 2B do not need to be generated again for each different client device allowed to access the game engine A from the nodes 1 and 2B simultaneously. FIG.5Bis a diagram to illustrate that the frames 1 and 3 are generated by the node 1 and the frames 2 and 4 are generated by the node 2B ofFIG.5Aor the node 2A ofFIG.4B. As indicated in the frame 1, the virtual ball is at the position 1 and/or at an orientation 1 when released by a virtual user570in a virtual scene572A. Moreover, in the frame 2 of a virtual scene572B, the virtual ball is at the position 2 and/or at an orientation 2, in the frame 3 of a virtual scene572C, the virtual ball is at the position 3 and/or at an orientation 3, and in the frame 4 of a virtual scene572D, the virtual ball is at the position 4 and/or at an orientation 4. The virtual user570is an avatar of the user 1, who is a real user, e.g., a person. The position 2 of the virtual ball is below the position 1. Also, the position 3 of the virtual ball is below the position 2 and the position 4 of the virtual ball is below the position 3. In one embodiment, the virtual ball is at the same orientation in all the frames 1 through 4. In an embodiment, the virtual ball is at the same position in all the frames 1 through 4 but has different orientations in the frames.
In one embodiment, in the multi-player game, the frame packets described inFIG.5Bthat are sent from the network communication devices208A and208B are broadcasted to multiple client devices, e.g., the client device104A1and another client device, etc. The frame packets sent by the nodes 1 and 2B do not need to be generated again for each different client device allowed to access the game engine A from the nodes 1 and 2B simultaneously. FIG.6Ais a diagram of an embodiment of a system600to illustrate yet another embodiment of the distributed game engine102ofFIG.1in which a video frame portion of a video frame from a node is used to generate a video frame portion of another node. The system600includes the nodes 1 and 2B, the computer network112, and the client device104A1. The CPU202A executes the game engine A to generate a video frame portion information 1 of the video frame 1 and a video frame portion information 3 of the video frame 1. For example, the CPU202A executes the game code A and the CPU202A and/or the GPU206A execute the physics engine A to generate positions, sizes, shapes, and orientations of a portion of a virtual scene of the game 1. Examples of video frame portion information, as used herein, include a position, a size, a shape, an orientation, or a combination thereof of a portion of a virtual scene. An example of a portion of a virtual scene is a virtual object within a video frame or a background of the video frame. Another example of a portion of a virtual scene is a pre-determined number of adjacent pixels within a video frame. Yet another example of a portion of a virtual scene is a pre-determined number of adjacent pixels within a quadrant of a video frame. Still another example of a portion of a virtual scene is a pre-determined number of adjacent lines within a video frame. Another example of a portion of a virtual scene is one or more virtual objects within a video frame.
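One of the portion schemes named above, a pre-determined number of adjacent lines within a video frame, can be sketched as a line-wise split of a frame with the resulting portions assigned alternately to the two nodes. The frame size, portion size, and alternation rule are assumptions for illustration.

```python
# Illustrative sketch of splitting a video frame into portions of adjacent
# lines and assigning the portions alternately to node 1 and node 2B, in the
# spirit of the four-portion example in the text. All sizes are assumed.

def split_into_line_portions(lines, lines_per_portion):
    # Group the frame's lines into consecutive portions.
    return [lines[i:i + lines_per_portion]
            for i in range(0, len(lines), lines_per_portion)]

frame_lines = [f"line {i}" for i in range(8)]
portions = split_into_line_portions(frame_lines, lines_per_portion=2)
node_1_portions = portions[0::2]   # video frame portions 1 and 3
node_2B_portions = portions[1::2]  # video frame portions 2 and 4
```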
Yet another example of a portion of a virtual scene is a portion of a virtual object within a video frame or a portion of a background within the video frame. The GPU206A accesses the video frame portion information 1 from the memory device204A and applies the rendering code A to the video frame portion information 1 to generate a video frame portion 1 of the video frame 1. Similarly, the GPU206A accesses the video frame portion information 3 from the memory device204A and applies the rendering code A to the video frame portion information 3 to generate a video frame portion 3 of the video frame 1. For example, the GPU206A determines to apply a color, a shade, an intensity, and/or a texture to a first virtual object in the video frame 1 and to apply a color, a shade, an intensity, and/or a texture to a background within the video frame 1. Moreover, an audio frame portion 1 and an audio frame portion 3 are generated by one or more processors, e.g., the CPU202A, etc., of the node 1. The audio frame portion 1 is associated with the video frame portion 1 in that the audio frame portion 1 provides audio data for sound to be emitted by the video frame portion 1 within the video frame 1. For example, the audio frame portion 1 includes audio data for a sound to be emitted by a virtual object or a background or a portion within the video frame 1. The audio frame portion 3 is associated with the video frame portion 3 in that the audio frame portion 3 provides audio data for sound to be emitted by the video frame portion 3 within the video frame 1. For example, the audio frame portion 3 includes audio data for a sound to be emitted by a virtual object or a background or a portion within the video frame 1. Similarly, the CPU202B executes the game engine A to generate a video frame portion information 2 and a video frame portion information 4.
For example, the CPU202B executes the game code A, and the CPU202B and/or the GPU206B execute the physics engine A to generate positions, sizes, shapes, and orientations of a portion of the virtual scene within the video frame 1, of the game 1, for which the video frame portion 1 and a video frame portion 3 are generated by the node 1. The GPU206B accesses the video frame portion information 2 from the memory device204B and applies the rendering code A to the video frame portion information 2 to generate a video frame portion 2 of the video frame 1. Similarly, the GPU206B accesses the video frame portion information 4 from the memory device204B and applies the rendering code A to the video frame portion information 4 to generate a video frame portion 4 of the video frame 1. For example, the GPU206B determines to apply a color, an intensity, and/or a texture to a second virtual object in the video frame 1 and to apply a color, an intensity, and/or a texture to a third virtual object within the video frame 1. The second virtual object is different from the first virtual object and the third virtual object is different from the first and second virtual objects. To illustrate, the first virtual object is an avatar of the user 1, the second virtual object is the virtual ball, and the third virtual object is a dog who is a pet of the avatar. Moreover, audio frame portions 2 and 4 are generated by one or more processors, e.g., the CPU202B, of the node 2B. The audio frame portion 2 includes audio data that is associated with the video frame portion 2 in that the audio data is to be emitted as sound by the video frame portion 2. Moreover, the audio frame portion 4 includes audio data that is associated with the video frame portion 4 in that the audio data is to be emitted as sound by the video frame portion 4. 
Furthermore, the video encoder404A encodes the video frame portions 1 and 3 to generate encoded video frame portions and sends the encoded video frame portions to the network communication device208A. Moreover, the audio encoder402A encodes the audio frame portions 1 and 3 to generate encoded audio frame portions and sends the encoded audio frame portions 1 and 3 to the network communication device208A. The network communication device208A applies the external communication protocol to the encoded video and encoded audio frame portions to generate multiple packets that include the encoded video and encoded audio frame portions. The packets are sent via the computer network112from the node 1 to the client device104A1. Also, the video encoder404B encodes the video frame portions 2 and 4 to generate encoded video frame portions and sends the encoded video frame portions to the network communication device208B. Moreover, the audio encoder402B encodes the audio frame portions 2 and 4 to generate encoded audio frame portions and sends the encoded audio frame portions to the network communication device208B. The network communication device208B applies the external communication protocol to the encoded video and encoded audio frame portions to generate multiple packets that include the encoded video and encoded audio frame portions. The packets are sent via the computer network112from the node 2B to the client device104A1. The communication device426of the client device104A1receives the packets from the node 1 via the computer network112and applies the external communication protocol to the packets to obtain the encoded video frame portions and the encoded audio frame portions sent from the node 1. Similarly, the communication device426receives the packets from the node 2B via the computer network112and applies the external communication protocol to the packets to obtain the encoded video frame portions and the encoded audio frame portions sent from the node 2B.
The A/V sorter418differentiates among the encoded audio frame portions received from the node 1, the encoded audio frame portions received from the node 2B, the encoded video frame portions received from the node 1, and the encoded video frame portions received from the node 2B. The A/V sorter418sends the encoded audio frame portions received from the node 1 to the audio decoder 1, sends the encoded audio frame portions received from the node 2B to the audio decoder 2, sends the encoded video frame portions received from the node 1 to the video decoder 1, and sends the encoded video frame portions received from the node 2B to the video decoder 2. The audio decoder 1 decodes the encoded audio frame portions received from the node 1 and the video decoder 1 decodes the encoded video frame portions received from the node 1. Similarly, the video decoder 2 decodes the encoded video frame portions received from the node 2B and the audio decoder 2 decodes the encoded audio frame portions received from the node 2B. The video frame portions 1 through 4 are displayed on the display device424and the audio frame portions 1 through 4 are output as sound by the audio output device430. For example, the video frame portion 1 is displayed simultaneously with an output of a sound from the audio frame portion 1, the video frame portion 2 is displayed simultaneously with an output of a sound from the audio frame portion 2, the video frame portion 3 is displayed simultaneously with an output of a sound from the audio frame portion 3, and the video frame portion 4 is displayed simultaneously with an output of a sound from the audio frame portion 4. In one embodiment, before generating the video frame portions 1 and 3, the CPU202A and the GPU206A wait for receipt of the video frame portions 2 and 4 from the node 2B.
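The routing rule of the A/V sorter418, directing each encoded portion to the decoder paired with its originating node and media type, might look like this sketch; the packet shape and names are hypothetical:

```python
from collections import defaultdict

def sort_av_packets(packets):
    """Group encoded frame portions by (origin node, media type), so each
    (node, type) pair feeds its own decoder, e.g. ("1", "video") feeds the
    video decoder 1 and ("2B", "audio") feeds the audio decoder 2.

    Each packet is a dict like {"node": "1", "type": "video", "payload": ...}.
    """
    queues = defaultdict(list)
    for pkt in packets:
        queues[(pkt["node"], pkt["type"])].append(pkt["payload"])
    return dict(queues)
```

The sorter never inspects the encoded payload itself; the origin tags alone decide which decoder receives it.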
For example, before generating the video frame portions 1 and 3, the CPU202A sends a request for the video frame portions 2 and 4 and for the audio frame portions 2 and 4 via the internal communication device206A, the communication link416, and the internal communication device206B to the CPU202B. Upon receipt of the video frame portions 2 and 4 and storage of the video frame portions 2 and 4 in the memory device204A, the CPU202A extracts the video frame portion information 4 from the video frame portion 4, extracts the video frame portion information 2 from the video frame portion 2, and applies the video frame portion information 4 and/or the video frame portion information 2 to generate the video frame portion information 3 and/or to generate the video frame portion information 1. For example, the video frame portion information 2 indicates that a second portion of a virtual tree trunk is at a position 2 and the video frame portion information 4 indicates that a fourth portion of the virtual tree trunk is at a position 4. The CPU202A determines from the positions 2 and 4 of the virtual tree trunk that a third portion of the virtual tree trunk is to be at a position 3 between the positions 2 and 4 of the virtual tree trunk. The video frame portion information 3 includes the position 3 of the third portion of the virtual tree trunk. As another example, the video frame portion information 2 indicates that the second portion of the virtual tree trunk is at the position 2 and the physics engine code A indicates that the virtual tree trunk is to touch a virtual ground in the video frame 1. The CPU202A determines from the position 2 and the physics engine code A that the third portion of the virtual tree trunk is to be at the position 3, which is below the position 2 of the virtual tree trunk.
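The interpolation in the tree-trunk example, deriving the position of the third portion as lying between the positions of the second and fourth portions, reduces to a midpoint computation. A sketch, assuming positions are coordinate tuples:

```python
def infer_portion_position(pos_above, pos_below):
    """Linearly interpolate a missing portion's position from its two
    neighbors, as in deriving the position 3 of the virtual tree trunk
    from the positions 2 and 4 supplied by the other node."""
    return tuple((a + b) / 2 for a, b in zip(pos_above, pos_below))
```

Here `infer_portion_position((2.0, 10.0), (2.0, 20.0))` returns `(2.0, 15.0)`, a point midway between the neighboring portions.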
The video frame portion information 1 is rendered by the GPU206A to generate the video frame portion 1 and the video frame portion information 3 is rendered by the GPU206A to generate the video frame portion 3. Similarly, in this embodiment, before generating the video frame portions 2 and 4, the CPU202B and the GPU206B wait for receipt of the video frame portions 1 and 3 from the node 1. For example, before generating the video frame portions 2 and 4, the CPU202B sends a request for the video frame portions 1 and 3 and for the audio frame portions 1 and 3 via the internal communication device206B, the communication link416, and the internal communication device206A to the CPU202A. Upon receipt of the request, the CPU202A sends the video frame portions 1 and 3 and the audio frame portions 1 and 3 to the internal communication device210A. The internal communication device210A applies the internal communication protocol to the video frame portions 1 and 3 and the audio frame portions 1 and 3 to generate packaged information and sends the packaged information via the communication link416to the internal communication device210B. The internal communication device210B applies the internal communication protocol to extract the video frame portions 1 and 3 and the audio frame portions 1 and 3 from the packaged information. The video frame portions 1 and 3 and the audio frame portions 1 and 3 are stored in the memory device204B. Upon receipt of the video frame portions 1 and 3 and storage of the video frame portions 1 and 3 in the memory device204B, the CPU202B extracts the video frame portion information 1 from the video frame portion 1, extracts the video frame portion information 3 from the video frame portion 3, and applies the video frame portion information 1 and/or the video frame portion information 3 to generate the video frame portion information 2 and/or to generate the video frame portion information 4. 
For example, the video frame portion information 1 indicates that a first portion of the virtual tree trunk is at a position 1 and the video frame portion information 3 indicates that the third portion of the virtual tree trunk is at the position 3. The CPU202B determines from the positions 1 and 3 of the virtual tree trunk that the second portion of the virtual tree trunk is to be at the position 2 between the position 1 of the first portion of the virtual tree trunk and the position 3 of the third portion of the virtual tree trunk. The video frame portion information 2 includes the position 2 of the second portion of the virtual tree trunk. As another example, the video frame portion information 3 indicates that the third portion of the virtual tree trunk is at the position 3 and the physics engine code A indicates that the virtual tree trunk is to touch the virtual ground in the video frame 1. The CPU202B determines from the position 3 of the third portion of the virtual tree trunk and the physics engine code A that the fourth portion of the virtual tree trunk is to be at the position 4, which is below the position 3 of the third portion of the virtual tree trunk. The video frame portion information 2 is rendered by the GPU206B to generate the video frame portion 2 and the video frame portion information 4 is rendered by the GPU206B to generate the video frame portion 4. In one embodiment, the video decoder 1 applies a different video file format or a streaming video format, e.g., H.264, than that applied by the video decoder 2, e.g., customized format, to decode encoded video frame portions. In an embodiment, the audio decoder 1 applies a different audio file format or a streaming audio format than that applied by the audio decoder 2 to decode encoded audio frame portions. In an embodiment, instead of using two different video decoders 1 and 2, a single video decoder is used to decode the encoded video frame portions received from the nodes 1 and 2B. 
Moreover, in one embodiment, instead of using two different audio decoders 1 and 2, a single audio decoder is used to decode the encoded audio frame portions received from the nodes 1 and 2B. In one embodiment, the video frame portions 1 and 3 are generated without using any information from the video frame portions 2 and 4 and the audio frame portions 1 and 3 are generated without using any information from the audio frame portions 2 and 4. In this case, there is no need to send the video frame portions 2 and 4 and the audio frame portions 2 and 4 from the node 2B to the node 1. Similarly, in an embodiment, the video frame portions 2 and 4 are generated without using any information from the video frame portions 1 and 3 and the audio frame portions 2 and 4 are generated without using any information from the audio frame portions 1 and 3. In this case, there is no need to send the video frame portions 1 and 3 and the audio frame portions 1 and 3 from the node 1 to the node 2B. FIG.6Bis a diagram to illustrate an arrangement of the video frame portions 1 through 4 in the video frame 1 displaying a virtual scene. The video frame portion 1 of the video frame 1 includes a first set of one or more top lines of pixels in the video frame 1. Moreover, the video frame portion 2 of the video frame 1 includes a second set of one or more lines of pixels in the video frame 1 and the second set is adjacent to and below the first set. Also, the video frame portion 3 of the video frame 1 includes a third set of one or more lines of pixels in the video frame 1 and the third set is adjacent to and below the second set. The video frame portion 4 of the video frame 1 includes a fourth set of one or more lines of pixels in the video frame 1 and the fourth set is adjacent to and below the third set. FIG.6Cis a diagram to illustrate another arrangement of the video frame portions 1 through 4 in the video frame 1 displaying a virtual scene. 
The video frame portion 1 of the video frame 1 includes a set of one or more adjacent picture elements in a top left quadrant in the video frame 1. Moreover, the video frame portion 2 of the video frame 1 includes a set of one or more adjacent picture elements in a top right quadrant of the video frame 1. Also, the video frame portion 3 of the video frame 1 includes a set of one or more adjacent picture elements in a bottom left quadrant of the video frame 1. The video frame portion 4 of the video frame 1 includes a set of one or more adjacent picture elements in a bottom right quadrant of the video frame 1. It should be noted that the node 1 generates video frame portions that lie in the same quadrant(s), such as the top left quadrant or the bottom left quadrant, across multiple video frames, e.g., the video frames 1 and 2, etc. Similarly, the node 2 generates video frame portions that lie in the same quadrant(s), such as the top right quadrant or the bottom right quadrant, across the multiple video frames. In an embodiment, the node 1 generates video frame portions that lie in different quadrants, such as the top left quadrant and the bottom left quadrant, across multiple video frames, e.g., the video frames 1 and 2, etc. Similarly, the node 2 generates video frame portions that lie in different quadrants, such as the top right quadrant and the bottom right quadrant, across the multiple video frames. FIG.7is a diagram of an embodiment of a system700to illustrate broadcasting of user inputs from one node to another node to determine relevancy of the user inputs to the other node for generation of video frames. The system700includes a node 1A and the node 2C. The node 1A is the same as the node 1 ofFIG.6Aexcept that the node 1A includes a relevancy determinator702A. Moreover, the node 2C is the same as the node 2B ofFIG.6Aexcept that the node 2C includes a relevancy determinator702B.
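The two frame-portion layouts described above, the stacked bands ofFIG.6Band the quadrants ofFIG.6C, can be sketched compactly; both functions below are illustrative rather than taken from this description:

```python
def scanline_ranges(total_lines, num_portions):
    """FIG.6B arrangement: split a frame of `total_lines` scanlines into
    stacked bands, portion 1 on top. Returns (first, last) line indices per
    portion, 0-indexed and inclusive."""
    base, extra = divmod(total_lines, num_portions)
    ranges, start = [], 0
    for i in range(num_portions):
        count = base + (1 if i < extra else 0)  # spread any remainder over the top bands
        ranges.append((start, start + count - 1))
        start += count
    return ranges

def quadrant_of(portion_index):
    """FIG.6C arrangement: portion 1 -> top left, 2 -> top right,
    3 -> bottom left, 4 -> bottom right, as (row, column) with 0 = top/left."""
    i = portion_index - 1
    return (i // 2, i % 2)
```

An eight-line frame split four ways gives bands `[(0, 1), (2, 3), (4, 5), (6, 7)]`, and the four portion indices map to the four quadrants in reading order.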
Examples of a relevancy determinator include a hardware device, e.g., a processor, an integrated circuit, etc., or a software module, e.g., a computer program, etc., or a combination thereof, that determines a relevancy of user inputs for generation of video frames or video frame portions within a node. In one embodiment, a relevancy determinator of a node is integrated with a CPU and a GPU of the node. For example, a portion of the relevancy determinator of a node is integrated within a CPU of the node and the remaining portion of the relevancy determinator is integrated within a GPU of the node. The user 1 provides a plurality of user inputs 1 and 2 via the client device104A. A variety of ways in which the inputs 1 and 2 are captured are described above, e.g., using the camera or the hand-held controller. The inputs 1 and 2 are packetized and sent from the client device104A via the computer network112to the network communication device208A of the node 1A. The network communication device208A depacketizes the inputs 1 and 2 and provides the inputs 1 and 2 to the internal communication device210A. The internal communication device210A, under control of the CPU202A, applies the internal communication protocol to broadcast the video and audio data of the user inputs 1 and 2 via the communication link416to the internal communication device210B of the node 2C. The internal communication device210B applies the internal communication protocol to extract the user inputs 1 and 2 from packaged information received from the internal communication device210A, and provides the user inputs 1 and 2 to the relevancy determinator702B. The relevancy determinator702B determines which of the user inputs 1 and 2 are relevant to generate the video frames 2 and 4.
For example, the relevancy determinator702B determines that the user input 1 changes a position, orientation, size, shape, intensity, color, texture, shading, or a combination thereof, of a virtual scene and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in the video frame 2 or the video frame 4. In this example, the user input 1 is relevant to the video frame 2 or the video frame 4. To illustrate, the relevancy determinator702B determines that the user input 1 will facilitate achieving the position 2 of the virtual ball within the video frame 2 or the position 4 of the virtual ball within the frame 4. As another example, the relevancy determinator702B determines that the user inputs 1 and 2 both change a position, orientation, size, shape, intensity, color, texture, or a combination thereof, of a virtual scene and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in the video frame 2 or the video frame 4. In this example, the user inputs 1 and 2 are relevant to the video frame 2 or the video frame 4. To illustrate, the relevancy determinator702B determines that the user inputs 1 and 2 will facilitate achieving the position 2 of the virtual ball within the video frame 2 or the position 4 of the virtual ball within the frame 4. The relevancy determinator702B provides one or both of the user inputs 1 and 2 that are determined to be relevant to the video frames 2 and/or 4 to the CPU202B and the GPU206B. The relevancy determinator702B ignores one or both of the user inputs 1 and 2 that are determined not to be relevant to the video frames 2 and 4. 
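The relevancy test, keeping only those user inputs whose visual effects land in the frames a node generates and ignoring the rest, can be sketched as below; the input record shape is an assumption for illustration:

```python
def filter_relevant_inputs(user_inputs, assigned_frames):
    """Keep only the user inputs whose affected frames overlap the frames
    this node generates; the rest are ignored, as the relevancy determinator
    does. Each input lists the frames whose appearance (position, color,
    texture, etc.) it changes."""
    return [u for u in user_inputs
            if set(u["affects_frames"]) & set(assigned_frames)]
```

On the node 2C, which generates the video frames 2 and 4, an input affecting only the frames 1 and 3 would be dropped before it ever reaches the CPU202B or the GPU206B.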
For example, the relevancy determinator702B does not apply a user input that is not relevant to generate the video frames 2 and/or 4. The CPU202B applies the user inputs 1 and/or 2 that are determined to be relevant to generate the video frame information 2 and/or 4. Similarly, the GPU206B applies the user inputs 1 and/or 2 that are determined to be relevant to generate the video frames 2 and/or 4. In one embodiment, the video frame 1, the video frame 2, the video frame 3, the video frame 4, the audio frame 1, the audio frame 2, the audio frame 3, the audio frame 4, the video frame portion 1, the video frame portion 2, the video frame portion 3, the video frame portion 4, the audio frame portion 1, the audio frame portion 2, the audio frame portion 3, the audio frame portion 4, or a combination thereof, is referred to herein as state information. In one embodiment, the video encoder404A of the node 1, illustrated inFIG.6A, decodes video data of the user inputs 1 and 2 when the user inputs 1 and 2 are encoded by the video decoder 1 or the video decoder 2 of the client device104A1, illustrated inFIG.6A. The video data of user inputs 1 and 2 is decoded by the video encoder404A before broadcasting the user inputs 1 and 2 to the other nodes. In an embodiment, the audio encoder402A of the node 1, illustrated inFIG.6A, decodes audio data of the user inputs 1 and 2 when the user inputs 1 and 2 are encoded by the audio decoder 1 or the audio decoder 2 of the client device104A1. The audio data of user inputs 1 and 2 is decoded by the audio encoder402A before broadcasting the user inputs 1 and 2 to the other nodes. In an embodiment, the relevancy determinator702A determines whether one or both of the user inputs 1 and 2 are relevant to generating the video frames 1 and 3. 
For example, the relevancy determinator702A determines that the user input 1 changes a position, orientation, size, shape, intensity, color, texture, or a combination thereof, of a virtual scene and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in the video frame 1 or the video frame 3. In this example, the user input 1 is relevant. Upon determining that the user input 1 is relevant, the CPU202A applies the user input 1 to generate the video frame information 1 and/or 3 and/or the GPU206A applies the user input 1 to generate the video frames 1 and/or 3. Upon determining that the user inputs 1 and 2 are relevant to generating the video frames 1 and 3, the node 1A does not broadcast the user inputs 1 and 2 to the node 2C. On the other hand, upon determining that one or both of the user inputs 1 and 2 are not relevant to generating the video frames 1 and 3, the node 1A broadcasts the user inputs 1 and/or 2 to the node 2C. In this embodiment, the video encoder404A of the node 1, illustrated inFIG.6A, decodes the video data of the user input 1 when the user input 1 is encoded by the video decoder 1 or the video decoder 2 of the client device104A1, illustrated inFIG.6A. The video data of the user input 2 that is sent to the node 2C from the node 1A is decoded by the video encoder404B when the user input 2 is encoded by the video decoder 1 or the video decoder 2 of the client device104A1. Also, the audio encoder402A of the node 1, illustrated inFIG.6A, decodes the audio data of the user input 1 when the user input 1 is encoded by the audio decoder 1 or the audio decoder 2 of the client device104A1. 
The audio data of the user input 2 that is sent to the node 2C from the node 1A is decoded by the audio encoder402B when the user input 2 is encoded by the audio decoder 1 or the audio decoder 2 of the client device104A1. In an embodiment, instead of the client device104A, the client device104A illustrated inFIG.4Aor the client device104A1illustrated inFIG.6Ais used. FIG.8is a diagram of an embodiment of the system700to illustrate broadcasting of user inputs from one node to another node to determine relevancy of the user inputs to the other node for generation of video frame portions. The relevancy determinator702B determines which of the user inputs 1 and 2 are relevant to generate the video frame portions 2 and 4. For example, the relevancy determinator702B determines that the user input 1 changes a position, orientation, size, shape, intensity, color, texture, shading, or a combination thereof, of a portion of a virtual scene in the video frame 1 and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in the video frame portion 2 or the video frame portion 4. The user input 1 is relevant to the video frame portion 2 or the video frame portion 4. To illustrate, the relevancy determinator702B determines that the user input 1 will facilitate displaying the position 2 of the second portion of the virtual tree trunk within the video frame 1 or the position 4 of the fourth portion of the virtual tree trunk within the video frame 1. 
As another example, the relevancy determinator702B determines that the user inputs 1 and 2 both change a position, orientation, size, shape, intensity, color, texture, or a combination thereof, of a portion of a virtual scene and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in the video frame portion 2 or the video frame portion 4. The user inputs 1 and 2 are relevant to the video frame portion 2 or the video frame portion 4. To illustrate, the relevancy determinator702B determines that the user inputs 1 and 2 will facilitate achieving the position 2 of the second portion of the virtual tree trunk within the video frame portion 2 or the position 4 of the fourth portion of the virtual tree trunk within the video frame portion 4. The relevancy determinator702B provides one or both of the user inputs 1 and 2 that are determined to be relevant to the video frame portions 2 and/or 4 to the CPU202B and the GPU206B. The CPU202B applies the user inputs 1 and/or 2 that are determined to be relevant to generate the video frame portion information 2 and/or 4. Similarly, the GPU206B applies the user inputs 1 and/or 2 that are determined to be relevant to generate the video frame portions 2 and/or 4. In an embodiment, the relevancy determinator702A determines whether one or both of the user inputs 1 and 2 are relevant to generating the video frame portions 1 and 3. For example, the relevancy determinator702A determines that the user input 1 changes a position, orientation, size, shape, intensity, color, texture, or a combination thereof, of a virtual object and the changed position, the changed orientation, the changed size, the changed shape, the changed intensity, the changed color, the changed texture, or the changed shade, or a combination thereof, are to be displayed in a portion of the video frame 1.
The user input 1 is relevant. Upon determining that the user input 1 is relevant, the CPU202A applies the user input 1 to generate the video frame portion information 1 and/or 3 and/or the GPU206A applies the user input 1 to generate the video frame portions 1 and/or 3. Upon determining that the user inputs 1 and 2 are relevant to generating the video frame portions 1 and 3, the node 1A does not broadcast the user inputs 1 and 2 to the node 2C. On the other hand, upon determining that one or both of the user inputs 1 and 2 are not relevant to generating the video frame portions 1 and 3, the node 1A broadcasts the user inputs 1 and/or 2 to the node 2C. FIG.9is a diagram of an embodiment of a system900to illustrate a dynamic change in a number of nodes that are selected to execute the distributed game engine102ofFIG.1. The system900includes a plurality of nodes A, B, C, and D. Each node B and D is specialized. For example, the node B includes a greater amount of processing power, e.g., a greater number of CPUs and GPUs, than the nodes A and C. As another example, the node B includes a higher amount of memory, e.g., a larger number of memory devices, than that in the nodes A and C. As yet another example, the node B is the same as the node A or the node C except that the node B includes the greater amount of processing power and the higher amount of memory than the node A or the node C. As another example, the node D is the same as the node A or the node C except that the node D includes the greater amount of processing power and the higher amount of memory than the node A or the node C. Examples of the node A include the node 1 and the node 1A. Examples of the node B include the node 2, the node 2A, the node 2B and the node 2C. The processor350, illustrated inFIG.3C, of the node assembly server108determines whether one or more of the nodes A and C have malfunctioned or are not functional. 
For example, the processor350sends a message via the communication device354of the node assembly server108and the computer network112to the network communication device208A of the node 1, requesting a response to the message. Upon determining that the network communication device208A does not respond to the message within a predetermined time, the processor350determines that the node 1 has malfunctioned or is not functioning. Upon determining that the nodes A and/or C have malfunctioned, the processor350selects the specialized node B via the switch system106to perform the functions being performed by the nodes A and/or C. For example, when the node A is nonfunctional or is malfunctioning, the processor350sends the information regarding the internal communication protocol to the specialized node B to allow the specialized node B to communicate with the node C that is functional. In one embodiment, when the user 1 selects the game 2 that has a lower graphics level than the game 1 or in case of a device handover, the processor350reduces, in real-time, a number of nodes that execute the distributed game engine102for the game 1 to a lower number for executing a distributed game engine for the game 2. For example, the processor350selects the lower number of nodes via the switch system106for communicating internally with each other via the internal communication protocol and for executing the distributed game engine for the game 2. Similarly, when the user 1 selects the game 1 that has a higher graphics level than the game 2, the processor350increases, in real-time, a number of nodes that are used to execute the distributed game engine102for the game 2 to a higher number for executing a distributed game engine for the game 1. For example, the processor350selects the higher number of nodes via the switch system106for communicating internally with each other via the internal communication protocol and for executing the distributed game engine for the game 1.
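The health check and failover just described, mark a node as malfunctioning when it fails to answer within a predetermined time and then select a specialized spare in its place, might be sketched as follows; the class and method names are hypothetical:

```python
class NodeMonitor:
    """Sketch of the node assembly server's health check: a node that has
    not answered within `timeout` time units is treated as malfunctioning,
    and a specialized spare (e.g. the node B) is selected in its place."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_reply = {}  # node id -> time of last response

    def record_reply(self, node, now):
        self.last_reply[node] = now

    def malfunctioning(self, node, now):
        # A node that never replied, or replied too long ago, has failed.
        last = self.last_reply.get(node)
        return last is None or (now - last) > self.timeout

    def replacement_for(self, failed_node, spares):
        """Pick a specialized spare to take over the failed node's work."""
        return spares[0] if spares else None
```

A node that answered at time 100 is still healthy at time 103 under a timeout of 5, but is treated as malfunctioning at time 106, at which point a spare such as the node B is selected.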
As another example, a user is playing the game on his/her PlayStation™ game console. When the user steps out of his/her home, the user wishes to transfer the game to his/her smartphone or tablet. The smartphone or tablet displays the game in a low resolution and a low frame rate, but the PlayStation™ game console, in conjunction with the 4K or the 8K television, applies a higher resolution and a higher frame rate. To satisfy the lower resolution and the lower frame rate, there is a decrease in the number of nodes. In an embodiment, when a number of client devices facilitating a play of the game 1 via different user accounts decreases, the processor350reduces, in real-time, a number of nodes that execute the distributed game engine102for the game 1 to a lower number for executing a distributed game engine102for the game 1. On the other hand, when a number of client devices facilitating a play of the game 1 via different user accounts increases, the processor350increases, in real-time, a number of nodes that are used to execute the distributed game engine102for the game 1 to a higher number for executing a distributed game engine102for the game 1. FIG.10is a flow diagram conceptually illustrating various operations which are performed for streaming a cloud video game to a client device, in accordance with implementations of the disclosure. A game server1002executes a video game and generates raw (uncompressed) video1004and audio1006. The video1004and audio1006are captured and encoded for streaming purposes, as indicated at reference1008in the illustrated diagram. The encoding provides for compression of the video and audio streams to reduce bandwidth usage and optimize the gaming experience. Examples of encoding formats include H.265/MPEG-H, H.264/MPEG-4, H.263/MPEG-4, H.262/MPEG-2, WMV, VP6/7/8/9, etc.
Encoded audio1010and encoded video1012are further packetized into network packets, as indicated at reference numeral1014, for purposes of transmission over the computer network such as the Internet. In some embodiments, the network packet encoding process also employs a data encryption process, thereby providing enhanced data security. In the illustrated implementation, audio packets1016and video packets1018are generated for transport over a computer network1020. The game server1002additionally generates haptic feedback data1022, which is also packetized into network packets for network transmission. In the illustrated implementation, haptic feedback packets1024are generated for transport over the computer network1020. The foregoing operations of generating the raw video and audio and the haptic feedback data are performed on the game server1002of a data center, and the operations of encoding the video and audio, and packetizing the encoded audio/video and haptic feedback data for transport are performed by the streaming engine of the data center. As indicated at reference1020, the audio, video, and haptic feedback packets are transported over the computer network. As indicated at reference1026, the audio packets1016, video packets1018, and haptic feedback packets1024, are disintegrated, e.g., parsed, etc., by a client device to extract encoded audio1028, encoded video1030, and haptic feedback data1032at the client device from the network packets. If data has been encrypted, then the data is also decrypted. The encoded audio1028and encoded video1030are then decoded by the client device, as indicated at reference1034, to generate client-side raw audio and video data for rendering on a display device1040of the client device. The haptic feedback data1032is processed by the processor of the client device to produce a haptic feedback effect at a controller device1042or other interface device, e.g., the HMD, etc., through which haptic effects can be rendered. 
One example of a haptic effect is a vibration or rumble of the controller device1042. It will be appreciated that a video game is responsive to user inputs, and thus, a similar procedural flow to that described above for transmission and processing of user input, but in the reverse direction from client device to server, is performed. As shown, a controller device1042or another input device, e.g., the body part of the user 1, etc., or a combination thereof generates input data1044. This input data1044is packetized at the client device for transport over the computer network to the data center. Input data packets1046are unpacked and reassembled by the game server1002to define input data1048on the data center side. The input data1048is fed to the game server1002, which processes the input data1048to update save data for a game state of the game. During transport via the computer network1020of the audio packets1016, the video packets1018, and haptic feedback packets1024, in some embodiments, the transmission of data over the computer network1020is monitored to ensure a quality of service. For example, network conditions of the computer network1020are monitored as indicated by reference1050, including both upstream and downstream network bandwidth, and the game streaming is adjusted in response to changes in available bandwidth. That is, the encoding and decoding of network packets is controlled based on present network conditions, as indicated by reference1052. FIG.11is a block diagram of an embodiment of a game console1100that is compatible for interfacing with the display device of the client device and is capable of communicating via the computer network1020with the game hosting system. The game console1100is located within a data center A or is located at a location at which the user 1 is located. In some embodiments, the game console1100is used to execute a game that is displayed on the HMD. 
The game console1100is provided with various peripheral devices connectable to the game console1100. The game console1100has a cell processor1128, a dynamic random access memory (XDRAM) unit1126, a Reality Synthesizer graphics processor unit1130with a dedicated video random access memory (VRAM) unit1132, and an input/output (I/O) bridge1134. The game console1100also has a Blu Ray® Disk read-only memory (BD-ROM) optical disk reader1140for reading from a disk1140aand a removable slot-in hard disk drive (HDD)1136, accessible through the I/O bridge1134. Optionally, the game console1100also includes a memory card reader1138for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge1134. The I/O bridge1134also connects to Universal Serial Bus (USB) 2.0 ports1124, a gigabit Ethernet port1122, an IEEE 802.11b/g wireless network (Wi-Fi) port1120, and a Bluetooth® wireless link port1118capable of supporting Bluetooth connections. In operation, the I/O bridge1134handles all wireless, USB and Ethernet data, including data from game controllers842and/or1103and from the HMD1105. For example, when the user 1 is playing a game generated by execution of a portion of a game code, the I/O bridge1134receives input data from the game controllers842and/or1103and/or from the HMD1105via a Bluetooth link and directs the input data to the cell processor1128, which updates a current state of the game accordingly. As an example, a camera within the HMD1105captures a gesture of the user 1 to generate an image representing the gesture. The image is an example of the input data. Each game controller842and1103is an example of a hand-held controller (HHC).
The wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controllers842and1103and the HMD1105, such as, for example, a remote control1104, a keyboard1106, a mouse1108, a portable entertainment device1110, such as, e.g., a Sony Playstation Portable® entertainment device, etc., a video camera, such as, e.g., an EyeToy® video camera1112, etc., a microphone headset1114, and a microphone1115. In some embodiments, such peripheral devices are connected to the game console1100wirelessly, for example, the portable entertainment device1110communicates via a Wi-Fi ad-hoc connection, whilst the microphone headset1114communicates via a Bluetooth link. The provision of these interfaces means that the game console1100is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over Internet protocol (IP) telephones, mobile telephones, printers and scanners. In addition, a legacy memory card reader1116is connected to the game console1100via the USB port1124, enabling the reading of memory cards1148of a kind used by the game console1100. The game controllers842and1103and the HMD1105are operable to communicate wirelessly with the game console1100via the Bluetooth link1118, or to be connected to the USB port1124, thereby also receiving power by which to charge batteries of the game controller842and1103and the HMD1105. 
In some embodiments, each of the game controllers842and1103and the HMD1105includes a memory, a processor, a memory card reader, permanent memory, such as, e.g., flash memory, etc., light emitters such as, e.g., an illuminated spherical section, light emitting diodes (LEDs), or infrared lights, etc., microphone and speaker for ultrasound communications, an acoustic chamber, a digital camera, an internal clock, a recognizable shape, such as, e.g., a spherical section facing the game console1100, and wireless devices using protocols, such as, e.g., Bluetooth, Wi-Fi, etc. The game controller842is a controller designed to be used with two hands of the user 1, and game controller1103is a single-hand controller with an attachment. The HMD1105is designed to fit on top of a head and/or in front of eyes of the user 1. In addition to one or more analog joysticks and conventional control buttons, each game controller842and1103is susceptible to three-dimensional location determination. Similarly, the HMD1105is susceptible to three-dimensional location determination. Consequently, in some embodiments, gestures and movements by the user 1 of the game controller842and1103and of the HMD1105are translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices, such as, e.g., the Playstation™ Portable device, etc., are used as a controller. In the case of the Playstation™ Portable device, additional game or control information, e.g., control instructions or number of lives, etc., is provided on a display screen of the device. In some embodiments, other alternative or supplementary control devices are used, such as, e.g., a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown), bespoke controllers, etc. Examples of bespoke controllers include a single or several large buttons for a rapid-response quiz game (also not shown).
The remote control1104is also operable to communicate wirelessly with the game console1100via the Bluetooth link1118. The remote control1104includes controls suitable for the operation of the Blu Ray™ Disk BD-ROM reader1140and for navigation of disk content. The Blu Ray™ Disk BD-ROM reader1140is operable to read CD-ROMs compatible with the game console1100, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The Blu Ray™ Disk BD-ROM reader1140is also operable to read digital video disk-ROMs (DVD-ROMs) compatible with the game console1100, in addition to conventional pre-recorded and recordable DVDs. The Blu Ray™ Disk BD-ROM reader1140is further operable to read BD-ROMs compatible with the game console1100, as well as conventional pre-recorded and recordable Blu-Ray Disks. The game console1100is operable to supply audio and video, either generated or decoded via the Reality Synthesizer graphics unit1130, through audio connectors1150and video connectors1152to a display and sound output device1142, such as, e.g., a monitor or television set, etc., having a display screen1144and one or more loudspeakers1146, or to supply the audio and video via the Bluetooth® wireless link port1118to the display device of the HMD1105. The audio connectors1150, in various embodiments, include conventional analogue and digital outputs whilst the video connectors1152variously include component video, S-video, composite video, and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as phase alternating line (PAL) or National Television System Committee (NTSC), or in 720p, 1080i or 1080p high definition. Audio processing, e.g., generation, decoding, etc., is performed by the cell processor1128. An operating system of the game console1100supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-Ray® disks.
In some embodiments, a video camera, e.g., the video camera1112, etc., comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data is transmitted in an appropriate format such as an intra-image based motion picture expert group (MPEG) standard for decoding by the game console1100. An LED indicator of the video camera1112is arranged to illuminate in response to appropriate control data from the game console1100, for example, to signify adverse lighting conditions, etc. Some embodiments of the video camera1112variously connect to the game console1100via a USB, Bluetooth or Wi-Fi communication port. Various embodiments of a video camera include one or more associated microphones and also are capable of transmitting audio data. In several embodiments of a video camera, the CCD has a resolution suitable for high-definition video capture. In use, images captured by the video camera are incorporated within a game or interpreted as game control inputs. In another embodiment, a video camera is an infrared camera suitable for detecting infrared light. In various embodiments, for successful data communication to occur with a peripheral device, such as, for example, a video camera or remote control via one of the communication ports of the game console1100, an appropriate piece of software, such as, a device driver, etc., is provided. In some embodiments, the aforementioned system devices, including the game console1100, the HHC, and the HMD1105enable the HMD1105to display and capture video of an interactive session of a game. The system devices initiate an interactive session of a game, the interactive session defining interactivity between the user 1 and the game. The system devices further determine an initial position and orientation of the HHC and/or the HMD1105operated by the user 1. 
The game console1100determines a current state of a game based on the interactivity between the user 1 and the game. The system devices track a position and orientation of the HHC and/or the HMD1105during an interactive session of the user 1 with a game. The system devices generate a spectator video stream of the interactive session based on a current state of a game and the tracked position and orientation of the HHC and/or the HMD1105. In some embodiments, the HHC renders the spectator video stream on a display screen of the HHC. In various embodiments, the HMD1105renders the spectator video stream on a display screen of the HMD1105. With reference toFIG.12, a diagram illustrating components of an HMD1202is shown. The HMD1202is an example of the HMD1105(FIG.11). The HMD1202includes a processor1200for executing program instructions. A memory device1202is provided for storage purposes. Examples of the memory device1202include a volatile memory, a non-volatile memory, or a combination thereof. A display device1204is included which provides a visual interface, e.g., display of image frames generated from save data, etc., that the user 1 (FIG.1) views. A battery1206is provided as a power source for the HMD1202. A motion detection module1208includes any of various kinds of motion sensitive hardware, such as a magnetometer1210, an accelerometer1212, and a gyroscope1214. An accelerometer is a device for measuring acceleration and gravity induced reaction forces. Single and multiple axis models are available to detect magnitude and direction of the acceleration in different directions. The accelerometer is used to sense inclination, vibration, and shock. In one embodiment, three accelerometers1212are used to provide the direction of gravity, which gives an absolute reference for two angles, e.g., world-space pitch and world-space roll, etc. A magnetometer measures a strength and a direction of a magnetic field in a vicinity of the HMD1202. 
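The use of accelerometers as an absolute reference for world-space pitch and roll, as described above, can be illustrated with the standard tilt-from-gravity formulas. This is a generic sketch, not code from the described system, and the axis conventions are assumptions:

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Estimate world-space pitch and roll (radians) from accelerometer
    readings, assuming the device is at rest so the sensor measures only
    the gravity vector. Axis conventions here are illustrative."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

With the device at rest and level (gravity entirely along the z axis), both angles come out as zero; tilting the device redistributes gravity across the x and y axes, and the angles follow.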
In some embodiments, three magnetometers1210are used within the HMD1202, ensuring an absolute reference for the world-space yaw angle. In various embodiments, the magnetometer is designed to span the earth's magnetic field, which is ±80 microtesla. Magnetometers are affected by metal, and provide a yaw measurement that is monotonic with actual yaw. In some embodiments, a magnetic field is warped due to metal in the real-world environment, which causes a warp in the yaw measurement. In various embodiments, this warp is calibrated using information from other sensors, e.g., the gyroscope1214, a camera1216, etc. In one embodiment, the accelerometer1212is used together with magnetometer1210to obtain the inclination and azimuth of the HMD1202. A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. In one embodiment, instead of the gyroscope1214, three gyroscopes provide information about movement across the respective axis (x, y and z) based on inertial sensing. The gyroscopes help in detecting fast rotations. However, the gyroscopes, in some embodiments, drift over time without the existence of an absolute reference. This triggers resetting the gyroscopes periodically, which can be done using other available information, such as positional/orientation determination based on visual tracking of an object, accelerometer, magnetometer, etc. The camera1216is provided for capturing images and image streams of a real-world environment, e.g., room, cabin, natural environment, etc., surrounding the user 1. In various embodiments, more than one camera is included in the HMD1202, including a camera that is rear-facing, e.g., directed away from the user 1 when the user 1 is viewing the display of the HMD1202, etc., and a camera that is front-facing, e.g., directed towards the user 1 when the user 1 is viewing the display of the HMD1202, etc.
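The drift correction described above, in which an absolute reference such as the magnetometer's yaw reading reins in gyroscope integration, is commonly done with a complementary filter. The following is a generic sketch of that idea, not the HMD1202's actual algorithm, and the blend factor is an assumption:

```python
def fused_yaw(prev_yaw, gyro_rate, mag_yaw, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate for fast response,
    then blend slowly toward the magnetometer's absolute yaw so that
    accumulated gyro drift is canceled over time."""
    gyro_yaw = prev_yaw + gyro_rate * dt
    return alpha * gyro_yaw + (1.0 - alpha) * mag_yaw
```

Repeatedly applying the filter while the gyroscope reports no rotation pulls the estimate toward the magnetometer's absolute yaw, which is exactly the periodic resetting behavior described above.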
Additionally, in several embodiments, a depth camera1218is included in the HMD1202for sensing depth information of objects in the real-world environment. The HMD1202includes speakers1220for providing audio output. Also, a microphone1222is included, in some embodiments, for capturing audio from the real-world environment, including sounds from an ambient environment, and speech made by the user 1, etc. The HMD1202includes a tactile feedback module1224, e.g., a vibration device, etc., for providing tactile feedback to the user 1. In one embodiment, the tactile feedback module1224is capable of causing movement and/or vibration of the HMD1202to provide tactile feedback to the user 1. LEDs1226are provided as visual indicators of statuses of the HMD1202. For example, an LED may indicate battery level, power on, etc. A card reader1228is provided to enable the HMD1202to read and write information to and from a memory card. A USB interface1230is included as one example of an interface for enabling connection of peripheral devices, or connection to other devices, such as other portable devices, computers, etc. In various embodiments of the HMD1202, any of various kinds of interfaces may be included to enable greater connectivity of the HMD1202. A Wi-Fi module1232is included for enabling connection to the Internet via wireless networking technologies. Also, the HMD1202includes a Bluetooth module1234for enabling wireless connection to other devices. A communications link1236is also included, in some embodiments, for connection to other devices. In one embodiment, the communications link1236utilizes infrared transmission for wireless communication. In other embodiments, the communications link1236utilizes any of various wireless or wired transmission protocols for communication with other devices. Input buttons/sensors1238are included to provide an input interface for the user 1 (FIG.1). 
Any of various kinds of input interfaces are included, such as buttons, touchpad, joystick, trackball, etc. An ultra-sonic communication module1240is included, in various embodiments, in the HMD1202for facilitating communication with other devices via ultra-sonic technologies. Bio-sensors1242are included to enable detection of physiological data from a user. In one embodiment, the bio-sensors1242include one or more dry electrodes for detecting bio-electric signals of the user through the user's skin. The foregoing components of HMD1202have been described as merely exemplary components that may be included in HMD1202. In various embodiments, the HMD1202includes or does not include some of the various aforementioned components. FIG.13illustrates an embodiment of an Information Service Provider (INSP) architecture. The INSP1302delivers a multitude of information services to users A, B, C, and D geographically dispersed and connected via a computer network1306, e.g., a LAN, a WAN, or a combination thereof, etc. An example of the WAN includes the Internet and an example of the LAN includes an Intranet. The user A operates a client device1320-1, the user B operates another client device1320-2, the user C operates yet another client device1320-3, and the user D operates another client device1320-4. In some embodiments, each client device1320-1,1320-2,1320-3, and1320-4includes a central processing unit (CPU), a display, and an input/output (I/O) interface. Examples of each client device1320-1,1320-2,1320-3, and1320-4include a personal computer (PC), a mobile phone, a netbook, a tablet, a gaming system, a personal digital assistant (PDA), the game console1100and a display device, the HMD1202(FIG.11), the game console1100and the HMD1202, a desktop computer, a laptop computer, a smart television, etc. In some embodiments, the INSP1302recognizes a type of a client device and adjusts a communication method employed.
In some embodiments, an INSP delivers one type of service, such as stock price updates, or a variety of services such as broadcast media, news, sports, gaming, etc. Additionally, the services offered by each INSP are dynamic, that is, services can be added or taken away at any point in time. Thus, an INSP providing a particular type of service to a particular individual can change over time. For example, the client device1320-1is served by an INSP in near proximity to the client device1320-1while the client device1320-1is in a home town of the user 1, and client device1320-1is served by a different INSP when the user 1 travels to a different city. The home-town INSP will transfer requested information and data to the new INSP, such that the information “follows” the client device1320-1to the new city making the data closer to the client device1320-1and easier to access. In various embodiments, a master-server relationship is established between a master INSP, which manages the information for the client device1320-1, and a server INSP that interfaces directly with the client device1320-1under control from the master INSP. In some embodiments, data is transferred from one INSP to another INSP as the client device1320-1moves around the world to make the INSP in better position to service client device1320-1be the one that delivers these services. The INSP1302includes an Application Service Provider (ASP)1308, which provides computer-based services to customers over the computer network1306. Software offered using an ASP model is also sometimes called on-demand software or software as a service (SaaS). A simple form of providing access to a computer-based service, e.g., customer relationship management, etc., is by using a standard protocol, e.g., a hypertext transfer protocol (HTTP), etc.
The application software resides on a vendor's server and is accessed by each client device1320-1,1320-2,1320-3, and1320-4through a web browser using a hypertext markup language (HTML), etc., by a special purpose client software provided by the vendor, and/or other remote interface, e.g., a thin client, etc. Services delivered over a wide geographical area often use cloud computing. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the computer network1306. The users A, B, C, and D do not need to be experts in the technology infrastructure in the “cloud” that supports them. Cloud computing is divided, in some embodiments, in different services, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers. The term cloud is used as a metaphor for the computer network1306, e.g., using servers, storage and logic, etc., based on how the computer network1306is depicted in computer network diagrams and is an abstraction for the complex infrastructure it conceals. Further, the INSP1302includes a game processing provider (GPP)1310, also sometimes referred to herein as a game processing server, which is used by the client devices1320-1,1320-2,1320-3, and1320-4to play single and multiplayer video games. Most video games played over the computer network1306operate via a connection to a game server. Typically, games use a dedicated server application that collects data from the client devices1320-1,1320-2,1320-3, and1320-4and distributes it to other clients that are operated by other users. This is more efficient and effective than a peer-to-peer arrangement, but a separate server is used to host the server application.
In some embodiments, the GPP1310establishes communication between the client devices1320-1,1320-2,1320-3, and1320-4, which exchange information without further relying on the centralized GPP1310. Dedicated GPPs are servers which run independently of a client. Such servers are usually run on dedicated hardware located in data centers, providing more bandwidth and dedicated processing power. Dedicated servers are a method of hosting game servers for most PC-based multiplayer games. Massively multiplayer online games run on dedicated servers usually hosted by the software company that owns the game title, allowing them to control and update content. A broadcast processing server (BPS)1312, sometimes referred to herein as a broadcast processing provider, distributes audio or video signals to an audience. Broadcasting to a very narrow range of audience is sometimes called narrowcasting. A final leg of broadcast distribution is how a signal gets to the client devices1320-1,1320-2,1320-3, and1320-4, and the signal, in some embodiments, is distributed over the air as with a radio station or a television station to an antenna and receiver, or through a cable television or cable radio or “wireless cable” via the station. The computer network1306also brings, in various embodiments, either radio or television signals to the client devices1320-1,1320-2,1320-3, and1320-4, especially with multicasting allowing the signals and bandwidth to be shared. Historically, broadcasts are delimited, in several embodiments, by a geographic region, e.g., national broadcasts, regional broadcasts, etc. However, with the proliferation of high-speed Internet, broadcasts are not defined by geographies as content can reach almost any country in the world. A storage service provider (SSP)1314provides computer storage space and related management services. The SSP1314also offers periodic backup and archiving. 
By offering storage as a service, the client devices1320-1,1320-2,1320-3, and1320-4use more storage compared to when storage is not used as a service. Another major advantage is that the SSP1314includes backup services and the client devices1320-1,1320-2,1320-3, and1320-4will not lose data if their hard drives fail. Further, a plurality of SSPs, in some embodiments, have total or partial copies of the data received from the client devices1320-1,1320-2,1320-3, and1320-4, allowing the client devices1320-1,1320-2,1320-3, and1320-4to access data in an efficient way independently of where the client devices1320-1,1320-2,1320-3, and1320-4are located or of types of the clients. For example, the user 1 accesses personal files via a home computer, as well as via a mobile phone while the user 1 is on the move. A communications provider1316provides connectivity to the client devices1320-1,1320-2,1320-3, and1320-4. One kind of the communications provider1316is an Internet service provider (ISP) which offers access to the computer network1306. The ISP connects the client devices1320-1,1320-2,1320-3, and1320-4using a data transmission technology appropriate for delivering Internet Protocol datagrams, such as dial-up, digital subscriber line (DSL), cable modem, fiber, wireless or dedicated high-speed interconnects. The communications provider1316also provides, in some embodiments, messaging services, such as e-mail, instant messaging, and short message service (SMS) texting. Another type of communications provider is a network service provider (NSP), which sells bandwidth or network access by providing direct backbone access to the computer network1306. Examples of network service providers include telecommunications companies, data carriers, wireless communications providers, Internet service providers, cable television operators offering high-speed Internet access, etc.
A data exchange1318interconnects the several modules inside INSP1302and connects these modules to the client devices1320-1,1320-2,1320-3, and1320-4via computer network1306. The data exchange1318covers, in various embodiments, a small area where all the modules of INSP1302are in close proximity, or covers a large geographic area when the different modules are geographically dispersed. For example, the data exchange1318includes a fast Gigabit Ethernet within a cabinet of a data center, or an intercontinental virtual LAN. It should be noted that in various embodiments, one or more features of some embodiments described herein are combined with one or more features of one or more of remaining embodiments described herein. Embodiments described in the present disclosure may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. In one implementation, the embodiments described in the present disclosure are practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network. With the above embodiments in mind, it should be understood that, in one implementation, the embodiments described in the present disclosure employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Any of the operations described herein that form part of the embodiments described in the present disclosure are useful machine operations. Some embodiments described in the present disclosure also relate to a device or an apparatus for performing these operations.
The apparatus is specially constructed for the required purpose, or the apparatus is a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, in one embodiment, various general-purpose machines are used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. In an implementation, some embodiments described in the present disclosure are embodied as computer-readable code on a computer-readable medium. The computer-readable medium is any data storage device that stores data, which is thereafter read by a computer system. Examples of the computer-readable medium include a hard drive, a network-attached storage (NAS), a ROM, a RAM, a compact disc ROM (CD-ROM), a CD-recordable (CD-R), a CD-rewritable (CD-RW), a magnetic tape, an optical data storage device, a non-optical data storage device, etc. As an example, a computer-readable medium includes computer-readable tangible medium distributed over a network-coupled computer system so that the computer-readable code is stored and executed in a distributed fashion. Moreover, although some of the above-described embodiments are described with respect to a gaming environment, in some embodiments, instead of a game, other environments, e.g., a video conferencing environment, etc., is used. Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times, or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way.
Although the foregoing embodiments described in the present disclosure have been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the embodiments are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
DETAILED DESCRIPTION The present invention is directed to a system including an augmented reality (AR) platform providing synchronized sharing of AR content in real time and across multiple AR-capable devices within a controlled, physical environment or space. In particular, the system of the present invention includes a mesh network of technologies integrated with one another and used to ultimately establish alignment of digital content, including rendering thereof, against a physical environment or space. Such a system allows for multiple users to experience the same AR content rendering in real time and within a live, physical environment or space, wherein such rendering of AR content is adapted to each user's point of view. The AR platform, for example, is accessible to users via associated AR-capable computing devices, including certain personal computing devices (i.e., smartphones and tablets) as well as AR-specific computing devices, including wearable headsets and eyewear, for example. The system includes the use of a controlled, real-world environment or space. The given space is controlled, meaning the space itself and real-world objects and articles, and other components within said space, are controlled, such as control over the appearance of walls, flooring, ceiling, placement of objects, lighting, temperature, and sounds, and the like. In other words, many, if not all, aspects of the given space may be controlled to provide a specific environment in which to provide an AR experience in that given space to users (i.e., guests, patrons, participants, or the like). By controlling the space, the system of the present invention is able to provide a persistently shared experience dedicated to the specific space. For any given controlled space, a shared point is initially established (also referred to herein as “world origin point” or “world origin”).
Establishing a world origin point within the controlled space allows for the AR platform to place digital content relative to the world origin point for subsequent rendering across multiple AR-capable devices. The controlled space is digitally mapped, such that digital data associated with the controlled space, including the world-origin point coordinate data, is stored for subsequent retrieval and use during rendering of AR content. In addition, the system further relies on image tracking for alignment purposes. For example, the physical space can be decorated using image marker technology. Use of image marker technology allows for canonically established images to represent coordinates associated with the world origin point. For example, at the start of a given AR session or experience, devices with image tracking technology can utilize one or more image trackers (i.e. physical markers) within a given space to localize into the space and align the AR session to the world origin point. The localized coordinates of each image marker along with a unique image marker identifier is stored for each image for subsequent retrieval and use by each device, thereby allowing devices to understand the space without requiring any individual device setup. The AR platform further coordinates the world origin point of a given controlled space with anchor-based localization to thereby align the multiple devices. In particular, each device may be running an anchor-based software algorithm unique to that device's given platform. 
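The alignment step described above can be sketched as follows. In this translation-only simplification (a real implementation would also recover the marker's orientation), the stored map pairs each unique image-marker identifier with its coordinates relative to the world origin; observing a marker in the device's local session frame then places the world origin in that frame. All names and coordinates here are illustrative assumptions:

```python
# Stored mapping data (illustrative): marker id -> offset from world origin.
MARKER_MAP = {
    "entry_wall": (2.0, 0.0, 1.5),
}

def world_origin_in_session(marker_id, observed_session_pos):
    """Session-frame coordinates of the world origin, derived from one
    observed image marker whose world-origin-relative offset is stored."""
    ox, oy, oz = MARKER_MAP[marker_id]
    sx, sy, sz = observed_session_pos
    return (sx - ox, sy - oy, sz - oz)
```

A device that detects the "entry_wall" marker at session coordinates (5.0, 0.0, 1.5) would place the world origin at (3.0, 0.0, 0.0) in its own frame, and every other localized device would resolve the same physical point, so shared content renders consistently.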
Each participating AR-capable device (i.e., AR-headset, smartphone, tablet, or other computing device that is AR-capable) within the controlled space essentially agrees upon the established world origin point, thereby allowing for digital content (e.g., images) to consistently appear in the same, real world location in the controlled space for each individual device as a result of one or more localization and subsequent re-localization processes for each device, as described in greater detail herein. Upon a set of devices localizing into the controlled space using at least one of the image tracking and cloud anchoring techniques, the AR platform allows for dynamic, real-time localization across all devices in the given space. Each device will determine, through a series of checks, whether to start generating temporary cloud anchors for more accurately sharing an AR experience with new devices that enter the space. As image tracking can require positioning devices in close proximity to image markers, temporary cloud anchors provide an advantage of allowing more devices to arbitrarily localize into the space without having a multitude of viewers try to crowd into the same vantage point. The system of the present invention further accounts for drift by providing for automatic and repeated localization (i.e., re-localization) for any device. One or more locations within a given controlled space may be designated as re-localization points, in which any given user's proximity may be detected via a proximity sensor, such as a near-field communication-based device. For example, proximity sensors may include Bluetooth Low-Energy (BLE) sensors. Upon being detected, a near-field communication-based sensor may communicate with the AR platform and/or device and subsequently initiate a re-localization process, in which the device will automatically attempt to re-localize (requiring no direct input or interaction from the user). 
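The "series of checks" for hosting temporary cloud anchors, mentioned above, can be sketched as a simple gate. The particular checks and thresholds here are illustrative assumptions, not the platform's actual criteria:

```python
def should_host_temp_anchor(is_localized, anchors_hosted, max_anchors,
                            seconds_since_last, min_interval_s):
    """Illustrative gate a localized device might run before hosting a
    temporary cloud anchor for newly arriving devices: the device must
    already be localized, must be under its anchor cap, and must have
    waited out a minimum interval since its last hosted anchor."""
    return (is_localized
            and anchors_hosted < max_anchors
            and seconds_since_last >= min_interval_s)
```

For example, a localized device that has hosted fewer than its cap of anchors and has waited out the minimum interval would begin hosting; an un-localized device never would.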
Accordingly, the system of the present invention provides for continuous re-alignment of the dynamic world origin point through a combination of the use of the physical image markers as well as disparate cloud services of each device to maintain the associated coordinates consistently across device software systems throughout the duration of each AR session/experience. The system of the present invention thereby addresses the drawbacks of current augmented reality systems by recognizing how much more experiential augmented reality can be when such content is experienced together by many users at the same time. The AR platform provides for synchronized sharing of AR content in real time and across multiple AR-capable devices, thereby allowing multiple users to experience the same AR content rendering in real time and within a live, physical environment or space, wherein such rendering of AR content is adapted to each user's point of view. The synchronization of content allows for multiple users within the given space to more naturally interface with the shared AR content as well as observe an identical combination of digital and physical reality, thereby simultaneously experiencing and interacting with augmented reality environments. The AR platform allows for the display of AR content within the same physical location and orientation across multiple AR-capable devices, regardless of the devices being from identical or different manufacturers. By combining different device types together, the system of the present invention is accessible by most device owners, providing similar AR experiences to both the handheld mobile market (i.e., smartphones or tablets) and the more expensive lightweight eyewear market.
Additionally, by integrating and leveraging multiple technologies (i.e., image tracking technology, cloud-based anchor systems, local persistent anchoring systems, and re-localization proximity sensors), the system of the present invention is able to ensure constant re-localization that does not depend solely on a single technology. Based on the communication capabilities (e.g., network communications), reliability can be shared across the different platforms, thereby improving the overall AR experience for all users. For the sake of clarity and ease of description, the systems described herein and AR experiences provided by such systems may be implemented in an indoor environment, such as within a room or multiple rooms within a building or enclosed space, such as an indoor attraction. More specifically, the following embodiments describe the use of multiple controlled spaces that are part of an overall AR experience to be provided to the users (i.e., multiple rooms or spaces at a particular venue, such as multiple spaces representing multiple exhibits at an AR-based zoo). However, it should be noted that systems of the present invention may be used to provide AR experiences in outdoor environments (i.e., military training or outdoor entertainment venues and attractions). FIG.1illustrates one embodiment of an exemplary system10consistent with the present disclosure. As shown, system10includes an augmented reality (AR) platform12. The AR platform12may be embodied on an internet-based computing system/service. For example, the AR platform12may be embodied on a cloud-based service. The AR platform12is configured to communicate and share data with one or more users15(a)-15(n) via computing devices16(a)-16(n) over a network18, for example. The system10further includes one or more remote server systems14, which may be associated with one or more backend platforms or systems for one or more of the computing devices16.
For example, as will be described in greater detail herein, each of the computing devices may run platform-specific anchor-based localization processes, including, but not limited to, cloud anchoring processes, such as Apple's ARKit, Google's ARCore, or Microsoft's Hololens & Azure systems. Accordingly, the remote server systems14may be associated with such platform-specific anchor-based localization processes. In the present context, depending on the specific AR experience to be provided and the particular use of the system, the users may include guests, patrons, participants, students, or the like. For example, the system of the present invention may be particularly useful in the entertainment industry, in which a given venue provides entertainment to multiple guests or patrons at once, such as a zoo, theme park, sporting event, or the like. Similarly, the systems of the present invention may be useful for educational purposes (i.e., a classroom environment in which the instructor and associated course lesson is provided to multiple students via an AR experience provided on each student's AR-capable device) or military and/or law enforcement exercises (i.e., soldiers, military personnel, police officers, etc. can train via customized training scenarios provided via an AR experience, including multi-user combat situations). The network18may represent, for example, a private or non-private local area network (LAN), personal area network (PAN), storage area network (SAN), backbone network, global area network (GAN), wide area network (WAN), or collection of any such computer networks such as an intranet, extranet or the Internet (i.e., a global system of interconnected networks upon which various applications or services run, including, for example, the World Wide Web).
In alternative embodiments, the communication path between the computing devices16, and/or between the computing devices16and AR platform12, and/or between the computing devices16and remote server system(s)14, and/or between the AR platform12and remote server system(s)14, may be, in whole or in part, a wired connection. The network18may be any network that carries data. Non-limiting examples of suitable networks that may be used as network18include Wi-Fi wireless data communication technology, the internet, private networks, virtual private networks (VPN), public switched telephone networks (PSTN), integrated services digital networks (ISDN), digital subscriber line (DSL) networks, various second-generation (2G), third-generation (3G), fourth-generation (4G), and fifth-generation (5G) cellular-based data communication technologies, Bluetooth radio, Near Field Communication (NFC), the most recently published versions of IEEE 802.11 transmission protocol standards, other networks capable of carrying data, and combinations thereof. In some embodiments, network18is chosen from the internet, at least one wireless network, at least one cellular telephone network, and combinations thereof. As such, the network18may include any number of additional devices, such as additional computers, routers, and switches, to facilitate communications. In some embodiments, the network18may be or include a single network, and in other embodiments the network18may be or include a collection of networks. The AR platform12is configured to communicate and share data with the computing devices16associated with one or more users15as well as the remote server system(s)14. Accordingly, the computing device16may be embodied as any type of device for communicating with the AR platform12and remote server system(s)14, and/or other user devices over the network18.
For example, at least one of the user devices may be embodied as, without limitation, any form of computing device capable of rendering the intended AR experience provided, in part, via the AR platform12, such as a smartphone or tablet, which include camera hardware and associated display for providing a view of the real-world environment (via a viewfinder on the display when a camera is capturing a live view of the real-world environment) and further rendering digital content provided by the AR platform12overlaying the real-world environment. In addition to the use of smartphones and/or tablets, the user devices16may include AR-capable wearable headsets, such as, for example, Microsoft® Hololens®, or other augmented reality and/or mixed reality headsets. The AR platform12includes a mesh network of technologies integrated with one another and used to ultimately establish alignment of digital AR content, including rendering thereof, against the controlled physical environment or space. The AR platform12ultimately allows for multiple users to experience the same AR content rendering in real time, wherein such rendering of AR content is adapted to each user's point of view within the controlled, real-world space, as will be described in greater detail herein. It should be noted that embodiments of the system10of the present disclosure include computer systems, computer operated methods, computer products, systems including computer-readable memory, systems including a processor and a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having stored instructions that, in response to execution by the processor, cause the system to perform steps in accordance with the disclosed principles, systems including non-transitory computer-readable storage medium configured to store instructions that when executed cause a processor to follow a process in accordance with the disclosed principles, etc. 
FIG.2is a block diagram illustrating the augmented reality (AR) platform12in greater detail. As shown, the AR platform12may include an interface20, a data collection and management module22, a localization/re-localization module24, an AR content creation, management, and distribution module26, and various databases28for storage of data. As will be described in greater detail herein, the AR platform12is configured to communicate and share data with one or more users15(a)-15(n) via computing devices16(a)-16(n) over a network18, for example. FIG.3is a block diagram illustrating the various databases in greater detail. In particular, the various databases for storage of data include, but are not limited to, a user database30for storing profiles of users and their associated devices, for example, a physical space database32for storing data associated with controlled physical spaces for one or more associated AR experiences, an image marker database34for storing image marker data associated with one or more controlled physical spaces, an anchor database36for storing anchor data of a given device16during an AR experience, a localization/re-localization database38for storing localization (and re-localization) data of a given device16during an AR experience, and an AR content database40for storing AR content (i.e., digital images or other media) to be transmitted to the devices16as part of an AR experience of a given controlled space, such as images including one or more objects, composed by the AR platform12or provided thereto from an external source, to be displayed as overlays on views of the controlled, real-world space via the device16. The data collection and management module22may be configured to communicate and exchange data with each of the databases, as well as the other modules provided. 
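As one concrete illustration of the stored data, the image marker and anchor records held in the image marker database34and anchor database36might minimally carry fields such as the following. This is a hypothetical sketch; the disclosure does not specify a schema, and all field names are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImageMarkerRecord:
    """One entry of the image marker database (34)."""
    marker_id: str                            # unique image marker identifier
    space_id: str                             # controlled space the marker belongs to
    position: Tuple[float, float, float]      # x, y, z relative to the world origin point
    orientation: Tuple[float, float, float]   # orientation relative to the world origin point

@dataclass
class AnchorRecord:
    """One entry of the anchor database (36), serialized per device/platform."""
    anchor_id: str
    device_id: str
    platform: str               # e.g., "arkit", "arcore", "azure"
    pose: Tuple[float, ...]     # serialized coordinate data
    temporary: bool = False     # True for temporary cloud anchors

# Example records for a marker and a temporary cloud anchor in one space.
marker = ImageMarkerRecord("m-01", "space-2", (2.0, 0.5, 1.2), (0.0, 0.0, 90.0))
anchor = AnchorRecord("a-01", "dev-7", "arcore", (0.0, 1.0, 0.0), temporary=True)
```

Keying marker records by `space_id` is what lets a device retrieve only the markers relevant to the controlled space it is entering.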
The interface20may generally allow a user to gain access to one or more features of the AR services, which may include an interactive interface in which users may select certain inputs that may adjust, or otherwise result in interaction with, a given AR experience. The interface20may also provide general information regarding the AR experience (i.e., guidance in the form of a map or layout providing directions to the next exhibit or previous exhibit, requests prompting the user to take certain actions, such as actively initiating a localization process, alerts indicating to the user that certain AR experiences are available and/or ready, etc.). FIG.4is a block diagram illustrating at least one embodiment of a computing device (i.e., smartphone or tablet)16afor communicating with the AR platform12and remote server system(s)14and for subsequently conveying an AR experience to an associated user15based on communication with at least the AR platform12. The mobile device16generally includes a computing system100. As shown, the computing system100includes one or more processors, such as processor102. Processor102is operably connected to communication infrastructure104(e.g., a communications bus, cross-over bar, or network). The processor102may be embodied as any type of processor capable of performing the functions described herein. For example, the processor may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. The computing system100further includes a display interface106that forwards graphics, text, sounds, and other data from communication infrastructure104(or from a frame buffer not shown) for display on display unit108. The computing system further includes input devices110.
The input devices110may include one or more devices for interacting with the mobile device16, such as a keypad, microphone, camera, as well as other input components, including motion sensors, and the like. For example, the mobile device16may include any variety of sensors for capturing data related to at least one of a location of the user within the controlled, physical space, a point of gaze of the user within the given space, a field of view of the user within the given space, as well as a physical setting and objects within the given space. The sensors may include one or more of a camera, motion sensor, and global positioning satellite (GPS) sensor. The motion sensor may be embodied as any type of sensor configured to capture motion data and produce sensory signals. For example, the motion sensor may be configured to capture data corresponding to the movement of the device or lack thereof. The motion sensor may include, for example, an accelerometer, an altimeter, one or more gyroscopes, or other motion or movement sensor to produce sensory signals corresponding to motion or movement of the device16and/or a magnetometer to produce sensory signals from which direction of travel or orientation can be determined. The one or more motion sensors may further include, or be coupled to, an inertial measurement unit (IMU) module for example. The motion sensors may also be embodied as a combination of sensors, each of which is configured to capture a specific characteristic of the motion of the device16, or a specific characteristic of user movement. A motion sensor embodied as a combination of sensors may use algorithms, such as, for example, fusion algorithms, to correct and compensate the data from individual sensors and provide more robust motion sensing and detection context than each individual sensor can provide alone. 
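The fusion algorithms mentioned above are not specified; as a minimal illustration only, a complementary filter (assumed here purely for concreteness) blends the short-term-accurate integrated gyroscope rate with the long-term-stable accelerometer-derived angle:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: integrate the gyroscope rate and pull the estimate
    toward the accelerometer-derived angle (all angles in degrees)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a stationary device whose accelerometer reports a 10-degree tilt;
# starting from a wrong estimate of 0 degrees, the filtered angle converges
# toward the accelerometer reading over repeated 10 ms steps.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

The weight `alpha` controls how much each individual sensor is trusted, which is the "correct and compensate" behavior described above in its simplest form.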
In one embodiment, the display unit108may include a touch-sensitive display (also known as a “touch screen” or “touchscreen”), in addition to, or as an alternative to, a physical push-button keyboard or the like. The touch screen may generally display graphics and text, as well as provide a user interface (e.g., but not limited to, a graphical user interface (GUI)) through which a user may interact with the mobile device16, such as accessing and interacting with applications executed on the device16, including an app for communicating and exchanging data with the AR platform12, as well as rendering digital AR content provided by the AR platform12. The computing system100further includes main memory112, such as random access memory (RAM), and may also include secondary memory114. The main memory112and secondary memory114may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Similarly, the memory112,114may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In the illustrative embodiment, the mobile device16may maintain one or more application programs, databases, media and/or other information in the main and/or secondary memory112,114. The secondary memory114may include, for example, a hard disk drive116and/or removable storage drive118, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. Removable storage drive118reads from and/or writes to removable storage unit120in any known manner. The removable storage unit120may represent a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive118. As will be appreciated, removable storage unit120includes a computer usable storage medium having stored therein computer software and/or data.
In alternative embodiments, the secondary memory114may include other similar devices for allowing computer programs or other instructions to be loaded into the computing system100. Such devices may include, for example, a removable storage unit124and interface122. Examples of such may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an erasable programmable read only memory (EPROM), or programmable read only memory (PROM)) and associated socket, and other removable storage units124and interfaces122, which allow software and data to be transferred from removable storage unit124to the computing system100. The computing system100further includes one or more application programs126directly stored thereon. The application program(s)126may include any number of different software application programs, each configured to execute a specific task. The computing system100further includes a communications interface128. The communications interface128may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the mobile device16and external devices (other mobile devices16, the AR platform12, and/or remote server system(s)14). The communications interface128may be configured to use any one or more communication technologies and associated protocols, as described above, to effect such communication. For example, the communications interface128may be configured to communicate and exchange data with the AR platform12, and/or another mobile device16, via a wireless transmission protocol including, but not limited to, Bluetooth communication, infrared communication, near field communication (NFC), radio-frequency identification (RFID) communication, cellular network communication, the most recently published versions of IEEE 802.11 transmission protocol standards, and a combination thereof.
Examples of communications interface128may include a modem, a network interface (such as an Ethernet card), a communications port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, wireless communication circuitry, etc. Computer programs (also referred to as computer control logic) may be stored in main memory112and/or secondary memory114or a local database on the mobile device16. Computer programs may also be received via communications interface128. Such computer programs, when executed, enable the computing system100to perform the features of the present invention, as discussed herein. In particular, the computer programs, including application programs126, when executed, enable processor102to perform the features of the present invention. Accordingly, such computer programs represent controllers of computer system100. In one embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into the computing system100using removable storage drive118, hard drive116or communications interface128. The control logic (software), when executed by processor102, causes processor102to perform the functions of the invention as described herein. In another embodiment, the invention is implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of the hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s). In yet another embodiment, the invention is implemented using a combination of both hardware and software. FIG.5is a block diagram illustrating at least one embodiment of a computing device (i.e., wearable headset)16bfor communicating with the AR platform12and for subsequently conveying an AR experience to an associated user15based on communication with at least the AR platform12. 
The headset16bincludes a display unit200positioned to be within a field of view of a person wearing the headset (i.e., the “wearer”) and a processing subsystem202built into the headset16band configured to communicate with the AR platform12and remote server system(s)14to exchange various sensor data to be used for at least one of localization, re-localization, and eventual receipt of augmented reality (AR) content to be displayed on the display unit200. The processing subsystem202includes, for example, a hardware processor coupled to non-transitory, computer-readable memory containing instructions executable by the processor to cause the processing subsystem202to communicate with the AR platform12and remote server system(s)14over the network18and exchange data therewith. As shown, the headset16bmay include a variety of sensors204for capturing data related to at least one of a location of the wearer within the controlled, physical space, a point of gaze of the wearer within the physical space, a field of view of the wearer within the physical space, and a physical setting and objects within the space. The sensors204may include one or more of a camera206, motion sensor208, and global positioning satellite (GPS) sensor210. The camera206is operable to capture one or more images (or a series of images) of the given, controlled space in which the AR experience is taking place. The motion sensor208may include an accelerometer, an altimeter, one or more gyroscopes, other motion or movement sensors to produce sensory signals corresponding to motion or movement of the headset16band the wearer, and a magnetometer to produce sensory signals from which direction of travel or orientation of the headset16b(i.e., the orientation of the wearer) can be determined. The motion sensor208, for example, may be embodied as any type of sensor configured to capture motion data and produce sensory signals.
For example, the motion sensor may be configured to capture data corresponding to the movement of the device or lack thereof. The motion sensor may include, for example, an accelerometer, an altimeter, one or more gyroscopes, or other motion or movement sensor to produce sensory signals corresponding to motion or movement of the headset16band/or a magnetometer to produce sensory signals from which direction of travel or orientation can be determined. The one or more motion sensors may further include, or be coupled to, an inertial measurement unit (IMU) module, for example. The motion sensors may also be embodied as a combination of sensors, each of which is configured to capture a specific characteristic of the motion of the headset16b, or a specific characteristic of user movement. A motion sensor embodied as a combination of sensors may use algorithms, such as, for example, fusion algorithms, to correct and compensate the data from individual sensors and provide more robust motion sensing and detection context than each individual sensor can provide alone. FIG.6shows a perspective view of an exemplary wearable headset16bof the system of the present invention. As illustrated, the headset16bis generally in the form of a pair of eyewear. The headset16bincludes a frame member216including a right earpiece218and a left earpiece220, which may be fixedly or hingedly attached to the frame member216. The frame member216further includes a center bridge222. The headset16bincludes a first lens224(e.g., as a right lens) and also includes a second lens226(e.g., as a left lens) to provide binocular vision. The right lens224and left lens226are mounted to the frame member216. The headset16bmay be dimensioned to be worn on a human head, with each earpiece extending over a respective ear such that a portion of the frame member216extends across the human face.
The right lens224and left lens226may be mounted to the frame member216such that, when the headset16bis worn, each of the right lens and left lens224,226is disposed in front of the respective eyes of the wearer. As previously described, the headset16bmay include one or more sensors232,234,236, and238, such as camera(s), microphone(s), motion sensor(s), GPS sensor(s), and the like, for capturing/sensing data associated with the location, orientation, or field-of-view information of the person wearing the headset16bto compose the augmented reality content in real-time. Furthermore, in certain embodiments, the headset16bincludes one or more of electronic displays or projectors228,230for each of the right lens and left lens224,226, as previously described herein. FIG.7is a block diagram illustrating communication between multiple AR-capable devices16aand16band the AR platform12for localization and re-localization thereof based on at least one of image tracking technology, anchor-based technology, and proximity sensing. As previously described, the system10includes the use of a physical, real-world environment, preferably a controlled space (i.e., a room or at least partially enclosed space) in which the AR content is to be presented to the multiple users (via each user's AR-capable device). The use of a controlled space allows for the system of the present invention to provide a persistently shared experience dedicated to the specific space. In some embodiments, the environment may include multiple controlled spaces that are part of an overall AR experience to be provided to the users (i.e., multiple rooms or spaces at a particular venue, such as multiple spaces representing multiple exhibits at an AR-based zoo). For any given controlled space, a shared point is initially established (also referred to herein as the “world origin point” or “world origin”).
The world origin point is generally defined as a specific position and orientation within the given space, which may be based on coordinate data (e.g., a coordinate axis system, including an x,y,z position and x,y,z orientation). Once established, all digital content will be subsequently placed relative to that world origin point. In layman's terms, the world origin point on a canonical world map would be the latitude and longitude of (0,0) with an orientation of north pointing to the north pole. All location coordinates specified with latitude and longitude values can be reasonably understood by any map program that respects this world origin point, with the latitude and longitude coordinates considered as being relative to that known world origin point. Establishing a world origin point within the controlled space allows for the AR platform12to place digital content relative to the world origin point for subsequent rendering across multiple AR-capable devices. The controlled space is digitally mapped, such that digital data associated with the controlled space, including the world-origin point coordinate data, is stored within the physical space database32, for example, for subsequent retrieval and use during rendering of AR content. The system10further relies on image tracking for alignment purposes. For example, the physical space can be decorated using image marker technology. Use of image marker technology allows for canonically established images to represent coordinates associated with the world origin point. For example, at the start of a given AR session or experience, devices with image tracking technology can utilize one or more image trackers (i.e. physical markers) within a given space to localize into the space and align the AR session to the world origin point. 
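The shared-origin idea can be illustrated in the other direction as well: a single piece of content placed once in world-origin coordinates is rendered at each device's own session coordinates via that device's session pose. The two-dimensional Python sketch below uses hypothetical device poses; real systems do the same with full 3-D position and orientation.

```python
import math

def world_to_session(session_to_world, world_point):
    """Express a world-origin-relative content position in one device's
    private session coordinates (poses are (x, y, theta) 2-D sketches)."""
    sx, sy, st = session_to_world
    dx, dy = world_point[0] - sx, world_point[1] - sy
    # rotate by -theta to undo the session's orientation offset
    return (dx * math.cos(st) + dy * math.sin(st),
            -dx * math.sin(st) + dy * math.cos(st))

# The same piece of AR content, placed once relative to the world origin:
content_world = (2.0, 1.0)

# Two devices whose sessions localized with different offsets:
device_a = (0.0, 0.0, 0.0)           # session already coincides with world origin
device_b = (1.0, 0.0, math.pi / 2)   # translated and rotated session

a_local = world_to_session(device_a, content_world)
b_local = world_to_session(device_b, content_world)
```

Although `a_local` and `b_local` differ numerically, both devices draw the content at the identical physical spot, which is the agreement on the world origin point described above.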
The localized coordinates of each image marker, along with a unique image marker identifier, are stored for each image within the image marker database34, for example, for subsequent retrieval and use by each device16, thereby allowing devices to understand the space without requiring any individual device setup. The AR platform12further coordinates the world origin point of a given controlled space with anchor-based localization to thereby align the multiple devices. In particular, each device16may be running an anchor-based software algorithm unique to that device's given platform. Anchors are understood to include generated locations that represent a physical location of the associated device in the real world and are stored as serialized data (e.g., in the form of coordinate data), and may be stored within the anchor database36, for example. In some embodiments, the devices16may be running respective cloud anchoring systems. Additionally, some devices16may be running respective persistent anchoring systems. Accordingly, each of the devices16may run platform-specific anchor-based localization processes, including, but not limited to, cloud anchoring processes, such as Apple's ARKit, Google's ARCore, or Microsoft's Hololens & Azure systems. As an anchor represents a physical point in the real world, anchors use localization to identify their relative location to world origin coordinates for each individual AR session, and thus those coordinates will vary with each session while their location and orientation would be identical across sessions (with a small margin of error depending on platform accuracy).
Each participating device16within the controlled space essentially agrees upon the established world origin point, thereby allowing for digital content (e.g., images) to consistently appear in the same, real world location in the controlled space for each individual device as a result of one or more localization and subsequent re-localization processes for each device16. For each cloud anchoring system, for example, anchors will be established for each integrated platform in a similar manner to image markers. However, in the present system, cloud anchors are established using a computer vision-based mesh understanding of the physical world. As previously described, each device within the controlled space essentially agrees upon the established world origin point, such that each device localizes into the space based, at least in part, on established anchors for that device (i.e., correlation of anchor data with world origin point data). Upon the devices16localizing into the controlled space using at least one of the image tracking and cloud anchoring techniques, the AR platform12allows for dynamic, real-time localization across all devices in the given space, as carried out via the localization/re-localization module24in some instances. In some embodiments, each device16will determine, through a series of checks, whether to start generating temporary cloud anchors for more accurately sharing an AR experience with new devices that enter the space. As image tracking can require positioning devices in close proximity to image markers, temporary cloud anchors provide an advantage of allowing more devices to arbitrarily localize into the space without having a multitude of viewers try to crowd into the same vantage point. The system10further accounts for drift. 
For example, devices may be continuously re-localizing into the real world through a series of sensors, which may include an RGB camera, Lidar sensors, an inertial measurement unit (IMU), motion sensors, infrared, or other tracking systems. Such sensors are all subject to disruption, which can interfere with the device's understanding of its position and orientation in the real world environment. Accordingly, as a result of such disruption, the digital AR content provided may shift from its originally localized world origin, a phenomenon known as drift, which can cause digitally placed objects to shift to incorrect locations. To counter the effects of drift and to make the system easy to use for each user, the system of the present invention provides for automatic and repeated localization (i.e., re-localization) for any device. In particular, for a given AR experience that may include multiple controlled spaces (e.g., multiple exhibits in an AR-based zoo, for example), multiple locations within the real world environment may be designated as re-localization points, at which any given user's proximity may be detected via a proximity sensor, such as a near-field communication-based device. For example, proximity sensors may include Bluetooth Low-Energy (BLE) sensors13. Upon detecting a user, the near-field communication-based sensor may communicate with the AR platform12and/or device16and subsequently initiate a re-localization process, in which the device16will automatically attempt to re-localize (requiring no direct input or interaction from the user), wherein re-localization data can be stored within the localization/re-localization database38. Such re-localization points can be placed throughout a given AR experience at regular intervals that users (i.e., guests or participants) must necessarily pass through and are encouraged to come closer as part of the attraction(s).
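The proximity-triggered, no-user-input re-localization flow might be sketched as follows. The RSSI threshold, cooldown interval, and class structure are assumptions made for illustration, not details from the disclosure.

```python
RSSI_NEAR_DBM = -55   # assumed signal strength indicating the user is at the zone
COOLDOWN_S = 30.0     # assumed minimum gap between re-localizations at one zone

class RelocalizationZone:
    """Hypothetical proximity-triggered re-localization: when the BLE beacon
    associated with a zone is heard above a threshold, silently start a
    re-localization attempt for the device (no user interaction needed)."""

    def __init__(self, zone_id):
        self.zone_id = zone_id
        self.last_triggered_s = float("-inf")
        self.triggered = []   # timestamps of started re-localization attempts

    def on_beacon(self, rssi_dbm, now_s):
        """Handle one BLE advertisement sighting; return True if a
        re-localization attempt was started."""
        if rssi_dbm >= RSSI_NEAR_DBM and now_s - self.last_triggered_s >= COOLDOWN_S:
            self.last_triggered_s = now_s
            self.triggered.append(now_s)   # stand-in for kicking off re-localization
            return True
        return False
```

The cooldown prevents a guest lingering near a beacon from re-triggering the process every advertisement interval, while weak (distant) sightings are ignored entirely.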
Accordingly, the system10of the present invention provides for continuous re-alignment of the dynamic world origin point through a combination of the use of the physical image markers as well as disparate cloud services of each device to maintain the associated coordinates consistently across device software systems throughout the duration of each AR session/experience. As previously described, each device16transmits data, including sensor data and images or other information related to the user, to the AR platform12. In turn, the AR platform12processes the data (via the AR content creation, management, and distribution module26) in accordance with AR-based processes and in accordance with AR software, such as AutoCad3D, StudioMax or Cinema4D programs. The AR processing may be recognition-based augmented reality or location-based augmented reality, or a combination of both, as generally understood. The AR platform12may then obtain and/or create AR content, which may be in the form of one or more images including one or more objects, to be displayed as overlays on views of the physical, real-world space. In particular, platform12may use the location, orientation, or field-of-view information of the user, as well as other data associated with the device16(image marker data, anchor data, localization (re-localization) data, etc.) to compose the AR content in real, or near-real, time. Accordingly, the sensor data is important and is relied upon by the platform12, which is able to generate and reposition AR content according to a location of the user (and associated device) within the physical space, as well as a position of the wearer's head with regard to objects within the given space. The devices effectively immerse the user in the augmented reality experience, because elements of the augmented reality scene are updated and received on-the-fly. 
FIG.8is an exemplary layout of a venue comprised of multiple controlled spaces, each having an established world origin point for a given attraction, at least one image tracking marker, and re-localization zones. As shown, guests may enter a first space (Space 1), in which each guest is equipped with an AR-capable device. Their devices will undergo an initial localization process, in which the device localizes to the map using at least one of the image tracking and cloud anchoring techniques previously described herein, depending on which technique is available. If both are available, the cloud anchoring localization takes priority. The next five spaces (Space 2 through Space 6) consist of exhibits, each including a controlled physical space providing a respective AR experience to multiple guests. Each of the exhibit spaces includes re-localization areas or zones. As previously described, such zones may generally be implemented as re-localization points, in which any given user's proximity may be detected via a proximity sensor. Upon being detected, a re-localization process is initiated behind the scenes (the user is not aware). The docent may give an introduction to the area (space), thereby providing a bit more time for the re-localization process to complete, before moving into the room to receive the AR experience for that given space. It should be noted that, as a fallback option, image tracking markers may be placed throughout each space if needed. Further, as shown, re-localization points can be placed throughout a given AR experience at regular intervals that the guests must necessarily pass through and are encouraged to come closer as part of the attraction(s). 
The venue may further include a couple of final spaces in which the guests unload and remove the devices (space 7) once the AR experience is complete (once the guest has passed through all exhibits), and the guest can then enter the gift shop (space 8) to purchase items or leave the venue. FIGS.9A-9Fshow a continuous flow diagram illustrating a method300for initial localization of one or more AR-capable devices within a controlled environment or space prior to commencing an AR experience or session. The method includes starting up the device (i.e., turning on the power) within the given space in which the AR experience will take place. Upon starting up the device, the user must wait for localization processes to begin within the given environment or space (operation302), in which the device will be communicating with at least one of the AR platform12and remote server system(s)14, exchanging data therewith. Upon localizing with the physical environment (operation304), a determination is then made in operation306as to whether there is any saved spatial anchor data available or present. At this point, the databases28are analyzed to determine if any saved spatial anchor data is available. If it is determined in operation306that spatial anchor data is available/present, then a first anchor (presumably a first anchor tied or associated with the saved spatial anchor data) is loaded (operation308). A determination is then made in operation312as to whether the first anchor is local or a cloud anchor. 
If it is determined that the first anchor is local, then a determination is made in operation314as to whether the local anchor is able to be localized. If it is determined that the local anchor is able to be localized, then an AR experience is localized (operation316) and the device is then connected to the multiplayer/multi-user network (operation320). If it is determined that the local anchor is unable to be localized, then a determination is made in operation322as to whether there are additional anchors (presumably tied to or associated with the saved spatial anchor data) to check. If it is determined that there are no additional anchors to check, then image localization (utilizing image tracking technology described herein) is attempted (operation310). If it is determined that there are additional anchors to check, then the determination in operation312(as to whether the first anchor is local or a cloud anchor) is repeated. If it is determined (in operation312) that the first anchor is cloud-based, then a determination is made in operation316as to whether it is possible to localize the cloud-based anchor with the associated cloud-based server. If it is determined that the cloud-based anchor is able to be localized with the cloud, then an AR experience is localized (operation316) and the device is then connected to the multiplayer/multi-user network (operation320). If it is determined that the cloud-based anchor is unable to be localized with the cloud, then a determination is made in operation322as to whether there are additional anchors (presumably tied to or associated with the saved spatial anchor data) to check. If it is determined that there are no additional anchors to check, then image localization (utilizing image tracking technology described herein) is attempted (operation310). If it is determined that there are additional anchors to check, then the determination in operation312(as to whether the first anchor is local or a cloud anchor) is repeated. 
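The anchor-checking loop just described, together with the image-tracking fallback and temporary localization, can be sketched as a single function. The callables standing in for the platform's local, cloud, and image localization services are assumptions; only the branch structure and operation numbers follow the flow diagram:

```python
def initial_localization(saved_anchors, try_local, try_cloud, try_image):
    """Sketch of the anchor-checking flow of FIGS. 9A-9F.

    saved_anchors: list of dicts like {"kind": "local"} or {"kind": "cloud"}.
    try_local/try_cloud/try_image: injected callables standing in for the
    platform's localization services (signatures are assumptions).
    Returns which localization mode succeeded, or "temporary".
    """
    for anchor in saved_anchors:          # operations 308/322: iterate saved anchors
        if anchor["kind"] == "local":     # operation 312: local vs cloud anchor
            if try_local(anchor):         # operation 314: localize local anchor
                return "local"            # operations 316/320: localized, join network
        elif try_cloud(anchor):           # cloud-anchor check against cloud server
            return "cloud"
    # No saved anchor worked (or none saved): fall back to image tracking
    if try_image():                       # operations 310/324/326: image scanning mode
        return "image"
    return "temporary"                    # operation 328: temporary localization
```

In every branch the device then connects to the multiplayer/multi-user network (operation 320), so the return value only records how localization was achieved.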
Reverting back to operation306, if it is determined that spatial anchor data is not available or present, then image localization is attempted (operation310). Upon attempting image localization, the device enters an image scanning mode (operation324). A determination is then made in operation326as to whether any image tracking targets or markers are found/detected. If it is determined that image tracker targets or markers are found/detected, then the AR experience is localized (operation316) and the device is then connected to the multiplayer/multi-user network (operation320). If it is determined that image tracker targets or markers are not found/detected, then temporary localization is created (operation328) and the device is then connected to the multiplayer/multi-user network (operation320). Upon connecting to the multiplayer/multi-user network, a determination is then made in operation330as to whether the AR experience is localized. If it is determined that the AR experience is localized, then a subsequent determination is made in operation332as to whether there are any currently shared network anchors. If it is determined that there are currently shared network anchors, then the AR experience is started (operation334). If it is determined that there are no currently shared network anchors, then networked anchors are created and shared (operation336) and the AR experience is then started (operation334). If it is determined in operation330that the AR experience is not localized, then a determination is made in operation338as to whether there are any currently shared network anchors. If it is determined that there are currently shared network anchors available, then a first anchor is loaded (operation340) and a subsequent determination is made in operation342as to whether the anchor can be localized with a cloud. 
If it is determined that the first anchor can be localized with a cloud, then an AR experience is localized (operation346) and the AR experience is started (operation334). If it is determined that the first anchor is unable to be localized with a cloud, then a determination is made in operation348as to whether there are additional anchors to check. If it is determined that there are additional anchors to check, then the determination in operation342is repeated. Reverting back to operation338, if it is determined that there are no currently shared network anchors, then networked anchors are created and shared (operation350), and then the AR experience is started (operation334). FIG.10is a block diagram of one embodiment of a method for initiating a re-localization session of an AR-enabled device.FIG.11is a block diagram of another embodiment of a method for initiating a re-localization session of an AR-enabled device. The method illustrated inFIG.10generally corresponds to a scenario in which a guest is at a venue comprised of multiple controlled spaces, such as an AR-based zoo with multiple exhibits. In the first scenario (seeFIG.10) a guest may generally enter a controlled space with the associated AR-capable device. Upon entering the space, a proximity sensor (such as a BLE sensor) detects that the guest entered a new zone (seeFIG.8), and triggers a re-localization scanning event. The guest stops at an entryway while the docent introduces the room, encouraging guests to look around the room. As the guest is stopped at the entryway, the device scans the geometry data and attempts to find re-localization anchors through local or cloud correlation/comparison. Upon re-localization, the device indicates to the guest that the device is now “ready”. Once all of the devices from each of the guests within that space are “ready”, then the docent is notified to continue the tour/experience. 
Accordingly, upon receiving the “ready” notification, the guest then proceeds into the given space to continue the AR experience. The method illustrated inFIG.11generally corresponds to a scenario in which a guest is having difficulties with their AR experience, such as poor feedback or visibility of AR content. Accordingly, the guest may utilize an interface to select a help or assistance feature. In turn, the guest may be presented with a mini-map or layout of a specific zone or space and they may be directed to the nearest image tracking marker associated with that given zone or space so as to re-localize to the specific scene based on image tracking localization techniques described herein. In turn, the device turns on the re-localization mode and begins scanning the given space for re-localization while the guest uses image tracking to attempt to re-localize as well. Upon successful re-localization, the device turns off the re-localization mode and the AR experience resumes. By providing a truly immersive and shared AR experience, systems of the present invention can be particularly beneficial in various industries that cater to, or otherwise rely on, multiple guests, participants, patrons, or the like. In the present context, depending on the specific AR experience to be provided and the particular use of the system, the users may include guests, patrons, participants, students, or the like. For example, the system of the present invention may be particularly useful in the entertainment industry in which a given venue provides entertainment to multiple guests or patrons at once, such as a zoo, theme park, sporting event, or the like. 
Similarly, the systems of the present invention may be useful for educational purposes (i.e., a classroom environment in which the instructor and associated course lesson is provided to multiple students via an AR experience provided on each student's AR-capable device) or military and/or law enforcement exercises (i.e., soldiers, military personnel, police officers, etc., can train via customized training scenarios provided via an AR experience, including multi-user combat situations). FIGS.12-20are images depicting various implementations and uses of the system of the present invention. FIGS.12-16are images depicting an AR-based zoo experience for multiple guests within controlled spaces (i.e., specific “exhibits”), in which systems of the present invention provide synchronized sharing of zoo-based AR content (i.e., zoo-related animals) in real time and across multiple AR-capable devices (i.e., wearable headsets and/or personal computing devices, such as a smartphone or tablet). FIG.17is an exemplary layout or map of an AR-based zoo experience, illustrating the various “exhibits”. FIG.18is an image depicting an AR-based classroom experience, in which the instructor and associated course content is provided to multiple students via an AR experience. FIG.19is an image depicting another embodiment of an AR-based classroom experience, in which the instructor and associated course content/lesson is provided to multiple students via an AR experience, wherein each student is viewing and interacting with the course content/lesson and instructor via a tablet computing device, further illustrating the multiple points of view for each student, adding to the realism and feel. Such an experience is particularly useful for distance education, such as remote learning or the like. FIG.20is an image depicting an AR-based military experience, in which multiple soldiers are provided with a military training scenario. 
Accordingly, the system of the present invention addresses the drawbacks of current augmented reality systems by recognizing how powerful experiential augmented reality can be when such content is experienced together by many users at the same time. The AR platform provides for synchronized sharing of AR content in real time and across multiple AR-capable devices, thereby allowing multiple users to experience the same AR content rendering in real time and within a live, physical environment or space, wherein such rendering of AR content is adapted to each user's point of view. The synchronization of content allows for multiple users within the given space to more naturally interface with the shared AR content as well as observe an identical combination of digital and physical reality, thereby simultaneously experiencing and interacting with augmented reality environments. The AR platform allows for the display of AR content within the same physical location and orientation across multiple AR-capable devices, regardless of the devices being from identical or different manufacturers. Furthermore, by combining different device types together, the system of the present invention is accessible to most device owners, supporting devices ranging from handheld mobile AR (i.e., by way of smartphones or tablets) to more expensive lightweight eyewear. Furthermore, by integrating and leveraging multiple technologies (i.e., image tracking technology, cloud-based anchor systems, local persistent anchoring systems, and re-localization proximity sensors), the system of the present invention is able to ensure constant re-localization that does not depend solely on a single technology. Based on the communication capabilities, via the AR platform, reliability can be shared across the different platforms, thereby improving the overall AR experience by countering drift. 
As used in any embodiment herein, the term “module” may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. “Circuitry”, as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc. Any of the operations described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. 
The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device. The storage medium may be non-transitory. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. 
The term “non-transitory” is to be understood to remove only propagating transitory signals per se from the claim scope and does not relinquish rights to all standard computer-readable media that are not only propagating transitory signals per se. Stated another way, the meaning of the term “non-transitory computer-readable medium” and “non-transitory computer-readable storage medium” should be construed to exclude only those types of transitory computer-readable media which were found in In Re Nuijten to fall outside the scope of patentable subject matter under 35 U.S.C. § 101. The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. INCORPORATION BY REFERENCE References and citations to other documents, such as patents, patent applications, patent publications, journals, books, papers, web contents, have been made throughout this disclosure. All such documents are hereby incorporated herein by reference in their entirety for all purposes. EQUIVALENTS Various modifications of the invention and many further embodiments thereof, in addition to those shown and described herein, will become apparent to those skilled in the art from the full contents of this document, including references to the scientific and patent literature cited herein. The subject matter herein contains important information, exemplification and guidance that can be adapted to the practice of this invention in its various embodiments and equivalents thereof.
11943283

DETAILED DESCRIPTION

The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. To facilitate the sharing of the large amount of media content items being exchanged between a network of individuals on the messaging system, the system is faced with challenges when dealing with highly mutable and largely ephemeral media content items being sent and received. Among other things, embodiments of the present disclosure improve the functionality of the messaging system by identifying and assigning a storage location for data associated with the users in the messaging system as well as a single storage location for data that is shared between multiple users, for example, in a messaging conversation. The data associated with each of the users can include user data such as profile data, preferences, subscriptions, user connections on the messaging system, etc. The shared data between multiple users in a messaging conversation can include media content items (e.g., text messages, images, videos, animations, webpage links, etc.) that were shared with each of the users in the messaging conversation via a messaging interface. The single storage location to store this shared data is selected to be optimal when taking into account each of the users in the messaging conversation. 
For example, the single storage location can be selected based on the home location associated with each of the multiple users. The single storage location can also be selected based on the latency that is experienced or perceived by each of the multiple users during the messaging conversation. The cost or the performance of storage locations can also be considered when selecting the single storage location. The selection of the single storage location can also be reevaluated periodically or upon detection of changes including, for example, a change in the home location of any of the users, a change in the number of available storage locations, a change in the cost or performance of the available storage locations, etc. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications including a messaging client application104and a data storage client controller124. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). Each data storage client controller124can be communicatively coupled to other instances of the data storage client controller124and a data storage server controller126in the messaging server system108via the network106. A messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between messaging client application104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). 
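The selection criterion above is stated only qualitatively. A minimal sketch of one reading, choosing the single storage location that minimizes the worst latency experienced by any participant (cost and performance weights could be folded into the key the same way), might look like:

```python
def select_shared_storage(user_latencies):
    """Pick the single storage location for a conversation's shared data.

    user_latencies: {location: [latency_ms for each user]} -- measured during
    the conversation or estimated from each user's home location.
    The min-max criterion here is an assumption; the source says only that
    the location is 'optimal ... taking into account each of the users'.
    """
    return min(user_latencies, key=lambda loc: max(user_latencies[loc]))
```

Re-running this selection periodically, or when a user's home location or the set of available locations changes, matches the reevaluation behavior described above.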
The messaging server system108provides server-side functionality via the network106to a particular messaging client application104. While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, the location of certain functionality either within the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. This data may include, message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. The data storage client controller124is able to communicate and exchange data with another data storage client controller124and with the data storage server controller126via the network106. 
The data exchanged between the plurality of data storage client controllers124, and between the data storage client controller124and the data storage server controller126, can include the home data location associated with the user of the client device102, the current location of the client device102, a history of the previously recorded current locations of the client device102, functions (e.g., commands to invoke functions) as well as other payload data (e.g., text, audio, video or other multimedia data). Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the application server112. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. 
The Application Program Interface (API) server110exposes various functions supported by the application server112, including account registration, login functionality, the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104, the sending of media files (e.g., images or video) from a messaging client application104to the messaging server application114, and for possible access by another messaging client application104, the setting of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the adding and deletion of friends to a social graph, the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including a messaging server application114, an image processing system116and a social network system122. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other Processor and memory intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. 
The application server112also includes an image processing system116that is dedicated to performing various image processing operations, typically with respect to images or video received within the payload of a message at the messaging server application114. The social network system122supports various social networking functions and services and makes these functions and services available to the messaging server application114. To this end, the social network system122maintains and accesses an entity graph304(as shown inFIG.3) within the database120. Examples of functions and services supported by the social network system122include the identification of other users of the messaging system100with which a particular user has relationships or is “following”, and also the identification of other entities and interests of a particular user. The application server112also includes the data storage server controller126that can communicate with the data storage client controller124in the client device102to exchange data used to identify and assign storage locations to data associated with a user and data associated with a communication session including a plurality of users. The data storage server controller126can also be coupled to the messaging server application114to establish an electronic group communication session (e.g., group chat, instant messaging) for the client devices in a communication session. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the messaging server application114. FIG.2is a block diagram illustrating further details regarding the messaging system100, according to example embodiments. 
Specifically, the messaging system100is shown to comprise the messaging client application104and the application server112, which in turn embody a number of subsystems, namely an ephemeral timer system202, a collection management system204and an annotation system206. The ephemeral timer system202is responsible for enforcing the temporary access to content permitted by the messaging client application104and the messaging server application114. To this end, the ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively display and enable access to messages and associated content via the messaging client application104. Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing collections of media (e.g., collections of text, image, video and audio data). In some examples, a collection of content (e.g., messages, including images, video, text and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application104. The collection management system204furthermore includes a curation interface208that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface208enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages).
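The ephemeral timer system202described above gates access to content based on duration and display parameters. A minimal sketch of such a check, assuming a simple sent-timestamp-plus-duration window (the source does not specify the parameters in exactly this form):

```python
import time

def is_accessible(sent_at: float, duration_seconds: float, now: float = None) -> bool:
    """Ephemeral access check: content is viewable only inside its display window.
    The timestamp-plus-duration model is an illustrative assumption."""
    now = time.time() if now is None else now
    return (now - sent_at) < duration_seconds

# A message sent at t=100 with a 10-second window is visible at t=105 but not t=111.
```

In the system described, such a timer would be consulted each time the messaging client application104attempts to display a message or story element.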
Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user-generated content into a collection. In such cases, the curation interface208operates to automatically make payments to such users for the use of their content. The annotation system206provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The annotation system206operatively supplies a media overlay or supplementation (e.g., an image filter) to the messaging client application104based on a geolocation of the client device102. In another example, the annotation system206operatively supplies a media overlay to the messaging client application104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). 
In another example, the annotation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. In one example embodiment, the annotation system206provides a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The annotation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another example embodiment, the annotation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system206associates the media overlay of a highest bidding merchant with a corresponding geolocation for a predefined amount of time. FIG.3is a schematic diagram illustrating data structures300which may be stored in the database120of the messaging server system108, according to certain example embodiments. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table314. The entity table302stores entity data, including an entity graph304. Entities for which records are maintained within the entity table302may include individuals, corporate entities, organizations, objects, places, events, etc. 
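The merchant bidding step above resolves to a single winning overlay per geolocation. A toy sketch of that selection (the shape of a bid record is an assumption made for illustration):

```python
def winning_overlay(bids):
    """Return the overlay of the highest-bidding merchant for a geolocation.
    Each bid is a dict with 'merchant', 'amount', and 'overlay' keys (assumed)."""
    return max(bids, key=lambda b: b["amount"])["overlay"]

# Hypothetical bids for one geolocation; the highest bid wins the slot
# for a predefined amount of time.
bids = [
    {"merchant": "Beach Coffee House", "amount": 50, "overlay": "coffee_overlay"},
    {"merchant": "Surf Shop", "amount": 75, "overlay": "surf_overlay"},
]
```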
Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., working at a common corporation or organization), interest-based or activity-based, merely for example. The database120also stores annotation data, in the example form of filters, in an annotation table312. Filters for which data is stored within the annotation table312are associated with and applied to videos (for which data is stored in a video table310) and/or images (for which data is stored in an image table308). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters) which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application104, based on geolocation information determined by a GPS unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include the current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time.
Other annotation data that may be stored within the image table308are augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences). An augmented reality content item may be a real-time special effect and sound that may be added to an image or a video. As described above, augmented reality content items, overlays, image transformations, AR images and similar terms refer to modifications that may be made to videos or images. This includes real-time modification which modifies an image as it is captured using a device sensor and then displayed on a screen of the device with the modifications. This also includes modifications to stored content, such as video clips in a gallery that may be modified. For example, in a device with access to multiple augmented reality content items, a user can use a single video clip with multiple augmented reality content items to see how the different augmented reality content items will modify the stored clip. For example, multiple augmented reality content items that apply different pseudorandom movement models can be applied to the same content by selecting different augmented reality content items for the content. Similarly, real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a device would modify the captured data. Such data may simply be displayed on the screen and not stored in memory, or the content captured by the device sensors may be recorded and stored in memory with or without the modifications (or both). In some systems, a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time. 
Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various embodiments, different methods for achieving such transformations may be used. For example, some embodiments may involve generating a three-dimensional mesh model of the object or objects and using transformations and animated textures of the model within the video to achieve the transformation. In other embodiments, tracking of points on an object may be used to place an image or texture (which may be two dimensional or three dimensional) at the tracked position. In still further embodiments, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or frames of video). Augmented reality content items thus refer both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Real-time video processing can be performed with any kind of video data (e.g., video streams, video files, etc.) saved in a memory of a computerized system of any kind. For example, a user can load video files and save them in a memory of a device or can generate a video stream using sensors of the device. Additionally, any objects can be processed using a computer animation model, such as a human's face and parts of a human body, animals, or non-living things such as chairs, cars, or other objects. 
In some embodiments, when a particular modification is selected along with content to be transformed, elements to be transformed are identified by the computing device, and then detected and tracked if they are present in the frames of the video. The elements of the object are modified according to the request for modification, thus transforming the frames of the video stream. Transformation of frames of a video stream can be performed by different methods for different kinds of transformation. For example, for transformations of frames that mostly refer to changing the forms of an object's elements, characteristic points are calculated for each element of the object (e.g., using an Active Shape Model (ASM) or other known methods). Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream. In the process of tracking, the mentioned mesh for each element is aligned with a position of each element. Then, additional points are generated on the mesh. A first set of first points is generated for each element based on a request for modification, and a set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh. In such a method, a background of the modified object can be changed or distorted as well by tracking and modifying the background. In one or more embodiments, transformations changing some areas of an object using its elements can be performed by calculating characteristic points for each element of the object and generating a mesh based on the calculated characteristic points. Points are generated on the mesh, and then various areas based on the points are generated.
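The pipeline above (characteristic points, then a mesh, then additional points on the mesh, then modification of the elements) can be sketched in a few lines. The ring-mesh construction, midpoint rule for additional points, and uniform-translation modification below are all simplifying assumptions chosen to keep the sketch self-contained; they stand in for the ASM-derived mesh and request-driven point sets described in the source.

```python
# Hedged sketch of: characteristic points -> mesh -> additional points -> modification.

def build_mesh(points):
    """Connect consecutive characteristic points into a closed ring of edges."""
    n = len(points)
    return [(i, (i + 1) % n) for i in range(n)]

def generate_additional_points(points, edges):
    """Place one additional point at the midpoint of each mesh edge."""
    return [((points[a][0] + points[b][0]) / 2.0,
             (points[a][1] + points[b][1]) / 2.0) for a, b in edges]

def modify_elements(points, dx, dy):
    """Apply a requested modification; here, a uniform translation."""
    return [(x + dx, y + dy) for x, y in points]

# Example: a square of characteristic points around a detected element.
pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
mesh = build_mesh(pts)                        # ring of edges over the points
extra = generate_additional_points(pts, mesh) # additional points on the mesh
moved = modify_elements(pts, 1.0, 0.5)        # transformed element geometry
```

In a real transform system the mesh would be re-aligned with the tracked element on every frame before the modification is re-applied.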
The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element, and properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream. Depending on the specific request for modification, properties of the mentioned areas can be transformed in different ways. Such modifications may involve changing the color of areas; removing at least some part of areas from the frames of the video stream; including one or more new objects into areas which are based on a request for modification; and modifying or distorting the elements of an area or object. In various embodiments, any combination of such modifications or other similar modifications may be used. For certain models to be animated, some characteristic points can be selected as control points to be used in determining the entire state-space of options for the model animation. In some embodiments of a computer animation model to transform image data using face detection, the face is detected on an image with use of a specific face detection algorithm (e.g., Viola-Jones). Then, an Active Shape Model (ASM) algorithm is applied to the face region of an image to detect facial feature reference points. In other embodiments, other methods and algorithms suitable for face detection can be used. For example, in some embodiments, features are located using a landmark which represents a distinguishable point present in most of the images under consideration. For facial landmarks, for example, the location of the left eye pupil may be used. If an initial landmark is not identifiable (e.g., if a person has an eyepatch), secondary landmarks may be used. Such landmark identification procedures may be used for any such objects. In some embodiments, a set of landmarks forms a shape. Shapes can be represented as vectors using the coordinates of the points in the shape.
One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes. In some embodiments, a search is started for landmarks from the mean shape aligned to the position and size of the face determined by a global face detector. Such a search then repeats the steps of suggesting a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point and then conforming the tentative shape to a global shape model until convergence occurs. In some systems, individual template matches are unreliable, and the shape model pools the results of the weak template matchers to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. Embodiments of a transformation system can capture an image or video stream on a client device (e.g., the client device102) and perform complex image manipulations locally on the client device102while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), style transfers, graphical element application, and any other suitable image or video manipulation implemented by a convolutional neural network that has been configured to execute efficiently on the client device102. In some example embodiments, a computer animation model to transform image data can be used by a system where a user may capture an image or video stream of the user (e.g., a selfie) using a client device102having a neural network operating as part of a messaging client application104operating on the client device102.
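The similarity-transform alignment described above has a well-known closed-form least-squares solution for 2-D shapes, which is compact when points are treated as complex numbers. The source states the criterion only as minimizing the average Euclidean distance between shape points; the concrete formula below is the standard least-squares rendering of that step, assumed here for illustration.

```python
# Sketch of aligning one 2-D shape to another with a similarity transform
# (translation, scaling, rotation), using complex arithmetic for the points.

def align(shape, target):
    """Align `shape` to `target`, minimizing summed squared point distances."""
    a = [complex(x, y) for x, y in shape]
    b = [complex(x, y) for x, y in target]
    ca = sum(a) / len(a)                       # centroid of the moving shape
    cb = sum(b) / len(b)                       # centroid of the target shape
    a = [p - ca for p in a]                    # remove translation
    b = [p - cb for p in b]
    # Optimal complex scale-rotation: s = <a, b> / <a, a>.
    s = sum(p.conjugate() * q for p, q in zip(a, b)) / sum(abs(p) ** 2 for p in a)
    return [((s * p + cb).real, (s * p + cb).imag) for p in a]
```

Aligning every training shape to a common frame this way, then averaging the aligned point coordinates, yields the mean shape used to seed the landmark search.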
The transform system operating within the messaging client application104determines the presence of a face within the image or video stream and provides modification icons associated with a computer animation model to transform image data, or the computer animation model can be present as associated with an interface described herein. The modification icons include changes which may be the basis for modifying the user's face within the image or video stream as part of the modification operation. Once a modification icon is selected, the transform system initiates a process to convert the image of the user to reflect the selected modification icon (e.g., generate a smiling face on the user). In some embodiments, a modified image or video stream may be presented in a graphical user interface displayed on the mobile client device as soon as the image or video stream is captured, and a specified modification is selected. The transform system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification. That is, the user may capture the image or video stream and be presented with a modified result in real time or near real time once a modification icon has been selected. Further, the modification may be persistent while the video stream is being captured and the selected modification icon remains toggled. Machine taught neural networks may be used to enable such modifications. In some embodiments, the graphical user interface, presenting the modification performed by the transform system, may supply the user with additional interaction options. Such options may be based on the interface used to initiate the content capture and selection of a particular computer animation model (e.g., initiation from a content creator user interface). In various embodiments, a modification may be persistent after an initial selection of a modification icon. 
The user may toggle the modification on or off by tapping or otherwise selecting the face being modified by the transformation system, and may store the modified result for later viewing or browse to other areas of the imaging application. Where multiple faces are modified by the transformation system, the user may toggle the modification on or off globally by tapping or selecting a single face modified and displayed within a graphical user interface. In some embodiments, individual faces, among a group of multiple faces, may be individually modified or such modifications may be individually toggled by tapping or selecting the individual face or a series of individual faces displayed within the graphical user interface. As mentioned above, the video table310stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table314. Similarly, the image table308stores image data associated with messages for which message data is stored in the entity table302. The entity table302may associate various annotations from the annotation table312with various images and videos stored in the image table308and the video table310. A story table306stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table302). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story.
A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story”, which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). The database120can also store location information data pertaining to each of the users in the messaging system100as well as location information data pertaining to each communication session between multiple users in the messaging system100in the data location table316. The data location table316can include a historical database for each of the users in the messaging system100to store the location information data pertaining to the users. For example, the historical database can include a history of the locations that are previously recorded in association with the users.
For example, data associated with the users can comprise the home location data including a region location (e.g., New York or East Coast United States) and country location (e.g., United States). The location information data pertaining to each communication session can include the identification of the storage location selected to store the shared data associated with the communication session (e.g., communication session data). The identification of the storage location can include a name, an identification number, a network address, a region location, or a country location. FIG.4is a schematic diagram illustrating a structure of a message400, according to some embodiments, generated by a messaging client application104for communication to a further messaging client application104or the messaging server application114. The content of a particular message400is used to populate the message table314stored within the database120, accessible by the messaging server application114. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application server112.
The message400is shown to include the following components:
A message identifier402: a unique identifier that identifies the message400.
A message text payload404: text, to be generated by a user via a user interface of the client device102and that is included in the message400.
A message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400.
A message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102and that is included in the message400.
A message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.
A message annotations412: annotation data (e.g., filters, stickers or other enhancements) that represents annotations to be applied to the message image payload406, message video payload408, or message audio payload410of the message400.
A message duration parameter414: a parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client application104.
A message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respective content items included in the content (e.g., a specific image within the message image payload406, or a specific video in the message video payload408).
A message story identifier418: identifier values identifying one or more content collections (e.g., “stories”) with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.
A message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
A message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.
A message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed.
The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table308.
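The message components enumerated above can be sketched as a simple container type. The field names below mirror the description, but the concrete types and defaults are assumptions; in the described system many of these values would actually be pointers into the image, video, annotation, and story tables rather than inline data.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative container for the components of message 400; types are assumed.
@dataclass
class Message:
    message_id: str                                     # message identifier 402
    sender_id: str                                      # message sender identifier 422
    receiver_id: str                                    # message receiver identifier 424
    text: Optional[str] = None                          # message text payload 404
    image_ref: Optional[str] = None                     # pointer into the image table (406)
    video_ref: Optional[str] = None                     # pointer into the video table (408)
    audio_ref: Optional[str] = None                     # message audio payload 410
    annotations: List[str] = field(default_factory=list)          # 412
    duration_seconds: Optional[int] = None                        # 414
    geolocations: List[Tuple[float, float]] = field(default_factory=list)  # 416
    story_ids: List[str] = field(default_factory=list)            # 418
    tags: List[str] = field(default_factory=list)                 # 420

m = Message(message_id="m1", sender_id="alice", receiver_id="bob", text="hi")
```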
Similarly, values within the message video payload408may point to data stored within a video table310, values stored within the message annotations412may point to data stored in an annotation table312, values stored within the message story identifier418may point to data stored in a story table306, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table302. Although the following flowcharts can describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a procedure, etc. The steps of methods may be performed in whole or in part, may be performed in conjunction with some or all of the steps in other methods, and may be performed by any number of different systems, such as the systems described inFIG.1, and/orFIG.8, or any portion thereof, such as a processor included in any of the systems. FIG.5illustrates a process500of dynamically assigning a storage location for a user data in accordance with one embodiment. The data associated with each of the users can include user data such as profile data, preferences, subscriptions, user connections on the messaging system, etc. In one embodiment, the process500can be performed by the data storage server controller126in the messaging server system108. In process500, the data storage server controller126receives a signal from a first client device102that is associated with a first user, at operation502. In one embodiment, the signal can be generated when the first user via the first client device102signs up for the messaging service maintained by the messaging server system108. The signal can also be generated when the first user using the first client device102logs into the messaging server system108. 
The signal can include the location information that is provided by the first user or by the first client device102. For example, the first user can input his home location when signing up with the messaging server system108. The location information can also be generated by a Global Positioning System (GPS) location service that has been enabled on the client device102such that the location information that is provided to the data storage server controller126is the client device102's GPS recorded location. In another embodiment, using the signal from the first client device102, the data storage server controller126can also perform Internet Protocol (IP) tracing to determine the location of the first client device102. The signal can also be a network signal from which the data storage server controller126can determine the location of the first client device102. At operation504, the data storage server controller126stores a current location of the first client device102in a historical database associated with the first user. The data storage server controller126determines the current location of the first client device102using the signal that is received. The historical database can be stored in the data location table316. The historical database associated with the first user can include a history of the locations that are previously recorded in association with the first user. For example, the history of the locations can include a region location and a country location. In one embodiment, the historical database also stores the home location data associated with the first user. The home location data indicates the location the first user spends the majority of his time. The home location data, in one embodiment, is the location in the historical database that appears the most frequently in the historical database. The home location data can also include a region location (e.g., New York or East Coast United States) and country location (e.g., United States). 
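The location sources just described (user-provided input, a GPS fix, an IP trace) can be combined with a simple precedence order when the data storage server controller126resolves a current location from a signal. The ordering and the dictionary-shaped signal below are assumptions for illustration; the source does not mandate either.

```python
def resolve_location(signal: dict):
    """Return the best available location from a sign-up/login signal.
    Assumed precedence: explicit user input, then GPS fix, then IP trace."""
    for source in ("user_provided", "gps", "ip_trace"):
        if signal.get(source):
            return signal[source]
    return None  # no location could be determined from this signal

# GPS outranks the IP trace under the assumed precedence.
loc = resolve_location({"gps": "New York, USA", "ip_trace": "Newark, USA"})
```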
The home location data is used by data storage server controller126(and data storage client controller124) to optimize the storage location of the user's data. To optimize access to the user's data, in one embodiment, the storage location (e.g., data centers) that is selected to store the user's data is proximate to the user's home location identified by the home location data. At operation506, the data storage server controller126determines whether no data has been associated with the first user as home location data. For example, when the first user signs up for the messaging server system108for the first time, the messaging server system108does not have any previous signals from which it can interpret the location where the first user spends the majority of his time. If no data has been associated with the first user as home location data in operation506, at operation508, the data storage server controller126stores the current location as the home location data associated with the first user. In this example, when the first user signs up for the first time for the messaging server system108, the data storage server controller126will establish that his current location is his home location for the purposes of setting up the home location data. If data has been associated with the first user as home location data in operation506, at operation510, the data storage server controller126determines whether the home location data matches the current location. At operation512, the data storage server controller126maintains the home location data associated with the first user unchanged when the data storage server controller126determines that the home location data matches the current location.
For example, when the current location of the first client device102(e.g., New York, USA) matches the home location identified by the home location data (e.g., New York, USA), the data storage server controller126establishes that the home location that is set and stored in the data location table316is still current and valid. When the data storage server controller126determines that the home location data does not match the current location, at operation514, the data storage server controller126determines whether the first user has been associated with the current location at a greater frequency than the home location data. For example, the first user can be on vacation in Rome, Italy such that the current location of the first client device102(e.g., Rome, Italy) does not match the home location identified by the home location data (e.g., New York, USA). In this example, the data storage server controller126needs to determine whether Rome, Italy could be the actual home location or just a one-time or infrequently visited location for the first user. To make this determination, the data storage server controller126can assess the historical database associated with the first user in the data location table316to determine the number of times Rome, Italy (e.g., the current location) appears in the historical database compared to the number of times New York, USA (e.g., the home location indicated in the home location data) appears. When the data storage server controller126determines that the first user has been associated with the current location at a greater frequency than the home location data, the data storage server controller126updates the home location data associated with the first user to the current location at operation516. For example, if Rome, Italy occurs more frequently than New York, USA, the data storage server controller126can determine that New York was the vacation location and Rome is the home location.
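The decision flow of operations 506 through 516 can be sketched as a single function. The function name and the `freq` mapping are assumptions: `freq` stands in for counts read from the user's historical database.

```python
# Sketch of the home-location decision flow (operations 506-516).
# `freq` maps a location string to how often it appears in the user's
# historical database; all names here are illustrative assumptions.
def update_home_location(home, current, freq):
    if home is None:
        return current        # operations 506/508: first sign-up
    if home == current:
        return home           # operations 510/512: home still valid
    if freq.get(current, 0) > freq.get(home, 0):
        return current        # operation 516: e.g., Rome overtakes New York
    return home               # operation 512: current was just a vacation

assert update_home_location(None, "New York, USA", {}) == "New York, USA"
assert update_home_location("New York, USA", "Rome, Italy",
                            {"Rome, Italy": 5, "New York, USA": 2}) == "Rome, Italy"
assert update_home_location("New York, USA", "Rome, Italy",
                            {"Rome, Italy": 1, "New York, USA": 9}) == "New York, USA"
```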
The data storage server controller126can update the home location data associated with the first user to Rome, Italy, thereby replacing New York, USA as the home location. In one embodiment, when the home location data is updated to the current location, the data storage server controller126updates the user data storage location from a first storage location to a second storage location based on the updated home location data. The first and second storage locations are data centers at different geographic locations. The first storage location can be located closer to the previous home location (e.g., New York) whereas the second storage location can be located closer to the updated home location (e.g., Rome, Italy). The data storage server controller126can then determine whether to transfer the user data associated with the first user to the second location based on, for example, a size of the user data, a usage frequency associated with the first user, a latency cost associated with storing the user data associated with the first user in the first location and the second location, a financial cost of storing the user data associated with the first user in the first location and the second location, etc. The data storage server controller126uses these factors to assess the ease and cost of the transfer. For example, if the first user's data is a small amount of data, the data storage server controller126can determine that the transfer would be simple such that it is worth performing the transfer. If the first user is a very frequent user of the messaging server system108, the data storage server controller126can determine that it is worth transferring the user's data because the first user accesses the data frequently.
In another example, the data storage server controller126may determine that further assessments are needed before transferring data if the first user is a frequent user of the messaging system100because the size of the first user's data and the number of communication sessions that include this user are significant. The latency costs of the transfer (e.g., how intrusive it would be to the first user or other users) can thus also affect the data storage server controller126's decision to transfer the first user's data. The data storage server controller126can also assess a financial cost (e.g., price of storage) of the first storage in comparison to the second storage. The data storage server controller126can also consider the performance capabilities of each of the storages in determining whether to transfer the user's data. The data storage server controller126can also use these factors to weigh the ease and cost of the transfer against the benefit to the first user (e.g., perceived latency decrease, performance increase, etc.). In one embodiment, based on the determination that the transfer of the first user's data to the second location is desirable (or optimal), the data storage server controller126can then cause the user data associated with the first user to be transferred to the second location. At operation514, when the data storage server controller126determines that the first user has not been associated with the current location at a greater frequency than the home location data, the data storage server controller126maintains the home location data associated with the first user unchanged, at operation512. In this example, the data storage server controller126determines that Rome, Italy does not occur more frequently than New York, USA, meaning that Rome was only a vacation location and New York remains the home location.
FIG.6illustrates a process600for dynamically assigning a storage location for communication session data in accordance with one embodiment. The shared data between multiple users in a messaging conversation can include media content items (e.g., text messages, images, videos, animations, webpage links, etc.) that were shared with each of the users in the messaging conversation via a messaging interface. In one embodiment, the process600can be performed by the data storage server controller126in the messaging server system108. In process600, the data storage server controller126updates, at operation602, a home location data of a first user. The first user is associated with a first client device102and the home location data indicates a home location associated with the first user. In one embodiment, the historical database in the data location table316stores the home location data associated with each of the users in the messaging server system108including the first user. The first user's home location data indicates the location where the first user spends the majority of his time. The home location data, in one embodiment, is the location that appears most frequently in the historical database associated with the first user. The home location data can also include a region location (e.g., New York or East Coast United States) and country location (e.g., United States). When a home location of a first user is updated, this update can signal to the data storage server controller126that a reevaluation of the decision to select a given storage location (e.g., data center) to store the communication session data between the first user and other users in the messaging system100may be needed. At operation604, the data storage server controller126selects a communication session in a messaging system100.
The communication session can be a messaging conversation or an electronic group communication session such as a group chat, group instant messaging between a plurality of users via client devices102(not shown). The communication session can comprise a plurality of users exchanging media content items. In one embodiment, the communication session comprises the first user and a second user associated with a second client device102. In another embodiment, the communication session comprises the first user, the second user and a third user associated with a third client device102. At operation606, the data storage server controller126determines a home location data of the second user. In one embodiment, the home location data of the second user is stored in the data location table316in a historical database associated with the second user. The home location data of the second user indicates the home location of the second user (e.g., Los Angeles, California, or USA). At operation608, the data storage server controller126determines a session location data associated with the communication session. The session location data indicates a current storage location (e.g., a data center) storing data of the communication session received from the client devices102of the users included in the communication session. For example, the communication session data can be the shared data received from the first client device102, the second client device102, the third client device102or any combination thereof. The communication session data can be media content items (e.g., text messages, images, videos, animations, webpage links, etc.) that are shared with each of the users in the messaging conversation via a messaging interface. The session location data can include the identification of the storage location selected to store the communication session data. 
The identification of the storage location can include a name, an identification number, a network address, a region location, or a country location. At operation610, the data storage server controller126identifies a plurality of available data storage locations based on the home location data of the first user and the home location data of the second user. The messaging server system108can have access to a number of data centers geographically located around the world to store communication session data. Among these data centers, the data storage server controller126can identify the data centers that are available and that would optimize the access to shared data between the users in a given communication session. For example, if the home location of the first user is in New York and the home location of the second user is in Los Angeles, data storage locations (e.g., data centers) located in the middle of the United States can be identified as available data storage locations that could be used. At operation612, the data storage server controller126determines whether to update the session location data. In one embodiment, the determination of whether to update the session location data is based on an average of a distance over network fiber using the home location of each of the users in the communication session (e.g., first user, second user, third user, etc.), the current storage location, and the available storage locations. Since network fiber is not laid uniformly across the globe, the distance over network fiber (e.g., the distance travelled by the data signals carrying the shared data) between the home location of each of the users is considered to determine where the shared data should optimally be stored.
For example, if the first user's home location is updated from Rome to New York and the second user's home location is in Tokyo, the data storage server controller126can determine based on the average distance over network fiber using the home locations of the first and second user that the storage location for the communication session between the first and second user should be changed from the data center in Eastern Europe to a data center in the Western United States. In one embodiment, the data storage server controller126can also use an average of the latency in the communication session experienced by each of the users in the communication session to determine whether to update the session location data. In one example, if the home location of the first user is updated from London to New York and the home locations of the two other users in the communication session are in Europe (e.g., Paris), the data storage server controller126may determine not to update the session location data, which indicates a storage location in Europe, even though the first user's updated home location is in New York because the net benefit for the group of users in this communication session is greater if the storage location for the communication session data (as identified by the session location data) remains unchanged. In another embodiment, the data storage server controller126determines whether a ratio of the average distance over network fiber and the average of the latency in the communication session is less than a predetermined threshold. Given that the distance over network fiber is associated with a given cost per distance, the data storage server controller126can establish a cost amount for improvement in perceived latency using this ratio. The predetermined threshold can be a cost per improvement in perceived latency (e.g., $ per millisecond of improvement).
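Operations 610 and 612 can be sketched together as below. The fiber-distance table, the per-kilometer cost, and the dollars-per-millisecond threshold are all illustrative assumptions, since the source specifies only the averages and the ratio test.

```python
# Sketch of selecting a session storage location by average fiber distance
# (operation 610) and the cost-per-improvement ratio test (operation 612).
# The distance table and the cost constants are assumptions.
def average_fiber_km(center, homes, fiber_km):
    """Average network-fiber distance from each user's home to a center."""
    return sum(fiber_km[(home, center)] for home in homes) / len(homes)

def choose_session_location(current, candidates, homes, fiber_km,
                            latency_gain_ms, usd_per_km=0.001,
                            usd_per_ms_limit=0.5):
    best = min(candidates, key=lambda c: average_fiber_km(c, homes, fiber_km))
    saved_km = (average_fiber_km(current, homes, fiber_km)
                - average_fiber_km(best, homes, fiber_km))
    if saved_km <= 0 or latency_gain_ms <= 0:
        return current            # no net benefit: keep the session in place
    # Cost per millisecond of perceived-latency improvement vs. threshold.
    if (saved_km * usd_per_km) / latency_gain_ms < usd_per_ms_limit:
        return best
    return current

homes = ["New York", "Tokyo"]
fiber_km = {("New York", "Eastern Europe"): 8000,
            ("Tokyo", "Eastern Europe"): 9000,
            ("New York", "Western US"): 4000,
            ("Tokyo", "Western US"): 8000}
assert choose_session_location("Eastern Europe", ["Western US"], homes,
                               fiber_km, latency_gain_ms=30) == "Western US"
```

The ratio guard mirrors the text: even when a closer center exists, the move happens only when its cost per millisecond of improvement stays under the threshold.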
The data storage server controller126can also assess the size of the data of the communication session, a cost or a performance of the current storage location and the plurality of available data storage locations, and the frequency of usage of the messaging system100by the users in the communication session, or any combination thereof to determine whether to update the session location data. For example, the data storage server controller126can determine that smaller communication session data is easier and less costly to transfer such that the transfer is desirable. The data storage server controller126can also assess a financial cost (e.g., price of storage) of the transfer from a first storage (e.g., current storage location) to a second storage (e.g., one of the available storage locations). The data storage server controller126can also consider the performance capabilities of each of the storages in determining whether to transfer the communication session data. The data storage server controller126can also use these factors to weigh the ease and cost of the transfer against the benefit to the users in the communication session (e.g., perceived latency decrease, performance increase, etc.). The cost or the performance of storage locations can also be considered when selecting the single storage location. The selection of the single storage location can also be reevaluated periodically or upon detection of changes including, for example, a change in the home location of any of the users, a change in the number of available storage locations, a change in the cost or performance of the available storage locations, etc. At operation614, in response to determining to update the session location data, the data storage server controller126updates the session location data to indicate one of the available storage locations. In one embodiment, the available storage location is selected based on its location, cost, performance, etc.
At operation616, the data storage server controller126causes a transfer of the data of the communication session to the available storage location indicated in the session location data. In one embodiment, a reevaluation of the decision to select a given storage location (e.g., data center) to store the communication session data between the first user and other users in the messaging system100may also be triggered when the data storage server controller126detects a change in the plurality of available data storage locations. For example, the change can include an addition of an available storage location or a closing of a storage location, or a change in the cost or performance of the available data storage locations. In this embodiment, the data storage server controller126can determine whether to update the session location data based on the change in the plurality of available storage locations. The data storage server controller126can also reevaluate the decision to select a given storage location when the data storage server controller126detects a change in the current storage location. The change can be that the current storage location is closing, or that the cost or performance of the current storage location is changing. In this embodiment, the data storage server controller126determines whether to update the session location data based on the change in the current storage location. FIG.7is a schematic diagram illustrating an access-limiting process700, in terms of which access to content (e.g., an ephemeral message702, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group704) may be time-limited (e.g., made ephemeral). An ephemeral message702is shown to be associated with a message duration parameter706, the value of which determines an amount of time that the ephemeral message702will be displayed to a receiving user of the ephemeral message702by the messaging client application104. 
In one embodiment, an ephemeral message702is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter706. The message duration parameter706and the message receiver identifier424are shown to be inputs to a message timer712, which is responsible for determining the amount of time that the ephemeral message702is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message702will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter706. The message timer712is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message702) to a receiving user. The ephemeral message702is shown inFIG.7to be included within an ephemeral message group704(e.g., a collection of messages in a personal story, or an event story). The ephemeral message group704has an associated group duration parameter708, a value of which determines a time-duration for which the ephemeral message group704is presented and accessible to users of the messaging system100. The group duration parameter708, for example, may be the duration of a music concert, where the ephemeral message group704is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter708when performing the setup and creation of the ephemeral message group704. Additionally, each ephemeral message702within the ephemeral message group704has an associated group participation parameter710, a value of which determines the duration of time for which the ephemeral message702will be accessible within the context of the ephemeral message group704.
Accordingly, a particular ephemeral message group704may “expire” and become inaccessible within the context of the ephemeral message group704, prior to the ephemeral message group704itself expiring in terms of the group duration parameter708. The group duration parameter708, group participation parameter710, and message receiver identifier424each provide input to a group timer714, which operationally determines, firstly, whether a particular ephemeral message702of the ephemeral message group704will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group704is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the group timer714operationally controls the overall lifespan of an associated ephemeral message group704, as well as an individual ephemeral message702included in the ephemeral message group704. In one embodiment, each and every ephemeral message702within the ephemeral message group704remains viewable and accessible for a time-period specified by the group duration parameter708. In a further embodiment, a certain ephemeral message702may expire, within the context of ephemeral message group704, based on a group participation parameter710. Note that a message duration parameter706may still determine the duration of time for which a particular ephemeral message702is displayed to a receiving user, even within the context of the ephemeral message group704. Accordingly, the message duration parameter706determines the duration of time that a particular ephemeral message702is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message702inside or outside the context of an ephemeral message group704. 
The ephemeral timer system202may furthermore operationally remove a particular ephemeral message702from the ephemeral message group704based on a determination that it has exceeded an associated group participation parameter710. For example, when a sending user has established a group participation parameter710of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message702from the ephemeral message group704after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group704either when the group participation parameter710for each and every ephemeral message702within the ephemeral message group704has expired, or when the ephemeral message group704itself has expired in terms of the group duration parameter708. In certain use cases, a creator of a particular ephemeral message group704may specify an indefinite group duration parameter708. In this case, the expiration of the group participation parameter710for the last remaining ephemeral message702within the ephemeral message group704will determine when the ephemeral message group704itself expires. In this case, a new ephemeral message702, added to the ephemeral message group704, with a new group participation parameter710, effectively extends the life of an ephemeral message group704to equal the value of the group participation parameter710. Responsive to the ephemeral timer system202determining that an ephemeral message group704has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client application104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group704to no longer be displayed within a user interface of the messaging client application104. 
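The expiry rules above can be sketched as follows. The tuple data shapes, the hour units, and the function names are assumptions used only to illustrate how the group duration parameter708and the group participation parameter710interact.

```python
# Hedged sketch of ephemeral timing: a message stays visible within a group
# until its group participation parameter 710 elapses; the group itself
# expires when its group duration parameter 708 elapses (None = indefinite)
# or when every message's participation window has closed.
def message_visible(now, posted_at, participation_hours):
    return now < posted_at + participation_hours

def group_expired(now, group_deadline, messages):
    """messages: list of (posted_at, participation_hours), all in hours."""
    if group_deadline is not None and now >= group_deadline:
        return True               # group duration parameter elapsed
    # Indefinite duration: expiry is driven by the last remaining message.
    return all(not message_visible(now, posted, hours)
               for posted, hours in messages)

msgs = [(0, 24), (10, 24)]        # e.g., 24-hour participation parameters
assert group_expired(12, None, msgs) is False   # a message is still live
assert group_expired(40, None, msgs) is True    # all participation expired
assert group_expired(12, 12, msgs) is True      # group duration reached
```

Adding a new message to an indefinite-duration group extends the group's life to that message's participation window, matching the "last remaining ephemeral message" rule in the text.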
Similarly, when the ephemeral timer system202determines that the message duration parameter706for a particular ephemeral message702has expired, the ephemeral timer system202causes the messaging client application104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message702. FIG.8is a diagrammatic representation of the machine800within which instructions808(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions808may cause the machine800to execute any one or more of the methods described herein. The instructions808transform the general, non-programmed machine800into a particular machine800programmed to carry out the described and illustrated functions in the manner described. The machine800may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions808, sequentially or otherwise, that specify actions to be taken by the machine800. 
Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions808to perform any one or more of the methodologies discussed herein. The machine800may include processors802, memory804, and I/O components838, which may be configured to communicate with each other via a bus840. In an example embodiment, the processors802(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another Processor, or any suitable combination thereof) may include, for example, a Processor806and a Processor810that execute the instructions808. The term “Processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.8shows multiple processors802, the machine800may include a single Processor with a single core, a single Processor with multiple cores (e.g., a multi-core Processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory804includes a main memory812, a static memory814, and a storage unit816, all accessible to the processors802via the bus840. The main memory812, the static memory814, and the storage unit816store the instructions808embodying any one or more of the methodologies or functions described herein. The instructions808may also reside, completely or partially, within the main memory812, within the static memory814, within machine-readable medium818within the storage unit816, within at least one of the processors802(e.g., within the Processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800.
The I/O components838may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components838that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components838may include many other components that are not shown inFIG.8. In various example embodiments, the I/O components838may include user output components824and user input components826. The user output components824may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components826may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components838may include biometric components828, motion components830, environmental components832, or position components834, among a wide array of other components. 
For example, the biometric components828include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components830include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components832include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components834include location sensor components (e.g., a GPS receiver Component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components838further include communication components836operable to couple the machine800to a network820or devices822via respective coupling or connections.
For example, the communication components836may include a network interface Component or another suitable device to interface with the network820. In further examples, the communication components836may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices822may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components836may detect identifiers or include components operable to detect identifiers. For example, the communication components836may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components836, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory812, static memory814, and/or memory of the processors802) and/or storage unit816may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. 
These instructions (e.g., the instructions808), when executed by processors802, cause various operations to implement the disclosed embodiments. The instructions808may be transmitted or received over the network820, using a transmission medium, via a network interface device (e.g., a network interface Component included in the communication components836) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions808may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices822. FIG.9is a block diagram900illustrating a software architecture904, which can be installed on any one or more of the devices described herein. The software architecture904is supported by hardware such as a machine902that includes processors920, memory926, and I/O components938. In this example, the software architecture904can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture904includes layers such as an operating system912, libraries910, frameworks908, and applications906. Operationally, the applications906invoke API calls950through the software stack and receive messages952in response to the API calls950. The operating system912manages hardware resources and provides common services. The operating system912includes, for example, a kernel914, services916, and drivers922. The kernel914acts as an abstraction layer between the hardware and the other software layers. For example, the kernel914provides memory management, Processor management (e.g., scheduling), Component management, networking, and security settings, among other functionality. The services916can provide other common services for the other software layers. The drivers922are responsible for controlling or interfacing with the underlying hardware. 
For instance, the drivers922can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries910provide a low-level common infrastructure used by the applications906. The libraries910can include system libraries918(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries910can include API libraries924such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries910can also include a wide variety of other libraries928to provide many other APIs to the applications906. The frameworks908provide a high-level common infrastructure that is used by the applications906. For example, the frameworks908provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks908can provide a broad spectrum of other APIs that can be used by the applications906, some of which may be specific to a particular operating system or platform. 
In an example embodiment, the applications906may include a home application936, a contacts application930, a browser application932, a book reader application934, a location application942, a media application944, a messaging application946, a game application948, and a broad assortment of other applications such as a third-party application940. The applications906are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications906, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application940(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application940can invoke the API calls950provided by the operating system912to facilitate functionality described herein. | 80,464
11943284 | DETAILED DESCRIPTION Referring now to the drawings,FIG.1illustrates an exemplary edge deployment for a service or application in a wide area communication network (WAN)10. Two edge node clusters15are shown and denoted as Cluster-X and Cluster-Y. Cluster-X includes edge nodes20denoted ENX1-ENXM. Cluster-Y includes edge nodes20denoted as ENY1-ENYN. Cluster-X and Cluster-Y are deployed at different geographic locations denoted as Location A and Location B. The edge nodes20within a cluster15can be implemented as containers or virtual machines in a virtualization environment. Those skilled in the art will appreciate that the edge nodes20could also be implemented by standalone computers and servers. In the following description, it is assumed that the edge nodes20are implemented by a cloud platform provided by a cloud platform provider. Each edge node cluster15is connected via the WAN10to a Virtual Central Management Office (VCMO)30for the cloud platform. The VCMO30orchestrates resource management across different clusters15at different locations. The VCMO30is operated by an administrator associated with the cloud platform provider. The VCMO30is able to deploy or remove applications as needed in the edge node clusters15depending on the traffic demands for the applications. Application service providers can enter into Service Level Agreements (SLAs) with the cloud platform provider to deploy their applications25in the edge node clusters15. In the embodiment shown inFIG.1, a first application25denoted as App1 is deployed to ENX2, ENXM, ENY2and ENYN. A second application25denoted as App2 is deployed to ENX1, ENX2, ENY1and ENYN. The edge node clusters15provide high availability for the applications25. If an edge node20for App1 or App2 fails, client devices50served by the failed edge node can be served by one of the remaining edge nodes in the same cluster15without interruption of service.
For example, if ENX1happens to fail, then ENX2could continue to serve client devices50using App2. Alternatively, the VCMO30could deploy App2 on another edge node20, e.g., ENX3, which could serve client devices50formerly served by ENX1. The deployment of the same applications25at different locations as shown inFIG.1also provides geo-redundancy. In the event of a catastrophic event, such as an earthquake or hurricane affecting a particular location, client devices50served by a cluster15of edge nodes20at the affected location can be served by edge nodes20in an edge node cluster15at a different geographic location without interruption of service. In the example shown inFIG.1, client devices50served by Cluster-X at Location A can be moved to Cluster-Y at Location B in the event of an outage. Client devices50access the services or applications offered by the edge node clusters via access networks35operated by different Internet Service Providers (ISPs), such as BELL, ROGERS, VIDEOTRON, etc. These access networks are referred to as "last-mile access" networks. The access networks can be radio access networks (RANs), cable networks, fiber optic networks, or any other type of communication network. The ISPs operating the access networks can enter into Service Level Agreements (SLAs) with the cloud platform provider to provide a guaranteed Quality of Service (QoS), such as latency and throughput, to the client devices50for the particular applications25. In some cases, access to the edge node clusters15may be provided for free but only with best-effort network performance, such as latency or throughput. One problem encountered in edge cluster deployments is overloading of an edge node20by a large number of client requests in a short period of time as shown inFIG.2A. There are two common scenarios where overloading may occur.
One overloading scenario is associated with live events, such as sporting events or concerts, where a large number of people are gathered in the same place. During such events, the edge node20may receive a large number of client requests for a service or application25at the same time. A second scenario is when an edge node20receives a large number of requests for different applications25at the same time. In both cases, a massive influx of client requests at about the same time can cause the edge node to fail. The failure of an edge node20reduces the available resources in the cluster15for the services or applications25provided by the cluster15. The loss of capacity for the service or application25is a problem for the cloud platform operator as well as end users who are using the service or application25. From the perspective of the cloud platform provider, the failure of the edge node20makes it more difficult to meet QoS guarantees provided by an SLA. From the end user perspective, the failure of the edge node20may mean that the service is no longer available, or that end users experience long latencies and a poor user experience. Where multiple services/applications25are provided by the edge node20or cluster15, the failure of one edge node20due to client requests for one service/application25can impact other services/applications25provided by the edge node20or cluster15. One approach to overload protection configures the edge node20to stop serving new client requests for all the applications25when it is in an "overload" mode, e.g. when Central Processing Unit (CPU) usage has reached a predetermined level, e.g., 90%, and to begin accepting new client requests when CPU usage drops back down to a "normal" mode (e.g., under 70% CPU usage). A variation of this approach, shown inFIG.2B, is to gradually decrease the number of new client requests that are accepted once a predetermined threshold is reached instead of rejecting all new client requests.
An edge node20can apply a so-called "graceful rejection approach" to new client requests for a service or application25. In one example of this approach, two thresholds denoted highMark and lowMark are used for overload protection. The highMark threshold is the upper limit for CPU load on the edge node20. The lowMark threshold is the lower limit for CPU load on the edge node20. The thresholds are chosen to provide stable overload protection performance for an edge node20. At each checking point, indicated by shaded circles, the measurement of CPU load is made based on the samples taken in the given past timeslot. The CPU load increases as the incoming client traffic increases. When the highMark is exceeded, the edge node20enters an overload protection state and starts to redirect/proxy the incoming client traffic. At each verification or checking point for the traffic overload, when the measured CPU load exceeds highMark, the percentage of the client traffic to be redirected or proxied is increased in order to bring down the corresponding CPU load. When the measured CPU load is below the lowMark, the edge node20returns to a normal operating state. In the normal operating state, the edge node20isn't required to redirect or proxy client requests, but can handle each incoming individual client request according to normal operator policy. The overload protection mechanism described above can be viewed as a "passive overload protection" approach or "reactive overload protection" approach that is triggered when the highMark is exceeded. If the measured CPU load exceeds the highMark significantly within a very short period, large fluctuations in CPU load can result, with excursions both above the highMark and below the lowMark. In some cases, a sudden influx of a large number of client requests can still overload the edge node20before the overload protection mechanism has time to work, leading to an outage or node failure.
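The reactive highMark/lowMark scheme described above can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation: the class, method and parameter names are assumptions, and the ramp step is an arbitrary choice.

```python
import random

class ReactiveOverloadProtector:
    """Sketch of the reactive highMark/lowMark graceful rejection approach.

    The node enters an overload protection state when the measured CPU load
    exceeds highMark, increases the share of redirected/proxied traffic at
    each checking point while still above highMark, and returns to the
    normal operating state once the load falls below lowMark.
    """

    def __init__(self, high_mark=0.9, low_mark=0.7, ramp_step=0.1):
        self.high_mark = high_mark        # upper limit for CPU load
        self.low_mark = low_mark          # lower limit for CPU load
        self.ramp_step = ramp_step        # per-checkpoint increase in redirected share
        self.redirect_fraction = 0.0      # share of new requests redirected/proxied
        self.overloaded = False

    def at_checkpoint(self, cpu_samples):
        """Run at each checking point with the CPU samples from the past timeslot."""
        load = sum(cpu_samples) / len(cpu_samples)
        if load > self.high_mark:
            # Overload: redirect a larger share of traffic to bring the load down.
            self.overloaded = True
            self.redirect_fraction = min(1.0, self.redirect_fraction + self.ramp_step)
        elif load < self.low_mark:
            # Normal operating state: handle incoming requests locally again.
            self.overloaded = False
            self.redirect_fraction = 0.0
        # Between lowMark and highMark the current state is kept unchanged.
        return self.redirect_fraction

    def should_redirect(self, rng=random.random):
        """Per-request decision: True if this new client request is redirected/proxied."""
        return self.overloaded and rng() < self.redirect_fraction
```

The sketch makes the weakness discussed above concrete: the redirected share only grows by one ramp step per checking point, so a sudden burst of requests between checkpoints can still push the load well past highMark before the mechanism reacts.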
In addition, the above-described mechanism is applied at the node level, not at the application or service level. According to one aspect of the present disclosure, a proactive overload protection mechanism is provided to achieve more stable operation in terms of CPU load. The proactive overload protection approach herein described is based on two-tier Reinforcement Learning (RL) models: one at the edge node level and one at the cluster level. Key performance indicators (KPIs) are collected for each of the edge nodes20in a cluster15. The KPIs can include parameters such as CPU load, read input/output (I/O), write I/O, storage usage, number of client requests, etc. The node-level RL model uses the KPIs for a particular edge node to optimize a routing policy for a service/application25to determine whether an edge node20receiving a client request shall handle the traffic for the service/application25. The cluster-level RL model uses the KPIs for multiple edge nodes20in the cluster to optimize a policy for an application25that determines which neighboring edge node20shall be considered to handle a client request for the given application25in case of redirection or proxy by the edge node receiving the client request. An edge node20receiving a new client request applies the routing policy optimized by the node-level RL model to determine whether to handle the client request itself, or to redirect or proxy the client request. If the edge node20determines to redirect or proxy the client request, the edge node20applies a redirection or proxy policy optimized by the cluster-level RL model to select another edge node20in the cluster15to handle the client request. In addition to applying two-tier RL models, the overload protection approach is designed to optimize policies for applications25separately. That is, the routing policy applied by an edge node20is optimized by a node-level RL model for each application25served by the edge node20.
Similarly, the redirect policy and/or proxy policy applied by the edge node20in a cluster15is optimized separately for each application25by a cluster-level RL model. FIG.3illustrates expected performance of the overload protection mechanism. One objective of the two-tier RL approach for overload protection is to reduce fluctuation in the load and to avoid excursions of the CPU load above the highMark and below the lowMark. This stable CPU load achieves the best performance from each edge node20in terms of the capacity. In order to provide more granular control at the application level, the number of client requests for each application25is considered in the RL modeling as an example. Of course, other correlations between node-level KPIs and corresponding applications25deployed in the edge cloud platform can also be taken into account. FIG.4illustrates a procedure100for two-tier overload protection applied by an edge node20implementing the two RL models. As shown inFIG.4, the decision by an edge node20whether to handle a client request for an application25is controlled by a routing policy output by a node-level RL model. When the edge node20receives a client request for a particular application (105), the edge node20checks the node-level routing policy for the application25(110) and determines whether to handle the client request itself or to allow another edge node20in the cluster15to handle the client request (115). The node-level routing policy is optimized by a node-level RL model as previously described based on KPIs associated with the edge node20receiving the request. If the edge node20determines that it can handle the request, the edge node20then determines whether to queue the client request or immediately forward it to the application25(120). In the former case, the edge node20queues the request and sends a response (e.g., 202 Accepted) to the client device50(125). In the latter case, the edge node20forwards the client request to the application (130).
Upon receipt of the client request, the application25processes the client request (135) and sends a response (e.g., 200 Success) to the client device50(140). If the edge node20determines that it is unable to handle the new client request (115), the edge node20determines whether to proxy or redirect the request to another edge node in the cluster (145,150). The determination whether to proxy the client request (145) or to redirect the client request (150) is based on the routing policy set by the node-level RL model. If the edge node20decides not to proxy or redirect the client request, it sends a response (e.g., 5xx Server Error) to the client device50(155) and the procedure ends. If the edge node20determines to proxy the client request, the edge node20checks the cluster-level proxy policy for the requested application25(160). The proxy policy is created by the cluster-level RL model based on the collected KPI dataset for the edge node cluster15. The KPI dataset for the cluster15may comprise the KPIs for all edge nodes20in the cluster15, or the KPIs for a representative sample of the edge nodes20in the edge node cluster15. Based on the cluster-level proxy policy, the edge node20selects a neighboring edge node20within the cluster15as the target and proxies the request to the selected edge node20(165). Thereafter, when the edge node20, now acting as a proxy, receives a response, it proxies the response to the client device50(170) and the procedure ends. If the edge node20determines to redirect the client request, the edge node20checks the cluster-level redirect policy for the requested application25(175). The redirect policy is created by the cluster-level RL model based on the collected KPI dataset for the edge node cluster15. The KPI dataset for the cluster15may comprise the KPIs for all edge nodes20in the cluster15, or the KPIs for a representative sample of the edge nodes20in the edge node cluster15.
Based on the cluster-level redirect policy, the edge node20selects a neighboring edge node20within the cluster15as the target for the redirect and sends a redirection response to the client device50(180). The end-to-end control flow for the overload protection approach can be divided into four main phases or steps as follows:Phase 1: Collecting the KPI dataset for each edge node20and cluster15;Phase 2: Processing the KPI dataset for RL model training;Phase 3: Executing the RL model to get the optimal policy for the best rewards (including both exploitation and exploration to get the best performance); andPhase 4: Applying the routing/proxy/redirect policy by the edge nodes20. The node-level RL model and cluster-level RL model are implemented by an Intelligent Node Overload Protector (INOP)60and an Intelligent Cluster Overload Protector (ICOP)70, respectively. The INOP60and ICOP70are described in more detail below. FIG.5illustrates an exemplary node-level architecture for an INOP60. The INOP60includes five main components as follows:1. KPI-Collector module: This module is responsible for collecting the KPI matrix, such as CPU usage, I/O operation (read/write), storage usage, the number of client requests for the application, etc., at the node level. The data collection can be achieved through PUSH/PULL mechanisms. With the PUSH mechanism, the external data agent can push the KPI dataset towards the KPI-collector module. With the PULL mechanism, the KPI-collector module fetches the dataset from the edge node20. The dataset can be transferred through either "streaming" or a "batch of files".2. Data-Sampling module: This module is responsible for cleaning up the KPI dataset as well as converting the collected dataset into a format that is required by the RL training model. It shall raise an alarm when the dataset is missing or contaminated.3.
Node Agent RL module: This module is responsible for training the RL model to get the optimal policy based on the input of the KPI dataset from the Data-Sampling module, reward and transition matrix model if required.4. Policy Output module: This module is responsible for creating the node-level routing policies based on the outcome of the Node Agent RL module. It outputs the routing policies that are applied by the applications25running in the edge node20. The notification for any policy change is sent to the Sub/Notify module.5. Sub/Notify module: This module is responsible for setting up the communication between the edge node20and the INOP60subsystem. Each application25in the edge node20shall be able to subscribe to the policy created for the application25. Once the application25has subscribed to receive notifications, the application25will receive a notification whenever a change of the subscribed policy is made by the RL model. FIG.6illustrates an exemplary cluster-level architecture for ICOP70. The ICOP70subsystem includes five main components and relies on two components installed in the edge nodes20within the cluster15.1. Data-Collector/Assembler module: This module is responsible for collecting the KPI dataset from all the edge nodes20within the same cluster15. It also assembles the samples from different edge nodes20together. Those edge nodes20can reside at different locations.2. Data Sampling module: This module is responsible for cleaning the KPI dataset as well as converting the collected KPI dataset into a format that is required by the cluster RL training model. This module can be configured to raise an alarm when the KPI dataset from one or more edge nodes20is missing or contaminated. The location might be considered as an extra factor in the KPI dataset.3.
Cluster RL module: This module is responsible for training the RL model to get a list of the optimal proxy/redirect policies based on the input of the dataset from the Data-Sampling module, reward and transition matrix model if required. In the optimal policy list, each policy is linked to the application25on the given edge node20(location). At one specific time step, the change can occur for a single policy or multiple policies in the list. The change for those impacted policies is recorded and stored. The storage for these policies might be co-located with ICOP70or deployed in a separate node20.4. Policy Output module: This module is responsible for creating the proxy/redirect policies based on the outcome of the Cluster RL module. It outputs the policies that are expected to be changed at the next timestep. The notification for any policy change is sent to the Sub/Notify module.5. Sub/Notify module: This module is responsible for setting up the communication between the edge node20and ICOP70via a Data Collect Agent (DCA)80and Policy Management Agent (PMA)90. The deployment of the DCA80and PMA90in an edge node20is shown inFIG.7. The interaction between the ICOP70, DCA80and PMA90is as follows:a. Step 1: The Data-Collector/Assembler collects the KPI dataset from the involved edge nodes20deployed at different locations. These data transactions are done through the DCA80periodically.b. Step 2: The PMAs90for the edge nodes20at different locations subscribe with the Sub/Notify module in order to receive notifications regarding changes of their proxy policies or redirect policies.c. Step 3: The edge nodes20at Location A get their updated policies from ICOP70after receiving the notification that the applicable policy has changed.d. Step 4: The edge nodes20at Location B get their updated policies from ICOP70after receiving a notification that the applicable policy has changed. In some embodiments, each application25in the edge node20can subscribe to its own policy.
The application25will then receive a notification whenever a change of the corresponding policy for the application25is made by the cluster RL model. FIG.8illustrates an exemplary deployment of INOP60and ICOP70within a cluster15of edge nodes20. Two edge nodes20are shown. The edge node20on the left inFIG.8hosts multiple applications25that might be in a microservice form. INOP60is installed for node-level RL and ICOP70is installed for cluster-level RL. The DCA80retrieves the KPI dataset from the edge node20and feeds it into both INOP60for the node-level RL training model (step 1) and ICOP70for the cluster-level RL training model (step 1a). INOP60provides the updated policy to the corresponding applications directly (step 2). The PMA90is also installed in the edge node20to receive the change notification of the corresponding policy. When a change notification is received, the PMA90fetches the new policy and feeds the policy to the corresponding application25at the edge node20(steps 2a and 3a). In the edge node20on the right, only INOP60, DCA80and PMA90are installed. The policy for redirect or proxy within a cluster15is given by ICOP70that is deployed in the neighboring edge node20(on the left inFIG.8). The communication with the ICOP70is through a network as shown inFIG.8. The DCA80sends the KPI dataset to ICOP70(step 1b). The PMA90receives the policy for the application25running on the edge node20from ICOP70(flow step 2b). The PMA90feeds those updated policies into the corresponding applications (flow step 3b). Because ICOP70is a central control component for routing the traffic within a cluster15, high availability (HA) should be considered. An example of ICOP70HA deployment is shown inFIG.9. As indicated inFIG.9, three edge nodes20are picked to form the base for ICOP70HA services. If one ICOP70crashes, one of the remaining edge nodes20will be elected to run ICOP70to replace the crashed edge node20. FIG.10illustrates an exemplary signaling flow for INOP60.
The following is a detailed explanation of the signaling flow.1. An application25running on the edge node20registers with the Sub/Notify module for certain application policies at node level by sending a "subscribe request".2. The Sub/Notify module sends a confirmation back to the application25with information about the location and identification of the KPI-Collector module, the access token and the data transfer method, such as PUSH/PULL, etc.3. The DCA80at the edge node20locates the KPI-Collector module and builds up the data transfer session between the edge node20and the KPI-Collector module. Although the edge node20is considered to be hardware as an example, it can be a virtual environment, such as a VM or container, etc.4. The KPI-Collector module sends the confirmation on the data transfer session setup.5. The DCA80at the edge node20transfers the data to the KPI-Collector through the established data transfer session.6. The KPI-Collector stores the raw data in the edge node20.7. The Data Sampling module sends a request to fetch the raw dataset.8. The KPI-Collector module returns the response with the requested dataset.9. After receiving the dataset, the Data Sampling module performs clean-up on the KPI dataset and converts the KPI dataset for INOP60, which is ready for delivery to the Node Agent RL module.10. The Node Agent RL module fetches the dataset from the Data Sampling module.11. The Data Sampling module provides the dataset in the response. In some implementations, the data transfer can be done through local storage.12. The Node Agent RL module does the training based on the collected dataset and predicts the optimal policy for the applications to be taken at the next time step.13. The Node Agent RL module outputs the optimal policy for applications to the Policy Output module.14. The Policy Output module converts and stores the optimal policy for applications at the next timestep.15. The Policy Output module sends a confirmation on the updated policy back to the Node Agent RL module.16.
The Policy Output module informs the Sub/Notify module about the updated policy.17. The Sub/Notify module sends the confirmation on receiving the notification back to the Policy Output module.18. The Sub/Notify module sends the notification to the corresponding edge node about the change of its policy.19. After receiving the notification, the edge node20sends the confirmation to the Sub/Notify module.20. The edge node20sends the request for getting the updated policies to the Policy Output module.21. The Policy Output module sends the response with the updated policies to the edge node20. FIG.11illustrates an exemplary signaling flow for ICOP70. For purposes of illustration, Node2inFIG.11is used to illustrate the control flow for ICOP70. The following is a detailed explanation of the signaling flow.1. The KPI-collector module collects the dataset from the edge node20.2. The KPI-collector module passes the dataset to the Data Sampling module.3. The Data Sampling module passes the dataset to the Cluster Agent RL module after it converts and normalizes the KPI dataset.4. The Cluster Agent RL module gets the optimal policy for the applications deployed in the edge cluster after training the RL model and sends out the policy to the Policy Output module.5. The Policy Output module sends the notification to the impacted edge node20for the updated application policy. The impacted edge node20fetches the updated application policy accordingly. The overload protection approach described herein based on two-tier RL models provides benefits to service providers, cloud platform operators and end users. For service providers and cloud platform operators, the overload protection based on two-tier RL models reduces the cost of operating the edge node20or platform. It also increases the efficiency of utilizing the network and computing resources.
For an end user, the overload protection based on two-tier RL models provides increased reliability so that the end user can obtain the service or application without compromising on the service quality and availability. FIG.12illustrates an exemplary method200, implemented by a network node (e.g., an edge node20) in an edge node cluster15, of overload protection using two-tier reinforcement learning models. The network node receives, from a client device, a client request for an application or service25provided by the network node (block210). The network node determines, based on a node-level routing policy, to redirect or proxy the request (block220). The method further comprises selecting, based on a cluster-level redirection policy or cluster-level proxy policy applicable to the network nodes in the cluster, a target network node in the edge cluster to handle the client request (block230). In some embodiments of the method200, the node-level routing policy is determined by a node-level policy control function implementing a reinforcement learning model based on key performance indicators, usage data and parameters for the network node. In some embodiments of the method200, the node-level routing policy is one of two or more application-specific policies for the network node. Some embodiments of the method200further comprise sending a subscription request to a node-level policy control function to receive notification of changes to the node-level routing policy for the application, the request including an application identifier, and receiving, from the node-level policy control function, the node-level routing policy for the application. In some embodiments of the method200, the node-level policy control function is co-located with the network node.
In some embodiments of the method200, the cluster-level redirection policy or cluster-level proxy policy is determined by a cluster-level policy control function applying a reinforcement learning model based on key performance indicators, usage data and parameters for two or more network nodes in the edge cluster. In some embodiments of the method200, the cluster-level redirection policy or cluster-level proxy policy is one of two or more application-specific policies for the edge cluster. Some embodiments of the method200further comprise sending a subscription request to a cluster-level policy control function to receive notification of changes to the cluster-level redirection policy or cluster-level proxy policy for the application, the request including an application identifier, and receiving, from the cluster-level policy control function, the cluster-level overload protection policy for the application. In some embodiments of the method200, the cluster-level policy control function is co-located with the network node. In some embodiments of the method200, the cluster-level policy control function is co-located with another network node in the edge cluster. Some embodiments of the method200further comprise collecting data for an input dataset, the data comprising key performance indicators, usage data and parameters for the network node, sending the input dataset to a node-level reinforcement learning model to train the node-level reinforcement learning model, and receiving, from the node-level reinforcement learning model, the node-level routing policy.
Some embodiments of the method200further comprise collecting data for an input dataset, the data comprising key performance indicators, usage data and parameters for the network node, sending the input dataset to a cluster-level reinforcement learning model to train the cluster-level reinforcement learning model, and receiving, from the cluster-level reinforcement learning model, the cluster-level redirection policy or cluster-level proxy policy. Some embodiments of the method200further comprise proxying the client request by sending the client request to the target network node. Some embodiments of the method200further comprise redirecting the client request by sending a redirection response to the client device. An apparatus can perform any of the methods herein described by implementing any functional means, modules, units, or circuitry. In one embodiment, for example, the apparatuses comprise respective circuits or circuitry configured to perform the steps shown in the method figures. The circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory. For instance, the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
In embodiments that employ memory, the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein. FIG. 13 illustrates an exemplary network node 300 in an edge node cluster 15 configured to use two-tier RL models for overload protection. The network node 300 comprises a receiving unit 310, a redirect/proxy unit 320 and a selecting unit 330. The various units 310-330 can be implemented by one or more microprocessors, digital signal processors, CPUs, hardware circuits, software or a combination thereof. The receiving unit 310 is configured to receive, from a client device, a client request for an application or service provided by the network node. The redirect/proxy unit 320 is configured to determine, based on a node-level routing policy for the network node, to redirect or proxy the request. The selecting unit 330 is configured to select, based on a cluster-level redirection policy or cluster-level proxy policy applicable to the network nodes in the cluster, a target network node in the edge cluster to handle the client request. FIG. 14 illustrates the main functional components of another network node 400 in an edge node cluster 15 configured to use two-tier RL models for overload protection. The network node 400 includes communication circuitry 420 for communicating with client devices 50 over a communication network, processing circuitry 430 and memory 440. The communication circuitry 420 comprises network interface circuitry for communicating with client devices 50 and other network nodes over a communication network, such as an Internet Protocol (IP) network. Processing circuitry 430 controls the overall operation of the network node 400 and is configured to implement the methods shown and described herein.
The processing circuitry 430 may comprise one or more microprocessors, hardware, firmware, or a combination thereof configured to perform the methods and procedures herein described, including the method 100 shown in FIG. 4 and the method 200 shown in FIG. 12. In one embodiment, the processing circuitry 430 is configured to receive, from a client device, a client request for an application or service provided by the network node. The processing circuitry 430 is further configured to determine, based on a node-level routing policy for the network node, to redirect or proxy the request. The processing circuitry 430 is further configured to select, based on a cluster-level redirection policy or cluster-level proxy policy applicable to the network nodes in the cluster, a target network node in the edge cluster to handle the client request. Memory 440 comprises both volatile and non-volatile memory for storing a computer program 450 and data needed by the processing circuitry 430 for operation. Memory 440 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage. Memory 440 stores a computer program 450 comprising executable instructions that configure the processing circuitry 430 to implement the method 200 shown in FIG. 12. A computer program 450 in this regard may comprise one or more code modules corresponding to the means or units described above. In general, computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory. Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM). In some embodiments, the computer program 450 for configuring the processing circuitry 430 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media.
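The three operations attributed to the processing circuitry 430 (receive the client request, decide via the node-level policy whether to redirect or proxy, select the target via the cluster-level policy) can be sketched as a simple dispatch function. This is an illustrative sketch, not the claimed implementation: the function and policy names are hypothetical, and the threshold-based policies stand in for whatever the trained RL models would return.

```python
from dataclasses import dataclass

@dataclass
class ClientRequest:
    client_id: str
    app_id: str

def handle_request(req, node_policy, cluster_policy, node_loads):
    """Two-tier overload protection: the node-level policy chooses HOW to
    offload (proxy vs. redirect, or serve locally), and the cluster-level
    policy chooses WHERE (the target node in the edge cluster)."""
    action = node_policy(req, node_loads["self"])   # "serve" | "proxy" | "redirect"
    if action == "serve":
        return ("serve", "self")
    target = cluster_policy(req, node_loads)        # pick a peer node
    if action == "proxy":
        # forward the request to the target node and relay its response
        return ("proxy", target)
    # otherwise answer the client with a redirection response (e.g. HTTP 307)
    return ("redirect", target)

# Illustrative stand-in policies: serve under light load, proxy under
# moderate load, redirect under heavy load; target = least-loaded peer.
node_policy = lambda req, load: ("serve" if load < 0.7
                                 else "proxy" if load < 0.9 else "redirect")
cluster_policy = lambda req, loads: min((n for n in loads if n != "self"),
                                        key=loads.get)

print(handle_request(ClientRequest("c1", "video"), node_policy, cluster_policy,
                     {"self": 0.95, "edge-2": 0.4, "edge-3": 0.8}))
# → ('redirect', 'edge-2')
```

The split mirrors the unit structure of FIG. 13: the redirect/proxy decision and the target selection are independent policies that can be trained and updated separately.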
The computer program 450 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium. Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs. A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above. Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium. In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above. Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
11943285

DETAILED DESCRIPTION

The present invention generally relates to computing devices and, more particularly, to methods and systems for metering computing resources in cloud computing environments. In embodiments, cloud computing resources are decoupled and monetized for exchange or trade, and a precise metering system is provided that tracks and quantifies use of the cloud computing resources using standard units. In embodiments, standard units are a common unit of measure (e.g., joules) used by a plurality of different cloud computing providers to quantify usage of computing resources. As described herein, aspects of the invention include a method and system for transforming cloud computing resources to a general equivalent for precise metering and flexible resource exchange. Aspects of the invention also include a method and system for providing precise metering and billing without enforcing virtual machine models. Aspects of the invention also include a method and system for optimizing resource exchange and trading in decentralized cloud computing. Aspects of the invention also include a method and system for providing a high level of abstraction of cloud resources and decoupling the cloud resources for flexible resource allocation and reservation. Different cloud computing providers conventionally use different metering systems and rules and different billing methods, which makes it difficult for cloud computing users to exchange or trade cloud computing resources that are hosted by different cloud computing providers. This problem may be particularly acute in the case of edge computing. Additionally, because of the different metering systems and rules used by different cloud computing providers, it may be difficult for cloud computing users to understand the different metering rules when they want to use cloud computing resources from different cloud computing providers.
Furthermore, when cloud computing users have cloud computing resources that are released from a workload for a short period of time, they may keep the cloud computing resources unused, which is an inefficient use of the cloud computing resources and money. Additionally, conventional metering and billing systems used by cloud computing providers may be imprecise. Accordingly, in certain cases, these systems may fail to capture certain cloud computing resource usage. In other cases, cloud computing users may be overbilled for cloud computing resource usage. Embodiments address the above-mentioned problems associated with conventional systems used by cloud computing providers for metering and billing for usage of cloud computing resources. Accordingly, embodiments improve the functioning of a computer by providing methods and systems for more efficient metering and billing for usage of cloud computing resources. In particular, embodiments improve software by providing a method and system for transforming cloud computing resources to a general equivalent for precise metering and flexible resource exchange (e.g., exchanging unused cloud computing resources). Furthermore, embodiments improve software by providing a method and system for precise metering and billing without enforcing virtual machine models. Embodiments also improve software by providing a method and system for optimizing resource exchange and trading in decentralized cloud computing. Embodiments also improve software by providing a method and system for providing a high level of abstraction of cloud resources and decoupling the cloud resources for flexible resource allocation and reservation. Additionally, implementations of the invention use techniques that are, by definition, rooted in computer technology (e.g., cloud computing, edge computing, computing/processing resources, bandwidth resources, and cloud applications). 
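The core idea of transforming heterogeneous cloud resources into a general equivalent can be illustrated with a short sketch. The disclosure proposes a common unit (e.g., joules) but does not fix concrete conversion factors; the `JOULES_PER_UNIT` table below is a placeholder assumption for illustration only, as are the function and variable names.

```python
# Hypothetical conversion factors from provider-native billing units to
# joules. These numbers are illustrative assumptions, not from the patent.
JOULES_PER_UNIT = {
    "cpu_core_seconds": 15.0,   # energy equivalent per core-second of compute
    "gb_ram_seconds":    0.4,   # energy equivalent per GB-second of memory held
    "gb_transferred":  720.0,   # energy equivalent per GB moved over the network
}

def to_standard_units(usage: dict) -> float:
    """Transform a provider-specific usage record into the general
    equivalent, so resources metered differently by different providers
    become directly comparable and exchangeable."""
    return sum(JOULES_PER_UNIT[res] * amount for res, amount in usage.items())

# Two usage records that look incomparable in native units...
provider_a = {"cpu_core_seconds": 3600, "gb_ram_seconds": 7200}
provider_b = {"gb_transferred": 80}

# ...become comparable quantities of the same standard unit.
print(to_standard_units(provider_a))  # ≈ 56880.0 joules
print(to_standard_units(provider_b))  # ≈ 57600.0 joules
```

Once both records are expressed in the same standard unit, unused capacity from one provider can be exchanged or traded against capacity from another without either party needing to understand the other provider's metering rules.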
The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. 
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. 
Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. 
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). 
Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now to FIG. 1, a schematic of an example of a cloud computing node is shown. Cloud computing node 10 is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations.
Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server 12 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown in FIG. 1, computer system/server 12 in cloud computing node 10 is shown in the form of a general-purpose computing device. The components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16. Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media. System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 18 by one or more data media interfaces. As will be further depicted and described below, memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention. Program/utility 40, having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data.
Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. Referring now to FIG. 2, illustrative cloud computing environment 50 is depicted. As shown, cloud computing environment 50 comprises one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N may communicate. Nodes 10 may communicate with one another.
They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 54A-N shown in FIG. 2 are intended to be illustrative only and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now to FIG. 3, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 2) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 3 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68. Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75. In one example, management layer 80 may provide the functions described below.
Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Cloud computing resource metering/billing 82 provides metering as cloud computing resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and mobile desktop 96. Referring back to FIG. 1, the program/utility 40 may include one or more program modules 42 that generally carry out the functions and/or methodologies of embodiments of the invention as described herein (e.g., such as the functionality provided by cloud computing resource metering/billing 82). Specifically, the program modules 42 may transform cloud computing resources to a general equivalent for precise metering and flexible resource exchange. Other functionalities of the program modules 42 are described further herein such that the program modules 42 are not limited to the functions described above.
Moreover, it is noted that some of the modules 42 can be implemented within the infrastructure shown in FIGS. 1-3. For example, the modules 42 may be representative of a cloud computing resource metering and billing program module 410 as shown in FIG. 4. FIG. 4 depicts an illustrative environment 400 in accordance with aspects of the invention. As shown, the environment 400 comprises a plurality of cloud computing nodes 10-1, 10-2, . . . , 10-n and a plurality of user computing devices 430-1, 430-2, . . . , 430-m which are in communication via a computer network 440. In embodiments, the computer network 440 is any suitable network including any combination of a LAN, WAN, or the Internet. In embodiments, the plurality of cloud computing nodes 10-1, 10-2, . . . , 10-n and the plurality of user computing devices 430-1, 430-2, . . . , 430-m are physically collocated, or, more typically, are situated in separate physical locations. The quantity of devices and/or networks in the environment 400 is not limited to what is shown in FIG. 4. In practice, the environment 400 may include additional devices and/or networks; fewer devices and/or networks; different devices and/or networks; or differently arranged devices and/or networks than illustrated in FIG. 4. Also, in some implementations, one or more of the devices of the environment 400 may perform one or more functions described as being performed by another one or more of the devices of the environment 400. In embodiments, each of the cloud computing nodes 10-1, 10-2, . . . , 10-n may be implemented as hardware and/or software using components such as mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; networks and networking components 66; virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75 shown in FIG. 3. In embodiments, each of the cloud computing nodes 10-1, 10-2, . . .
,10-nincludes the cloud computing resource metering and billing program module410and cloud computing resources420, which may include storage, computing/processing, bandwidth, and cloud applications, among others. Still referring toFIG.4, in embodiments, each of the user computing devices430-1,430-2, . . . ,430-mis a computer device comprising one or more elements of the computer system/server12(as shown inFIG.1). In particular, each of the user computing devices430-1,430-2, . . . ,430-mis implemented as hardware and/or software using components such as mainframes; RISC (Reduced Instruction Set Computer) architecture based servers; servers; blade servers; storage devices; networks and networking components; virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications and operating systems; and virtual clients. In other embodiments, each of the user computing devices430-1,430-2, . . . ,430-mis a desktop computer, a laptop computer, a mobile device such as a cellular phone, tablet, personal digital assistant (PDA), an edge computing device, or other computing device. FIG.5depicts a flowchart of an exemplary method performed by the cloud computing resource metering and billing program module410of the cloud computing nodes10-1,10-2, . . . ,10-nin accordance with aspects of the invention. The steps of the method are performed in the environment ofFIG.4and are described with reference to the elements shown inFIG.4. At step500, each of the cloud computing nodes10-1,10-2, . . . ,10-nreceives a selection of cloud computing resources to run tasks from a user. In embodiments, the cloud computing resource metering and billing program module410receives a selection of particular cloud computing resources from the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-nto be used to run specified tasks for the user. 
Still referring to step500, this selection of particular cloud computing resources of the cloud computing resources420to be used to run the user's tasks that is received at step500may be received by the cloud computing resource metering and billing program module410in the form of a specification or a service request and may be received directly from one of the user computing devices430-1,430-2, . . . ,430-mor may be received from the management layer80of the cloud computing environment50ofFIG.2(e.g., in response to instructions received at the management layer80from one of the user computing devices430-1,430-2, . . . ,430-m). For example, the selection of particular cloud computing resources of the cloud computing resources420to be used to run the user's tasks may be received by the cloud computing resource metering and billing program module410from the resource provisioning81, the user portal83, the service level management84, and/or the SLA planning and fulfillment85of the management layer80of the cloud computing environment50ofFIG.2. Still referring toFIG.5, at step510, each of the cloud computing nodes10-1,10-2, . . . ,10-nruns the user's tasks on the selected cloud computing resources. In embodiments, the cloud computing resource metering and billing program module410communicates with the management layer80of the cloud computing environment50ofFIG.2to initiate the running of the user's tasks on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-nin accordance with the selection received at step500. For example, in embodiments, the cloud computing resource metering and billing program module410communicates with the resource provisioning81, the service level management84, and/or the SLA planning and fulfillment85to initiate the running of the user's tasks on the selected cloud computing resources. Still referring toFIG.5, at step520, each of the cloud computing nodes10-1,10-2, . . . 
,10-ndetermines an amount of power and an amount of time used to run the user's tasks. In embodiments, the cloud computing resource metering and billing program module410determines the amount of power (e.g., a number of watts) utilized by the selected cloud computing resources of the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-non which the user's tasks are running. Additionally, the cloud computing resource metering and billing program module410determines the amount of time (e.g., a number of seconds) spent by the selected cloud computing resources of the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-nrunning the user's tasks. Still referring to step520, in other embodiments, the cloud computing resource metering and billing program module410uses a software tool or utility such as PowerTOP to estimate the amount of power and the amount of time used to run the user's tasks. For example, a software tool or utility may be used to estimate power usage (e.g., number of watts) of processes running on the cloud computing node10-1,10-2, . . . ,10-nthat are executing the user's tasks as well as an amount of time (e.g., number of seconds) used by the processes to execute the user's tasks. Still referring toFIG.5, at step530, each of the cloud computing nodes10-1,10-2, . . . ,10-ndetermines an electricity cost for the user's tasks as a number of standard units (e.g., joules). In embodiments, the cloud computing resource metering and billing program module410determines the electricity cost as the number of joules consumed while executing the user's tasks on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-n. 
To determine the number of joules consumed, the cloud computing resource metering and billing program module410uses the amount of power utilized by the selected cloud computing resources on which the user's tasks are running and the amount of time spent by the selected cloud computing resources running the user's tasks, as determined at step520. In particular, in embodiments, the cloud computing resource metering and billing program module410multiplies the determined number of watts (from step520) by the determined number of seconds (from step520) in order to determine the number of joules that is the electricity cost. Still referring toFIG.5, at step540, each of the cloud computing nodes10-1,10-2, . . . ,10-nperforms metering/billing based on the number of standard units consumed by the user's tasks. In embodiments, the cloud computing resource metering and billing program module410provides information (e.g., a billing report) to a user (e.g., customer) or administrator about the number of standard units consumed by the user's tasks running on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-nas well as a cost or fee for the use of the selected cloud computing resources determined based on the number of standard units. Still referring to step540, in embodiments, the cloud computing resource metering and billing program module410may provide the information about the number of standard units and the cost or fee for the use of the selected cloud computing resources to the user or administrator directly, for example, via one of the user computing devices430-1,430-2, . . . ,430-m, or indirectly, for example, via the management layer80of the cloud computing environment50ofFIG.2. 
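The electricity-cost computation of steps 520 and 530 reduces to multiplying the measured watts by the measured seconds, since one joule is one watt sustained for one second. A minimal sketch in Python follows; the function name and the sample measurements are illustrative assumptions, not taken from the embodiments:

```python
def electricity_cost_joules(watts: float, seconds: float) -> float:
    """Number of standard units (joules) consumed by a task:
    1 joule = 1 watt sustained for 1 second (steps 520 and 530)."""
    if watts < 0 or seconds < 0:
        raise ValueError("power and time must be non-negative")
    return watts * seconds

# Hypothetical measurements from step 520: a task drawing 250 W for
# 120 s consumes 250 * 120 = 30,000 J of standard units.
print(electricity_cost_joules(250.0, 120.0))  # 30000.0
```

In practice the watt and second figures would come from the measurements or PowerTOP-style estimates described at step520.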
In embodiments, additional billing functions may be performed and payments collected on the basis of the determined number of standard units consumed by the user's tasks running on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-n. In a pay-per-use embodiment, in performing the metering/billing function at step540, the cloud computing resource metering and billing program module410may determine an amount to be charged to the user based on the determined number of standard units consumed by the user's tasks running on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-n. In a pay-in-advance embodiment, the user may create and maintain a prepaid account by purchasing a predetermined number of standard units in advance, and the cloud computing resource metering and billing program module410may debit the user's prepaid account on the basis of the determined number of standard units consumed by the user's tasks running on the selected cloud computing resources of the cloud computing resources420of the cloud computing node10-1,10-2, . . . ,10-n. The flow then returns to step500, and each of the cloud computing nodes10-1,10-2, . . . ,10-nagain receives, from a user, a selection of cloud computing resources on which to run tasks. FIG.6depicts a flowchart of an exemplary method performed by the cloud computing resource metering and billing program module410of the cloud computing nodes10-1,10-2, . . . ,10-nin accordance with aspects of the invention. The steps of the method are performed in the environment ofFIG.4and are described with reference to the elements shown inFIG.4. At step600, the cloud computing nodes10-1,10-2, . . . ,10-nreceive payment from a user for a specified number of standard units of cloud computing resources from each of a plurality of different cloud computing providers.
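The pay-in-advance embodiment described above amounts to maintaining a prepaid balance of standard units that each metered task debits. A minimal sketch, with a hypothetical class name and sample figures:

```python
class PrepaidAccount:
    """Sketch of the pay-in-advance embodiment: the user purchases a
    predetermined number of standard units up front, and the metering
    module debits the balance as tasks consume units. The class name
    and figures are illustrative, not part of the embodiments."""

    def __init__(self, purchased_units: float):
        self.balance = purchased_units

    def debit(self, units_consumed: float) -> float:
        """Debit the units a task consumed; refuse an overdraft."""
        if units_consumed > self.balance:
            raise RuntimeError("insufficient prepaid standard units")
        self.balance -= units_consumed
        return self.balance

account = PrepaidAccount(50_000.0)  # user prepays 50,000 J
account.debit(30_000.0)             # a task consumed 30,000 J
print(account.balance)              # 20000.0
```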
In embodiments, the cloud computing resource metering and billing program module410running on the cloud computing nodes10-1,10-2, . . . ,10-nreceives information about at least one payment from the user for a specified number of standard units of cloud computing resources420from each of the plurality of different cloud computing providers. In embodiments, the cloud computing resource metering and billing program module410determines the number of standard units in accordance with the method ofFIG.5, as described herein. In an example, the cloud computing node10-1is associated with a first cloud computing provider, the cloud computing node10-2is associated with a second cloud computing provider, and the cloud computing node10-nis associated with a nth cloud computing provider. The payment received at step600includes payment for a first quantity of standard units of cloud computing resources420on the cloud computing node10-1associated with the first cloud computing provider, payment for a second quantity of standard units of cloud computing resources420on the cloud computing node10-2associated with the second cloud computing provider, and payment for an nth quantity of standard units of cloud computing resources420on the cloud computing node10-nassociated with the nth cloud computing provider. At step610, the cloud computing nodes10-1,10-2, . . . ,10-nrun the user's tasks on specified cloud computing resources from each of the plurality of different cloud computing providers. In embodiments, the cloud computing resource metering and billing program module410receives from the user information about specified cloud computing resources of the cloud computing resources420on the cloud computing nodes10-1,10-2, . . . ,10-nof the first to nth cloud computing providers, respectively, on which to run the user's tasks. 
This information may be received by the cloud computing resource metering and billing program module410in the form of a specification or a service request and may be received directly from one of the user computing devices430-1,430-2, . . . ,430-mor may be received from the management layer80of the cloud computing environment50ofFIG.2(e.g., in response to instructions received at the management layer80from one of the user computing devices430-1,430-2, . . . ,430-m). For example, the selection of particular cloud computing resources of the cloud computing resources420to be used to run the user's tasks may be received by the cloud computing resource metering and billing program module410from the resource provisioning81, the user portal83, the service level management84, and/or the SLA planning and fulfillment85of the management layer80of the cloud computing environment50ofFIG.2. Still referring to step610, the cloud computing resource metering and billing program module410communicates with the management layer80of the cloud computing environment50ofFIG.2to initiate the running of the user's tasks on the selected cloud computing resources of the cloud computing resources420on the cloud computing nodes10-1,10-2, . . . ,10-nof the first to nth cloud computing providers. For example, in embodiments, the cloud computing resource metering and billing program module410communicates with the resource provisioning81, the service level management84, and/or the SLA planning and fulfillment85to initiate the running of the user's tasks on the selected cloud computing resources. Still referring toFIG.6, at step620, the cloud computing nodes10-1,10-2, . . . ,10-nreceive a request from the user to change the cloud computing resources on which the user's tasks are run. In embodiments, the cloud computing resource metering and billing program module410receives a change request from the user, either directly from one of the user computing devices430-1,430-2, . . . 
,430-mor via the management layer80of the cloud computing environment50ofFIG.2(e.g., in response to instructions received at the management layer80from one of the user computing devices430-1,430-2, . . . ,430-m). For example, the change request may be received by the cloud computing resource metering and billing program module410from the resource provisioning81, the user portal83, the service level management84, and/or the SLA planning and fulfillment85of the management layer80of the cloud computing environment50ofFIG.2. In an example, users may request to change the cloud computing resources on which the user's tasks are run when a required resource is not available or is overloaded on one provider. Still referring toFIG.6, at step630, the cloud computing nodes10-1,10-2, . . . ,10-nreallocate the standard units to different cloud computing resources based on the user's request. In particular, the cloud computing resource metering and billing program module410communicates with the management layer80of the cloud computing environment50ofFIG.2to reallocate the cloud computing resources420on the cloud computing nodes10-1,10-2, . . . ,10-nof the first to nth cloud computing providers that are used to run the user's tasks based on the change request received at step620. For example, in embodiments, the cloud computing resource metering and billing program module410communicates with the resource provisioning81, the service level management84, and/or the SLA planning and fulfillment85to reallocate the cloud computing resources420based on the change request received at step620. FIGS.7A and7Billustrate exemplary allocations of cloud computing resources420-1and420-2on cloud computing node700-1of a first cloud computing provider and cloud computing resources420-3and420-4on cloud computing node700-2of a second cloud computing provider in accordance with the method ofFIG.6. 
In the example illustrated inFIG.7A, in response to the cloud computing resource metering and billing program module410receiving payment from a user for 1500 standard units of cloud computing resources on the cloud computing node700-1of the first cloud computing provider, the cloud computing resource metering and billing program module410causes 1000 standard units to be allocated to running the user's tasks on cloud computing resource420-1and 500 standard units to be allocated to running the user's tasks on cloud computing resource420-2. Additionally, in response to the cloud computing resource metering and billing program module410receiving payment from the user for 1200 standard units of cloud computing resources on the cloud computing node700-2of the second cloud computing provider, the cloud computing resource metering and billing program module410causes 1200 standard units to be allocated to running the user's tasks on cloud computing resource420-3and 0 standard units to be allocated to running the user's tasks on cloud computing resource420-4. In response to the cloud computing resource metering and billing program module410receiving a request from the user to change the cloud computing resources on which the user's tasks are run, as illustrated inFIG.7B, the cloud computing resource metering and billing program module410reallocates the standard units such that 1200 standard units are allocated to running the user's tasks on cloud computing resource420-1and 300 standard units are allocated to running the user's tasks on cloud computing resource420-2. 
Additionally, in response to receiving the change request from the user, the cloud computing resource metering and billing program module410reallocates the standard units such that 500 standard units are allocated to running the user's tasks on cloud computing resource420-3and 700 standard units are allocated to running the user's tasks on cloud computing resource420-4. FIG.8depicts a flowchart of an exemplary method performed by the cloud computing resource metering and billing program module410of the cloud computing nodes10-1,10-2, . . . ,10-nin accordance with aspects of the invention. The steps of the method are performed in the environment ofFIG.4and are described with reference to the elements shown inFIG.4. At step800, the cloud computing node10-1,10-2, . . . ,10-nreceives a selection of cloud computing resources on which tasks are to be run from each of a plurality of users. In embodiments, the cloud computing resource metering and billing program module410receives from each of the plurality of users a selection of particular cloud computing resources of the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-non which to run the user's tasks. This information may be received by the cloud computing resource metering and billing program module410in the form of a specification or a service request and may be received directly from one of the user computing devices430-1,430-2, . . . ,430-mor may be received from the management layer80of the cloud computing environment50ofFIG.2(e.g., in response to instructions received at the management layer80from one of the user computing devices430-1,430-2, . . . ,430-m).
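The reallocation of FIGS.7A and7Bcan be sketched as follows. The guard that each provider's new allocation still sums to the number of standard units the user paid for is an assumption consistent with step630; all names are illustrative:

```python
def reallocate(paid_units: dict, change_request: dict) -> dict:
    """Apply a change request (step 630): the new per-resource
    allocation must still sum to the standard units paid for at
    each provider (an assumed consistency check)."""
    for provider, resources in change_request.items():
        if sum(resources.values()) != paid_units[provider]:
            raise ValueError(f"allocation for {provider} does not match paid units")
    return change_request

paid = {"provider1": 1500, "provider2": 1200}
# FIG. 7A: initial allocation of the purchased standard units
allocation = {"provider1": {"420-1": 1000, "420-2": 500},
              "provider2": {"420-3": 1200, "420-4": 0}}
# FIG. 7B: allocation after the user's change request
allocation = reallocate(paid, {"provider1": {"420-1": 1200, "420-2": 300},
                               "provider2": {"420-3": 500, "420-4": 700}})
print(allocation["provider1"])  # {'420-1': 1200, '420-2': 300}
```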
For example, the selection of particular cloud computing resources of the cloud computing resources420to be used to run the user's tasks may be received by the cloud computing resource metering and billing program module410from the resource provisioning81, the user portal83, the service level management84, and/or the SLA planning and fulfillment85of the management layer80of the cloud computing environment50ofFIG.2. Still referring toFIG.8, at step810, the cloud computing node10-1,10-2, . . . ,10-nruns the tasks of each of the users on the selected cloud computing resources. In embodiments, the cloud computing resource metering and billing program module410communicates with the management layer80of the cloud computing environment50ofFIG.2to initiate the running of the tasks of each of the users on the selected cloud computing resources (from step800) of the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-n. For example, in embodiments, the cloud computing resource metering and billing program module410communicates with the resource provisioning81, the service level management84, and/or the SLA planning and fulfillment85to initiate the running of the task of each of the users on the selected cloud computing resources. Still referring toFIG.8, at step820, the cloud computing node10-1,10-2, . . . ,10-nreceives a request to exchange cloud computing resources between the users. In embodiments, the cloud computing resource metering and billing program module410receives a change request from the users, either directly from one of the user computing devices430-1,430-2, . . . ,430-mor via the management layer80of the cloud computing environment50ofFIG.2(e.g., in response to instructions received at the management layer80from one of the user computing devices430-1,430-2, . . . ,430-m). 
For example, the change request may be received by the cloud computing resource metering and billing program module410from the resource provisioning81, the user portal83, the service level management84, and/or the SLA planning and fulfillment85of the management layer80of the cloud computing environment50ofFIG.2. In embodiments, the change request received at step820is a request to exchange or trade an equal number of standard units of cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-nbetween the users. In embodiments, the cloud computing resource metering and billing program module410determines the number of standard units in accordance with the method ofFIG.5, as described herein. Still referring toFIG.8, at step830, the cloud computing node10-1,10-2, . . . ,10-nreallocates the cloud computing resources420between the users based on the request. In embodiments, in response to receiving the request to exchange cloud computing resources at step820, the cloud computing resource metering and billing program module410causes the cloud computing node10-1,10-2, . . . ,10-nto reallocate the standard units of the cloud computing resources420between the users based on the request. In particular, the cloud computing resource metering and billing program module410communicates with the management layer80of the cloud computing environment50ofFIG.2to reallocate between the users the cloud computing resources420on the cloud computing node10-1,10-2, . . . ,10-nthat are used to run the tasks based on the change request received at step820. For example, in embodiments, the cloud computing resource metering and billing program module410communicates with the resource provisioning81, the service level management84, and/or the SLA planning and fulfillment85to reallocate the cloud computing resources420based on the change request received at step820.
FIGS.9A and9Billustrate exemplary allocations of cloud computing resources420-1,420-2,420-3, and420-4on cloud computing node900in accordance with the method of FIG.8. In the example illustrated inFIG.9A, in response to the cloud computing resource metering and billing program module410receiving a selection of cloud computing resources on which tasks are to be run from each of a plurality of users, the cloud computing resource metering and billing program module410causes 1000 standard units of cloud computing resource420-1and 500 standard units of cloud computing resource420-2to be allocated to running the first user's tasks and 700 standard units of cloud computing resource420-3and 500 standard units of cloud computing resource420-4to be allocated to running the second user's tasks. In response to the cloud computing resource metering and billing program module410receiving a request from the users to exchange the cloud computing resources between the users, as illustrated inFIG.9B, the cloud computing resource metering and billing program module410reallocates the standard units such that 1000 standard units of cloud computing resource420-1and 500 standard units of cloud computing resource420-4are allocated to running the first user's tasks and 500 standard units of cloud computing resource420-2and 700 standard units of cloud computing resource420-3are allocated to running the second user's tasks. FIG.10depicts a flowchart of an exemplary method in accordance with aspects of the invention. The steps of the method are performed in the environment ofFIG.4and are described with reference to the elements shown inFIG.4. At step1000, an edge computing device is used to collect data. In embodiments, the edge computing device is one of the user computing devices430-1,430-2, . . . ,430-m(ofFIG.4). In an example, the edge computing device may be a payment terminal or an Internet of things (IoT) device. 
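The equal-unit exchange of FIGS.9A and9Bcan be sketched as follows; step820requires that the traded amounts of standard units be equal, and the function and user names here are illustrative:

```python
def exchange(allocations: dict, user_a: str, res_a: str,
             user_b: str, res_b: str) -> None:
    """Trade two resources between users only if the standard-unit
    amounts are equal, as the step-820 change request requires."""
    units_a = allocations[user_a].pop(res_a)
    units_b = allocations[user_b].pop(res_b)
    if units_a != units_b:
        # restore the original allocations and refuse the trade
        allocations[user_a][res_a] = units_a
        allocations[user_b][res_b] = units_b
        raise ValueError("exchange must involve an equal number of standard units")
    allocations[user_a][res_b] = units_b
    allocations[user_b][res_a] = units_a

# FIG. 9A: initial allocations for the two users
allocs = {"user1": {"420-1": 1000, "420-2": 500},
          "user2": {"420-3": 700, "420-4": 500}}
# FIG. 9B: 500 units of 420-2 are traded for 500 units of 420-4
exchange(allocs, "user1", "420-2", "user2", "420-4")
print(allocs["user1"])  # {'420-1': 1000, '420-4': 500}
```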
The edge computing device may collect a large amount of data to be stored and/or may collect the data in a situation in which the cloud computing node10-1,10-2, . . . ,10-n(ofFIG.4) is inaccessible (e.g., due to limitations of the edge computing device). At step1010, the edge computing device transmits the collected data to a nearby computer. In embodiments, the nearby computer is another of the user computing devices430-1,430-2, . . . ,430-m(ofFIG.4) that is different from the edge computing device. The edge computing device may transmit the collected data to the nearby computer via the computer network440(ofFIG.4) or through another communication mechanism (e.g., Bluetooth, Wi-Fi, etc.). At step1020, the nearby computer transmits the collected data to a cloud computing node. In embodiments, the nearby computer transmits the collected data from the edge computing device received at step1010to the cloud computing node10-1,10-2, . . . ,10-nvia the computer network440(ofFIG.4) or through another communication mechanism (e.g., Bluetooth, Wi-Fi, etc.). At step1030, payment is made for the use of the nearby computer based on a number of standard units consumed. In embodiments, the cloud computing resource metering and billing program module410running on the cloud computing node10-1,10-2, . . . ,10-ndetermines the number of standard units consumed by the nearby computer in receiving the data from the edge computing device at step1010and transmitting the data to the cloud computing node10-1,10-2, . . . ,10-nat step1020. In particular, the cloud computing resource metering and billing program module410determines the number of standard units consumed as described herein with respect to steps520and530ofFIG.5. The cloud computing resource metering and billing program module410then bills an owner of the edge computing device based on the determined number of standard units, as described herein with respect to step540ofFIG.5.
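The relay metering of steps 1010 through 1030 applies the same watt-by-second computation to the two phases of the nearby computer's work (receiving and forwarding). A minimal sketch; the function name and the wattage/time figures are illustrative assumptions:

```python
def relay_cost_joules(receive_watts: float, receive_seconds: float,
                      transmit_watts: float, transmit_seconds: float) -> float:
    """Standard units consumed by the nearby computer while receiving
    data from the edge device (step 1010) and forwarding it to the
    cloud node (step 1020); this total is what the edge device's
    owner is billed for at step 1030."""
    return (receive_watts * receive_seconds
            + transmit_watts * transmit_seconds)

# Hypothetical figures: receiving at 5 W for 60 s and forwarding at
# 8 W for 45 s consumes 300 + 360 = 660 J.
print(relay_cost_joules(5, 60, 8, 45))  # 660
```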
The owner of the edge computing device then makes payment for the standard units consumed to the owner of the nearby computer. Accordingly, it is understood from the foregoing description that embodiments of the invention provide a method of monetizing computing resources in which users pay for use of the computing resources based on standard units. Additionally, in embodiments, users may use a number of standard units of any resource that is equivalent to a paid number of standard units of another resource, thereby simplifying the use of cloud computing resources. Additionally, in embodiments, users may change cloud configurations according to a number of standard units for which they have paid or contracted, thereby providing for flexibility in the use of cloud computing resources. Additionally, in embodiments, users may pay for cloud computing resources for edge computing devices as needed, thereby optimizing the user's use of cloud computing resources. In embodiments, a service provider could offer to perform the processes described herein. In this case, the service provider can create, maintain, deploy, support, etc., the computer infrastructure that performs the process steps of the invention for one or more customers. These customers may be, for example, any business that uses cloud computing technology. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties. In still additional embodiments, the invention provides a computer-implemented method, via a network. In this case, a computer infrastructure, such as computer system/server12(FIG.1), can be provided and one or more systems for performing the processes of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computer infrastructure. 
To this extent, the deployment of a system can comprise one or more of: (1) installing program code on a computing device, such as computer system/server12(as shown inFIG.1), from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computer infrastructure to perform the processes of the invention. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
11943286
DETAILED DESCRIPTION OF THE EMBODIMENTS
The following describes the embodiments with reference to the accompanying drawings. The embodiments may be applied to various communication systems, for example, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunication system (UMTS), a 5th generation (5G) system, or a new radio (NR) system. A terminal device in the embodiments may also be referred to as user equipment (UE), a mobile station (MS), a mobile terminal (MT), an access terminal, a subscriber unit, a subscriber station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, a user apparatus, or the like. The terminal device may be a device that provides voice/data connectivity for a user, for example, a handheld device or a vehicle-mounted device having a wireless connection function. Examples of current terminals include a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a hand-held device or a computing device that has a wireless communication function or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, and a terminal device in a future evolved public land mobile network (PLMN).
This is not limited in the embodiments. By way of example, and not limitation, in the embodiments, the terminal device may alternatively be a wearable device. The wearable device may also be referred to as a wearable intelligent device, and is a general term of wearable devices, such as glasses, gloves, watches, clothes, and shoes, that are developed by applying wearable technologies to intelligent designs of daily wear. The wearable device is a portable device that is directly worn on a body or integrated into clothes or an accessory of a user. The wearable device is not only a hardware device, but also implements a powerful function through software support, data exchange, and cloud interaction. Generalized wearable intelligent devices include full-featured and large-size devices that can implement all or some functions without depending on smartphones, such as smart watches or smart glasses, and devices that focus on only one type of application function and need to work with other devices such as smartphones, such as various smart bands or smart jewelry for monitoring physical signs. In addition, in the embodiments, the terminal device may alternatively be a terminal device in an internet of things (IoT) system. IoT is an important part of future development of information technologies. A feature of the IoT is connecting a thing to a network by using a communication technology, to implement an intelligent network for interconnection between a person and a machine or between things. A network device in the embodiments may be a device configured to communicate with a terminal device. The network device may be a transmission reception point (TRP), an evolved NodeB (eNB) in an LTE system, a home base station (HNB), a baseband unit (BBU), or a wireless controller in a cloud radio access network (CRAN) scenario. 
Alternatively, the access network device may be a relay station, an access point, a vehicle-mounted device, a wearable device, an access network device in a 5G network, an access network device in a future evolved public land mobile network (PLMN), or the like; or may be an access point (AP) in a WLAN or a gNB in a new radio (NR) system. This is not limited in the embodiments. In a network structure, the access network device may include a central unit (CU) node, a distributed unit (DU) node, a RAN device including a CU node and a DU node, or a RAN device including a CU control plane (CU-CP) node, a CU user plane (CU-UP) node, and a DU node. In the embodiments, the terminal device or the network device includes a hardware layer, an operating system layer running above the hardware layer, and an application layer running above the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a main memory). An operating system may be any one or more computer operating systems that implement service processing through a process, for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In addition, a structure of an execution body of a method provided in the embodiments is not particularly limited, provided that a program that records code of the method provided in the embodiments can be run to perform communication according to the method. For example, the execution body of the method provided in the embodiments may be a terminal device or a network device, or a functional module that can invoke and execute the program in the terminal device or the network device. 
In addition, aspects or features may be implemented as a method, an apparatus, or a product using standard programming and/or engineering technologies. The term “product” covers a computer program that can be accessed from any computer-readable component, carrier or medium. For example, the computer-readable medium may include, but is not limited to: a magnetic storage component (for example, a hard disk, a floppy disk, or a magnetic tape), an optical disc (for example, a compact disc (CD) or a digital versatile disc (DVD)), a smart card, and a flash memory component (for example, an erasable programmable read-only memory (EPROM), a card, a stick, or a key drive). In addition, various storage media described in the embodiments may represent one or more devices and/or other machine-readable media that are configured to store information. The term “machine-readable media” may include, but is not limited to, a wireless channel, and various other media that can store, include, and/or carry instructions and/or data. To facilitate understanding of the embodiments, related terms are first described. 1. Multi-Access Edge Computing (MEC) Running at a network edge, the MEC may provide big data services, internet of things services, and data services, and open application programming interfaces (APIs) for third parties to quickly deploy new services. An MEC server usually has a high computing capability and is suitable for analyzing and processing a large amount of data. The MEC server includes three parts: a bottom-layer infrastructure, a middle-layer MEC platform, and an upper-layer application. 2. Application An application is deployed as one or more application instances, and the application instances are copies of the same application. Each application instance is deployed on the edge node that requires it. At any given moment, one application instance on an edge node provides the service for the application on a terminal device. 
Usually, a shorter distance between the application instance and a location of the terminal device indicates a lower packet transmission delay between the application and the terminal device and higher quality of service. When the terminal device is at a location1, an application instance1of an edge node1located at the location1is an optimal application instance of the terminal device. When the terminal device moves to a location2, an application instance2of an edge node2located at the location2is the optimal application instance of the terminal device. A physical entity of the edge node is an MEC platform. For ease of description, the following uses the MEC platform as an example for description. With reference toFIG.1, the following describes in detail a system architecture applicable to an embodiment of this application. The system architecture100shown inFIG.1includes a terminal device110, an MEC network element120, a first MEC platform130, and a second MEC platform140. A source application instance is deployed on the first MEC platform130, and a target application instance is deployed on the second MEC platform140. When the terminal device110is located near a location of the first MEC platform130, the terminal device110may access the source application instance through the first MEC platform130. When the terminal device moves and is located near a location of the second MEC platform140, the terminal device110may access the target application instance through the second MEC platform140. The source application instance and the target application instance herein are different copies of a same application. The MEC network element120is configured to manage and deploy an MEC platform. It should be understood that the terminal device110may access the foregoing application instance through a core network element, and the core network element is not shown inFIG.1. 
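The distance-based selection of an optimal application instance described above (location 1 → application instance 1, location 2 → application instance 2) can be sketched as follows. This is a minimal illustration only: the one-dimensional locations and the names are hypothetical and not part of any MEC specification.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    location: float       # position along a one-dimensional path (illustrative)
    app_instance: str     # application instance hosted on this edge node

def optimal_instance(terminal_location, nodes):
    """Pick the application instance on the edge node closest to the
    terminal device, i.e. the one with the lowest expected packet delay."""
    nearest = min(nodes, key=lambda n: abs(n.location - terminal_location))
    return nearest.app_instance

nodes = [EdgeNode("edge node 1", 0.0, "application instance 1"),
         EdgeNode("edge node 2", 10.0, "application instance 2")]
```

When the terminal device moves from near location 0.0 to near location 10.0, the optimal instance changes from instance 1 to instance 2, which is exactly the situation that triggers the user context migration described in the rest of this section.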
In a possible implementation, the core network element may include a centralized control plane network function network element and a distributed user plane network function network element. When the terminal device moves between different areas, the control plane network function network element may select an appropriate user plane network function network element. Therefore, user plane network function network element handover occurs, and an application instance corresponding to a user plane network function network element also needs to be switched. The application instance may be switched from the source application instance to the target application instance. The European Telecommunications Standards Institute (ETSI) defines an MEC reference point architecture in the specification ETSI GS MEC 003. The following describes in detail the reference point architecture with reference toFIG.2. FIG.2shows another system architecture200according to an embodiment. As shown inFIG.2, the system architecture200may include two parts: an MEC system level and an MEC host level. The MEC system level is responsible for global control of an MEC system, and may include the following network elements: 1. MEC host: The MEC host includes an MEC platform, a virtualization infrastructure, and MEC applications. The virtualization infrastructure provides virtualized computing, storage, and network resources for the MEC applications, and may provide persistent storage-related information and time-related information for the MEC applications. The virtualization infrastructure includes a data forwarding plane that executes a forwarding rule for data received from the MEC platform, and routes traffic among various applications, services, and networks. The MEC platform is the core of the MEC reference point architecture. 
The MEC platform receives a traffic forwarding rule from an MEC platform manager, the MEC application, or an MEC service, and delivers an instruction to the forwarding plane based on the forwarding rule. The MEC platform provides a service registration function, a service discovery function, a common service function, and the like, and may provide modules such as a traffic offload function (TOF), a radio network information service (RNIS), a communication service, and a service registry, to provide services for upper-layer application instances through these modules. The MEC platform further supports configuration of a local domain name system (DNS) proxy/server and may redirect data traffic to corresponding applications and services. The MEC platform may further communicate with another MEC platform through an Mp3 reference point. In a collaboration mechanism of a distributed MEC system, the Mp3 reference point may be used as a basis for interconnection between different MEC platforms. The MEC applications are virtual machine instances running on the MEC virtualization infrastructure. These applications communicate with the MEC platform through an Mp1 reference point. The Mp1 reference point may further provide additional functions such as identifying application availability and preparing or relocating an application state for a user when MEC handover occurs. 2. MEC orchestrator: The MEC orchestrator is a core function provided by the MEC. The MEC orchestrator may implement overall control of resources and a capability of an MEC network, including all deployed MEC hosts and services, available resources on each host, instantiated MEC applications, and a network topology. When selecting a to-be-accessed target MEC host for a user, the MEC orchestrator may measure a user requirement and available resources of each host and select a most appropriate MEC host for the user. 
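The host selection performed by the MEC orchestrator, weighing the user requirement against the available resources of each host, can be sketched as below. The specification does not fix a selection policy, so the "most spare capacity" rule and the field names here are one hypothetical choice, not the defined algorithm.

```python
def select_mec_host(required_cpu, hosts):
    """Keep only hosts whose available resources satisfy the user
    requirement, then pick the one with the most spare capacity
    (one possible policy; the MEC framework leaves this open)."""
    candidates = [h for h in hosts if h["free_cpu"] >= required_cpu]
    if not candidates:
        return None   # no host can serve the user; selection fails
    return max(candidates, key=lambda h: h["free_cpu"])["name"]

hosts = [{"name": "host-a", "free_cpu": 2},
         {"name": "host-b", "free_cpu": 8}]
```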
If the user needs to perform MEC host handover, the MEC orchestrator triggers a handover program. The MEC orchestrator and an operations support system trigger instantiation and termination of an MEC application through an Mm1 reference point. An Mm4 reference point between the MEC orchestrator and a virtualization infrastructure manager is used to manage virtualized resources and virtual machine images of applications and maintain state information of available resources. 3. MEC platform manager: The MEC platform manager is configured to manage the MEC platform, lifecycles of MEC applications, and MEC application rules and requirements. Management of the lifecycles of the MEC applications includes creating and terminating MEC application programs and providing indication messages of application-related events for the MEC orchestrator. Management of the MEC application rules and requirements includes authentication, traffic rules, DNS configuration, and resolving conflicts. An Mm5 reference point is used between the MEC platform and the MEC platform manager. The reference point may implement platform configuration and traffic filtering rule configuration and is responsible for managing application relocation and supporting application lifecycle programs. Mm2 is a reference point between the operations support system and the MEC platform manager and is responsible for configuration and performance management of the MEC platform. Mm3 is a reference point between the MEC orchestrator and the MEC platform manager and is responsible for supporting lifecycle management of the MEC applications and MEC application-related policies and providing time-related information for available MEC services. 4. Virtualization infrastructure manager: The virtualization infrastructure manager is configured to manage virtualized resources required by MEC applications. Management tasks include allocation and release of virtualized computing, storage, and network resources. 
Software images can also be stored on the virtualization infrastructure manager for quick instantiation of the MEC applications. In addition, the virtualization infrastructure manager is further responsible for collecting information about virtual resources and reporting the information to upper-layer management entities such as the MEC orchestrator and the MEC platform manager through the Mm4 reference point and an Mm6 reference point respectively. 5. Operations support system: From a perspective of the MEC system, the operations support system is the highest-level management entity that supports system operation. The operations support system receives requests for instantiating or terminating MEC applications from a customer-facing service (CFS) portal and a terminal device (for example, UE), and checks integrity and authorization information of application data packets and the requests. Request data packets authenticated and authorized by the operations support system are forwarded to the MEC orchestrator through the Mm1 reference point for further processing. 6. Customer-facing service (CFS) portal: The CFS portal is equivalent to a third-party access point. A developer uses this interface to connect various applications developed by the developer to an MEC system of an operator, and enterprises or individual users can also use this interface to select applications that they are interested in and specify time and places for using the applications. The CFS portal may communicate with the operations support system through an Mx1 reference point. 7. User application lifecycle management (LCM) proxy: The user application LCM proxy is an entity used by an MEC user to request application-related services such as instantiation and termination. 
The entity may implement application relocation between an external cloud and the MEC system and is responsible for authenticating all requests from the external cloud, and then sending the requests to the operations support system and the MEC orchestrator through Mm8 and Mm9 reference points respectively for further processing. It should be noted that the user application lifecycle proxy can be accessed only through a mobile network, and an Mx2 reference point provides a basis for mutual communication between the terminal device and the user application lifecycle proxy. It should be understood that in the system architecture100, the MEC network element120may implement functions of at least one network element in the MEC orchestrator or the MEC platform manager inFIG.2. This is not limited in the embodiments. It should be further understood that the system architecture200includes three different types of reference points. Mp represents a reference point related to an MEC platform application, Mm represents a reference point related to management, and Mx represents a reference point related to an external entity. The foregoing system architecture200applied to this embodiment is merely an example of a network architecture described from a perspective of a reference point architecture, and a network architecture applicable to this embodiment is not limited thereto. Any network architecture that can implement functions of the foregoing network elements is applicable to this embodiment. It should be noted that names of interfaces between the network elements inFIG.2are only examples, and the interfaces may have other names in implementations. This is not limited in this embodiment. It should be noted that names of the network elements (for example, the MEC orchestrator and the MEC platform manager) included inFIG.2are merely examples and constitute no limitation on functions of the network elements. 
In another future network, the foregoing network elements may alternatively have other names. This is not limited in this embodiment. For example, in a 6G network, some or all of the foregoing network elements may use terms in 5G, or may use other names, or the like. This is uniformly described herein. Details are not described below. In addition, it should be understood that names of messages (or signaling) transmitted between the foregoing network elements are merely examples, and do not constitute any limitation on functions of the messages. FIG.3is a schematic flowchart of a communication method300according to an embodiment. The method300may be applied to the system architecture100shown inFIG.1or may be applied to the system architecture200shown inFIG.2. This embodiment is not limited thereto.S310: An MEC network element obtains information about a source application instance of an application accessed by a terminal device and information about a target application instance of the application.S320: The MEC network element sends a first message to a first MEC platform, where the first message is used to request to migrate a user context of the application from the source application instance to the target application instance, the source application instance is deployed on the first MEC platform, and the target application instance is deployed on a second MEC platform. Correspondingly, the first MEC platform receives the first message.S330: The first MEC platform sends a second message to the MEC network element, where the second message indicates a migration state of the user context of the application. Correspondingly, the MEC network element receives the second message from the first MEC platform. The migration state of the user context may include a state indicating migration is started, a state indicating migration is completed, a state indicating migration failed, or the like. 
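The three-step exchange of S310 to S330 can be sketched as a toy model: the MEC network element sends a first message requesting migration, and the first MEC platform answers with a second message carrying the migration state. The class and field names are illustrative, and the sketch assumes the context transfer succeeds.

```python
from enum import Enum

class MigrationState(Enum):
    """Possible migration states of the user context (from S330)."""
    STARTED = "started"
    COMPLETED = "completed"
    FAILED = "failed"

class FirstMecPlatform:
    def handle_first_message(self, first_message):
        # S320: the first message requests migration of the user context
        # from the source application instance to the target one.
        # A real platform would transfer the context here; this sketch
        # assumes the transfer succeeds.
        # S330: reply with a second message indicating the migration state.
        return {"type": "second", "state": MigrationState.COMPLETED}

class MecNetworkElement:
    def migrate_user_context(self, platform, source_instance, target_instance):
        # S310: information about both instances has already been obtained.
        first_message = {"type": "first",
                         "source": source_instance,
                         "target": target_instance}
        second_message = platform.handle_first_message(first_message)  # S320/S330
        return second_message["state"]
```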
Due to movement of the terminal device, a data network is switched, and an MEC platform corresponding to the data network is also switched from the first MEC platform to the second MEC platform. In this embodiment of this application, before moving, the terminal device accesses the application by using the source application instance of the first MEC platform. After moving, when the migration state of the user context indicates that migration is completed, the terminal device may access the application by using the target application instance of the second MEC platform. According to the communication method in this embodiment, the MEC network element obtains the information about the source application instance and the information about the target application instance, and sends the first message to the first MEC platform to request to migrate the user context of the application, so that application instance-based migration of the user context of the application can be implemented in an MEC scenario. This helps ensure service continuity and therefore user experience. Optionally, the MEC network element in this embodiment is an MEC orchestrator or an MEC platform manager. Alternatively, the MEC network element in this embodiment can implement functions of at least one network element in the MEC orchestrator or the MEC platform manager. In a possible implementation, before the MEC network element obtains the information about the source application instance and the information about the target application instance, the method further includes: The MEC network element receives a third message from a core network control plane network element, where the third message is used to notify that a user plane path of the terminal device has changed. 
That an MEC network element obtains information about a source application instance and information about a target application instance includes: The MEC network element determines the information about the source application instance and the information about the target application instance based on the third message, where the source application instance is located at a location corresponding to an access identifier of a source data network, and the target application instance is located at a location corresponding to an access identifier of a target data network. Optionally, the third message includes at least one of the following information: an identifier of the terminal device, an identifier of the application, the access identifier of the source data network, and the access identifier of the target data network. The MEC network element may determine, based on the identifier of the application accessed by the terminal device and the access identifier of the source data network, the information about the source application instance corresponding to the application in the source data network; the MEC network element may determine, based on the identifier of the application accessed by the terminal device and the access identifier of the target data network, the information about the target application instance corresponding to the application in the target data network. In an optional embodiment, the method further includes: The MEC network element sends a fourth message to the core network control plane network element based on the second message, where the fourth message is a positive acknowledgment or a negative acknowledgment for the third message. For example, after completing migration of the user context between the source application instance and the target application instance, the MEC network element may send the fourth message to the core network control plane network element. 
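The determination of the source and target application instances from the identifiers carried in the third message can be sketched as a simple lookup. The deployment table and all field names below are hypothetical; the patent only states that the (application identifier, data network access identifier) pair determines the instance.

```python
# Hypothetical deployment table held by the MEC network element:
# (application identifier, data network access identifier) -> instance id.
DEPLOYMENTS = {
    ("app-1", "source-dnai"): "source-instance",
    ("app-1", "target-dnai"): "target-instance",
}

def resolve_instances(third_message):
    """Determine the source and target application instances from the
    identifiers carried in the third message."""
    app_id = third_message["app_id"]
    source = DEPLOYMENTS[(app_id, third_message["source_dnai"])]
    target = DEPLOYMENTS[(app_id, third_message["target_dnai"])]
    return source, target

third_message = {"ue_id": "ue-1", "app_id": "app-1",
                 "source_dnai": "source-dnai", "target_dnai": "target-dnai"}
```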
If the fourth message is the positive acknowledgment, it indicates that the MEC network element accepts the migration of the user context of the application, and the migration state of the user context of the application is the state indicating migration is completed. In this way, the core network control plane network element may activate a new user plane path, so that the terminal device accesses the application by using the target application instance corresponding to the new user plane path. If the fourth message is the negative acknowledgment, it indicates that the MEC network element rejects the migration of the user context of the application, or the migration state of the user context of the application is the state indicating migration failed. In this case, the terminal device still accesses the application by using the source application instance corresponding to the old user plane path. In an optional embodiment, before the MEC network element receives the third message from the core network control plane network element, the method further includes: The MEC network element sends a fifth message to the core network control plane network element, where the fifth message is used to subscribe to a user plane path change event of the terminal device. In this way, the core network control plane network element sends the third message to the MEC network element when the user plane path of the terminal device changes. It should be understood that a change of the user plane path means that the terminal device has moved and the user plane path of the terminal device needs to be changed. However, in this case, the new user plane path is not activated, and the terminal device still accesses the application by using the source application instance. 
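The mapping from the migration state reported in the second message to the acknowledgment returned in the fourth message can be sketched as below. This is one reading of the behavior described above, with illustrative string values rather than defined message formats.

```python
def build_fourth_message(migration_state):
    """Return the acknowledgment for the third message based on the
    migration state reported in the second message."""
    if migration_state == "completed":
        # Positive acknowledgment: the core network control plane may
        # activate the new user plane path.
        return "positive acknowledgment"
    # Negative acknowledgment: the terminal device keeps accessing the
    # application through the source application instance.
    return "negative acknowledgment"
```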
In a possible implementation, the MEC network element includes an MEC orchestrator and an MEC platform manager, and actions performed by the MEC network element may include: The MEC orchestrator obtains the information about the source application instance of the application accessed by the terminal device and the information about the target application instance of the application. The MEC orchestrator sends the first message to the first MEC platform through the MEC platform manager. The MEC orchestrator receives the second message from the first MEC platform through the MEC platform manager. The MEC orchestrator may determine the information about the source application instance and the information about the target application instance based on the third message sent by the core network control plane network element. “The MEC orchestrator sends the first message to the first MEC platform through the MEC platform manager” means that the MEC orchestrator first sends the first message to the MEC platform manager, and then the MEC platform manager forwards the first message to the first MEC platform. Similarly, “the MEC orchestrator receives the second message from the first MEC platform through the MEC platform manager” means that the first MEC platform first sends the second message to the MEC platform manager, and then the MEC platform manager forwards the second message to the MEC orchestrator. Optionally, the method further includes: The MEC orchestrator sends, to the MEC platform manager, a message used to subscribe to the migration state of the user context of the application. Correspondingly, the MEC platform manager receives the message. The MEC platform manager sends a first acknowledgment message to the MEC orchestrator, to indicate that the message used to subscribe to the migration state of the user context of the application is received. Correspondingly, the MEC orchestrator receives the first acknowledgment message. 
Optionally, the method further includes: The MEC platform manager sends, to the first MEC platform, a message used to subscribe to the migration state of the user context of the application. Correspondingly, the first MEC platform receives the message. The first MEC platform sends a second acknowledgment message to the MEC platform manager, to indicate that the message used to subscribe to the migration state of the user context of the application is received. Correspondingly, the MEC platform manager receives the second acknowledgment message. Optionally, that the MEC orchestrator receives the second message from the first MEC platform through the MEC platform manager includes: The first MEC platform sends, to the MEC platform manager, a message indicating the migration state of the user context. Correspondingly, the MEC platform manager receives the message. The MEC platform manager sends a third acknowledgment message to the first MEC platform, to indicate that the message indicating the migration state of the user context is received. Correspondingly, the first MEC platform receives the third acknowledgment message. The MEC platform manager sends, to the MEC orchestrator, a message indicating the migration state of the user context. Correspondingly, the MEC orchestrator receives the message. The MEC orchestrator sends a fourth acknowledgment message to the MEC platform manager, to indicate that the message indicating the migration state of the user context is received. Correspondingly, the MEC platform manager receives the fourth acknowledgment message. FIG.4is a schematic flowchart of a communication method400according to an embodiment. The method400may be applied to the system architecture100shown inFIG.1or may be applied to the system architecture200shown inFIG.2. 
This embodiment is not limited thereto.S410: A core network control plane network element sends a third message to a first multi-access edge computing (MEC) platform, where the third message is used to notify that a user plane path of a terminal device has changed. Correspondingly, the first MEC platform receives the third message.S420: The first MEC platform obtains, based on the third message, information about a source application instance of an application accessed by the terminal device and information about a target application instance of the application.S430: The first MEC platform migrates a user context of the application from the source application instance to the target application instance, where the source application instance is deployed on the first MEC platform, and the target application instance is deployed on a second MEC platform. According to the communication method in this embodiment, the MEC platform obtains the information about the source application instance and the information about the target application instance, and then performs migration of the user context of the application, so that application instance-based migration of the user context of the application can be implemented in an MEC scenario. This helps ensure service continuity and therefore user experience. In an optional embodiment, that the first MEC platform obtains, based on the third message, information about a source application instance of an application accessed by the terminal device and information about a target application instance of the application includes: The first MEC platform sends a request message to an MEC orchestrator based on the third message, where the request message is used to request the information about the target application instance. The first MEC platform receives the information that is about the target application instance and that is sent by the MEC orchestrator. 
In an optional embodiment, the method further includes: The first MEC platform sends a fourth message to the core network control plane network element, where the fourth message is a positive acknowledgment or a negative acknowledgment for the third message. In an optional embodiment, the third message includes at least one of the following information: an identifier of the terminal device, an identifier of the application, an access identifier of a source data network, and an access identifier of a target data network. In an optional embodiment, before the first MEC platform receives the third message from the core network control plane network element, the method further includes: The first MEC platform sends a fifth message to the core network control plane network element, where the fifth message is used to subscribe to a user plane path change event of the terminal device. For related details of the method400, refer to the method300. Details are not described herein again. FIG.5is a schematic flowchart of another communication method500according to an embodiment. The method500may be applied to the system architecture100shown inFIG.1or may be applied to the system architecture200shown inFIG.2. This embodiment is not limited thereto.S501: A terminal device accesses an application by using a source application instance. The source application instance is deployed on a first MEC platform.S502: An MEC orchestrator sends a subscription message to a core network element, to subscribe to a user plane path change event. Correspondingly, the core network element receives the subscription message. The subscription message may be the fifth message in the method300. 
Optionally, the core network element may be a core network control plane network element, for example, a network exposure function (NEF) or a policy control function (PCF).S503: Trigger user plane path switching when the terminal device moves.S504: Because the MEC orchestrator subscribes to the user plane path change event, the core network element sends a notification message to the MEC orchestrator, to notify, to the MEC orchestrator, that a user plane path of the terminal device has changed. Correspondingly, the MEC orchestrator receives the notification message. The notification message may be the third message in the method300. The notification message may also be referred to as a user plane path change event notification or may have another name. Optionally, the notification message may include an identifier of the terminal device, an identifier of the application accessed by the terminal device, an access identifier of a source data network, and an access identifier of a target data network. For example, the identifier of the terminal device may be an internet protocol (IP) address of the terminal device, a generic public subscription identifier (GPSI) of the terminal device, or the like. This is not limited in this embodiment.S505: The MEC orchestrator obtains, based on the notification message, information about the source application instance of the application accessed by the terminal device and information about a target application instance. A data network may identify a deployment location of an application instance. The MEC orchestrator may determine, based on the identifier of the application accessed by the terminal device and the access identifier of the source data network, the information about the source application instance corresponding to the application in the source data network. 
The MEC orchestrator may determine, based on the identifier of the application accessed by the terminal device and the access identifier of the target data network, the information about the target application instance corresponding to the application in the target data network. Optionally, the information about the source application instance includes an identifier of the source application instance, an IP address of the source application instance, and a port number of the source application instance; the information about the target application instance includes an identifier of the target application instance, an IP address of the target application instance, and a port number of the target application instance.S506: The MEC orchestrator sends a migration request message to the first MEC platform through an MEC platform manager, to request to migrate user context information of the application. Correspondingly, the first MEC platform receives the migration request message through the MEC platform manager. Optionally, the migration request message includes the identifier of the terminal device, the information about the source application instance, and the information about the target application instance.S507: The first MEC platform sends a migration request message to a second MEC platform, to request to migrate the user context information of the application. Correspondingly, the second MEC platform receives the migration request message. The migration request message may be the first message in the method300. Optionally, the migration request message includes the identifier of the terminal device and the information about the target application instance. Optionally, the first MEC platform may directly send the migration request message to the second MEC platform or may send the migration request message to the second MEC platform through a dedicated application mobility service entity. This is not limited in this embodiment. 
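The instance information listed above (identifier, IP address, and port number) and the migration request of S506 can be sketched as simple message payloads. The source states which fields the information includes; the concrete field names and values below are illustrative.

```python
from dataclasses import dataclass, asdict

@dataclass
class InstanceInfo:
    """Information about an application instance (S505): its identifier,
    IP address, and port number."""
    instance_id: str
    ip_address: str
    port: int

def build_migration_request(ue_id, source, target):
    """Migration request message of S506: identifier of the terminal
    device plus source and target instance information."""
    return {"ue_id": ue_id, "source": asdict(source), "target": asdict(target)}

request = build_migration_request(
    "ue-1",
    InstanceInfo("source-instance", "10.0.0.1", 8080),
    InstanceInfo("target-instance", "10.0.1.1", 8080),
)
```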
Optionally, in S508, the second MEC platform sends an acknowledgment message to the first MEC platform, to indicate that the migration request message is received and that migration of a user context of the application is accepted. Correspondingly, the first MEC platform receives the acknowledgment message.

S509: Perform migration of the user context of the application between the first MEC platform and the second MEC platform.

S510: The second MEC platform sends a migration state notification message to the first MEC platform, to notify a migration state of the user context. The migration state of the user context may include a state indicating that migration is started, a state indicating that migration is completed, a state indicating that migration failed, or the like. Correspondingly, the first MEC platform receives the migration state notification message. The migration state notification message may be the second message in the method 300. Optionally, the second MEC platform may send the migration state notification message to the first MEC platform directly or through the dedicated application mobility service entity. This is not limited in this embodiment.

Optionally, in S511, the first MEC platform sends an acknowledgment message to the second MEC platform, to indicate that the migration state notification message is received. Correspondingly, the second MEC platform receives the acknowledgment message.

S512: The first MEC platform sends the migration state notification message to the MEC orchestrator through the MEC platform manager. In this embodiment, the migration state notification message indicates that the migration of the user context is completed. Correspondingly, the MEC orchestrator receives the migration state notification message through the MEC platform manager. The migration state notification message may also be referred to as a user context migration acknowledgment message or may have another name.

S513: The MEC orchestrator sends an acknowledgment message to the core network element, where the acknowledgment message is used to respond to the notification message in S504. In this embodiment, the acknowledgment message is a positive acknowledgment, indicating that the migration of the context is completed. Correspondingly, the core network element receives the acknowledgment message. The acknowledgment message may be the fourth message in the method 300. The acknowledgment message may also be referred to as a user plane path change event acknowledgment message or may have another name.

S514: The core network element activates a new user plane path.

S515: The terminal device accesses the application by using the target application instance on the second MEC platform, that is, accesses the application through the new user plane path.

According to the communication method in this embodiment, application instance-based migration of the user context of the application can be implemented in an MEC scenario. This helps ensure service continuity, and therefore user experience.

In the method 500, the MEC orchestrator, the MEC platform manager, and the first MEC platform exchange messages in a synchronous manner: the first MEC platform sends the user context migration acknowledgment message (that is, the seventh acknowledgment message) to the MEC orchestrator through the MEC platform manager only after detecting that the migration of the user context is completed. The following method 600 shows an example of another message exchange manner. It should be understood that the method 600 shows only a message exchange process between an MEC orchestrator, an MEC platform manager, and a first MEC platform.
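The synchronous reporting of method 500 (S510 to S513) amounts to a gate: the first MEC platform forwards a report to the orchestrator only once the migration state indicates completion. A minimal sketch, with in-process callbacks standing in for the real reference points (all names are illustrative assumptions):

```python
# Sketch of the migration-state signalling in S509-S513. The second MEC
# platform notifies the first MEC platform of each state change; only a
# completed migration is reported onward to the MEC orchestrator.
from enum import Enum

class MigrationState(Enum):
    STARTED = "started"
    COMPLETED = "completed"
    FAILED = "failed"

events = []  # records (receiver, state) pairs for illustration

def first_platform_on_state(state):
    # S512 analogue: report to the orchestrator (through the MEC
    # platform manager) only after migration is completed.
    if state is MigrationState.COMPLETED:
        events.append(("mec_orchestrator", state))

def second_platform_notify(state):
    # S510 analogue: the second MEC platform notifies the first one.
    events.append(("first_mec_platform", state))
    first_platform_on_state(state)

second_platform_notify(MigrationState.STARTED)
second_platform_notify(MigrationState.COMPLETED)
# events now shows that only COMPLETED reached the orchestrator.
```

Method 600 below replaces this completion gate with a subscription-based exchange, in which state notifications flow to the orchestrator as they occur.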
Steps of other network elements (such as a terminal device and a core network element) are the same as those in the method 500, and details are not described again.

FIG. 6 is a schematic flowchart of another communication method 600 according to an embodiment. The method 600 may be applied to the system architecture 100 shown in FIG. 1 or to the system architecture 200 shown in FIG. 2. This embodiment is not limited thereto.

S601: The MEC orchestrator sends a subscription message to the MEC platform manager, to subscribe to a migration state of a user context. Correspondingly, the MEC platform manager receives the subscription message. The subscription message may be the sixth message in the method 300. Optionally, the subscription message includes an identifier of the terminal device, an identifier of a source application instance, and an identifier of a target application instance.

S602: The MEC platform manager sends an acknowledgment message to the MEC orchestrator, to indicate that the subscription message is received. Correspondingly, the MEC orchestrator receives the acknowledgment message. The acknowledgment message may be the first acknowledgment message in the method 300.

S603: The MEC platform manager sends a subscription message to the first MEC platform, to subscribe to the migration state of the user context. Correspondingly, the first MEC platform receives the subscription message. The subscription message may be the seventh message in the method 300.

S604: The first MEC platform sends an acknowledgment message to the MEC platform manager, to indicate that the subscription message is received. Correspondingly, the MEC platform manager receives the acknowledgment message. The acknowledgment message may be the second acknowledgment message in the method 300.

S605: The first MEC platform sends a migration state notification message to the MEC platform manager, where the migration state notification message is used to notify the migration state of the user context. The migration state of the user context may include a state indicating migration is started, a state indicating migration is completed, a state indicating migration failed, or the like. Correspondingly, the MEC platform manager receives the migration state notification message.

S606: The MEC platform manager sends an acknowledgment message to the first MEC platform, to indicate that the migration state notification message is received. Correspondingly, the first MEC platform receives the acknowledgment message. The acknowledgment message may be the third acknowledgment message in the method 300.

S607: The MEC platform manager sends the migration state notification message to the MEC orchestrator. Correspondingly, the MEC orchestrator receives the migration state notification message.

S608: The MEC orchestrator sends an acknowledgment message to the MEC platform manager, to indicate that the migration state notification message is received. Correspondingly, the MEC platform manager receives the acknowledgment message. The acknowledgment message may be the fourth acknowledgment message in the method 300.

It should be understood that S605 to S608 may replace S512 in the method 500. In addition, S601 to S604 may be performed in any time period before S510. This is not limited in this embodiment.

FIG. 7 is a schematic flowchart of another communication method 700 according to an embodiment. The method 700 may be applied to the system architecture 100 shown in FIG. 1 or to the system architecture 200 shown in FIG. 2. This embodiment is not limited thereto.

S701: A terminal device accesses an application by using a source application instance.
The source application instance is deployed on a first MEC platform.

S702: The source application instance sends a subscription message to the first MEC platform, to subscribe to a mobility notification event of the terminal device. Correspondingly, the first MEC platform receives the subscription message. The subscription message may carry an identifier of the terminal device.

S703: The first MEC platform sends a subscription message to a core network element, to subscribe to a user plane path change event. Correspondingly, the core network element receives the subscription message. The subscription message may be the fifth message in the method 300. Optionally, the core network element may be a core network control plane network element, for example, a network exposure function (NEF) or a policy control function (PCF).

S704: Trigger user plane path switching when the terminal device moves.

S705: Because the first MEC platform subscribes to the user plane path change event, the core network element sends a notification message to the first MEC platform, to notify the first MEC platform that a user plane path of the terminal device has changed. Correspondingly, the first MEC platform receives the notification message. The notification message may be the third message in the method 300. The notification message may also be referred to as a user plane path change event notification or may have another name. Optionally, the notification message may include the identifier of the terminal device, an identifier of the application accessed by the terminal device, an access identifier of a source data network, and an access identifier of a target data network. For example, the identifier of the terminal device may be an internet protocol (IP) address of the terminal device, a generic public subscription identifier (GPSI) of the terminal device, or the like. This is not limited in this embodiment.

S706: The first MEC platform sends a migration request message to an MEC orchestrator through an MEC platform manager, where the migration request message may include at least one of the following information: the identifier of the terminal device, the identifier of the application, and the access identifier of the target data network. Correspondingly, the MEC orchestrator receives the migration request message.

S707: The MEC orchestrator determines, based on the migration request message, information about a target application instance of the application accessed by the terminal device. A data network may identify a deployment location of an application instance. The MEC orchestrator may determine, based on the identifier of the application accessed by the terminal device and the access identifier of the target data network, the information about the target application instance corresponding to the application in the target data network. Optionally, the information about the target application instance includes an identifier of the target application instance, an IP address of the target application instance, and a port number of the target application instance.

S708: The MEC orchestrator sends a migration acknowledgment message to the first MEC platform through the MEC platform manager. Correspondingly, the first MEC platform receives the migration acknowledgment message through the MEC platform manager. Optionally, the migration acknowledgment message includes the identifier of the terminal device and the information about the target application instance.

S709: The first MEC platform sends a migration request message to a second MEC platform, to request to migrate user context information of the application. Correspondingly, the second MEC platform receives the migration request message. The migration request message may be the first message in the method 300.
Optionally, the migration request message includes the identifier of the terminal device and the information about the target application instance. Optionally, the first MEC platform may send the migration request message to the second MEC platform directly or through a dedicated application mobility service entity. This is not limited in this embodiment.

Optionally, in S710, the second MEC platform sends an acknowledgment message to the first MEC platform, to indicate that the migration request message is received and that migration of a user context of the application is accepted. Correspondingly, the first MEC platform receives the acknowledgment message.

S711: Perform migration of the user context of the application between the first MEC platform and the second MEC platform.

S712: The second MEC platform sends a migration state notification message to the first MEC platform, to notify a migration state of the user context. The migration state of the user context may include a state indicating that migration is started, a state indicating that migration is completed, a state indicating that migration failed, or the like. Correspondingly, the first MEC platform receives the migration state notification message. The migration state notification message may be the second message in the method 300. Optionally, the second MEC platform may send the migration state notification message to the first MEC platform directly or through the dedicated application mobility service entity. This is not limited in this embodiment.

Optionally, in S713, the first MEC platform sends an acknowledgment message to the second MEC platform, to indicate that the migration state notification message is received. Correspondingly, the second MEC platform receives the acknowledgment message.

S714: The first MEC platform sends an acknowledgment message to the core network element, where the acknowledgment message is used to respond to the notification message in S705. In this embodiment, the acknowledgment message is a positive acknowledgment, indicating that the migration of the user context is completed. Correspondingly, the core network element receives the acknowledgment message. The acknowledgment message may be the fourth message in the method 300. The acknowledgment message may also be referred to as a user plane path change event acknowledgment message or may have another name.

S715: The core network element activates a new user plane path.

S716: The terminal device accesses the application by using the target application instance on the second MEC platform, that is, accesses the application through the new user plane path.

According to the communication method in this embodiment, application instance-based migration of the user context of the application can be implemented in an MEC scenario. This helps ensure service continuity, and therefore user experience.

It should be understood that in the foregoing embodiments, it is assumed that the target application instance already exists on the second MEC platform, and the user context of the application is migrated from the source application instance to the target application instance. If there is no target application instance on the second MEC platform, the target application instance needs to be created on the second MEC platform before the user context of the application is migrated. Details are not described herein.

It should be understood that sequence numbers of the foregoing processes do not mean an execution sequence.
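The note above, that the target application instance must be created before migration when none exists, can be sketched as a get-or-create step. The data structures and names here are illustrative assumptions only:

```python
# Illustrative get-or-create step: before migrating the user context, ensure
# a target application instance exists on the second MEC platform.
def get_or_create_target_instance(platform_instances, app_id, create):
    """Return the target application instance for app_id on the second
    MEC platform, creating it first when none exists."""
    inst = platform_instances.get(app_id)
    if inst is None:
        inst = create(app_id)          # instantiate before migration starts
        platform_instances[app_id] = inst
    return inst

second_platform = {}  # no instance of "app-1" deployed yet
inst = get_or_create_target_instance(
    second_platform, "app-1",
    create=lambda a: {"app": a, "instance_id": "inst-new"})

# A second call finds the existing instance instead of creating another.
same = get_or_create_target_instance(
    second_platform, "app-1",
    create=lambda a: {"app": a, "instance_id": "other"})
```

Only after this step would the migration of the user context (S509 or S711) proceed against the returned instance.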
The execution sequence of the processes should be determined based on the functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments.

The foregoing describes in detail the communication methods in the embodiments with reference to FIG. 1 to FIG. 7. The following describes in detail communication apparatuses in the embodiments with reference to FIG. 8 and FIG. 9.

FIG. 8 shows a communication apparatus 800 according to an embodiment. The apparatus 800 includes a processing unit 810 and a transceiver unit 820.

In a possible implementation, the apparatus 800 may be the foregoing MEC network element, or a chip in the MEC network element, and is configured to perform the procedures or steps corresponding to the MEC network element in the method 300. The processing unit 810 is configured to obtain information about a source application instance of an application accessed by a terminal device and information about a target application instance of the application. The transceiver unit 820 is configured to send a first message to a first MEC platform, where the first message is used to request to migrate a user context of the application from the source application instance to the target application instance, the source application instance is deployed on the first MEC platform, and the target application instance is deployed on a second MEC platform. The transceiver unit 820 is further configured to receive a second message from the first MEC platform, where the second message indicates a migration state of the user context of the application.

Optionally, the transceiver unit 820 is further configured to receive a third message from a core network control plane network element, where the third message is used to notify that a user plane path of the terminal device has changed; and the processing unit 810 may be configured to determine the information about the source application instance and the information about the target application instance based on the third message, where the source application instance is located at a location corresponding to an access identifier of a source data network, and the target application instance is located at a location corresponding to an access identifier of a target data network.

Optionally, the transceiver unit 820 is further configured to send a fourth message to the core network control plane network element based on the second message, where the fourth message is a positive acknowledgment or a negative acknowledgment for the third message.

Optionally, the third message includes at least one of the following information: an identifier of the terminal device, an identifier of the application, the access identifier of the source data network, and the access identifier of the target data network.

Optionally, the transceiver unit 820 is further configured to send a fifth message to the core network control plane network element, where the fifth message is used to subscribe to a user plane path change event of the terminal device.

Optionally, the apparatus is an MEC orchestrator or an MEC platform manager.

Optionally, the apparatus includes an MEC orchestrator and an MEC platform manager, and the MEC orchestrator includes a first transceiver unit, configured to send the first message to the first MEC platform through the MEC platform manager, and to receive the second message from the first MEC platform through the MEC platform manager.

Optionally, the first transceiver unit is further configured to send a sixth message to the MEC platform manager, where the sixth message is used to subscribe to the migration state of the user context of the application; the MEC platform manager includes a second transceiver unit, configured to receive the sixth message and send a first acknowledgment message to the MEC orchestrator; and the first transceiver unit is further configured to receive the first acknowledgment message.

Optionally, the second transceiver unit of the MEC platform manager is configured to: send a seventh message to the first MEC platform, where the seventh message is used to subscribe to the migration state of the user context of the application; and receive a second acknowledgment message from the first MEC platform.

Optionally, the second transceiver unit of the MEC platform manager is configured to receive the second message from the first MEC platform, send a third acknowledgment message to the first MEC platform, and send the second message to the MEC orchestrator; the first transceiver unit of the MEC orchestrator is configured to receive the second message and send a fourth acknowledgment message to the MEC platform manager; and the second transceiver unit is further configured to receive the fourth acknowledgment message.

It should be understood that the transceiver unit, the first transceiver unit, and the second transceiver unit may be three independent units or may be an integrated unit. For example, the first transceiver unit may be configured to perform the receiving and sending actions corresponding to the transceiver unit; in this case, the transceiver unit and the first transceiver unit are integrated into one unit. Alternatively, the second transceiver unit may be configured to perform the receiving and sending actions corresponding to the transceiver unit; in this case, the transceiver unit and the second transceiver unit are integrated into one unit.
This is not limited in this embodiment.

In another possible implementation, the apparatus 800 may be the foregoing first MEC platform, or a chip in the first MEC platform, and is configured to perform the procedures or steps corresponding to the first MEC platform in the method 400. The transceiver unit 820 is configured to receive a third message from a core network control plane network element, where the third message is used to notify that a user plane path of a terminal device has changed. The processing unit 810 is configured to: obtain, based on the third message, information about a source application instance of an application accessed by the terminal device and information about a target application instance of the application; and migrate a user context of the application from the source application instance to the target application instance, where the source application instance is deployed on the apparatus, and the target application instance is deployed on a second MEC platform.

Optionally, the processing unit 810 may be configured to: send a request message to an MEC orchestrator based on the third message, where the request message is used to request the information about the target application instance; and receive the information about the target application instance that is sent by the MEC orchestrator.

Optionally, the transceiver unit 820 is further configured to send a fourth message to the core network control plane network element, where the fourth message is a positive acknowledgment or a negative acknowledgment for the third message.

Optionally, the third message includes at least one of the following information: an identifier of the terminal device, an identifier of the application, an access identifier of a source data network, and an access identifier of a target data network.

Optionally, the transceiver unit 820 is further configured to send a fifth message to the core network control plane network element, where the fifth message is used to subscribe to a user plane path change event of the terminal device.

It should be understood that the apparatus 800 herein is presented in a form of functional units. The term "unit" herein may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (for example, a shared processor, a dedicated processor, or a group processor) configured to execute one or more software or firmware programs, a memory, a merged logic circuit, and/or another appropriate component that supports the described function.

In an optional example, a person skilled in the art may understand that the apparatus 800 may be the MEC network element or the first MEC platform in the foregoing method embodiments, and the apparatus 800 may be configured to perform the procedures and/or steps corresponding to the MEC network element or the first MEC platform in the foregoing method embodiments. To avoid repetition, details are not described herein again.

The apparatus 800 in the foregoing solutions has the functions for implementing the corresponding steps performed by the MEC network element or the first MEC platform in the foregoing methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or the software includes one or more modules corresponding to the foregoing functions. For example, the transceiver unit may include a receiving unit and a sending unit. The sending unit may be replaced with a transmitter, the receiving unit may be replaced with a receiver, and another unit, for example, the processing unit, may be replaced with a processor, to separately perform the receiving and sending operations and the related processing operations in the method embodiments.
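As a rough illustration of this functional split, the processing and transceiver roles of apparatus 800 in the MEC network element case might be sketched as follows. The class and message names are assumptions for illustration, not the embodiment's definitions:

```python
# Illustrative sketch: apparatus 800 as a processing role plus a separate
# transceiver role. The transceiver sends the first message and receives
# the second message; the processing logic assembles the request.
class TransceiverUnit:
    def __init__(self):
        self.sent = []                       # (destination, message) pairs
    def send(self, destination, message):
        self.sent.append((destination, message))
    def receive(self, message):
        return message                       # stands in for a real receive path

class CommunicationApparatus:                # apparatus 800 (MEC network element case)
    def __init__(self):
        self.transceiver = TransceiverUnit()
    def migrate_user_context(self, source_info, target_info):
        # Processing-unit role: build the first message from the obtained
        # source/target application-instance information.
        first_message = {
            "request": "migrate_user_context",
            "source": source_info,
            "target": target_info,
        }
        # Transceiver-unit role: send it to the first MEC platform, then
        # receive the second message indicating the migration state.
        self.transceiver.send("first_mec_platform", first_message)
        return self.transceiver.receive({"migration_state": "completed"})

apparatus = CommunicationApparatus()
state = apparatus.migrate_user_context(
    {"instance_id": "inst-A"}, {"instance_id": "inst-B"})
```

Swapping the `TransceiverUnit` for a transmitter/receiver pair, or folding both roles into one object, mirrors the unit-integration options described above.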
In this embodiment, the apparatus in FIG. 8 may alternatively be a chip or a chip system, for example, a system on chip (SoC). Correspondingly, the transceiver unit (the receiving unit and the sending unit) may be a transceiver circuit of the chip. This is not limited herein.

FIG. 9 shows another communication apparatus 900 according to an embodiment. The apparatus 900 includes a processor 910, a transceiver 920, and a memory 930. The processor 910, the transceiver 920, and the memory 930 communicate with each other through an internal connection path. The memory 930 is configured to store instructions. The processor 910 is configured to execute the instructions stored in the memory 930, to control the transceiver 920 to send and/or receive signals.

In a possible implementation, the apparatus 900 is configured to perform the procedures or steps corresponding to the MEC network element in the method 300. The processor 910 is configured to obtain information about a source application instance of an application accessed by a terminal device and information about a target application instance of the application. The transceiver 920 is configured to send a first message to a first MEC platform, where the first message is used to request to migrate a user context of the application from the source application instance to the target application instance, the source application instance is deployed on the first MEC platform, and the target application instance is deployed on a second MEC platform. The transceiver 920 is further configured to receive a second message from the first MEC platform, where the second message indicates a migration state of the user context of the application.

In another possible implementation, the apparatus 900 is configured to perform the procedures or steps corresponding to the first MEC platform in the method 400. The transceiver 920 is configured to receive a third message from a core network control plane network element, where the third message is used to notify that a user plane path of a terminal device has changed. The processor 910 is configured to: obtain, based on the third message, information about a source application instance of an application accessed by the terminal device and information about a target application instance of the application; and migrate a user context of the application from the source application instance to the target application instance, where the source application instance is deployed on the apparatus, and the target application instance is deployed on a second MEC platform.

It should be understood that the apparatus 900 may be the MEC network element or the first MEC platform in the foregoing method embodiments, and may be configured to perform the steps and/or procedures corresponding to the MEC network element or the first MEC platform in the foregoing method embodiments.

Optionally, the memory 930 may include a read-only memory and a random access memory, and provides instructions and data for the processor. A part of the memory may further include a non-volatile random access memory. For example, the memory may further store information about a device type. The processor 910 may be configured to execute the instructions stored in the memory; when the processor 910 executes those instructions, the processor 910 performs the steps and/or procedures corresponding to the MEC network element or the first MEC platform in the foregoing method embodiments.

It should be understood that the foregoing transceiver may include a transmitter and a receiver, and may further include one or more antennas. The memory may be an independent component or may be integrated into the processor.
All or some of the foregoing components may be integrated into a chip for implementation, for example, into a baseband chip. In this embodiment, the transceiver in FIG. 9 may alternatively be a communication interface. This is not limited herein.

In this embodiment, the MEC network element or the first MEC platform may be a physical entity device or a virtual functional network element. This is not limited herein.

In the embodiments, for ease of understanding, a plurality of examples may be used for description. However, these examples are merely examples; this does not mean that they are the optimal implementations of this application.

In the embodiments, for ease of description, names such as a first message and a second message may be used for various messages. However, these messages are merely used as examples to describe the content that needs to be carried or the function that is implemented, and the names of the messages constitute no limitation. For example, the messages may alternatively be a notification message and a response message. These messages may be some fields in other messages, or may represent various service operations.

It should be further understood that the processor in the foregoing apparatus in the embodiments may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The steps of the methods described with reference to the embodiments may be directly performed and completed by a hardware decoding processor, or by a combination of hardware in the decoding processor and a software module. The software module may be located in a mature storage medium in the art, for example, a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the methods in combination with its hardware.

It may be understood that the memory in the embodiments may be a volatile memory, a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example rather than limitation, many forms of RAM may be used, for example, a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DR RAM). It should be noted that the memory of the systems and methods described in the embodiments includes, but is not limited to, these memories and any memory of another appropriate type.

According to the methods, the embodiments may further provide a computer program product. The computer program product includes computer program code.
When the computer program code is run on a computer, the computer is enabled to perform the method corresponding to any network element in any one of the foregoing embodiments. For example, the computer may perform the method corresponding to the MEC network element in the method 300, or the method corresponding to the first MEC platform in the method 400.

According to the methods, the embodiments may further provide a non-transitory computer-readable medium. The non-transitory computer-readable medium stores program code. When the program code is run on a computer, the computer is enabled to perform the method corresponding to any network element in any one of the embodiments shown in FIG. 3 to FIG. 7. For example, the computer may perform the method corresponding to the MEC network element in the method 300, or the method corresponding to the first MEC platform in the method 400.

According to the methods, the embodiments may further provide a system. The system includes one or more network elements in the foregoing method embodiments. For example, the system may include the MEC network element in the method 300. For another example, the system may include the first MEC platform in the method 400.

All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments are completely or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus.

The computer instructions may be stored in a non-transitory computer-readable storage medium, or transmitted from one non-transitory computer-readable storage medium to another. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The non-transitory computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), a semiconductor medium (for example, a solid-state disk (SSD)), or the like.

Terms such as "component", "module", and "system" indicate computer-related entities: hardware, firmware, a combination of hardware and software, software, or software being executed. For example, a component may be, but is not limited to, a process that runs on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer. As shown in the figures, both a computing device and an application that runs on a computing device may be components. One or more components may reside within a process and/or an execution thread, and a component may be located on one computer and/or distributed between two or more computers. In addition, these components may be executed from various computer-readable media that store various data structures.
The components may communicate by using a local and/or remote process and according to, for example, a signal having one or more data packets (for example, data from two components interacting with another component in a local system, a distributed system, and/or across a network such as the internet interacting with other systems by using the signal). "At least one" means one or more, and "a plurality of" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate the following cases: only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items (pieces)" or a similar expression thereof means any combination of these items, including a single item (piece) or any combination of a plurality of items (pieces). For example, at least one of a, b, or c may indicate a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may be singular or plural. A person of ordinary skill in the art may be aware that the various illustrative logical blocks and steps described in the embodiments can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the embodiments.
It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing described system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments, and details are not described herein again. In the several embodiments, it should be understood that the system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments are merely examples. For example, division into the units is merely logical function division, and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located at one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the embodiments. In addition, functional units in the embodiments may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a non-transitory computer-readable storage medium. 
Based on such an understanding, the embodiments, or the part contributing to the current technology, or some of the embodiments may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the method described in the embodiments. The storage medium includes any medium that can store program code, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely implementations of the embodiments, but are not intended to limit the scope of the embodiments. Any variation or replacement readily figured out by a person skilled in the art shall fall within the scope of the embodiments.
11943287

DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION The present invention may be embodied in other specific systems and/or methods. The described embodiments are to be considered in all respects as only illustrative and not restrictive. In particular, the scope of the invention is indicated by the appended claims rather than by the description and figures herein. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope. The embodiments of the present invention propose an architecture for a federation of DLT networks to work together, regardless of the underlying technologies, to guarantee data consistency, wider consensus and enhanced trust, therefore enhancing the performance of the DLT networks. The proposed protocol may include, among others, a scheme for the naming and discovery of every resource in the federation of networks (identities, applications, networks, blocks, etc.); a distributed transport mechanism for the exchange of control data between the different networks of the federation; and the spread and storage of "proofs of history" of the network to protect and validate the integrity of the connected networks. To broadcast the networks' proofs of history used to validate their data integrity, a mechanism based on control blocks is devised. This control block is spread periodically throughout the federation of networks, and it ensures that the information of a network has not been modified and the data is still complete, without revealing any information of the network. This protocol may include schemes to ensure the liveliness of control blocks and to prevent congestion. Each DLT network comprises one or more (usually many) nodes (computing nodes); the computing nodes are electronic devices of any type (for example, servers) including database storage capacity (memory) and processing capacity (processor).
Every network in the federation may implement a stack with the following layers (protocols), as shown in FIG. 1. In other words, each network (actually each node of the network) implements the following functional layers to perform the different required tasks. (This is only a non-limitative example and some of the layers may be optional, that is, not all the layers are mandatory for the implementation of the present invention):

- Application layer: It includes all the services and smart contracts deployed over a specific network.
- Identity layer: Layer to manage all the identities of the components in the federation of networks, such as users, deployed services, blocks, individual networks, nodes, etc. This scheme enables the representation of any component of the DLT (e.g. blockchain) networks. For example, for the identification of the mentioned components, the standard of decentralized identifiers (DIDs) created by the W3C community may be used. In this case, taking into account the different types of components (resources) mentioned above, the identifiers may be implemented in the following form: did:vtn:<networkid>:<resource type>:<resourceid>, where "Network ID" uniquely represents the DLT network; "Resource type" represents the type of resource, for example service (to represent applications), user, node, block, etc.; and "Resource id" is a unique identifier of the resource. This is only an example and any other identification technique may be used.
- Virtual Trust Network (VTN) layer: It offers a layer of aggregation to all the networks in the federation. The aggregation is ensured, for example, through the Discovery Naming Service protocol, which allows any resource (component) in the federation of networks to be uniquely accessed through a single endpoint.
- Consensus layer: It manages the implementation of the data integrity of the networks in the federation.
As will be explained later, every network that wants to participate in this trust-enhancing protocol will have to deploy a process (usually a daemon process) to manage the exchange of control data and control blocks.

- Transport layer: It defines the different interfaces and communication protocols needed to connect the independent networks of the federation.
- Ledger and blocks: They define the storage layer (for example, in a blockchain). These layers will be characteristic, specific and independent for each network.

Now, the consensus layer (which is the most significant layer of the proposed architecture) will be explained in more detail. In a DLT federated network ecosystem (where the present invention can be applied), the objective of the consensus layer is to be able to guarantee the integrity of the data of another federated DLT (for example, blockchain) network (for example, in the event that it does not have a sufficient number of nodes to be able to provide a sufficient level of trust, or in case part of its ledger is altered or lost). Without loss of generality, let us consider three different DLT networks (also called DLT platforms, for example, blockchain platforms), A, B and C, with their different states of the ledger in their own timeline (time n, time n−1), as shown in FIG. 2. If the state of the network A or some of its nodes have been compromised (striped in FIG. 2), the networks B and C should be able to guarantee that the state of A is still consistent on the basis of their own states of the world. All this without revealing any information that has been stored in the network A. For this reason, and in order to achieve this objective of guaranteeing integrity, according to embodiments of the present invention, each DLT network of the federation must issue a control block every certain time, which will be stored in the rest of the federated networks. Every federated network can choose the period to generate the control block.
It can be, for example, a certain pre-established number of transactions (a transaction period) and/or a certain time (a time period) that has elapsed since the last control block was generated. That is, as a non-limitative example, a control block can be generated every 4 milliseconds and/or every three transactions. Here, a transaction in a DLT network is considered to be, for example, an operation used to trigger changes in the DLT network. Generally speaking, the interaction with DLT networks is performed through transactions that trigger changes in the ledger or functions in a smart contract. This parameter (the time or the number of transactions after which a new control block is generated) is called the "pulse", and each federated network will have its own pulse. Furthermore, the parameter should preferably be private and not shared with any federated network. In this way, it is not possible to calculate in a deterministic way what interactions and activities are occurring in the different networks. In the example depicted in FIG. 3, network A will generate a control block, represented in a transaction, every four transactions (transaction period A = 4). This information will be spread to the neighbor networks (B and C) and will be stored in their ledgers. In the same way, network B will generate its control block every six transactions (transaction period B = 6), and it will be stored in the ledger of the neighbor networks (A and C). As shown in the example of FIG. 3, the transmission or reception of control blocks may also be considered transactions, counted by each network in its pulse. For redundancy, the control block is sent by a network to all the rest of the federated networks or to some of them (generally speaking, it will be sent to N networks, N >= 1, where N is a design parameter); however, it must be ensured that it has been stored in a minimum number of them.
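The pulse mechanism just described might be sketched as follows. This is only an illustration: the class name `PulseCounter` and the exact dual transaction/time trigger are assumptions of this sketch, not elements defined by the embodiments.

```python
import time

class PulseCounter:
    """Decides when a network should emit a control block.

    A control block is due every `tx_period` transactions and/or every
    `time_period` seconds since the last control block, whichever comes
    first. Each federated network keeps its own (private) pulse values.
    """

    def __init__(self, tx_period=4, time_period=0.004):
        self.tx_period = tx_period        # e.g. every 4 transactions
        self.time_period = time_period    # e.g. every 4 milliseconds
        self.tx_since_last = 0
        self.last_emit = time.monotonic()

    def on_transaction(self):
        """Register one transaction; return True if a control block is due."""
        self.tx_since_last += 1
        due = (self.tx_since_last >= self.tx_period or
               time.monotonic() - self.last_emit >= self.time_period)
        if due:
            self.tx_since_last = 0
            self.last_emit = time.monotonic()
        return due
```

For example, with `tx_period=4`, every fourth call to `on_transaction()` reports that a control block should be generated and broadcast.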
By chaining the control blocks in the different ledgers, each network gives the integrity of the control blocks much more security, due to the blockchain characteristics. The control block may have different contents. In an embodiment, said control block will contain at least one (preferably all) of the following contents: an identification of the node and the source network (the network that has generated the control block); information about the total number of blocks in the source network until the generation of the control block; the number of control blocks already generated; the ID of the previous control block sent; and a proof of history to validate the data integrity of the source network at a specific moment (specific timestamp). Using the standard defined in the identity layer, a control block may be represented as follows:

did:vtn:<networkid>:blockc:<hash> {
  Origin
  Number of blocks
  Number of control block
  ID of previous control block
  Proof of history
}

where:
- Origin is the node in a determined source network that has generated the control block.
- Number of blocks represents the total number of blocks in the source network.
- Number of control block is the total number of control blocks generated.
- ID of previous control block is, for example, represented with the DID nomenclature previously mentioned.
- Proof of history implements the required information to validate the data integrity of the source network at a specific timestamp (this determined moment).

The generation of the proof of history is critical in ensuring the integrity of data on a network and therefore ensuring its trust level. In an embodiment, sparse Merkle multiproofs are used. That is, in this embodiment, as a base implementation for the proof of origin scheme, the use of the state-of-the-art sparse Merkle multiproof mechanism is proposed (this is only an example, and other proof generation techniques can be used).
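As an illustration only, the control block contents and the DID representation above could be modelled as in the following sketch; the field names, the choice of SHA-256 and the payload layout are assumptions of this sketch, not prescribed by the embodiments.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ControlBlock:
    """Contents of a control block as listed above (names illustrative)."""
    origin: str                   # node that generated the control block
    network_id: str               # ID of the source network
    number_of_blocks: int         # total blocks in the source network so far
    number_of_control_block: int  # total control blocks generated so far
    previous_control_block: str   # DID of the previous control block
    proof_of_history: bytes       # e.g. a sparse Merkle multiproof

    def block_hash(self):
        # Hash over all the fields; the concrete serialization is an assumption.
        payload = "|".join([
            self.origin,
            str(self.number_of_blocks),
            str(self.number_of_control_block),
            self.previous_control_block,
        ]).encode() + self.proof_of_history
        return hashlib.sha256(payload).hexdigest()

    def did(self):
        # did:vtn:<networkid>:blockc:<hash>, following the identity-layer scheme
        return f"did:vtn:{self.network_id}:blockc:{self.block_hash()}"
```

The DID then serves as the unique name under which the control block can be discovered and later requested for validation.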
A multiproof is a group of proofs against the Merkle tree; all the proofs are encapsulated and sent together, and it is possible to recalculate the root of the Merkle tree using these proofs. In this case, to generate the Merkle tree it is necessary to use the transactions generated in a pulse (that is, the transactions generated since the last control block was generated). Also, with the goal of not revealing any content of the transactions generated in the pulse and to ensure consistency with the history of control blocks, in an embodiment the root of the previous tree, that is, the root of the tree generated for the previous control block, is added to the Merkle tree. The root of the Merkle tree will be stored in the ledger of the network that wants to ensure its integrity (the network generating the control block). To generate the multiproofs, the root of the previous tree will be shared as a leaf of the tree, together with some hashes of the different levels of the tree. This is illustrated in FIG. 4. Some of the nodes (leaves/root) of the built Merkle tree will be sent as the multiproofs inside control blocks. In the example shown in FIG. 4, the marked leaves of the tree are going to be the multiproofs that will be sent as the proof of history inside control blocks. This is only an example, and any other number of leaves or any other leaves of the tree can be included in the multiproofs sent. If the network that is sending the proof wants to verify its own history, it will have to recollect all the multiproofs, rebuild the Merkle root and check that the Merkle root is the same. If the root is equal, the history will be complete; if not, the history in the ledger has been modified.
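The chaining of pulse roots described above can be illustrated with a plain (non-sparse) Merkle tree. The embodiments use sparse Merkle multiproofs, so this is only a simplified sketch of how a root commits to the transactions of a pulse plus the previous root, and how tampering with the history changes the recomputed root.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle root of a list of leaf byte strings (simplified sketch)."""
    level = [_h(leaf) for leaf in leaves]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def pulse_root(transactions, previous_root):
    """Root for one pulse: the previous tree's root is added as an extra
    leaf, chaining the history of control blocks as described above."""
    return merkle_root(list(transactions) + [previous_root])

def history_intact(transactions, previous_root, stored_root):
    """Verification: recollect the leaves, rebuild the root and compare."""
    return pulse_root(transactions, previous_root) == stored_root
```

If any transaction of the pulse is altered, the rebuilt root no longer matches the stored one, which signals that the history in the ledger has been modified.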
In the above example, sparse Merkle trees are used in the proof of origin scheme for the validation of data; however, this is only an example, and in other embodiments other cryptographic techniques could be used to offer integrity over the data in the source, for example, ZKProof, simpler hash constructions, authentication primitives or any other cryptographic structures. The mechanism to spread the information (the control blocks) will be implemented in the transport layer of the stack mentioned above. The exchange of control blocks will be implemented by a process (usually a daemon process or service) inside a network node. It will not be necessary for the daemon to be implemented through a smart contract, with all the scalability problems that this involves. As mentioned above, the peers (usually all peers) generate control blocks, and the (daemon) service may be able to collect all of these blocks, even if they are repeated. In this way, a basic level of tolerance is achieved in the event of Byzantine failures in the generation of the control block (as will be explained later). On the other hand, all the networks (network nodes) that receive control blocks must send a confirmation message (ACK) to the source network (see FIG. 5). This network stores these confirmations (for example, in a distributed hash table, DHT) in order to monitor the blocks disseminated in the neighbor networks for further validation. With the proposed method, even if all but one of the nodes of the network are down, the surviving node will be able to guarantee that the network's history has not been altered. For this, the (daemon) service of the node will request from the neighbor networks the verification of the control block (in the above explained example, the sparse Merkle multiproof). The scalability of the network could be affected if the number of control blocks it has to store is too high (it could damage the internal processes of the network).
For this reason, the daemon must monitor the state of its own network to determine to what extent it can accept control blocks from other networks. Thus, if a network does not receive enough ACKs for its transmitted control blocks, it can infer that peer networks in the federation may be quite congested, and it may need to reduce its control block generation rate (e.g., by increasing the period Tc) until peer networks are ready to accept every control block. The exchange of control information between nodes of different networks may be performed using a common p2p transport layer. Blockchain and DLT networks are built over p2p transport protocols (for example, the libp2p family of protocols or any other transport protocol) responsible for the discovery of nodes, the routing of messages throughout the whole network, the discovery of data stored in the network, the publishing of and subscription to messages, and the exchange of control messages for the orchestration of the network. In an embodiment, the present invention scheme is built over a common p2p transport protocol layer enabling the discovery of nodes of different DLT networks and the broadcast of control messages and control blocks to every node participating in the system. These protocols may be implemented over the transport layer of existing protocols such as TCP/UDP and IP. The communication between network A and network B (to transmit/receive control blocks) is made through one or more (tele)communications networks using any known communication technology (wireless or wired communications technology, mobile communications of any type, for example, 2G, 3G, 4G, 5G or any other communications technology). As previously mentioned, the scheme for generation of control blocks in every network is orchestrated using a process (usually a daemon process), which is usually implemented over a common (P2P) transport protocol layer.
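The ACK-driven congestion inference described above might be sketched as follows. The embodiments only state that the generation rate is reduced until peers accept every control block; the multiplicative backoff and recovery factors below are illustrative assumptions.

```python
class ControlBlockScheduler:
    """Adapts the control block generation period Tc to peer congestion.

    If fewer than `kc` ACKs arrive for a broadcast control block, peer
    networks are assumed congested and Tc is increased (i.e. the
    generation rate is reduced); when ACKs flow normally, Tc decays
    back toward its configured minimum.
    """

    def __init__(self, tc_min=4, kc=3, backoff=2.0, recovery=0.9):
        self.tc_min = tc_min      # minimum (desired) transaction period
        self.kc = kc              # minimum ACKs for a valid broadcast
        self.backoff = backoff    # multiplicative increase when congested
        self.recovery = recovery  # multiplicative decay when healthy
        self.tc = float(tc_min)

    def on_broadcast_result(self, acks_received):
        if acks_received < self.kc:
            self.tc *= self.backoff                       # slow down
        else:
            self.tc = max(self.tc_min, self.tc * self.recovery)
        return round(self.tc)
```

With this sketch, repeated broadcasts that fall short of Kc ACKs double the period each time, and once peers answer normally the period drifts back to its minimum.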
In an embodiment, in this control block generation scheme the following concepts/tasks may be distinguished (this is only an exemplary embodiment, and not all of them are mandatory for the implementation of the present invention):

- Detection trust index: Each network will require a different level of trust and integrity of its data according to the specifics of the network and its hosted use cases. In order for networks to adapt the operation of the control block protocol to their desired level of trust, the concept of "detection trust index" is introduced. This parameter is an objective metric to measure the level of trust according to the specific configuration of the proposed protocols; specifically, it is a measure of the probability of loss of the data integrity in a network.
- Control blocks broadcasting: It specifies the scheme of control block broadcasting and the exchange of control blocks with other networks of the federation.
- Control blocks liveliness: It is a mechanism (usually an algorithm) used to determine and ensure that other networks are still storing the control block data required to perform a future validation in a protected network.
- Congestion control: This mechanism is included to avoid congestion in the connection link between networks and in the destination networks by preventing control block issuers (source networks) from indiscriminately broadcasting control blocks, hence avoiding potential DDoS (Distributed Denial of Service) attacks and reducing as much as possible the bandwidth overhead of the control block scheme. This congestion control is optional for the operation of the overall system, and its implementation may be based on the resolution of a simple proof of work problem in order to be entitled to broadcast a control block. Thus, in order for a node to send a control block, it will have to find the nonce that ensures that the hash of the full control block meets the required difficulty (in an embodiment, this algorithm is inspired by Ethereum's Whisper messaging congestion control system).
The difficulty of this proof of work problem is proportional to the status of the connection links between networks (or rather between the electronic devices of the networks) and to the rate of generation of control blocks of the source network:

CongestionDifficulty = (1 / Avg_Tc) * (Avg_RTT + Avg_Validation) = (1 / Avg_Tc) * Avg_ACK

where Avg_Tc is the average period of generation of control blocks, Avg_RTT is the average round trip time of packets in the broadcast of messages, and Avg_Validation is the average validation time required to insert the control block into a ledger of a counterpart (reception) network. The sum (Avg_RTT + Avg_Validation) can be simplified into Avg_ACK, which is the average time spent from the broadcast of a control block to the reception of the ACK from the reception network. When a network node broadcasts a control block, it waits for a minimum number of ACKs (Kc) from different networks to consider the broadcast of the control block valid. This minimum number of ACKs required (Kc) may be chosen by the node according to its desired detection trust index (as will be explained later). Control blocks are generated every Tc transactions (transaction period = Tc); this period is a design parameter determined by each network. In an embodiment, if fewer than Kc ACKs are received, the same control block is retransmitted until enough ACKs are received. If after several retransmissions Kc ACKs are still not received (for example, due to a congestion situation), the value of Kc may be reduced. In a possible implementation, Kc is considered as a pair of a minimum threshold and an optimal value, such as Kc = (1, 3).
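A sketch of this congestion-control proof of work follows, under the assumption (not stated in the embodiments) that the difficulty value is mapped to a number of leading zero bits required of the block hash.

```python
import hashlib

def congestion_difficulty(avg_tc, avg_ack):
    """CongestionDifficulty = (1 / Avg_Tc) * Avg_ACK, as in the formula above."""
    return avg_ack / avg_tc

def find_nonce(block_bytes, zero_bits):
    """Grind a nonce so that sha256(block || nonce) starts with
    `zero_bits` zero bits. How the difficulty value translates into a
    number of zero bits is left open here (an illustrative assumption)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - zero_bits) == 0:
            return nonce
        nonce += 1
```

A higher difficulty (slower peers or a faster sender) would demand more zero bits, making each broadcast more expensive and thus throttling indiscriminate senders.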
For example, the default Kc would be 3, but if there is congestion in the network and after several retransmissions 3 valid ACKs are not received, the system would settle for at least 1 ACK (this is only a non-limitative example and any other values of Kc can be chosen). Every network node may keep a local store to account for the ACKs received from other networks and to keep track of the control blocks shared, for future validations. Thus, for every control block a node keeps a key-value structure containing the content and ID of the control blocks transmitted and the ACKs received from other networks for each control block. This is shown in the following example:

Control block (key: <hash>)    Value
CB1 <55rf1ek>                  CB1 content; ACK1, ACK2, ACK3, ..., ACKn
CB2 <29e1oe>                   CB2 content; ACK1, ACK2, ACK3, ..., ACKn
CB3 <9m2iyo>                   CB3 content; ACK1, ACK2, ACK3, ..., ACKn
CB4 <10d23d>                   CB4 content; ACK1, ACK2, ACK3, ..., ACKn

In an embodiment, every control block is identified through its hash (as shown in the table above); this id, the hash of the control block (its unique identifier), represents the key of this local key-value store. In the value, it is necessary to store the control block that has been issued and a list of the ACKs received from the other networks that have received said control block. These ACKs may have the following form: {origin_address, type, timestamp, origin signature}, where origin_address is the source address (or resource URI if the discovery service is used) where the control block can be retrieved in the future (e.g.
the address of the computing node storing the control block in the DLT network which has received the control block), the type determines the type of DLT platform of the network generating the ACK, the timestamp may include information about the specific moment of time this ACK was generated (alternatively or additionally, the network receiving the ACK also timestamps in its registry the specific moment when the ACK was received from the neighbour network; this will be used for the liveliness algorithm), and finally the signature authenticates the sender of the ACK. Each network node stores this key-value structure including information about the control blocks said network node has transmitted (that is, this key-value structure is a network-node-level element, and it is not necessary to replicate said information in the rest of the nodes of the network). It can be stored in a node as plain storage, or it can use cryptographic constructions such as a DHT or a Merkle DAG (directed acyclic graph), or an index-based storage to ease the future search and validation of data. This occurs in the case of the nodes of the network that is issuing (generating and transmitting) the control blocks. However, the nodes which receive the control blocks from another network store the control block in their distributed ledgers (for example, blockchains). That is, each node of a federated network stores the control blocks received from other networks in a distributed ledger/blockchain (which is usually replicated in all the rest of the nodes of the network). The control blocks can be sent from a single node of the source network (A) to the rest of the networks (B, C) or from several nodes of a network (A) to the rest of the networks. Usually the control block is broadcasted to all the networks of the federation.
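The per-node key-value structure of transmitted control blocks and received ACKs could be sketched as follows (a plain in-memory dict is used here; as noted above, a DHT, a Merkle DAG or an index-based storage could be used instead, and the method names are illustrative).

```python
import time

class AckStore:
    """Per-node key-value store of transmitted control blocks and the
    ACKs received for them, keyed by the control block hash, mirroring
    the table above."""

    def __init__(self):
        self._store = {}   # block hash -> {"content": ..., "acks": [...]}

    def record_block(self, block_hash, content):
        self._store[block_hash] = {"content": content, "acks": []}

    def record_ack(self, block_hash, origin_address, dlt_type, signature):
        # ACK form: {origin_address, type, timestamp, origin signature};
        # the receiving side also timestamps the moment of reception.
        ack = {"origin_address": origin_address,
               "type": dlt_type,
               "timestamp": time.time(),
               "origin_signature": signature}
        self._store[block_hash]["acks"].append(ack)

    def ack_count(self, block_hash):
        return len(self._store[block_hash]["acks"])
```

The node can then compare `ack_count()` against Kc to decide whether a broadcast is valid or must be retransmitted.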
In an embodiment, this broadcast is done through the networks' shared networking layer; so the source network (A) will not choose specific networks to send its control block to, but will broadcast the block and wait for ACKs from others (it is similar to a publish-subscribe approach, where A publishes the control block to the network and waits for subscribers to answer). It should be noted that, in case two or more nodes in the same DLT network broadcast control blocks, in an embodiment the blocks broadcasted by each of the nodes are identical. Thus, whenever two control blocks with the same content submitted by two different nodes in a same DLT network reach another DLT network, they do not need to be duplicated in this DLT network but can be written just once. To this block broadcasting scheme, a liveliness algorithm is attached to ensure that a minimum number of the control blocks broadcasted are still "alive" in other networks, or otherwise to detect that some of them have been lost, for instance if any of those networks is down or has been hacked. For this purpose, every certain number (Tl) of transactions (a liveliness period also chosen by the corresponding network node), at least the oldest control block is chosen and requested by the source network from the networks from which ACKs were received when said control block was sent the first time. This process can be done by one or more nodes of the source network and by one or more nodes of the rest of the networks (which received the control blocks). If the requests are successfully answered by at least Kc networks, nothing is done (that is, if nodes from at least Kc networks have stored the control blocks in distributed ledgers, for example in blockchains, and consequently successfully answer the requests, then nothing is done). If the number of successful requests is less than Kc, it means that the required Kc copies of that control block are not available.
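The liveliness check of the preceding paragraph might be sketched as follows. The two callables stand in for the daemon's transport layer and are illustrative assumptions, as is the assumption that the ACK store iterates oldest block first.

```python
def liveliness_check(ack_store, kc, kl, request_block, rebroadcast):
    """Check that the Kl oldest control blocks are still alive in at
    least Kc networks; re-broadcast any block that is not.

    `ack_store` maps a control block hash to the list of peer addresses
    that ACKed it, ordered oldest block first. `request_block(addr, h)`
    asks a peer whether it still stores the block (True/False), and
    `rebroadcast(h)` re-sends the block until Kc fresh ACKs arrive;
    both are stand-ins for the daemon's transport layer.
    """
    oldest = list(ack_store)[:kl]              # the Kl oldest blocks
    for block_hash in oldest:
        alive = sum(1 for addr in ack_store[block_hash]
                    if request_block(addr, block_hash))
        if alive < kc:                         # fewer than Kc copies survive
            rebroadcast(block_hash)
```

Running this every Tl transactions keeps at least Kc live copies of each control block available for future validations.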
Hence, this control block (with the same content) is re-broadcasted until "alive" ACKs from nodes of at least Kc different networks are collected. This ensures that control blocks will always be available for future validations and error detections in at least Kc networks. Thanks to the proposed embodiments, a maximum detection trust index (a higher probability of no loss in the data integrity of a network) is offered, so nodes of a DLT network can be more certain that they can detect and recover from failures in their infrastructure. The higher the trust index, the higher the certainty that an error will be detected and the data integrity kept. The detection trust index may be defined as follows:

DetectionTrustIndex = 1 − (Prob_collusion + Prob_liveliness + Prob_blindspot)

According to the above definition, the detection trust index considers the probability of collusion of networks when storing the broadcasted control blocks (which also takes into account the redundancy in the broadcast of these blocks), the specifics of the liveliness algorithm, and the blind spot. The blind spot is the number of transactions not protected by a specific control block because they are in the middle of the most recent control block generation period and can be forged.
That is, the transactions in a blind spot situation will be the transactions between the generation and transmission of consecutive control blocks, so the maximum number of transactions in a blind spot situation will be the transactions between the transmission of a control block at a certain time (n), Cb_n, and the transmission of the previous control block, Cb_(n−1), and it will be equal to the transaction period Tc, as defined by the following equation:

Blindspot = Cb_n − Cb_(n−1) = Tc

In an embodiment, once a certain control block is transmitted (broadcasted) by a node of a first network, the reception of ACKs (from nodes of other DLT networks) for said certain control block is sealed after the next control block is received, in order to avoid a large space of potential collusions. Thus, new ACKs for a control block Cb_n are only accepted until control block Cb_(n+1) is generated (always considering that the minimum number Kc of ACKs for the block were already accepted; if not, the generation of the new control block is usually not allowed). Under this scenario, collusions (e.g. the transmission and storage of fake ACKs) can only be performed throughout the blind spot period. In the liveliness algorithm, if the redundancy decreases below the threshold of Kc, a new set of control blocks is sent to avoid loss of information. Thus, the blind spot probability (Prob_blindspot, also known as α_blindspot), that is, the probability of a control block covering NumberTxsControlPeriod (Tc) transactions being forged after being sealed, can be computed as:

α_blindspot = NumberTxsControlPeriod / (TotalControlBlocks * NumberTxsControlPeriod) = 1 / TotalControlBlocks → 0

The more total control blocks (each covering a number of transactions per control block) a network has already shared and sealed, the lower the probability of a block being forged outside the blind spot period, as malicious nodes would also have to forge the previous control blocks already stored in other networks.
So, this problem only appears at the beginning of a network joining the federation, and its probability quickly goes to zero once control blocks start being shared. Hence, the system can mainly be forged if a set of malicious nodes of different networks collude against a specific network; a network alone is not able to forge the storage of a control block. Considering this, the probability of collusion under a single ACK may be modelled according to the number of nodes in the federation using the following distribution:

Prob_collusion_singleACK = { 1, if p ≤ q; q/p, if p > q }

where p is the probability of the network storing the control block being an honest network, and q the probability of it being a malicious network. The probability of a network being honest or malicious may be modelled as follows:

q = (Avg_RTT + Avg_Validation) * malicious_nodes = Avg_ACK * malicious_nodes
p = (Avg_RTT + Avg_Validation) * honest_nodes = Avg_ACK * honest_nodes

where Avg_RTT is the average round trip time of packets in the broadcast of messages and Avg_Validation is the average validation time required to insert the control block into a ledger of a counterpart (reception) network. The sum (Avg_RTT + Avg_Validation) can be simplified to Avg_ACK, the average time spent from the broadcast of a control block to the reception of the ACK from the reception network. Usually Avg_ACK is quite similar for malicious and honest nodes. So, considering Avg_ACK to be similar in the case of a malicious node and in the case of an honest node, q and p can be simplified to the (normalized) number of malicious nodes and honest nodes in the whole federation of networks, respectively. As previously explained, at least Kc honest ACKs (throughout the blind spot) from different networks are needed for the transmission of a control block to be accepted as valid.
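The piecewise single-ACK collusion probability above can be sketched directly, treating p and q as the normalized shares of honest and malicious nodes per the simplification in the text; the function name is an assumption for illustration.

```python
def single_ack_collusion_probability(p: float, q: float) -> float:
    """Collusion probability under a single ACK: 1 if p <= q, else q / p,
    where p is the (normalized) honest share and q the malicious share."""
    if p <= q:
        return 1.0
    return q / p
```

For example, with 80% honest and 20% malicious nodes the single-ACK collusion probability is 0.25, while any malicious majority drives it to 1.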
Modelling this fact using a Poisson distribution (this is only an example; any other statistical distribution can be used), the probability of receiving fewer than Kc honest ACKs throughout the blind spot can be modelled as:

α_collusion(Kc) = Pr(X < Kc in Tc) = Σ_{i=0}^{Kc−1} λ^i * e^{−λ} / i!, where λ = E[X] = p/q

α_collusion(Kc) = Σ_{i=0}^{Kc−1} (p/q)^i * e^{−(p/q)} / i!

On the other hand, the liveliness fragment of the detection trust index can be specified as the probability of loss of a stored control block in a network, weighted by the liveliness period between liveliness checks divided by the number of control blocks checked. If a control block has been lost, the source network must resend it to the other networks, so that the control block is stored again and an ACK is received for each storage of this block. This opens the door to a new interval of potential collusion that needs to be accounted for in the liveliness probability:

α_liveliness = (Liveliness_period / Liveliness_checks) * Probability_loss * Probability_collusion(Kl) = (Tl / Kl) * α_loss * α_collusion(Kl) = (Tl / Kl) * α_loss * Σ_{i=0}^{Kl−1} (p/q)^i * e^{−(p/q)} / i!

where Kl (Kl ≥ 1) is the number of blocks checked in the liveliness algorithm (that is, the number of previously transmitted control blocks requested in the liveliness process); usually the Kl oldest control blocks (the Kl control blocks not checked for the longest time) are requested in the liveliness process, and if Kl > 1 the liveliness process is not performed individually but in batches of Kl blocks. Tl is the period of liveliness checks (that is, in the liveliness process, Kl control blocks are checked every Tl transactions). Finally, α_loss is the probability of loss of a stored control block in a network, and α_collusion the probability of an ACK being corrupted in the destination network. In the above formula, and in some of the other formulae presented in this text, the symbol α is used to refer to probabilities.
Consequently, considering the configuration metrics of the schemes previously presented, in an embodiment the detection trust index (DTI) can be computed as:

DTI = 1 − [α_collusion(Kc) + α_liveliness] = 1 − [Σ_{i=0}^{Kc−1} (p/q)^i * e^{−(p/q)} / i! + (Tl / Kl) * α_loss * Σ_{i=0}^{Kl−1} (p/q)^i * e^{−(p/q)} / i!]

The blind spot probability is disregarded in the calculation of the DTI because, as previously explained, it quickly goes to 0. As can be seen in the above formula, the DTI therefore does not depend on Tc. This detection trust index is usually specified by the source network according to the level of integrity it wants in its network. Thus, the different configuration parameters of the protocol stack (Kc, Kl, Tl) need to be fine-tuned to obtain the specific detection trust index desired for the network. Considering different configuration parameters, the level of protection of the network and the required depth in control blocks to ensure the desired data integrity can be inferred. For example, FIG. 6 shows a graphic of how the values of Kc affect the probability of collusion for different numbers of malicious nodes (q) in the federation of networks (that is, in all the networks to which control blocks are broadcast). From this, the following conclusions can be inferred: (i) the higher the Kc, the higher the probability that malicious nodes can forge ACKs, as the source network waits for more ACKs before sealing the control block; (ii) the number of expected malicious nodes in the network bounds the maximum Kc that a node can select to avoid collusions (or to keep a certain probability of collusion).
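A numeric sketch of the DTI under this Poisson model follows, reading λ as p/q and treating p and q as the normalized honest and malicious shares; the function names and example values are assumptions for illustration, and the patent's formulas remain the authority.

```python
import math

def alpha_collusion(k: int, p: float, q: float) -> float:
    """Poisson probability of receiving fewer than k honest ACKs, with lam = p / q."""
    lam = p / q
    return sum(lam ** i * math.exp(-lam) / math.factorial(i) for i in range(k))

def detection_trust_index(kc: int, kl: int, tl: float, alpha_loss: float,
                          p: float, q: float) -> float:
    """DTI = 1 - [alpha_collusion(Kc) + (Tl / Kl) * alpha_loss * alpha_collusion(Kl)]."""
    alpha_liveliness = (tl / kl) * alpha_loss * alpha_collusion(kl, p, q)
    return 1.0 - (alpha_collusion(kc, p, q) + alpha_liveliness)
```

With mostly honest nodes (say p = 0.9, q = 0.1) and modest parameters such as Kc = 3, Kl = 2, Tl = 4 and α_loss = 0.01, the index stays close to 1, matching the qualitative behavior described for FIG. 6 and FIG. 8.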
For a small number of malicious nodes (100 in FIG. 6), the probability of collusion stays low regardless of the value of Kc, so the level of redundancy of control blocks can be selected as desired; however, when the number of malicious nodes increases, Kc has to be selected so that the system has enough redundancy but does not open the door to a high probability of forgery of ACKs. Thus, for q (number of malicious nodes) = 800 nodes, the maximum value of Kc to be selected should be approximately 5 under this scenario to avoid an explosion of the probability of collusion. This maximum value of Kc is approximately 2 if q = 2000. Now, the effect of malicious nodes compared to honest nodes (or to the total number of nodes) on the probability of collusion for fixed values of Kc will be considered (FIG. 7, which considers the fixed values Kc = Tc = 2). As expected, the more malicious nodes there are in the network, the higher the probability of collusion. Nonetheless, adjusting Kc for the expected number of malicious nodes allows the effect of these nodes to be minimized (as shown in the previous figure). FIG. 7 shows the value of the probability of collusion depending on the ratio of malicious nodes to total nodes; for FIG. 7, the number of expected malicious nodes has been taken into account not in a single network but in all the networks to which control blocks are broadcast. It has to be pointed out that in FIG. 7 the value of the probability of collusion is low, even for a high ratio of malicious nodes, because of the values of Kc and Tc chosen for this case (Kc = Tc = 2). The control blocks stored in external networks may be lost due to outages, nodes storing the control blocks being disconnected, connection problems, etc.; the proposed liveliness protocol aims to minimize this loss of control blocks.
Thus, there are three main configuration metrics (parameters) that can be selected in this proposed scheme, Kc, Kl and Tl, to control the integrity of the network (represented by the detection trust index). Tc is also a selectable configuration metric (parameter); however, as previously explained, it does not directly affect the DTI. What Tc influences is the number of transactions a source network wants to include in its control block (as a control block is transmitted every Tc transactions); in other words, Tc is the size of the batch of transactions protected by a control block. This will differ between DLT networks in the federation according to the level of transaction protection each specific network is seeking. FIG. 8 shows the behavior of the detection trust index for different values of Kc and Kl for a specific number of malicious nodes. The detection trust index will give a value according to the specific metrics used and the conditions of the network. The higher this value, the larger the data integrity protection of the DLT network. So, thanks to the solution proposed in this patent application, taking into account the characteristics of the rest of the networks of the federation, each network node can configure its control block process (choosing appropriate values of Kc, Kl and Tc) in order to achieve the level of integrity (trust) desired for the network (a certain DTI). In other words, the proposed solution allows a network to assure a certain level of trust only by applying the proposed control block mechanism, selecting appropriate values of the parameters.
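As an illustration only, the configuration parameters named above can be gathered into a small helper that applies the example values recommended elsewhere in this text (Kc = 3, Kl = 2, Tl = Kl * Tc, and Tc between 30 seconds and 5 minutes); the function name and dictionary layout are assumptions for the sketch.

```python
def recommended_config(tc_seconds: float = 60.0) -> dict:
    """Example parameter set: Kc = 3, Kl = 2, Tl = Kl * Tc, with Tc constrained
    to the 30 s - 5 min range suggested for the control block period."""
    if not 30.0 <= tc_seconds <= 300.0:
        raise ValueError("Tc should be between 30 seconds and 5 minutes")
    kc, kl = 3, 2
    return {"Kc": kc, "Kl": kl, "Tc": tc_seconds, "Tl": kl * tc_seconds}
```

A network seeking a higher DTI would raise Kc within the [2, 5] interval rather than touch Tc, since Tc does not enter the DTI.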
Considering, for example, a federation of DLT networks where the rate of malicious nodes compared to honest nodes is below 60% (a typical, and in many cases extreme, scenario in decentralized environments), in order to optimally operate the system and reach a level of data integrity in the federation trust of over 99%, a possible recommendation is to configure the system with the following parameters:

1. Redundancy of control blocks (Kc) equal to 3, the optimal interval of values to accommodate different scenarios of malicious nodes and reach optimal results being the range [2, 5].

2. The period of the control block does not have a significant impact on the resulting data integrity, so any value on the order of a minute makes the system work optimally. In an embodiment, a value in the range of 30 seconds to 5 minutes is recommended, according to the performance requirements of the network.

3. Finally, for Kl, a liveliness check with Kl = 2 is proposed, optimally, with any value between 1 and 3 being suitable without harming the operation of the system; and a liveliness period (Tl) of Kl*Tc (the Tc selected for the control block generation period). This gives values of α_loss around 0.01, so it provides a good result.

The system would also work in scenarios with a higher rate of malicious nodes, but the proposed configuration would not ensure the 99% probability. Nonetheless, the scenario of decentralized networks with over a 60% rate of malicious nodes is rare. In the above embodiments, a redundancy of Kc networks (the number of networks storing the control blocks) has been considered, and this redundancy has to be kept throughout the whole operation of the system. However, as in many blockchain and DLT technologies, this requirement may be relaxed so that, instead of ensuring no forgery over the Kc control blocks received, it is ensured, for example, that no 51% attack is possible; i.e.
instead of ensuring the reception of Kc or Kl ACKs to be sure that there is enough redundancy and liveliness of control blocks, the requirement may be relaxed to at least a number of ACKs > Kc/2. Thus, in an embodiment, a network will preferably wait for Kc ACKs, but if this is not possible, instead of continuing to retransmit, it will settle for, for example, at least more than Kc/2 ACKs (that is, in the embodiment, if the selected Kc is too high and it is not possible to wait for Kc ACKs, the requirement is then lowered to more than Kc/2 ACKs). Some of the protocols and schemes in the proposed protocol stack are optional. Thus, a network implementing embodiments of the present invention does not have to use the full protocol stack, and can select the parts that best fit its purpose. For example, a network could choose to relax its trust requirements, use the control block broadcasting scheme and the resource discovery system, and avoid the use of the liveliness and congestion control algorithms. In the above embodiments, the detection trust index has been presented to objectively measure the level of trust and integrity over a network, but modifications of this metric or other metrics could be used: for example, a trust validation index, which instead of determining only to what extent an error can be detected in the data of the network, also captures the extent to which errors can be detected and fixed. The description and drawings merely illustrate the principles of the invention. Although the present invention has been described with reference to specific embodiments, it should be understood by those skilled in the art that the foregoing and various other changes, omissions and additions in the form and detail thereof may be made therein without departing from the scope of the invention as defined by the following claims.
Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof. It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. | 41,189 |
11943288 | DETAILED DESCRIPTION The following detailed description provides examples that highlight certain features and aspects of the innovative decentralized content distribution network claimed herein. Different embodiments or their combinations may be used for different applications or to achieve different results or benefits. Depending on the outcome sought to be achieved, different features disclosed herein may be utilized partially or to their fullest, alone or in combination with other features, balancing advantages with requirements and constraints. Therefore, certain benefits will be highlighted with reference to different embodiments, but are not limited to the disclosed embodiments. That is, the features disclosed herein are not limited to the embodiment within which they are described, but may be “mixed and matched” with other features and incorporated in other embodiments. Disclosed embodiments provide a decentralized content distribution network (dCDN), which enables efficient transfer of files while lifting the load from the communication networks. In disclosed embodiments, dCDN is implemented as a Software Development Kit (SDK), allowing application access to online content more inexpensively and reliably than traditional CDNs. The dCDN SDK seamlessly builds a decentralized peer-to-peer content distribution network using both Internet and device-to-device connections. The customers are primarily mobile app developers wishing to reduce their content distribution network costs and improve their app's reliability and DDoS resilience. The dCDN SDK is designed for simplicity and usability by mobile app developers, even those relatively unsophisticated in network-related engineering. The following comparisons put the dCDN into context. One key difference between a dCDN and a CDN is the existence of device-to-device connections, providing unparalleled robustness. Major differences between a dCDN and a conventional CDN are:
1. A CDN requires the installation and maintenance of physical infrastructure, while a dCDN does not require a physical infrastructure;
2. A conventional CDN is an alternative to other conventional CDNs, while a dCDN can be used together with one conventional CDN or enable the customer to use multiple CDNs at once easily and transparently; and
3. A dCDN is fundamentally cheaper.
For example, Akamai Download Manager, a popular CDN based on Red Swoosh, is an inferior copy of BitTorrent, while dCDN integrates recent technological advances. Differences between Akamai Download Manager and dCDN include, but are not limited to:
1. The dCDN uses the Low Extra Delay Background Transport (LEDBAT) protocol; Akamai Download Manager is where BitTorrent stood in 2007 in this regard.
2. The dCDN uses a powerful DHT of 250M nodes to provide reliability and redundancy in the components that would otherwise need to be centralized. As a result, the dCDN works even when all central infrastructure is unreachable; Akamai Download Manager does not.
3. The dCDN has the ability to build its own additional device-to-device connectivity, relieving existing networks of congestion and repeated transfers; Akamai Download Manager does not.
4. dCDN is seamlessly peer-to-peer; Akamai Download Manager requires extra actions from users to achieve this. When a user wants to download a large file which Akamai can serve over their peer-to-peer network, they are told to download and install the “Akamai NetSession Interface,” which acts as a download manager and a peer-to-peer client. Since many users are unaware of Akamai's role in content delivery, they are often suspicious and refuse the additional installation.
In the peer-to-peer space, BitTorrent more closely resembles dCDN functionality than Akamai Download Manager does, but at root it is substantially different.
BitTorrent can only be used for specific files deliberately transmitted, and separates host swarms by content, while dCDN has all the nodes on the network working together. In general, BitTorrent is too complicated and industrial for use by the relatively few CDN customers taking the DIY route. In one embodiment, dCDN is faster, cheaper, and more reliable than existing web downloads, provides seamless content ingestion from the web, includes a library which can easily be embedded into other apps, improves performance, and is easier to integrate than even a conventional CDN. Reliability and predictability are crucial aspects of any standard CDN's performance. Since an origin download process remains available, performance can only be enhanced by implementing dCDN. The bigger the dCDN, the better it functions. With few nodes, there are fewer candidates to perform any of the dCDN functions, making it difficult for the network to recover from failure. Node multiplicity provides redundancy, increases stability, and improves performance. A key metric for any content distribution system is speed: people care how quickly something works more than any other factor. One example of this fundamental user preference was the rapid switch from eDonkey to BitTorrent: eDonkey was an established market leader in the peer-to-peer downloads space, but users were motivated to switch to BitTorrent, despite initial disadvantages in other areas, because of markedly improved download performance. For users, speed is the critical factor determining product choice. Implemented embodiments of dCDN work seamlessly for end users. For any content distribution system, the goal is to deliver bytes to the destination. From the point of view of the end user, the ideal distribution system is an implementation detail chosen by app developers, providing excellent performance and running seamlessly in the background.
Akamai Download Manager requires a separate download and installation, while dCDN is invisible, requiring no additional installation and no input from the end user. The dCDN has no negative impact on end user experience. A key requirement of adoption is that the functions that a node performs for the benefit of delivering content to others do not adversely affect the user's own experience. Network performance for other apps and battery life should not be impacted. Prior to 2008, versions of BitTorrent essentially hijacked their local networks, saturating the uplink buffer, driving round-trip latency to seconds, and making the network unusable for traffic other than BitTorrent. Because of the value users received from BitTorrent, and its superiority to any alternatives, users forgave this deficiency and worked around the problem by running the software at night or by setting manual limits. The introduction of LEDBAT, which allowed BitTorrent to use 100% of network capacity without adversely impacting other traffic, immediately stepping aside as capacity was required by other applications, allowed the user base to expand from a limited number of technically savvy people capable of working around the limitations of BitTorrent to average consumers. Today, expectations for content distribution systems are higher than in the past, and negative impact upon unrelated Internet traffic is a total nonstarter, thus requiring the use of LEDBAT. The dCDN SDK allows developers to quickly master and deploy dCDN-enabled networks, accelerating wide adoption due to its streamlined implementation. Some dCDN developers are sophisticated enough to create standalone network-related tools; most, however, are user interface developers who are creating a mobile game that needs to download content packs or a simple mobile container app, such as those used by The New York Times or the BBC. For the latter class of developer, ease of SDK integration is crucial. 
A general-purpose dCDN delivers static content-addressable objects with, optionally, opportunistic encryption. In one embodiment, dCDN can be thought of as a peer-to-peer system which delivers arbitrary content. While the BitTorrent network is segmented by host swarm for the purposes of data connections—one for each piece of content—every dCDN node is part of the same swarm, capable of serving many different types of content. BitTorrent already makes its distributed hash table (DHT) global using this method; dCDN extends this feature to its logical conclusion, merging the transport swarms. This has key advantages when many smaller objects are required rather than, as is typical in BitTorrent, a few very large ones (such as movie or music files). LEDBAT is a congestion control protocol used by BitTorrent for transfers, by Apple for software updates, and, since the summer of 2016, by Microsoft in Windows 10, as well as by countless other companies distributing bulk data in the background. LEDBAT is estimated to carry 13-20% of Internet traffic. Before it used LEDBAT, BitTorrent would overwhelm the network, making it unusable for any other traffic. Adopting LEDBAT allowed BitTorrent to vastly expand its user base, have better relations with internet service providers, and made the software easier to use. LEDBAT works by continuously measuring the one-way delay from the sender to the receiver and adjusting the sending rate to target a predetermined low target extra delay. The target is chosen so that it is low enough not to have a perceptible effect on human interactions. The mechanism by which LEDBAT accomplishes this is based on control theory. The BitTorrent DHT is the largest currently deployed decentralized, serverless piece of infrastructure on the Internet, encompassing a quarter billion nodes. The DHT allows one to store short byte snippets in the distributed system, making it ideal for resource location.
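The LEDBAT behavior described above (measure one-way delay, steer the sending rate toward a small target queuing delay) can be sketched as a toy controller; the constants and the simplified additive update below are illustrative assumptions, not the deployed RFC 6817 algorithm.

```python
class LedbatController:
    """Toy delay-based congestion controller in the spirit of LEDBAT."""
    TARGET = 0.1  # target extra (queuing) delay, in seconds
    GAIN = 1.0    # window change per sample at full off-target

    def __init__(self, cwnd: float = 10.0):
        self.cwnd = cwnd        # congestion window, in packets
        self.base_delay = None  # lowest one-way delay seen so far

    def on_delay_sample(self, one_way_delay: float) -> float:
        # Track the base (propagation) delay; the excess over it is queuing delay.
        if self.base_delay is None or one_way_delay < self.base_delay:
            self.base_delay = one_way_delay
        queuing_delay = one_way_delay - self.base_delay
        off_target = (self.TARGET - queuing_delay) / self.TARGET
        # Grow while under the target delay, yield capacity once over it.
        self.cwnd = max(1.0, self.cwnd + self.GAIN * off_target)
        return self.cwnd
```

The key property is the one the text emphasizes: as soon as queuing delay exceeds the target, off_target turns negative and the sender backs off, immediately stepping aside for interactive traffic.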
In one embodiment, dCDN takes advantage of the deployed BitTorrent base as a robust discovery mechanism. Thus, existing BitTorrent clients can help get the system running and help solve the initial bootstrapping problem. Device-to-device (D2D) connectivity, pioneered in FireChat, allows people to build a network without any infrastructure, even when the Internet is not functional. This also works with laptops, which of course become more important for content distribution. D2D connectivity adds a layer of independence from other factors not available with any other technique. In one embodiment, before content can be accessed by the peer-to-peer component of the system, the content needs to be ingested. In this embodiment, ingestion is the process of making the content addressable by URL or by URL plus a content hash. Ingestion is handled by an injector, which is a trusted party that authenticates and adds the content to the dCDN. During content ingestion, a web download from the origin can be started, so that the dCDN can deliver the content faster using the dCDN peer-to-peer and device-to-device techniques without slowing content delivery down. With the content ingested, the content can be addressed by URL or by URL plus content hash. Any node in the dCDN can request the ingestion of some piece of content by URL or by URL plus content hash. Furthermore, any node can elect to act as an ingestion bridge when ingesting by URL plus content hash. Ingestion probability is based on local policy and history of ingestion success; thus, in an environment where ingestion does not work, it may not make sense for a node to try it. In one embodiment, content ingestion by URL alone—or rather, the act of supplying a content hash for a given URL—is sensitive. There will be three stages in implementing it: any node, trusted nodes, and a Byzantine reputation system.
Option 1—Ingestion by anyone: In this embodiment, the same rules apply to ingestion by URL as to ingestion by URL plus content hash.

Option 2—Ingestion by trusted nodes: The mapping from URL to URL plus content hash in the DHT is protected by a public key signature in this case. The public keys are embedded in the dCDN clients. A set of well-connected trusted nodes are each in possession of one of the private keys. These nodes act as an ingestion bridge and also help seed the content. In one embodiment, these nodes perform a functionality that is similar to uploaders in BitTorrent or exit nodes in Tor.

Option 3—Byzantine reputation system: Each node willing to act as an ingestion bridge uses its own public/private key pair. In this embodiment, none of these nodes is inherently trusted, but these nodes can build up reputation among themselves and with the consumer nodes. Each time a node ingests some resource correctly and in a timely fashion for the first time, the reputation of the node improves; when it ingests incorrectly or too slowly to matter, its reputation drops.

For any option: Once the content is ingested, the content becomes content-addressable. When data is transferred between nodes, in one embodiment, the data can be opportunistically encrypted, similarly to BitTorrent's encrypted mode, but using different modern primitives. For example and in one embodiment, dCDN can employ different hash functions and/or cryptographic processes (e.g., BLAKE, SALSA20, and/or other various types of hash functions and/or cryptographic processes). This protects the privacy and security of communication and is used in the same scenarios as HTTPS for static content. From a developer's perspective, dCDN integration consists of: 1. linking against the SDK; and 2. replacing the OS API calls that fetch HTTP(S) objects with calls into the dCDN by adding a couple of letters to the beginning of their names.
The process is designed to be straightforward to implement for any mobile app developer, including ones without a background in networks and content distribution. For example, in the case of Android, there is a replacement for httpClient.execute and, in the case of iOS, for dataWithContentsOfURL. These are direct replacements which allow a developer to use the new calls into the dCDN the same way they would use the usual OS API calls. This mechanism of integration means that users do not have to do anything extra to use the dCDN. Whether to even expose the existence of the dCDN to the end users is a decision left up to the integrating software. In a further embodiment, a mechanism of integration that is even easier for a developer is through the use of the http_proxy setting. On Android, for example and in one embodiment, this proceeds as follows: the developer of the mobile application integrating the SDK adds the SDK to the build and (optionally) invokes an initialization method. The explicit or implicit initialization mechanism subsequently sets up the proxy setting so that all connections flow through the proxy operated locally (and addressed through an IP address associated with the local host). In another embodiment, a proxy can be set up on the mobile device such that each URL request is routed through this proxy. In this embodiment, by configuring this local proxy on the mobile device, there is no need to compile in the dCDN SDK. As described above, LEDBAT can be used as an underlying protocol for the content distribution. In one embodiment, the use of LEDBAT eliminates the impact that peer-to-peer apps can have on network performance for other apps: the content distribution lets the entire capacity of the network be exploited while avoiding interference with other traffic.
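A desktop Python analogue of the local-proxy integration described above (routing all HTTP(S) requests through a proxy on the local host) might look like the following; the address and port are assumptions, and the real SDK targets mobile platform APIs rather than urllib.

```python
import urllib.request

def make_dcdn_opener(proxy_host: str = "127.0.0.1", proxy_port: int = 8080):
    """Build a URL opener whose HTTP(S) traffic flows through a locally
    operated proxy, mirroring the http_proxy-style dCDN integration."""
    proxy_url = f"http://{proxy_host}:{proxy_port}"
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)
```

The application then issues requests through the returned opener unchanged, which is the point of the design: the dCDN sits transparently between the app and the network.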
When implemented on a mobile device, the system would only serve others either when the application is in use, or when the device is plugged in and connected to Wi-Fi, to prevent excessive data usage or impact on battery life. In one embodiment, any object can be fetched from a variety of sources, including the origin (e.g., the device and path represented by the URL for the requested content). This guarantees that performance can only be improved. An option to avoid origin fetching will be provided for applications that either want the savings of directing more traffic peer-to-peer, or that wish to do so for policy reasons or user choice. Multiple sources allow for both faster and more predictable downloads, similarly to BitTorrent. In one embodiment, dCDN provides: 1. a mechanism for locating content that can be served by the peers; 2. a mechanism for ingestion of new content; and 3. content delivery using one or more peer-to-peer or device-to-device connections plus, optionally, the origin. This is a usable dCDN embodiment that can be improved by a variety of features. FIG. 1 is an illustration of an embodiment of the system 100 that is part of the dCDN. In FIG. 1, system 100 includes a peer 106 that wishes to download content using the dCDN. In one embodiment, the peer 106 checks for a connection to the cloud injector 102A. In one embodiment, a cloud injector 102A-B is a trusted party that authenticates and adds the content to the dCDN. In this embodiment, each cloud injector 102A-B has a private key that is used to sign the content by creating a signature. While in one embodiment the cloud injector 102A-B creates a Merkle tree for the signature using the private key, in alternate embodiments the cloud injector 102A-B can create a different type of cryptographic signature (e.g., the X-Sign format described below).
In one embodiment, by cryptographically signing the content from a trusted partner, a peer (such as peer 106) will know that the content added to the dCDN has been authenticated. This is different from BitTorrent, where anyone can add content by creating a torrent file and a peer does not know if the content from the torrent is authentic or has been maliciously altered. In one embodiment, the peer 106 can request the content from the injector 102A using the DHT described above. In one embodiment, the peer 106 attempts to connect to the injector 102A for the particular content. In this embodiment, the connection to the injector 102A is not available, so the peer 106 attempts to get the content from another source, such as the injector proxy 104. In one embodiment, an injector proxy 104 is a proxy for an injector 102B. The peer 106 can identify the injector proxy 104 by reading an injector proxy swarm, using cached proxy addressing information, and/or using peer exchange for an injector proxy swarm. With a successful connection to the injector proxy 104, the proxy 104 can fetch and transmit data to the peer 106. In one embodiment, the injector proxy connects to a different injector, namely injector 102B, which requests the content. The requested content can be routed to the peer 106 via the injector 102B and the injector proxy 104. While in FIG. 1 the peer is illustrated downloading content from the proxy, in alternate embodiments the peer can download from multiple content sources (e.g., multiple injectors and/or proxies). FIG. 2 is a block diagram of an embodiment of a system 200 that includes a peer 202 retrieving content from dCDN content sources 210A-N. In FIG. 2, a peer 202 can download content from one or more dCDN content sources 210A-N. In one embodiment, each of the dCDN sources 210A-N can be a peer, an injector, or a proxy as described elsewhere. Furthermore, the peer 202 can be a peer 106 as described above. In one embodiment, a peer is a device that is capable of participating in the dCDN.
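One way an injector's Merkle-tree signature could be organized is to hash content chunks into a single root that is then signed with the injector's private key; the sketch below is illustrative only (the chunking scheme, SHA-256, and odd-node duplication are assumptions, not the patent's specified construction).

```python
import hashlib

def merkle_root(chunks: list) -> bytes:
    """Fold a list of byte chunks into a single Merkle root hash."""
    if not chunks:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

An injector would sign the root; a peer recomputes the root from the chunks it receives and checks the signature, so tampering with any single chunk is detectable.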
In one embodiment, the peer202includes an application204, dCDN SDK206, and dCDN configuration208. In this embodiment, the application204can be any type of application that is running on the peer202that can request content (e.g., web content, software updates, media content, and/or any other type of downloadable content). In addition, the dCDN SDK206is an SDK that implements the peer functionality of the dCDN. For example and in one embodiment, the dCDN SDK206can include functionality for locating injectors and/or proxies, making requests for the content, handling multiple requests for content, and/or other functionality. In particular, the dCDN SDK206can be used to split up the download among multiple dCDN download sources. In this example, when the application204makes a download request, the dCDN SDK206receives the request and determines the number of content sources in the swarm that it should attempt to use. For example and in one embodiment, the dCDN SDK206can be configured to use tens or hundreds of content sources210A-N. Unlike a BitTorrent client where the swarm for the content is initially defined in the torrent file, a dCDN peer202will use the dCDN configuration208to determine content sources for the requested content. In one embodiment, the dCDN configuration208can include a DHT that the dCDN SDK uses to determine the number of content sources and which content sources to use to download requested content. In addition, the dCDN configuration can also include cached dCDN content source addressing information (e.g., addresses and/or port information). Alternatively, the peer202can exchange swarm information with other peers (not illustrated). In one embodiment, a swarm is a set of nodes in the dCDN that can provide content to the peer. In this embodiment, unlike a BitTorrent swarm, which is a set of peers that share a torrent, a dCDN swarm is not limited to particular content or a set of content.
Instead, a dCDN swarm is a set of nodes that can provide content to the peer and is not necessarily limited to one particular piece of content. In one embodiment, with the swarm information in the dCDN configuration208, the dCDN SDK206can determine the number of dCDN content sources210A-N and which of the dCDN content sources210A-N to initially contact for downloading the content. For each of the dCDN content sources to use, the peer202determines which portion of the requested content the peer202should request from that dCDN content source. In one embodiment, if the peer202wishes to download a (potentially large) file, such as a media file or operating system update (which can be in the gigabytes of data), the peer202can determine to download portions from different ones of the dCDN content sources210A-N. The portions can be the same or different sizes. The sizes of the portions to download from the dCDN content sources210A-N are determined using the dCDN SDK206. In one embodiment, the dCDN SDK206uses a heuristic to determine the requested size for a portion to be downloaded from a particular dCDN content source. In this embodiment, the heuristic can be based on one or more factors, such as IP address, content size, latencies to other peers, throughput, performance of peers on the same network, and/or other factors. For example and in one embodiment, the peer202can save latencies, performance, and/or throughput from other peers, proxies, or injectors from previous content requests. Alternatively, the peer202can infer performance data for one peer, proxy, or injector from another peer, proxy, or injector that is on the same network. In this example, if peer202knows certain performance data about peer A, and peer B is in the same network or sub-network, peer202can infer this performance data for peer B. In a further example, peer202can exchange any of these factors with other peers, proxies, and/or injectors.
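One way the portion-size heuristic above might weigh a throughput factor is to split the download proportionally to each source's measured rate, probing sources with no history using a small fixed portion. The function below is a sketch under those assumptions; the actual heuristic can combine many more factors (IP address, latency, inferred performance on the same network).

```python
def plan_portions(total_bytes, throughputs, min_portion=64 * 1024):
    """Split total_bytes across sources proportionally to measured
    throughput (bytes/sec). Sources with None throughput get a
    min_portion probe. The 64 KiB floor and proportional rule are
    illustrative assumptions, not part of the dCDN specification.
    """
    known = {s: t for s, t in throughputs.items() if t is not None}
    unknown = [s for s, t in throughputs.items() if t is None]
    plan = {s: min_portion for s in unknown}       # probe-only portions
    remaining = total_bytes - min_portion * len(unknown)
    total_t = sum(known.values())
    acc = 0
    items = sorted(known.items())
    for i, (s, t) in enumerate(items):
        if i == len(items) - 1:
            plan[s] = remaining - acc              # remainder to last source
        else:
            share = remaining * t // total_t
            plan[s] = share
            acc += share
    return plan
```

A caller would re-run the planner as throughput estimates improve, shrinking portions assigned to sources that turn out to be slow.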
In another embodiment, the heuristic can include a factor based on the newness of the peer, proxy, or injector. For example and in one embodiment, peer202can request a smaller size from a peer, proxy, or injector that peer202has not used before, as opposed to a peer, proxy, or injector that has a known set of factors. Alternatively, or in addition, the dCDN SDK206can determine portion size using machine learning. For example and in one embodiment, the dCDN SDK206can use a neural network to determine requested sizes from a dCDN content source. In this example, the peer202receives a neural network, where the neural network can include training data based on known factors, such as the factors outlined above. In addition, the dCDN SDK206can further refine the neural network as the peer202gathers more information regarding the factors outlined above. In addition, dCDN SDK206uses a peer, proxy, or injector for each of the downloadable content portions as described inFIG.1above. For example and in one embodiment, assume that the peer202will request twelve different portions from three different dCDN content sources (say,210A-C). The dCDN SDK206determines the sizes of the twelve different portions and schedules the downloads from the three different sources210A-C. In this example, the dCDN content sources210A-C can either be an injector or a proxy. For example and in one embodiment, the dCDN SDK206determines that dCDN content source210B can handle the most bandwidth, dCDN content source210C can handle the least bandwidth, and dCDN content source210A falls somewhere in-between. In this example, the dCDN SDK206can create three large portions, four medium sized portions, and five small sized portions for the downloadable content. The portions within each grouping can be the same or different sizes.
In this example, the dCDN SDK206would make requests to dCDN content source210B for the large sized portions, dCDN content source210A for the medium sized portions, and dCDN content source210C for the small sized portions. In one embodiment, the dCDN SDK206can make the size requests using a range parameter in the HTTP protocol. In this embodiment, dCDN SDK206makes the requests using HTTP over LEDBAT protocol. In one embodiment, the dCDN mechanism for downloading the content is different from BitTorrent in that, for each piece of downloadable content, a BitTorrent torrent file is required. This torrent file defines even-sized pieces of the downloadable content. A BitTorrent client uses the torrent file to determine which nodes to use for downloading the different pieces of the downloadable content. In contrast, dCDN does not require a torrent file for each piece of downloadable content. Instead, dCDN determines on the fly which portion of the downloadable content is to be retrieved from the different dCDN content sources. In one embodiment, the mechanism for locating content that can be served by the peers may be a DHT. Any DHT can be used, such as the BitTorrent DHT. An advantage of using the BitTorrent DHT is that this DHT ties into a large robust existing network of nodes that can store the data. In addition, and in one embodiment, the content delivery connection(s) will work better when the content delivery connection(s) use a delay-based congestion control mechanism such as LEDBAT. Under this embodiment, the receiver measures delay in packet transfer and will infer from a positive increase in one-way delays that congestion is increasing and will therefore adjust the transfer rate accordingly. In another embodiment, use and management of multiple LEDBAT or other peer-to-peer streams allows for an increase in performance over a single stream.
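The delay-based adjustment described above can be sketched as a toy controller in the spirit of LEDBAT: the receiver tracks the lowest observed one-way delay as the base delay, treats delay above that base as queuing delay, and scales the rate down when queuing exceeds a target. The constants and the multiplicative update are illustrative assumptions, not the RFC 6817 algorithm verbatim.

```python
TARGET = 0.1   # target queuing delay in seconds (100 ms, per RFC 6817)
GAIN = 1.0

class LedbatRate:
    """Toy delay-based rate controller: rising one-way delay is read
    as congestion and shrinks the rate; delay below target grows it."""
    def __init__(self, rate=100_000.0):
        self.rate = rate            # bytes/sec
        self.base_delay = None      # lowest one-way delay seen so far

    def on_delay_sample(self, one_way_delay):
        if self.base_delay is None or one_way_delay < self.base_delay:
            self.base_delay = one_way_delay
        queuing = one_way_delay - self.base_delay
        # move the rate in proportion to how far queuing is from target
        self.rate *= 1.0 + GAIN * (TARGET - queuing) / TARGET * 0.1
        self.rate = max(self.rate, 1_000.0)   # arbitrary floor
        return self.rate
```

Because the controller backs off as soon as queuing delay builds, background dCDN transfers yield to foreground traffic instead of competing with it.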
For example and in one embodiment, a good number of connections to keep active is up to 4 or 5, but a much larger number of open quiescent connections can significantly improve a failover time and the ability to discover better peers. In an alternative embodiment, there can be a smaller or greater number of active connections for each peer. In a further embodiment, algorithms that govern sourcing of nodes can result in much better performance. One example of such an algorithm is to sort peers by the download rate that has been obtained from them so far, and only serve data requests from the top five. Another example is to keep track of which peers have historically provided good performance and prefer them. In one embodiment, a peer can reserve some connections purely for discovery of good peers, so that a fraction of active connections, such as one out of five, is to a random peer. Trusted content ingestion nodes (e.g., injectors) can, in one embodiment, protect the system from incorrect data inserted into the URL to content hash mapping. In one embodiment, an injector cryptographically signs the content to inject the content into the dCDN. In the system operation as described, and in one embodiment, ingestion and serving can be consecutive steps. For systems designed to serve content with highly concentrated demand where a few hot resources are a substantial fraction of total demand, this is sufficient. When the content is large, however, and in a further embodiment, the system can simultaneously ingest and serve from trusted nodes. For example and in one embodiment, the content is a heavily anticipated large system update for a mobile device. In this example, as soon as the update is made available, numerous peers would want to download the system update from one or more injectors. As each of the injectors starts to ingest the system update, parts of the system update are ready to be downloaded by peers connecting to the injectors.
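The sourcing heuristics above (serve the peers with the best observed download rates, but reserve a fraction of connections for random discovery) can be sketched as a selection routine. The exact top-N and discovery-slot counts are tunable; the "top five, one in five random" figures come from the text.

```python
import random

def pick_upload_peers(rates, top_n=4, discovery_slots=1, rng=random):
    """Choose peers to serve: the top_n by observed download rate,
    plus discovery_slots random peers from the remainder so that
    new, possibly better peers still get a chance to be measured."""
    ranked = sorted(rates, key=rates.get, reverse=True)
    chosen = ranked[:top_n]
    rest = ranked[top_n:]
    for _ in range(discovery_slots):
        if rest:
            pick = rng.choice(rest)
            chosen.append(pick)
            rest.remove(pick)
    return chosen
```

Passing a seeded `random.Random` as `rng` makes the discovery slot deterministic for testing while keeping it random in production.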
In one embodiment, by using a Merkle tree to ingest the content, parts of the system update can be authenticated and ready for distribution. In this embodiment, when a mapping from URL to content hash is not yet known (because the full content has not been downloaded yet), one or more trusted nodes (such as an injector) can start to download the content to both learn the content hash and to seed the content in the dCDN system. These trusted nodes can start serving this content immediately, before they finish the download. This provides the greatest speedup for the first time a resource is requested by a client. In one embodiment, the dCDN can operate by sending data in cleartext. In an alternate embodiment, if the original static resource is served using HTTPS rather than HTTP, it better matches the security properties to use opportunistic encryption on transport connections. In this embodiment, the data is encrypted using a symmetric cipher, such as a block cipher in counter mode, with a key negotiated at the beginning of connections or computed using identities of the nodes. Negotiation involves an additional round-trip. A way to avoid the extra round-trip is to identify nodes by their public keys and use the knowledge of the associated private keys. In the most minimal case, each side can choose its own key and initialization vector and, optionally, HMAC key and transmit the material to the other side at the beginning of the connection, with the keys encrypted using the recipient's public key. Any public key encryption scheme, block cipher, and HMAC are suitable for this approach. The dCDN can rely on a sole origin URL. However, CDN customers are increasingly interested in using multiple CDNs. In one embodiment, the dCDN, architecturally, is in a unique position of being able to optimize the use of multiple underlying CDNs, balancing cost and performance.
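A minimal sketch of the counter-mode idea above, with a SHA-256-derived keystream standing in for a real block cipher (the text deliberately leaves the cipher choice open). The public-key exchange of the key material and the optional HMAC are not shown; the same call both encrypts and decrypts, as in any counter-mode scheme.

```python
import hashlib

def ctr_keystream_xor(key, iv, data):
    """XOR data with a keystream built as SHA-256(key || iv || counter).

    This is an illustrative stand-in for a block cipher in counter
    mode, not a vetted cipher; a real deployment would use an
    established primitive with the same counter-mode structure.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            key + iv + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))
```

Symmetry means no separate decrypt function is needed: applying the same key and initialization vector a second time recovers the plaintext.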
In this embodiment, the distributed part of the dCDN becomes one of the underlying CDNs that can be balanced with other CDNs. One approach to such optimization is to assign monetary costs to each underlying CDN (including the origin) and to minimize a utility function that balances cost, expected time to arrival, and expected variance of the download rate. Any function monotonic in each parameter can be used as such a utility function. In a peer-to-peer system, such as BitTorrent, a node participates in a swarm for the purpose of obtaining the data for that node's own use. In one embodiment, the dCDN can incorporate altruistic nodes, which are nodes that are used to help others. In this embodiment, the motivation to introduce the altruistic nodes is to make the system work better. Altruistic nodes can be introduced by an operator of the dCDN as, effectively, a form of alternative light-weight infrastructure. The naïve approach of having altruistic nodes act identically to normal nodes while seeding the content indefinitely is, somewhat surprisingly, suboptimal, because acting in this way, the node will download the content that the node wants to help serve exactly once. To break even, the altruistic node will need to upload this content once on average. Moreover, when the node is downloading the content, it is taking up resources that might be better spent delivering the content to its ultimate beneficiary. For example, consider a simplified situation with one seeder, one downloader, and one altruistic node, all with network connections of the same symmetrical capacity; it is easy to see that in this case, the optimal behavior for the altruistic node is to do nothing until the downloader has the file, as any network activity by the altruistic node increases the time to arrival of the content at the destination without any reduction in the download rate variance or cost.
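The utility-function selection above can be illustrated with a simple linear utility over cost, expected time to arrival, and download-rate variance. The linear form and the weights are assumptions for illustration; the text only requires the function to be monotonic in each parameter.

```python
def choose_cdn(candidates, w_cost=1.0, w_time=1.0, w_var=1.0):
    """Pick the source minimizing a utility that is monotonic in
    monetary cost, expected time to arrival, and rate variance.

    Each candidate is a dict with "cost", "expected_time", and
    "rate_variance" keys (field names are illustrative).
    """
    def utility(c):
        return (w_cost * c["cost"]
                + w_time * c["expected_time"]
                + w_var * c["rate_variance"])
    return min(candidates, key=utility)
```

Tuning the weights lets an operator trade money against speed: a large `w_cost` pushes traffic toward the free distributed tier, while a large `w_time` favors whichever source delivers soonest.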
In another embodiment, one approach much better than download-everything is for altruistic nodes to initially download only small and ideally non-overlapping portions of the content. For example and in one embodiment, most conservatively, if there are many such nodes, each altruistic node downloads a single block and waits to download more until the downloaded block is served k times (k≈2 helps minimize the risk that a second block won't be successfully served). Usually with peer-to-peer systems, such as BitTorrent, only logical connections established over the existing Internet links are used. In one embodiment, with dCDN, device-to-device connectivity makes it possible to use the additional capacity of wireless connections established in physical proximity to transfer data. The nearby connections can be Bluetooth, Bluetooth Low Energy, ANT, Wi-Fi ad hoc, Wi-Fi Direct, NFC, Airdrop, Wi-Fi Aware, and other device-to-device connectivity options. The dCDN is able to take advantage of connections built in the background seamlessly without a need for user interaction. In a further embodiment, the dCDN can use multiple connections of different types concurrently (e.g., Wi-Fi for one connection and Bluetooth for another connection). In one embodiment, the dCDN content hash can be any hash that uniquely identifies the content. In particular, the content hash can be the BitTorrent infohash, providing BitTorrent compatibility with the ability to download objects from the BitTorrent network, enabling the dCDN to act as an advanced BitTorrent client, and giving access to substantial amounts of existing content. For example, Amazon S3 content is available via BitTorrent. In a further embodiment, peers in the dCDN are able to exchange information about other peers, rapidly discovering other nodes in the system. For example and in one embodiment, peers can exchange information about injector and proxy nodes.
This allows a particular peer to select from a greater range of possible injector and/or proxy nodes. While in one embodiment the dCDN is incorporated in a library or an SDK embedded into another application, the dCDN can also be used as a client making full use of the system. Two examples of such clients are a dCDN-enabled web browser and a BitTorrent client. In addition to the two modes of content ingestion described above, a more sophisticated mode, a Byzantine reputation system for content ingestion nodes, is possible. In this mode and embodiment, as in the mode where ingestion can be done by anyone, any node that wants to participate in content ingestion may attempt to do so. However, as in the trusted node ingestion mode, the content is protected and delivered securely even in the presence of malicious nodes in the system. Nodes with neutral reputation need to perform ingestions that are probabilistically checked by nodes with high positive reputation. When ingestion is performed in a timely and correct manner (the content hashes coincide), the node's reputation improves. Otherwise, the node's reputation plummets. In one embodiment, multiple connections to the same origin can help to both improve the download rate and reduce the variance of the download rate. In the case of resources identified by URLs with HTTP or HTTPS, HTTP range requests can be used to obtain different parts of the resource. In this embodiment, four is a good number of connections; even at two connections, however, notable improvements can be obtained, particularly improving the experience on lossy network links. As the number of connections increases, the returns from additional connections diminish, until performance actually starts going down. The peak location depends on a variety of network conditions, but it rarely makes sense to go beyond 16 connections, and two is enough for smaller files in particular.
Four is a reasonable compromise, working well across a range of network conditions when there's a possibility that the conditions are particularly bad or the file is large, such as a software update or a video stream or download. In one embodiment, it is better to achieve low extra delays in the network at the transport layer. However, in some situations it can be preferable to accomplish low delays with changes to the network itself rather than the transport protocol implemented end-to-end by the end nodes. In this case, the network device at the bottleneck of the connection, such as a router, a switch, or a cable or DSL modem, can accomplish it with Active Queue Management (AQM). System Overview and Terminology The system consists of injectors, which form a trusted service that runs in the cloud, and of peers. Some peers act as injector proxies; injector proxies are referred to simply as proxies. Each injector possesses a private injector key. Each peer has a hardcoded copy of all injector public keys. In one embodiment, the injectors are used to initially obtain content from a web origin and to place it into the peer-to-peer (dCDN) network. While in one embodiment, peers can use the BitTorrent distributed hash table (DHT) to find injectors, injector proxies, and/or other peers interested in the same or similar content, in alternate embodiments, peers can use a different mechanism to find injectors, injector proxies, and/or other peers (e.g., cached injector information, peer exchange, and/or other types of information exchange mechanisms). In a further embodiment, the peers can use Low Extra Delay Background Transport (LEDBAT) in Micro Transport Protocol (uTP) framing to connect to one another, as well as to injector proxies and injectors.
Alternatively, the peers can use a different transport protocol for the download of the content portions (for example, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and/or other types of transport protocols). The peer-to-peer and device-to-device connections are called transport connections and run the peer protocol. Injector Behavior The injector, in one embodiment, is a daemon that runs in the cloud. When an instance of an injector starts, the injector announces its Internet Protocol (IP) address and port number in the BitTorrent DHT of the injector swarm, SHA-1 (“injectors”), to make this injector easier to find. The injector can accept transport connections from peers and injector proxies. In one embodiment, an injector is used as a trusted source to add content to the dCDN by signing the content as described above. In addition, the injector swarm is the set of injectors that are available in the dCDN. In one embodiment, the injector swarm is stored in the DHT. In one embodiment, the DHT is a distributed table that stores (key, value) pairs. A peer can look up information using a key, such as a 160-bit key. In this embodiment, the key can be generated using a cryptographic hash function, such as Secure Hash Algorithm-1 (SHA-1). With the key, a peer can look up injectors, injector proxies, URLs, or domains in the DHT (e.g., using the keys SHA-1 (“injectors”), SHA-1 (“injector proxies”), SHA-1 (“URLs”), or SHA-1 (“domains”)). The peer would perform the lookup and read the value(s) associated with the key in the DHT. Furthermore, and in one embodiment, the dCDN is repurposing the DHT, so as to store different types of information than were used for BitTorrent. In one embodiment, an injector announces itself by inserting characteristics of this injector into the DHT. In a further embodiment, an injector can include the content signature when a URL is inserted into the DHT.
In this embodiment, if the content signature includes a Merkle tree, the Merkle tree is inserted in the DHT. Injector Verification Challenge In one embodiment, a peer may challenge an injector or an injector proxy. Successful response to the challenge verifies that the challenged entity can connect to (or is) an injector. In one embodiment, to avoid pointlessly fetching external URLs, the TRACE method is used. This method of HTTP echoes back the request from the peer. Since the injector signs its responses with the X-Sign header, the response is guaranteed to have come from an injector if the challenge is unique. The challenge includes a cryptographically secure random nonce. The format of the challenge is TRACE/(uuid) HTTP/1.1. The response is the echoed request with an added X-Sign header. In another embodiment, the injector signs the response with an X-mSign header. In this embodiment, the X-mSign header includes a Merkle tree. Peer Behavior A peer performs some actions on start and some actions for each download request. Peer Behavior on Start When a peer starts, it connects to an injector and, if successful, becomes a proxy. If not successful, it connects to a proxy instead. This works as follows. Connect to an Injector When a peer starts, the peer attempts to connect to an injector. To do so, the peer can find injectors using the following methods:1. read the injector swarm (using the announce_only_get_flag),2. use hardcoded IPs and ports,3. use IPs and ports cached from previous runs, and4. use peer exchange for the injector swarm. After connecting to an injector, a peer may verify that the injector is real using the injector verification challenge. Become a Proxy, or Connect to One If the peer connected to an injector successfully, the peer can start acting as an injector proxy. If the peer failed to connect to an injector, the peer can connect to an injector proxy. To find an injector proxy, the peer can use the following methods:1. 
read the injector proxy swarm (SHA-1 (“injector proxies”)),2. use IPs and ports cached from previous runs, and3. use peer exchange for the injector proxy swarm. Peer Behavior on Request When a peer gets a request for content identified by a URL from the application, in one embodiment, the peer opens an origin connection and tries to start getting the object as usual using HTTP or HTTPS. In addition, the peer differentiates between static and dynamic requests. Static requests are those where the content is public and would be usable by many other people requesting the same URL. Dynamic requests are those that return private or personalized content. For example, the front page of The New York Times is static, while the Facebook feed is dynamic. The peer may treat GET requests as static and POST as dynamic. This is the bare minimum for differentiating between the two. The peer can have substantially more sophisticated heuristics than the minimum GET/POST difference. Note that static resources may be mutable. It is not the immutability, but the scope of permitted and useful applicability, that makes a resource static. Peer Behavior on Static Request In addition to the origin connection, the peer can immediately announce on the URL swarm and start trying to establish up to 1 peer connection. In one embodiment, each URL has an associated swarm used to find other peers interested in this content. The URL swarm is currently SHA-1 (“URL”). It is anticipated that this will change in a future protocol version due to two different concerns: DHT scalability and security. If there's no one found on the URL swarm who has a valid unexpired URL content signature, the peer can ask an injector or an injector proxy, whichever the peer is connected to, to inject the URL.
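The DHT keys used above (SHA-1 of a well-known string such as “injectors” or “injector proxies”, or of a URL for its swarm) can be derived with a one-line helper:

```python
import hashlib

def swarm_key(name):
    """160-bit DHT key for a swarm, e.g. swarm_key("injectors"),
    swarm_key("injector proxies"), or swarm_key(url) for a URL
    swarm, matching the SHA-1(...) keys in the text."""
    return hashlib.sha1(name.encode("utf-8")).digest()
```

A peer performs a DHT lookup with this key and reads the associated values (announced IP addresses and ports) to find members of the swarm.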
A peer can remain in the URL swarm after download is finished and until the soonest of the following: max seed ratio is reached, the signature expires, the signature becomes obsolete due to the presence of a newer signature, one week elapses, or the content is deleted from OS cache or due to space constraints. The peer may suspend participation in the swarm due to resource, battery, or policy constraints. When such constraints are lifted, the peer SHOULD resume seeding. Peer Behavior on Dynamic Request If the peer is successful in communicating with the origin, it can simply get the response from the origin and not involve the peer-to-peer network. If that does not work, the peer can use an injector proxy and use the HTTP CONNECT method to reach the origin. In either case, the peer should not perform any byte range splitting, as dynamic requests often have non-idempotent side effects. The peer can cache the method by which it has received the last dynamic response from a given domain and reuse it to avoid multiple timeouts when origin is not available directly, but available through a proxy. Injector Proxy Behavior The injector proxy can forward peer protocol requests to the injector and may serve HTTP CONNECT method requests. Peer Protocol In one embodiment, the peer protocol is HTTP over LEDBAT, with an additional header. Range requests are used to get parts of the file. The content is authenticated using a public key signature produced by the injector. The signature is sent in X-Sign header. X-Sign authenticates the entire file, but does not authenticate any parts. When an injector first injects the object, the injector starts sending the object before the injector has seen the whole thing, and so the signature cannot be sent at the beginning where headers normally go. In this embodiment, the injector can send the X-Sign header as a trailer with a last empty chunk.
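The X-Sign signing step can be sketched using the layout given in the X-Sign Format section below: base64 of a signature over the ASCII characters “sign”, a 4-byte timestamp, and a hash of the headers plus content. In this sketch, HMAC-SHA256 stands in for libsodium's public-key sign() and SHA-256 for its hash(); only the message layout follows the text.

```python
import base64, hashlib, hmac, struct, time

def make_x_sign(secret_key, headers, content, now=None):
    """Build an X-Sign value:
    base64(sign("sign" + timestamp + hash(headers + content))).
    HMAC is a symmetric stand-in for the injector's real
    public-key signature."""
    ts = struct.pack(">I", int(now if now is not None else time.time()))
    digest = hashlib.sha256(headers + content).digest()
    msg = b"sign" + ts + digest
    sig = hmac.new(secret_key, msg, hashlib.sha256).digest()
    return base64.b64encode(sig + msg).decode("ascii")

def check_x_sign(secret_key, headers, content, x_sign):
    """Verify the signature and that the hash matches the data."""
    raw = base64.b64decode(x_sign)
    sig, msg = raw[:32], raw[32:]
    ok = hmac.compare_digest(
        hmac.new(secret_key, msg, hashlib.sha256).digest(), sig)
    return ok and msg[:4] == b"sign" \
        and msg[8:] == hashlib.sha256(headers + content).digest()
```

Including the Content-Location header in the hashed material is what binds the signature to the URL, as the text notes.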
When two peers are exchanging data, these peers may have to open two separate connections to send data in each direction. In an alternative embodiment, the direction of change that appears most desirable for the future is an adoption of a subset of BitTorrent v2, for the unlimited granularity of verification. Technically, and in one embodiment, the injector transmits the X-Sign so as to inject the file. The client can at that point fetch the file through an untrusted injector proxy. However, given that to produce the X-Sign the injector needs the whole object, it makes sense that the injector also sends the first copy that the client can later seed to other peers. X-Sign Format X-Sign is a signed message that consists of the ASCII characters “sign” (4 bytes), the timestamp when the message was generated (4 bytes), and of the hash of the content. The content includes HTTP headers other than X-Sign. Whenever the peer protocol is used, one of the headers is Content-Location, so that the URL is authenticated. X-Sign is transmitted base-64 encoded. In other words, X-Sign=base64(sign(“sign”+timestamp+hash(headers+content))). Here, in one embodiment, sign( ) and hash( ) are the default primitives in libsodium. Policy Settings An app incorporating NewNode may change the defaults for policy settings. In addition, an app may expose some or all of these settings to a user and allow the user to override the app's defaults. There are, thus, three levels of decision-making: the NewNode specification, which provides the defaults suitable for the widest variety of apps, an app developer, who can change the defaults to what makes the most sense for the specific app and its users, and, finally, the user.
The list of policy settings and their defaults is as follows: connect to origin (default: ON) act as a proxy (default: ON) only act as a proxy on Wi-Fi (default: ON) only act as a proxy when plugged into external power (default: ON) encrypt peer connections (default: the inverse of connect to origin) max seed ratio (default: 3) max storage (NO default, use OS cache) In one embodiment, a peer can use dCDN to download from one or more nodes (e.g., multiple peers, injectors, injector proxies, or a combination thereof).FIG.3is a flow diagram of an embodiment of a process300to retrieve content from multiple dCDN content sources. InFIG.3, process300begins by receiving dCDN configuration for a peer at block302. In one embodiment, the dCDN configuration can include configuration for whether to connect to the origin, whether the peer should act as a proxy, act as a proxy only if on Wi-Fi, act as a proxy only if plugged into external power, whether to encrypt peer connections, maximum seed ratio, maximum storage, maximum number of connections for a download, and/or other types of configuration information. At block304, process300receives a request to download content. In one embodiment, process300receives the request from an application (e.g., a web browser, software updater, media application, and/or another type of application that downloads files). Process300determines the download parameters for the content request at block306. In one embodiment, the download parameters can be the number of content bytes to download, maximum number of active connections, and/or any other parameters for managing the download process. In one embodiment, process300determines the content byte value from the header information returned on a request. In this embodiment, process300retrieves the header for the content by using either a GET or a HEAD HTTP request. In one embodiment, process300can use a GET HTTP request if process300determines that the requested content is relatively small.
Alternatively, if process300determines that the requested content would be large, process300uses a HEAD HTTP request. In a further embodiment, process300can always use a GET HTTP request. If the requested content is large and process300can use further dCDN content sources to retrieve the content, process300can close the connection after the header for the content is retrieved. In addition, process300determines which nodes to use for the download of the content at block306. In one embodiment, process300can utilize one or more nodes (e.g., multiple peers, injectors, injector proxies, or a combination thereof) to download the content. In this embodiment, process300chooses which nodes to use for the download based on a variety of factors. For example and in one embodiment, process300can find nodes to download content from using the DHT, local discovery, peer exchange, and/or information provided by an injector. In one example and embodiment, process300can discover nodes from the DHT. In this example, process300can search the DHT using the strings “injectors,” “injector proxies,” and/or “peers” to search the DHT for the various types of nodes that can help download content. In another example, process300can maintain a list of nodes that can be used for downloading the content. In a further example, process300can exchange peer information with other peers, where this peer information can be used to select nodes for the downloading content. In another example, process300can receive peer information from an injector in response to process300requesting a download from that injector. In one embodiment, with the node information, process300can select one or more nodes to download the content. In one embodiment, process300can select based on a variety of factors (e.g., latency of communications, local vs. remote network location, bandwidth availability, and/or other types of factors based on network characteristics).
If process 300 has just an IP address for a node, process 300 can use that node to download a portion of the content and collect performance information regarding that node. Process 300 performs a processing loop (blocks 308-320) to download different portions of the content using different nodes in the dCDN. At block 310, process 300 attempts to connect to an injector. In one embodiment, process 300 identifies the injector node using a BitTorrent DHT. In one embodiment, an injector can be a general injector for all types of content, or be restricted to a particular subset of content (e.g., there can be an injector for a URL domain or group of domains, or an injector for one particular piece of content). In one embodiment, process 300 connects to the injector using the TRACE option of the HTTP protocol as described above. Process 300 checks to see if the connection to the injector is available at block 312. If there is not a connection available to the injector, execution proceeds to block 314 below. If there is a connection available to the injector, process 300 configures the local node as a proxy for the injector at block 316. In one embodiment, an injector proxy acts as a proxy for the injector and can transmit the content as a whole or portions of the content to the requestor. At block 318, process 300 downloads the content from either the injector or the injector proxy (or peer). In one embodiment, process 300 requests the whole content. In another embodiment, process 300 can request download of a portion of the content. In this embodiment, which portion to download can be based on a heuristic for the target node (e.g., the peer, injector, or injector proxy), machine learning, or some other mechanism, as described above. In one embodiment, process 300 can use a characteristic of the injector or injector proxy to determine the size of the portion of the content to be downloaded (e.g., delay between the peer and the node, available bandwidth, and/or other types of characteristics).
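The processing loop of blocks 308-320 (try an injector, fall back to an injector proxy, pull successive portions) can be sketched as follows. The function names and callback shapes are assumptions for illustration; real transfers would use HTTP range requests over LEDBAT as described.

```python
def pick_source(injectors, proxies, can_connect):
    """Blocks 310-314: prefer a reachable injector; otherwise fall back
    to an injector proxy."""
    for node in injectors + proxies:
        if can_connect(node):
            return node
    return None


def download(total_bytes, portion_bytes, injectors, proxies, can_connect, fetch_range):
    """Blocks 308-320: download the content portion by portion, choosing
    a source per portion. fetch_range(node, start, end) returns the bytes
    for the inclusive range start..end."""
    data = bytearray()
    offset = 0
    while offset < total_bytes:
        node = pick_source(injectors, proxies, can_connect)
        if node is None:
            raise ConnectionError("no dCDN source reachable")
        end = min(offset + portion_bytes, total_bytes) - 1
        data += fetch_range(node, offset, end)
        offset = end + 1
    return bytes(data)
```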
For example and in one embodiment, process 300 may decide to download a 2 megabyte (MB) portion of a movie file from an injector node that is in the cloud and a 10 MB portion from a laptop node that is an injector proxy or peer and is in the same local network as the requesting peer. In one embodiment, process 300 requests the content portion using HTTP over LEDBAT. To download portions of the content, process 300 uses the range parameter of the HTTP protocol. At block 320, the processing loop ends. At block 314, process 300 connects to an injector proxy if process 300 cannot connect to the injector. In one embodiment, the injector proxy is a proxy for a different injector than the injector originally receiving a connection request from process 300. Execution proceeds to block 318 above. As above, a peer, injector proxy, or injector can receive a request to download content or a portion of the content. In one embodiment, the injector proxy may not have the content and will contact an injector for the content before sending the content to the peer requestor of the content. FIG. 4 is a flow diagram of an embodiment of a process 400 to handle a peer request for content. In FIG. 4, process 400 receives a peer request to download content for a URL at block 402. In one embodiment, the request can be for the entire content or can be for a portion of the content. In one embodiment, process 400 receives an HTTP request for the content. At block 404, process 400 opens an origin connection. In one embodiment, the origin connection is a connection to the device represented by the URL. At block 406, process 400 determines if the request is a static or a dynamic request. In one embodiment, process 400 determines whether the request is static or dynamic based on the type of request received by process 400. In this embodiment, if the HTTP request was a GET, process 400 determines that the request is a static request.
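The portion-sizing heuristic (e.g., 2 MB from a cloud injector vs. 10 MB from a node on the local network) and the HTTP range parameter can be sketched as follows; the factor of five mirrors the example above but is otherwise an assumption.

```python
MB = 1024 * 1024


def portion_size(node, base=2 * MB, local_factor=5):
    """Request larger portions from nodes on the same local network,
    matching the 2 MB cloud / 10 MB local example."""
    return base * local_factor if node.get("local", False) else base


def range_header(start, size):
    """HTTP range parameter for a portion download (inclusive byte range)."""
    return {"Range": "bytes=%d-%d" % (start, start + size - 1)}
```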
Alternatively, if the request was a POST, process 400 determines that the request is a dynamic request. In one embodiment, static requests are those where the content is public and would be usable by many other people requesting the same URL, and dynamic requests are those that return private or personalized content. Process 400 handles the static requests at block 408. In one embodiment, in addition to the origin connection, process 400 should try to establish another peer connection using a URL swarm as described above. Execution proceeds to block 412 below. At block 410, process 400 handles the dynamic request. In one embodiment, if process 400 is successful in communicating with the origin, process 400 simply gets the response from the origin. If process 400 cannot connect with the origin, process 400 finds an injector proxy to retrieve the content from. Furthermore, process 400 can cache the method process 400 used to retrieve the content. Execution proceeds to block 412 below. At block 412, process 400 transmits the requested content to the peer that requested it. FIG. 5 is a flow diagram of an embodiment of a process 500 to respond to an injector challenge. In FIG. 5, process 500 announces the injector characteristics at block 502. In one embodiment, the injector announcement is performed when the injector starts up. In a further embodiment, the injector announces its IP address and port number in the BitTorrent DHT in the injector swarm. At block 504, process 500 receives a challenge from a peer. In one embodiment, the challenge can be used by a peer to determine if the peer can make a connection to the injector. In one embodiment, process 500 receives a TRACE over HTTP request. Process 500 sends a response to the TRACE over HTTP request, at block 506, by adding an X-Sign header to the echoed request as described above. At block 508, process 500 receives a request to download the content. In one embodiment, the download request can be an HTTP request over the LEDBAT protocol as described above.
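The request classification and handling of blocks 406-412 can be sketched as follows. The handler callbacks are assumptions; the rule itself (GET is static and swarm-shareable, POST is dynamic) follows the text.

```python
def classify_request(method):
    """Block 406: GET requests are static (public, usable by many peers
    requesting the same URL); POST requests are dynamic (private or
    personalized content)."""
    return "static" if method.upper() == "GET" else "dynamic"


def handle_request(method, origin_ok, from_origin, from_swarm, from_injector_proxy):
    """Blocks 408-410: static requests may also be served via the URL
    swarm; dynamic requests go to the origin, with an injector proxy as
    the fallback when the origin is unreachable."""
    if classify_request(method) == "static":
        return from_origin() if origin_ok else from_swarm()
    return from_origin() if origin_ok else from_injector_proxy()
```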
In addition, the request may or may not include a range parameter instructing process 500 to transmit just a portion of the requested content. Process 500 transmits the requested content at block 510. FIG. 6 illustrates one example of a typical computer system, which may be used in conjunction with the embodiments described herein. For example, the system 600 may be implemented including a NewNode peer 106 as shown in FIG. 1 above. Note that while FIG. 6 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to the present invention. It will also be appreciated that network computers and other data processing systems or other consumer electronic devices, which have fewer components or perhaps more components, may also be used with the present invention. As shown in FIG. 6, the computer system 600, which is a form of a data processing system, includes a bus 603 which is coupled to a microprocessor(s) 605 and a ROM (Read Only Memory) 607 and volatile RAM 606 and a non-volatile memory 611. The microprocessor 605 may include one or more CPU(s), GPU(s), a specialized processor, and/or a combination thereof. The microprocessor 605 may retrieve the instructions from the memories 607, 609, 611 and execute the instructions to perform operations described above. The bus 603 interconnects these various components together and also interconnects these components 605, 607, 609, and 611 to a display controller and display device 616 and to peripheral devices such as input/output (I/O) devices which may be mice, keyboards, modems, network interfaces, printers, and other devices which are well known in the art. Typically, the input/output devices 615 are coupled to the system through input/output controllers 613. The volatile RAM (Random Access Memory) 606 is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory.
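The challenge/response of blocks 504-506 can be sketched as below. The disclosure only states that the injector adds an X-Sign header to the echoed TRACE request; the HMAC-SHA256 signature scheme, key, and base64 encoding here are assumptions used to keep the sketch self-contained (a deployment would more plausibly use a public-key signature so peers can verify without sharing the key).

```python
import base64
import hashlib
import hmac


def answer_trace_challenge(echoed_request, injector_key):
    """Echo the peer's TRACE request (both arguments are bytes) and
    attach an X-Sign header so the peer can verify it reached a live
    injector."""
    digest = hmac.new(injector_key, echoed_request, hashlib.sha256).digest()
    headers = {"X-Sign": base64.b64encode(digest).decode("ascii")}
    return headers, echoed_request
```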
The mass storage 611 is typically a magnetic hard drive or a magnetic optical drive or an optical drive or a DVD RAM or a flash memory or other types of memory systems, which maintain data (e.g., large amounts of data) even after power is removed from the system. Typically, the mass storage 611 will also be a random access memory, although this is not required. While FIG. 6 shows that the mass storage 611 is a local device coupled directly to the rest of the components in the data processing system, it will be appreciated that the present invention may utilize a non-volatile memory which is remote from the system, such as a network storage device which is coupled to the data processing system through a network interface such as a modem, an Ethernet interface, or a wireless network. The bus 603 may include one or more buses connected to each other through various bridges, controllers, and/or adapters as is well known in the art. FIG. 7 shows an example of a data processing system, which may be used with one embodiment of the present invention. For example, system 700 may be implemented as a build system 76 as shown in FIG. 1 above. The data processing system 700 shown in FIG. 7 includes a processing system 711, which may be one or more microprocessors, or which may be a system on a chip integrated circuit, and the system also includes memory 701 for storing data and programs for execution by the processing system. The system 700 also includes an audio input/output subsystem 705, which may include a microphone and a speaker for, for example, playing back music or providing telephone functionality through the speaker and microphone. A display controller and display device 709 provide a visual user interface for the user; this digital interface may include a graphical user interface which is similar to that shown on a computer when running operating system software with a graphical user interface, or a smartphone when running an operating system with a graphical user interface, etc.
The system 700 also includes one or more wireless transceivers 703 to communicate with another data processing system, such as the system 700 of FIG. 7. A wireless transceiver may be a WLAN transceiver, an infrared transceiver, a Bluetooth transceiver, and/or a wireless cellular telephony transceiver. It will be appreciated that additional components, not shown, may also be part of the system 700 in certain embodiments, and in certain embodiments fewer components than shown in FIG. 7 may also be used in a data processing system. The system 700 further includes one or more communications ports 717 to communicate with another data processing system, such as the system 900 of FIG. 9. The communications port may be a USB port, Firewire port, Bluetooth interface, etc. The data processing system 700 also includes one or more input devices 713, which are provided to allow a user to provide input to the system. These input devices may be a keypad or a keyboard or a touch panel or a multi-touch panel. The data processing system 700 also includes an optional input/output device 715 which may be a connector for a dock. It will be appreciated that one or more buses, not shown, may be used to interconnect the various components as is well known in the art. The data processing system shown in FIG. 7 may be a handheld computer or a personal digital assistant (PDA), or a cellular telephone with PDA-like functionality, or a handheld computer which includes a cellular telephone, or a media player, such as an iPod, or devices which combine aspects or functions of these devices, such as a media player combined with a PDA and a cellular telephone in one device or an embedded device or other consumer electronic devices. In other embodiments, the data processing system 700 may be a network computer or an embedded processing device within another device, or other types of data processing systems, which have fewer components or perhaps more components than that shown in FIG. 7.
At least certain embodiments of the inventions may be part of a digital media player, such as a portable music and/or video media player, which may include a media processing system to present the media, a storage device to store the media and may further include a radio frequency (RF) transceiver (e.g., an RF transceiver for a cellular telephone) coupled with an antenna system and the media processing system. In certain embodiments, media stored on a remote storage device may be transmitted to the media player through the RF transceiver. The media may be, for example, one or more of music or other audio, still pictures, or motion pictures. Portions of what was described above may be implemented with logic circuitry such as a dedicated logic circuit or with a microcontroller or other form of processing core that executes program code instructions. Thus processes taught by the discussion above may be performed with program code such as machine-executable instructions that cause a machine that executes these instructions to perform certain functions. In this context, a “machine” may be a machine that converts intermediate form (or “abstract”) instructions into processor specific instructions (e.g., an abstract execution environment such as a “virtual machine” (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or, electronic circuitry disposed on a semiconductor chip (e.g., “logic circuitry” implemented with transistors) designed to execute instructions such as a general-purpose processor and/or a special-purpose processor. Processes taught by the discussion above may also be performed by (in the alternative to a machine or in combination with a machine) electronic circuitry designed to perform the processes (or a portion thereof) without the execution of program code. The present invention also relates to an apparatus for performing the operations described herein. 
This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; etc. An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards or other types of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)). The preceding detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the tools used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be kept in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining,” “attempting,” “receiving,” “downloading,” “finding,” “creating,” “generating,” “removing,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations described. The required structure for a variety of these systems will be evident from the description below. In addition, the present invention is not described with reference to any particular programming language. 
It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the invention.
11943289

DETAILED DESCRIPTION Before the present methods and systems are disclosed and described, it is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not. Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” mean “including but not limited to,” and are not intended to exclude, for example, other components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes. Disclosed are components that can be used to perform the disclosed methods and systems.
These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc. of these components are disclosed, while specific reference to each of the various individual and collective combinations and permutations of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, steps in disclosed methods. Thus, if there are a variety of additional steps that can be performed, it is understood that each of these additional steps can be performed with any specific embodiment or combination of embodiments of the disclosed methods. The present methods and systems may be understood more readily by reference to the following detailed description of preferred embodiments and the examples included therein and to the Figures and their previous and following description. As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices. Embodiments of the methods and systems are described below with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses, and computer program products.
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions.
It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. The present disclosure relates to providing content to a plurality of users. Specifically, the present disclosure describes methods and systems for conserving bandwidth or accomplishing other aims by altering content transmissions and/or redirecting users to alternate content transmissions. A content provider can provide, for example, several content transmissions for particular content. At least one (or each) of the content transmissions can be provided at a different bit rate than the other content transmissions. The content provider may desire to, at least temporarily, end or modify a particular content transmission. As another example, the content provider may desire to cause a user to access a different or modified content transmission than the content transmission requested by a user. As an illustration, the content transmission can comprise a multicast content stream. If the number of users accessing, requesting, and/or otherwise receiving the content transmission falls below a threshold, then the content provider can provide a content transmission that is different than the requested content transmission. For example, the content provider can provide a content transmission at a different bit rate than the requested content transmission. FIG. 1 is a block diagram illustrating an example system for providing content. Those skilled in the art will appreciate that present methods may be used in systems that employ both digital and analog equipment.
One skilled in the art will appreciate that provided herein is a functional description and that the respective functions can be performed by software, hardware, or a combination of software and hardware. In an exemplary embodiment, the methods and systems disclosed can be located within one or more content devices and/or user devices. For example, the content device can be configured to provide, alter, end, and/or otherwise manage content transmissions and the users accessing, requesting, and/or receiving such content transmissions. In one aspect, the system 100 can comprise a content device 102 configured to provide content to a variety of devices across a network 104. For example, the content device 102 can provide content as one or more content transmissions. A content transmission can comprise a content stream, file transfer, combination thereof, and/or the like. The one or more content transmissions can be multicast content transmissions, broadcast content transmissions, unicast content transmissions, and/or other types of transmissions. A multicast content transmission can comprise a content transmission that is provided to one or more users at the same time. A multicast transmission can comprise a content transmission to a group of addresses. As an example, the content device 102 can provide the same content with one or more multicast content transmissions at different bit rates. A broadcast content transmission can comprise a content transmission to all possible destinations. In one aspect, the system 100 can comprise a first group 106 of user devices. The first group 106 of user devices can comprise a first number of user devices (e.g., illustrated as small circles within the group). Each of the user devices of the first group 106 of user devices can access (e.g., tune to) a first content transmission. The first content transmission can be provided by the content device 102 at a first bit rate. In one aspect, the system 100 can comprise a second group 108 of user devices.
The second group 108 of user devices can comprise a second number of user devices. Each of the user devices of the second group 108 of user devices can access (e.g., tune to) a second content transmission. The second content transmission can be provided by the content device 102 at a second bit rate. In one aspect, the system 100 can comprise a third group 110 of user devices. The third group 110 of user devices can comprise a third number of user devices. Each of the user devices of the third group 110 can access (e.g., tune to) a third content transmission. The third content transmission can be provided by the content device 102 at a third bit rate. In one aspect, the system 100 can comprise a fourth group 112 of user devices. The fourth group 112 of user devices can comprise a fourth number of user devices. Each of the user devices of the fourth group 112 can access (e.g., tune to) a fourth content transmission. The fourth content transmission can be provided by the content device 102 at a fourth bit rate. In one aspect, the system 100 can comprise a fifth group 114 of user devices. The fifth group 114 of user devices can comprise a fifth number of user devices. Each of the user devices of the fifth group 114 can access (e.g., tune to) a fifth content transmission. The fifth content transmission can be provided by the content device 102 at a fifth bit rate. In one aspect, various users and/or devices can be associated with the same bit rate or different bit rates as other users and/or devices. A bit rate can be associated with a user and/or device based on the screen size, processing characteristics, memory characteristics, storage, and/or the like of the device. A bit rate can be associated with a user and/or device based on the device, user location, network connection (e.g., network bandwidth), and/or the like. For example, a mobile device (e.g., smart phone) can be configured for a different bit rate than a stationary device (e.g., computing station, television).
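The grouping above, where each group is tuned to a transmission at one bit rate, can be sketched as a simple assignment rule. The bit-rate ladder reuses the example rates given elsewhere in this description; the selection rule (highest sustainable rate, falling back to the lowest rung) is an assumption for illustration.

```python
BIT_RATES_KBPS = [20000, 12000, 7000, 3000, 1000, 400]  # example ladder


def assign_group(device_bandwidth_kbps):
    """Associate a device with the highest bit rate its network
    connection can sustain; otherwise fall back to the lowest rung."""
    for rate in BIT_RATES_KBPS:
        if device_bandwidth_kbps >= rate:
            return rate
    return BIT_RATES_KBPS[-1]
```

A smart phone on a cellular link and a television on a wired link would thus land in different groups, consistent with the text.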
In one aspect, the user devices can be configured to switch from one group to another. For example, the user devices can be configured to switch from one content transmission to another content transmission. In one aspect, the content device 102 can be configured to receive requests for the content transmissions from the user devices. It should be noted that the number of user devices in any given group (e.g., the first number, second number, third number, fourth number, and fifth number) can vary as user devices request the content transmissions and/or cease requesting and/or accessing the content transmissions. As explained in further detail herein, the present methods and systems can allow the content device 102 to provide a particular content transmission even though a different content transmission is requested. The content device 102 can be configured to switch a user device from one group of users to another by instructing the user device to switch to a content transmission other than the one the user device is currently accessing, requesting, and/or receiving. In another aspect, the content device 102 can be configured to switch a user device from one content transmission to another without notifying the user device. As another example, the content device 102 can be configured to alter the properties, parameters, and/or the like of a content transmission while a user device is accessing, requesting, and/or receiving the content transmission. FIG. 2 is a block diagram illustrating another system 200 for providing content. The system can comprise, be a part of, and/or implement all or a portion of the system 100 of FIG. 1. In one aspect, the system 200 can comprise a first device 202 configured to provide content, such as video, audio, images, text, and/or the like.
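The content device's group bookkeeping, tracking which user devices receive which transmission and moving a device between transmissions, can be sketched as follows (the class and method names are illustrative, not from the disclosure):

```python
class TransmissionGroups:
    """Track which user devices are tuned to which content transmission,
    keyed by bit rate."""

    def __init__(self, rates):
        self.groups = {rate: set() for rate in rates}

    def join(self, device_id, rate):
        """A device requests and begins receiving a transmission."""
        self.groups[rate].add(device_id)

    def switch(self, device_id, old_rate, new_rate):
        """Move a device to another transmission, with or without
        notifying the device, per the text."""
        self.groups[old_rate].discard(device_id)
        self.groups[new_rate].add(device_id)

    def count(self, rate):
        """Number of devices currently receiving a transmission."""
        return len(self.groups[rate])
```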
As a further example, the content can comprise a live content channel, a content item (e.g., show, program, newscast, sportscast, episode, movie, song), and/or the like. The first device 202 can comprise the content device 102 of FIG. 1. The first device 202 can comprise a computing device, such as a server, termination system (e.g., cable modem termination system), and/or other similar device. For example, the first device 202 can provide the content across a network 204 to one or more devices (e.g., the third device 220). In one aspect, the network 204 can comprise a packet switched network (e.g., internet protocol based network), a non-packet switched network (e.g., quadrature amplitude modulation based network), and/or the like. The network 204 can comprise network adapters, switches, routers, modems, and the like connected through wireless links (e.g., radio frequency, satellite) and/or physical links (e.g., fiber optic cable, coaxial cable, Ethernet cable, or a combination thereof). In one aspect, the network 204 can be configured to provide communication from telephone, cellular, modem, and/or other electronic devices to and throughout the system 200. The network 204 can be configured to transmit data (e.g., content) by unicast, multicast, broadcast, and/or the like. For example, a unicast transmission can comprise a transmission to one unique address. A multicast transmission can comprise a transmission to a group of addresses. A broadcast transmission can comprise a transmission to all possible destinations. In one aspect, the first device 202 can comprise a content unit 206 configured to provide content. For example, the content unit 206 can be configured to provide content as one or more unicast streams, multicast streams, broadcast streams, file transfers, and/or the like.
In one aspect, the content unit206can be configured to provide the content at one or more bit rates, from one or more locations (e.g., edge servers), according to one or more encoding schemes (e.g., audio codec, video codec), and/or the like. For example, particular content (e.g., content channel, content item) can be provided as one or more content transmissions (e.g., content stream, file transfer). At least one (or each) content transmission can be provided at a different bit rate than other content transmissions of the particular content. An example bit rate can comprise 20 megabits per second (mbps), 12 mbps, 7 mbps, 3 mbps, 1 mbps, 400 kilobits per second (kbps), and/or the like. In one aspect, the content unit206can be configured to provide content transmissions as one or more differential content transmissions. A differential content transmission can comprise a content transmission that can be combined with one or more other content transmissions to form content at a specified bit rate. For example, a differential content transmission can be based on scalable video coding (SVC). In one aspect, the content unit206can be configured to provide different content transmissions to different network segments. For example, the content unit206can provide a first set of content transmissions to devices in a first multicast domain (e.g., defined by a grouping of network addresses). The content unit206can provide a second set of content transmissions to devices in a second multicast domain. For example, the first set of content transmissions can comprise at least one content transmission that is different from and/or not included in the second set of content transmissions. In one aspect, the first device202can comprise a request unit208configured to process requests. For example, a user (e.g., device) can request content from the first device202. The request unit208can receive the request and identify content to fulfill the request.
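As an illustrative sketch only (the function name and the fallback rule are assumptions, not part of the described system), resolving a requested bit rate against the example ladder of provided rates might look like:

```python
# Example bit-rate ladder from the text: 20 mbps down to 400 kbps.
BIT_RATES_KBPS = [20000, 12000, 7000, 3000, 1000, 400]

def resolve_bit_rate(requested_kbps, ladder=BIT_RATES_KBPS):
    """Return the highest provided rate not exceeding the request, or the
    lowest rung when the request falls below every provided rate."""
    candidates = [rate for rate in ladder if rate <= requested_kbps]
    return max(candidates) if candidates else min(ladder)
```

A request for 15 mbps would resolve to the 12 mbps transmission under this sketch, while a request below 400 kbps would fall back to the 400 kbps rung.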
For example, the request can comprise a content identifier configured to identify a specific content location, a collection of content, and/or the like. The content identifier can comprise a universal resource identifier, such as a Hypertext Transfer Protocol link, and/or the like. The content identifier can be indicative of a program, show, episode, movie, interactive content, song, and/or the like. The content identifier can be indicative of a content transmission, such as a content stream, video on demand, file transfer, and/or the like. In one aspect, the request unit208can be configured to provide alternate content based on a request for content. For example, a user (e.g., device) can request first content. The request unit208can be configured to provide second content in response to a request for the first content. The second content can be identified as the first content. For example, the second content can comprise one or more identifiers used for identifying the first content. As another example, the second content can be associated with a uniform resource locator that is identified as locating the first content. In one aspect, the request for the first content can be a request for a content transmission at a first bit rate. The second content can comprise a content transmission (e.g., comprising the same content channel, content source, and/or content item as the first content transmission) at a second bit rate. In one aspect, the first device202can comprise a migration unit210configured to migrate one or more devices from one content transmission to another content transmission. For example, the migration unit210can select one or more devices for migration based on a parameter associated with a content transmission. For example, the parameter can comprise a number of devices that are at least one of accessing, requesting, and/or receiving a content transmission.
The parameter can comprise a type of device, class of device, device history, user information (e.g., subscription tier, preferences, history), characteristic associated with a device that is at least one of accessing, requesting, and/or receiving a content transmission, and/or the like. The parameter can comprise information indicative of device buffer, connection speed, and/or the like. For example, it can be determined how long a device takes to download a data block, what data block the device is currently requesting, whether the device is ready and/or likely (e.g., probability) to switch to a higher or lower content transmission, amount of data in a device buffer, and/or the like. The migration unit210can analyze the parameter. For example, the migration unit210can compare the parameter to a threshold. If the parameter is above, below, and/or equal to the threshold, then the migration unit210can migrate one or more devices from one content transmission to another. As an illustration, if the number of devices accessing content (e.g., movie, show, channel) from a 20 mbps content stream is below a threshold, then the devices can be migrated from the 20 mbps content stream to a content stream providing the content with a lower bit rate, such as a 12 mbps content stream. In one aspect, migration of a device from one content transmission to another content transmission can comprise performing one or more actions resulting in a device requesting, accessing, and/or receiving a second content transmission instead of a first content transmission. For example, migration can comprise providing a second content transmission in response to a request for a first content transmission. The second content transmission can be identified as the first content transmission. For example, the second content transmission can comprise one or more identifiers used for identifying the first content transmission.
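The threshold comparison and the 20 mbps to 12 mbps illustration above can be sketched as follows; the threshold value, the names, and the next-lower-rung rule are illustrative assumptions rather than part of the described system:

```python
# Example ladder of provided bit rates, highest first.
LADDER_KBPS = [20000, 12000, 7000, 3000, 1000, 400]

def migration_target(current_kbps, device_count, threshold=10, ladder=LADDER_KBPS):
    """Return the bit rate to migrate devices to, or None to keep the stream."""
    if device_count >= threshold:
        return None  # enough viewers; keep providing the current stream
    lower = [rate for rate in ladder if rate < current_kbps]
    return max(lower) if lower else None  # nearest lower rung, e.g. 20000 -> 12000
```

With a viewer count below the threshold, devices on the 20 mbps stream would be pointed at the 12 mbps stream; on the lowest rung there is nowhere lower to go, so the stream is kept.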
As another example, the second content transmission can be associated with a uniform resource locator that is identified as locating the first content transmission. Migration can comprise altering, adjusting, and/or modifying a property of a first content transmission. For example, the property can comprise a bit rate, a source location (e.g., edge device), encoding scheme (e.g., audio codec, video codec), and/or the like. As an illustration, a first bit rate of the first content transmission can be changed to a second bit rate. Migration can comprise ending transmission of a first content transmission, thereby causing a device to request, access, and/or receive a second content transmission instead of the first content transmission. Migration can comprise instructing a device to access, request, and/or receive a second content transmission instead of the first content transmission. Migration can comprise instructing a device to stop accessing, requesting, and/or receiving a first content transmission (e.g., allowing the device to select another content transmission). The instruction can comprise a time (e.g., current time, future time) to fulfill the instruction. In one aspect, migration from the first content stream to the second content stream can be accomplished seamlessly (e.g., without substantial interruption to the viewer). For example, a user can continue to receive the first content stream while the second content stream is buffered by a user device. In one aspect, the migration unit210can select one or more devices for migration based on one or more characteristics associated with the device. In another aspect, the migration unit210can select a content transmission for the device to migrate to based on one or more characteristics associated with the device.
An example characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, a client account feature (e.g., class of user, subscription tier, user preference, user history), and/or the like. The characteristic can comprise a measurement of a buffer of the device accessing, requesting, and/or receiving a content transmission. For example, measurement of the buffer can be indicative of network congestion, bandwidth, and/or other characteristics of the device and/or network associated with a device. As a further example, if the buffer is above a threshold, then the migration unit210can select the device for migration to a first content transmission. If the buffer is below a threshold, then the migration unit210can select the device for migration to a second content transmission. The first content transmission can have a higher bit rate than the second content transmission. In one aspect, the migration unit210can determine the size of a buffer of each (or at least one) of a plurality of devices requesting, accessing, and/or receiving a content transmission. The migration unit210can determine whether to end the content transmission based on the determination of the size of the buffers of the plurality of devices. The migration unit210can determine the number of the plurality of devices that are struggling to maintain the bandwidth to access the content transmission. If the number is above a first threshold, then the migration unit210can determine to initiate migration of one or more of the plurality of devices to a first (e.g., lower bit rate) content transmission. If the number is below a second threshold, then the migration unit210can determine to continue providing the content transmission, migrate one or more of the plurality of devices to a second (e.g., higher bit rate) content transmission, and/or the like.
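The buffer-based selection described above might be sketched as follows; the watermark values and the one-rung step rule are assumptions for illustration only:

```python
# Example ladder of provided bit rates, sorted high to low.
LADDER_KBPS = [20000, 12000, 7000, 3000, 1000, 400]

def buffer_based_target(current_kbps, buffer_seconds,
                        high_mark=20.0, low_mark=5.0, ladder=LADDER_KBPS):
    """Pick a migration target from the device's buffer level."""
    idx = ladder.index(current_kbps)
    if buffer_seconds > high_mark and idx > 0:
        return ladder[idx - 1]  # ample buffer: migrate to a higher bit rate
    if buffer_seconds < low_mark and idx < len(ladder) - 1:
        return ladder[idx + 1]  # struggling device: migrate to a lower bit rate
    return current_kbps         # otherwise stay on the current transmission
```

A device on the 12 mbps transmission with a deep buffer would be selected for the 20 mbps transmission, and one with a starved buffer for the 7 mbps transmission, mirroring the above-threshold and below-threshold cases in the text.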
In one aspect, the first device202can comprise an instruction unit212configured to provide one or more instructions. For example, the instruction unit212can provide an instruction to a device that is at least one of accessing, requesting, and receiving a content transmission from the first device202. The instruction can comprise an instruction that a first content transmission will cease transmission according to specified timing information. The instruction can comprise an instruction to access a second content transmission instead of the first content transmission. The instruction unit212can provide the instruction in response to analysis of a parameter associated with the first content transmission. For example, the instruction unit212can provide the instruction in response to the parameter being above, below, and/or equal to a threshold. As a further example, the instruction unit212can provide the instruction based on the parameter crossing a threshold value. As an illustration, the parameter can comprise a number of users accessing the first content transmission. It should be noted that a threshold can comprise a predefined value, such as a number. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. In one aspect, one or more parameters can be compared to one or more thresholds. For example, a parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like.
In one aspect, the instruction unit212can be configured to receive a response to an instruction provided to a device, such as the third device220. For example, a device can provide a response to the instruction. In one aspect, the response can be a negative response to the instruction. A negative response can comprise one or more reasons, error codes, and/or the like indicating an undesirable consequence to the device if the device complies with the instruction. As an illustration, the instruction can comprise an instruction to switch from a first content transmission to a second content transmission. The device can determine that switching to the second content transmission is undesirable. For example, the device can determine that the device has insufficient bandwidth, processing power, memory, and/or the like for accessing the second content transmission. As another example, the device can determine that switching to the second content transmission can violate a user preference, subscription plan, and/or the like. For example, a user can provide a user override to prevent accessing the second transmission. As an illustration, a user may desire to maintain accessing the first transmission if the user device is recording content for later playback. In another aspect, the response from the device can comprise a positive response. For example, the response can indicate that the device will comply with the instruction. The response can comprise timing information indicative of a time when the device will comply with the decision. The response can comprise content information indicative of a content transmission that the device will access in response to the instruction from the instruction unit212. In one aspect, an option can be provided for the user to override switching to the second content stream. 
For example, a user interface (e.g., provided by interface unit222of the third device220) can provide an interface element, such as a dialog box, window, button, and/or the like to the user giving the user the option to override switching to the second content stream. The interface element can allow the user to cancel switching, switch back to the first content stream, switch to a different content stream, and/or the like. In one aspect, the instruction unit212can be configured to analyze the response from the device. The instruction unit212can determine whether to end or continue transmission of the first content transmission based on the response. For example, if the response is a negative response, the instruction unit212can be configured to continue transmission of the first content transmission. If the response is a positive response, the instruction unit212can be configured to end transmission of the first content transmission. In one aspect, the instruction unit212can be configured to determine (e.g., with downstream switches) if the device is accessing the second content transmission before ending transmission of the first content transmission. In another aspect, the instruction unit212can be configured to wait for, receive, and/or analyze additional instructions from the device, such as information indicating that the device is no longer accessing the first content transmission, is accessing the second content transmission (e.g., in addition to or instead of the first content transmission), and/or the like. In one aspect, the system200can comprise a second device214configured to provide content. In one aspect, the second device214can be communicatively coupled to the first device202through the network204. The second device214can comprise an encoding unit216configured to encode content. For example, the encoding unit216can be configured to compress, encrypt, and/or otherwise modify content.
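The continue-or-end decision described for the instruction unit212can be sketched as follows; representing each device's response as a simple string is an assumption made for illustration:

```python
def decide_transmission(responses):
    """Given per-device responses ('positive' / 'negative') to a switch
    instruction, decide whether to end the first content transmission."""
    responses = list(responses)
    if responses and all(r == "positive" for r in responses):
        return "end"      # every device will switch; safe to end transmission
    return "continue"     # any negative (or missing) response keeps it running
```

This matches the text's rule that a negative response keeps the first content transmission running, while positive responses allow it to be ended.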
In one aspect, the second device214can comprise a packaging unit218configured to package content. For example, the packaging unit218can comprise a just in time packager configured to package content as a plurality of fragments (e.g., in response to a request). As an illustration, the second device214can receive a content transmission, such as a live content stream. The encoding unit216can encode the content transmission. The packaging unit218can package the encoded content transmission. The second device214can provide the encoded packaged content transmission to the first device202and/or other devices (e.g., the third device220). The first device202can provide the encoded packaged content transmission to one or more other devices across the network. As another example, the first device202can direct other devices (e.g., the third device220) to access, request, and/or receive content from the second device214. In an aspect, the system200can comprise a third device220. The third device220can be communicatively coupled to the first device202, second device214, and/or other device through the network204. In one aspect, the third device220can comprise an edge device. In another aspect, the third device220can comprise a user device. The third device220can be configured to provide content, services, information, applications, and/or the like to one or more users. For example, the third device220can comprise a computer, a smart device (e.g., smart phone, smart watch, smart glasses, smart apparel, smart accessory), a laptop, a tablet, a set top box, a display device (e.g., television, monitor), digital streaming device, proxy, gateway, transportation device (e.g., on board computer, navigation system, vehicle media center), sensor node, and/or the like. In one aspect, the third device220can comprise an interface unit222configured to provide an interface to a user to interact with the third device220and/or remote devices, such as the first device202.
The interface unit222can comprise any interface for presenting and/or receiving information to/from the user, such as user feedback. An example interface can comprise a content viewer, such as a web browser (e.g., Internet Explorer®, Mozilla Firefox®, Google Chrome®, Safari®, or the like), media player, application (e.g., web application), mobile application, and/or the like. Other software, hardware, and/or interfaces can be used to provide communication between the user and one or more of the third device220and the first device202. In an aspect, the third device220can comprise a communication unit224. As an example, the communication unit224can request or query various files and/or content transmissions from a local source and/or a remote source. As a further example, the communication unit224can transmit and/or receive data to a local or remote device such as the first device202. The communication unit224can comprise hardware and/or software to facilitate communication. For example, the communication unit224can comprise one or more of a modem, transceiver (e.g., wireless transceiver), digital-to-analog converter, analog-to-digital converter, encoder, decoder, modulator, demodulator, tuner (e.g., QAM tuner, QPSK tuner), and/or the like. In one aspect, the communication unit224can be configured to allow one or more remote devices (e.g., in a local or remote portion of the network204) to control operation of the third device220. In one aspect, the communication unit224can be configured to receive an instruction from another device, such as the first device202, second device214, and/or the like. The instruction can comprise an instruction from the instruction unit212of the first device202. The instruction can comprise timing information, alternate content information, modification information, and/or the like. The timing information can comprise a time and/or date when a content transmission will be modified, will cease transmission, and/or the like.
For example, the instruction can comprise an instruction that a first content transmission will cease transmission according to the timing information. The modification information can comprise information indicating one or more modifications that will be made to a content transmission at a future time. For example, the modification information can comprise a bit rate to which the content transmission will be adjusted, content that will be altered and/or supplied through the content transmission, and/or the like. The instruction can comprise an instruction that the transmission will be modified according to the modification information. The alternate content information can comprise an alternate content transmission to access (e.g., instead of the current content transmission). For example, the instruction can comprise an instruction to access an alternate content transmission within a time period specified by the timing information. In one aspect, the third device220can comprise a response unit226configured to analyze the instruction received by the communication unit224. The response unit226can be configured to make a determination based on the instruction. For example, the response unit226can be configured to make the determination based on the timing information, alternate content information, modification information, and/or the like. In one aspect, the response unit226can be configured to determine a response to the instruction. For example, the response unit226can be configured to provide a negative response, positive response, no response, and/or the like based on the instruction. The negative response can comprise the negative response described herein. The positive response can comprise the positive response described herein. In one aspect, the third device220can comprise a recording unit228configured to record content from a content transmission. The recording unit228can be configured to detect one or more content transmissions received by the third device.
The recording unit228can be configured to determine whether to record at least a portion of the content transmission. In one aspect, the recording unit228can determine to record at least a portion of the content transmission based on a prediction (e.g., probability) that a user will request access to the content transmission and/or a recording thereof. For example, the recording unit228can determine whether to record at least a portion of the content transmission based on viewing history, an indication or instruction from a user, social media information, user recommendations, content recording history, and/or the like. The prediction can be based on a user preference, social media information, geographic information (e.g., user location), demographics (e.g., gender, ethnicity, age), behavior of a group of users (e.g., social media contacts, address book contacts, users similar in geographic information and/or demographic information), and/or the like. In one aspect, the recording unit228can be configured to combine the recording of at least a portion of the content transmission with another content transmission. For example, the recorded portion of the content transmission can be at a first bit rate. If a user requests the content transmission at a second bit rate that is higher than the first bit rate, then the recording unit228can combine the recorded portion of the content transmission with a content transmission comprising remaining portions of the content transmission sufficient to generate an instance of the content transmission at the bit rate requested by a user. As an illustration, the recording unit228can record a first differential content transmission (e.g., that is provided based on scalable video coding). If a user requests a content transmission at a second bit rate, the third device220can be configured to request a second differential content transmission. 
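One way to picture assembling differential (SVC-style) transmissions into a requested bit rate, under the assumption (made only for this sketch) that each enhancement layer contributes an additive bit rate on top of the recorded portion:

```python
def layers_needed(recorded_kbps, requested_kbps, enhancement_layers):
    """Pick enhancement layers (smallest first) until the recorded portion
    plus enhancements reach the requested rate; return (layers, total)."""
    chosen, total = [], recorded_kbps
    for layer_kbps in sorted(enhancement_layers):
        if total >= requested_kbps:
            break  # recorded portion plus chosen layers already suffice
        chosen.append(layer_kbps)
        total += layer_kbps
    return chosen, total
```

For instance, a recorded 3 mbps base combined with a 4 mbps enhancement layer would satisfy a 7 mbps request, so only the differential transmission for that layer needs to be requested.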
The recording unit228can be configured to combine the first differential content transmission, second differential content transmission, and/or the like to generate a content transmission at the requested second bit rate. FIG.3is a flowchart illustrating an example method300for providing content. At step302, a request for a first content transmission (e.g., first multicast content transmission) can be received. The request can be received by a provider (e.g., device managed by the provider), such as a content provider, service provider, and/or the like. For example, the request can be a request by a user and/or a first device, such as an edge device, user device, and/or the like. The first content transmission can comprise a content stream, file transfer, and/or the like. At step304, a parameter related to a first content transmission can be determined. The parameter can be determined by the provider (e.g., a device managed by the provider). The parameter can comprise a number of users at least one of accessing, requesting, and receiving the first content transmission. The parameter can be based on a measurement of a buffer of at least one device (e.g., first device, second device) accessing the first content transmission. For example, the measurement of the buffer of the at least one device can be indicative of at least one of a bandwidth, memory, processing capacity, and/or the like of the at least one device. The parameter can comprise and/or be indicative of a number of users accessing, requesting, and/or receiving the first content transmission within a specified group. For example, the group can be specified by device type, location, multicast downstream group, and/or the like. In one aspect, the parameter can comprise more than one value. In another aspect, the parameter can be used and/or determined with one or more additional parameters. At step306, the parameter can be compared to a threshold.
For example, the threshold can comprise a predefined value, such as a number. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. It can be determined if the parameter is above, below, and/or equal to the predefined number. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. At step308, a second content transmission (e.g., second multicast content transmission) can be determined based on the comparison to the threshold. The second content transmission can be determined by the provider (e.g., device managed by the provider). The second content transmission can comprise a content stream, file transfer, and/or the like. For example, if the parameter is above, below, and/or equal to the threshold, then the second content transmission can be determined. In one aspect, the second content transmission can also be determined based on a characteristic associated with a device (e.g., first device) that is at least one of requesting, accessing, and receiving the first content transmission. The second content transmission can be selected for the device (e.g., based on the characteristic). The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, a client account feature, and/or the like. At step310, the second content transmission can be provided.
For example, the second content transmission can be provided (e.g., by the provider) to the device (e.g., first device). The second content transmission can be provided in response to the request (e.g., from the first device). In one aspect, the second content transmission can be identified as the first content transmission when the second content transmission is provided in response to the request. For example, the second content transmission can be identified by the same identifier (e.g., network identifier, content identifier, location identifier) as the first content transmission. In another aspect, the second content transmission can be provided instead of the first content transmission. The first content transmission can comprise content at a first bit rate. The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate. As an illustration, if the characteristic indicates that the device is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. At step312, transmission of the first content transmission can be terminated (e.g., ceased, brought to an end). For example, a device providing the first content transmission (e.g., a device managed by the provider) can discontinue providing the first content transmission across a network. As another example, the transmission can be discontinued for one or more multicast downstream groups. In some scenarios, one or more second downstream transmission groups can continue to receive the first content transmission. FIG.4is a flowchart illustrating another example method400for providing content.
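Before turning to method400, the threshold-driven flow of method300(steps302through312) can be condensed into an illustrative sketch; all names and the below-threshold decision rule are assumptions for illustration:

```python
def method_300(requested_stream, viewer_count, threshold, alternate_stream):
    """Return (stream_to_provide, first_stream_terminated)."""
    # Steps 304-306: determine the parameter (viewer count) and compare
    # it to the threshold.
    if viewer_count < threshold:
        # Steps 308-312: determine and provide the second transmission in
        # place of the first, then terminate the first transmission.
        return alternate_stream, True
    # Enough demand: fulfill the request with the first transmission.
    return requested_stream, False
```

Because the second transmission is returned under the requesting device's original identifier, the device need not know it has been switched.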
At step402, a request for a first content transmission (e.g., first multicast content transmission) having a first bit rate can be received. For example, the request can be a request from a user and/or a device (e.g., first device), such as an edge device, user device, and/or the like. The first content transmission can comprise a content stream, file transfer, and/or the like. At step404, a parameter related to the first content transmission having the first bit rate can be determined. The parameter can comprise a number of devices accessing, requesting, and/or receiving the first content transmission. The parameter can be based on a measurement of a buffer of at least one device accessing the first content transmission. For example, the measurement of the buffer of the at least one device can be indicative of at least one of a bandwidth, memory, processing capacity, and/or the like of the at least one device. In one aspect, the parameter can comprise more than one value. In another aspect, the parameter can be used with one or more additional parameters. At step406, the parameter can be compared to a threshold (e.g., or otherwise analyzed). For example, the threshold can comprise a predefined value, such as a number. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. It can be determined if the parameter is above, below, and/or equal to the predefined number. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds.
A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. At step408, the first bit rate can be adjusted to a second bit rate based on the comparison of the parameter to the threshold (e.g., or based on other analysis of the parameter). The first bit rate can be adjusted to a second bit rate based on a characteristic associated with a device (e.g., first device) that is at least one of requesting, accessing, and receiving the first content transmission. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature. The first bit rate can be different than the second bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate. As an illustration, if the characteristic indicates that the device (e.g., first device) is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. In another aspect, adjusting the first bit rate to a second bit rate can comprise merging the first content transmission with a second content transmission having the second bit rate. At step410, the first content transmission having the second bit rate can be provided. For example, the first content transmission having the second bit rate can be provided (e.g., to the device) in response to the request. The first content transmission having the second bit rate can be identified as the first content transmission having the first bit rate. 
For example, the first content transmission can be identified by the same identifier (e.g., network identifier, content identifier, location identifier) as the first content transmission having the second bit rate. The first content transmission having the second bit rate can be provided instead of the first content transmission having the first bit rate. FIG.5is a flowchart illustrating another example method500for providing content. At step502, a first content transmission (e.g., first multicast content transmission) can be requested. For example, the request can be a request by a user and/or a device, such as an edge device, user device, and/or the like. The first content transmission can comprise a content stream, file transfer, and/or the like. At step504, the first content transmission can be received. For example, the first content transmission can be received by a user and/or a device, such as an edge device, user device, and/or the like. At step506, an instruction related to the first content transmission can be received. The instruction can comprise an instruction that the first content transmission will cease transmission according to specified timing information. For example, the timing information can comprise a time, date, time stamp, and/or time duration. The timing information can indicate when the first content transmission will cease to be transmitted. The timing information can indicate when a device should cease accessing, requesting, and/or receiving the first content transmission. The timing information can indicate when a device should access another content transmission, such as a second content transmission (e.g., second multicast content transmission). The instruction can comprise an instruction to access the second content transmission instead of the first content transmission.
The instruction can be received in response to analysis of a parameter, such as a number of users accessing the first content transmission being above or below a threshold. In one aspect, the parameter can comprise one or more values. In another aspect, the parameter can be used with one or more additional parameters. The instruction can be received based on a number of users accessing, requesting, and/or receiving the first content transmission crossing a threshold value. The threshold can comprise predefined values. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. It can be determined if the parameter is above, below, and/or equal to the predefined number. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. At step508, a determination can be made based on the instruction. For example, it can be determined whether to switch to another content transmission. For example, it can be determined whether to switch to the second content transmission (e.g., second multicast content transmission). In one aspect, the determination can be made based on a characteristic of a device receiving the instruction. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature (e.g., user preference, subscription tier).
The characteristic can comprise a measurement of a buffer of the device accessing the first content transmission. If it is determined to switch to another content transmission, then the method500can proceed to step514. If it is determined not to switch to another content transmission, then the method500can proceed to step510. At step510, permission to continue access to the first content transmission can be requested. For example, a device receiving the instruction can request permission to continue access to the first content transmission. The device can request permission based on the characteristic of the device. For example, the request can comprise the characteristic and/or information indicative of the characteristic. For example, the characteristic can be indicative of at least one of a bandwidth, memory, processing capacity, and/or the like of the device. At step512, permission to continue access to the first content transmission can be received. For example, permission can be received from a device providing the first content transmission, an intermediary device, and/or the like. In some implementations, the method500can proceed to step514. For example, the second content transmission can comprise the first content transmission. At step514, the second content transmission can be requested based on the instruction. For example, requesting the second content transmission based on the instruction can comprise determining the second content transmission based on a characteristic associated with a device that is receiving the first content transmission. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature. The characteristic can comprise a measurement of a buffer of the device accessing the first content transmission. The first content transmission can comprise content at a first bit rate. 
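The client-side decision of steps 506-514 can be sketched as follows. This is a hypothetical illustration: the instruction's dictionary fields and the bandwidth test are assumptions, and a real implementation could weigh any of the characteristics listed above.

```python
# Hypothetical sketch of steps 506-514; field names and the bandwidth
# comparison are illustrative assumptions, not the disclosed method.
def handle_instruction(instruction, device_bandwidth_kbps):
    """Step 508: decide whether to switch to the second transmission
    (step 514) or request permission to continue on the first
    (step 510), based on a characteristic of the device."""
    required = instruction["second_bit_rate_kbps"]
    if device_bandwidth_kbps >= required:
        return "request_second_transmission"   # step 514
    return "request_permission_to_continue"    # step 510

instruction = {"cease_at": "2014-01-01T12:00:00Z",
               "second_bit_rate_kbps": 4000}
print(handle_instruction(instruction, device_bandwidth_kbps=6000))
print(handle_instruction(instruction, device_bandwidth_kbps=2000))
```

A device that cannot sustain the second transmission's bit rate thus falls through to the permission path of steps 510-512 rather than switching blindly.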
The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate. As an illustration, if the characteristic indicates that the device is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. At step516, the second content transmission can be received. For example, the second transmission can be received by the device requesting the second content transmission. FIG.6is a flowchart illustrating another example method600for providing content. At step602, a first content transmission (e.g., first multicast content transmission) can be requested. The first content transmission can be requested by a device (e.g., edge device, user device). The first content transmission can comprise a content stream, file transfer, and/or the like. At step604, a second content transmission (e.g., second multicast content transmission) can be received. The second content transmission can be received by the device. The second content transmission can comprise a content stream, file transfer, and/or the like. The second content transmission can be received in response to the requesting of the first content transmission. The second content transmission can be received instead of the first content transmission based on a parameter, such as a number of users accessing the first content transmission. In one aspect, the parameter can comprise one or more values. In another aspect, the parameter can be used with one or more additional parameters. 
For example, the second content transmission can be received instead of the first content transmission based on the parameter (e.g., number of users accessing the first content transmission) being below, above, and/or equal to a threshold or other analysis of the parameter. The threshold can comprise a predefined value, such as a number. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. It can be determined if the parameter is above, below, and/or equal to the predefined number. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. The second content transmission can be identified as the first content transmission when the second content transmission is received in response to the requesting of the first content transmission. For example, the second content transmission can be identified by the same identifier (e.g., network identifier, content identifier, location identifier) as the first content transmission. As another example, the device can be unaware that the second content transmission is being received instead of the first content transmission. In one aspect, the second content transmission can be received based on a characteristic associated with a device that is requesting the first content transmission. The second content transmission can be selected for and provided to the device. 
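The transparent substitution described above (method 600) can be sketched from the server side as follows. The variant table and the user-count policy are assumptions made for illustration; the point shown is only that the response reuses the requested identifier.

```python
# Server-side sketch of method 600; the variant table and policy are
# illustrative assumptions.
def serve(requested_id, user_count, threshold, variants):
    """Return a transmission labeled with the requested identifier,
    substituting a different-bit-rate variant when the audience for
    the requested transmission is below the threshold; the requesting
    device can remain unaware of the substitution."""
    chosen = variants["low"] if user_count < threshold else variants["high"]
    return {"content_id": requested_id, "bit_rate_kbps": chosen}

variants = {"low": 1500, "high": 3000}
response = serve("event-42", user_count=3, threshold=10, variants=variants)
print(response)
```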
For example, the characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature. The characteristic can be based on a measurement of a buffer of at least one device accessing the first content transmission. For example, the measurement of the buffer of the at least one device can be indicative of at least one of a bandwidth, memory, processing capacity, and/or the like of the at least one device. The first content transmission can comprise content at a first bit rate. The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate. As an illustration, if the characteristic indicates that the device is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. FIG.7is a flowchart illustrating another example method700for providing content. At step702, a first content transmission (e.g., first multicast content transmission) at a first bit rate can be received. The first content transmission can comprise a content stream, file transfer, and/or the like. For example, the first content transmission can comprise a video on demand content transmission. For example, the first content transmission can comprise a content item, such as a video, movie, show, program, episode, newscast, sportscast, and/or the like. In one aspect, the first content transmission can comprise a differential content transmission, such as a scalable video coding (SVC) based content transmission.
In one aspect, the first content transmission can be provided and/or received in response to a request (e.g., second request) from a first user at a first device. The first content transmission can be received at the first device, a second device, and/or other devices. For example, the first device can be in the same multicast domain as the second device. The first content transmission can comprise a multicast transmission. At step704, a probability that a user (e.g., second user associated with the second device) will request a content item can be determined. The probability can be based on a user history, such as a user viewing history, recording history, and/or the like. The probability can be based on a user preference, social media information, geographic information (e.g., user location), demographics (e.g., gender, ethnicity, age), behavior of a group of users (e.g., social media contacts, address book contacts, users similar in geographic information and/or demographic information), and/or the like. At step706, at least a portion of a content item can be recorded from the first content transmission. For example, recording at least the portion of the content item from the first content transmission can comprise recording the at least the portion of the content item in a recording buffer. As another example, recording at least the portion of the content item from the first content transmission can be performed in response to the probability being above a threshold. The threshold can comprise a predefined value, such as a number. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. At step708, a first request for the content item can be received. The first request for the content item can be received after the recording of at least the portion of the content item. 
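The probability-gated recording of steps 704-706 can be sketched as follows. Estimating the probability from viewing history is just one of the signals listed above, and all names here are hypothetical.

```python
# Sketch of steps 704-706; the history-based estimate is one simple
# assumption among the signals described above (preferences, social
# media, demographics, and/or the like).
def estimate_probability(watched_episodes, total_episodes):
    """Naive probability estimate from a user's viewing history."""
    if total_episodes == 0:
        return 0.0
    return watched_episodes / total_episodes

def maybe_record(probability, threshold, chunks, recording_buffer):
    """Step 706: record the content item's chunks from the first
    content transmission only when the probability that the user will
    request the item exceeds the threshold."""
    if probability > threshold:
        recording_buffer.extend(chunks)
        return True
    return False

buffer = []
p = estimate_probability(watched_episodes=8, total_episodes=10)
recorded = maybe_record(p, threshold=0.5, chunks=[b"seg0", b"seg1"],
                        recording_buffer=buffer)
print(recorded, len(buffer))
```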
The first request for the content item can be from the second user at the second device. At step710, at least one differential content transmission can be requested. For example, the second device can request the at least one differential content transmission in response to receiving the request for the content item. The at least one differential content transmission can comprise a scalable video coding (SVC) based content stream. The at least one differential content transmission can be configured to be combined with the recording of the at least a portion of the content item to form a copy of the content item at a second bit rate. The request for at least one differential content transmission can comprise a request for a first differential content transmission. In one aspect, a second differential content transmission can be received instead of the first differential content transmission. The second differential content transmission can be received instead of the first differential content transmission based on a parameter, such as a number of users at least one of accessing, requesting, and/or receiving the first differential content transmission. The second differential content transmission can be received based on a characteristic associated with a device (e.g., second device) requesting the first differential content transmission. The second differential content transmission can be selected for and provided to the device (e.g., second device). The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, a client account feature, and/or the like. The characteristic can be based on a measurement of a buffer of at least one device (e.g., first device, second device) accessing the first content transmission. In one aspect, the second bit rate can be higher than the first bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate.
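The layered combining described above, in which a recorded base plus differential transmissions form the content item at a second bit rate, can be sketched schematically. Real SVC combining operates on coded video units; the per-frame integers below are purely illustrative.

```python
# Schematic sketch of combining a recorded base layer with differential
# (SVC-like) enhancement layers; the integer "frames" stand in for coded
# video data and are not how SVC actually represents layers.
def combine_layers(base_layer, enhancement_layers):
    """Sum per-frame base values with the deltas carried by each
    differential transmission to reconstruct higher-quality frames."""
    frames = list(base_layer)
    for layer in enhancement_layers:
        frames = [f + d for f, d in zip(frames, layer)]
    return frames

base = [10, 10, 10]                     # recorded from the first transmission
enhancements = [[2, 3, 1], [1, 1, 1]]   # requested differential layers
print(combine_layers(base, enhancements))
```

With no enhancement layers the base recording plays as-is at the first bit rate; each additional differential transmission raises the effective quality.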
As an illustration, if the characteristic indicates that the device (e.g., second device) is associated with a bandwidth, memory, processing capacity, and/or the like such that the device (e.g., second device) is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. At step712, at least a portion of the at least one differential content transmission can be combined (e.g., at the second device) with the recording to form the content item. For example, the at least one differential content transmission can be configured to be combined with other differential content transmissions (e.g., of the at least one differential content transmission, or otherwise) to form the complete content item, and/or transmission thereof. At step714, the content item can be provided. For example, the content can be provided (e.g., by the second device) to a display device (e.g., television, screen, display) or other device (e.g., via a network or bus) for consumption by a user. FIG.8is a flowchart illustrating another example method800for providing content. At step802, a request for a first content transmission (e.g., first multicast content transmission) can be received from a device. The first content transmission can be provided to the device in response to the request. The first content transmission can comprise a content stream, file transfer, and/or the like. At step804, the device can be selected from a plurality of devices accessing, requesting, and/or receiving the first content transmission. For example, the device can be selected in response to a parameter, such as a number of users accessing the first content transmission reaching a threshold value. It can be determined if the parameter is above, below, and/or equal to the threshold. The threshold can comprise a predefined value, such as a number.
The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds. As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. The device can be selected based on a measurement of a buffer of the device. For example, the measurement of the buffer of the device can be indicative of at least one of a bandwidth, memory, processing capacity, and/or the like of the device. In one aspect, the device can be selected based on a characteristic associated with the device. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, a buffer availability, a client account feature, and/or the like. At step806, a second content transmission (e.g., second multicast content transmission) can be selected based on a characteristic associated with the selected device. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature. The second content transmission can be selected based on a measurement of a buffer of the device. At step808, the second content transmission can be provided to the selected device instead of the first content transmission.
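The device selection of steps 802-806 can be sketched as follows. The buffer-measurement field and the "most free buffer first" selection rule are illustrative assumptions standing in for the characteristics described above.

```python
# Sketch of steps 802-806; the buffer field name and the selection rule
# are hypothetical stand-ins for the characteristics listed above.
def select_device(devices, user_count, threshold):
    """Once the audience reaches the threshold (step 804), select the
    device whose buffer measurement suggests it can best absorb a
    switch to a second content transmission (step 806)."""
    if user_count < threshold:
        return None
    return max(devices, key=lambda d: d["buffer_free_ms"])

devices = [{"id": "stb-1", "buffer_free_ms": 200},
           {"id": "stb-2", "buffer_free_ms": 800}]
print(select_device(devices, user_count=12, threshold=10))
```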
The second content transmission can be identified as the first content transmission when the second content transmission is provided to the selected device. For example, the second content transmission can be identified by the same identifier (e.g., network identifier, content identifier, location identifier) as the first content transmission. The second content transmission can be provided in response to the request. The first content transmission can comprise content at a first bit rate. The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. For example, the first bit rate can be a bit rate that is lower or higher than the second bit rate. As an illustration, if the characteristic indicates that the device is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. At step810, transmission of the first content transmission can be ended. For example, a device providing the first content transmission can discontinue providing the first content transmission across a network. For example, the transmission of the first content transmission can be ended for one or more first downstream transmission groups. In some scenarios, one or more second downstream transmission groups can continue to receive the first content transmission. FIG.9is a flowchart illustrating another example method900for providing content. At step902, a first content transmission (e.g., first multicast content transmission) and/or a second content transmission (e.g., second multicast content transmission) can be provided to a plurality of devices. 
For example, the first content transmission can be provided to a first portion of the plurality of devices, and the second content transmission can be provided to a second portion of the plurality of devices. As another example, both the first content transmission and the second transmission can be provided to the plurality of devices (e.g., at the same time, at different times). The first content transmission can comprise content at a first bit rate. The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. The first content transmission can comprise a content stream, file transfer, and/or the like. The second content transmission can comprise a content stream, file transfer, and/or the like. At step904, a parameter indicative of the first content transmission can be determined. For example, the parameter can comprise a number of the plurality of devices accessing, requesting, and/or receiving the first content transmission. In one aspect, the parameter can comprise one or more values. In another aspect, the parameter can be determined with one or more additional parameters. At step906, the parameter (e.g., number of the plurality of devices accessing, requesting, and/or receiving the first content transmission) can be compared to a threshold or otherwise analyzed. The threshold can comprise predefined values. The threshold can also vary based on one or more conditions, such as user input, network conditions, and/or the like. For example, a threshold can be higher or lower based on network congestion, server outages, and/or the like. It can be determined if the parameter is above, below, and/or equal to the predefined number. In one aspect, the parameter can be compared to additional thresholds. For example, the parameter can have multiple values (e.g., number, type), and one or more of these values can be compared to one or more thresholds.
As a further example, multiple parameters can be compared to one or more thresholds. A person of ordinary skill in the art can determine the values of the thresholds based on design conditions, network policies, user information, and/or the like. At step908, an instruction related to the first content transmission can be provided to a device of the plurality of devices. The instruction can comprise an instruction that the first content transmission will cease transmission according to specified timing information. For example, the timing information can comprise a time, date, time stamp, and/or time duration. The timing information can indicate when the first content transmission will cease to be transmitted. The timing information can indicate when a device should cease accessing, requesting, and/or receiving the first content transmission. The timing information can indicate when a device should access another content transmission, such as a multicast content transmission. The instruction can comprise an instruction to access a second content transmission instead of the first content transmission. The instruction can be provided in response to the number of the plurality of devices accessing the first content transmission being below a threshold. At step910, a second content transmission (e.g., second multicast content transmission) can be provided to the device of the plurality of devices. For example, the second content transmission can be provided to the device if the second content transmission was not already being provided to the device. The second content transmission can be selected for the device based on a characteristic associated with the device. The characteristic can comprise at least one of a screen size, a bandwidth, a screen resolution, class of device, device history, a location, and a client account feature. As an illustration, the first content transmission can comprise content at a first bit rate.
The second content transmission can comprise the content at a second bit rate. The first bit rate can be different than the second bit rate. The first bit rate can be a bit rate that is lower or higher than the second bit rate. If the characteristic indicates that the device is associated with a bandwidth, memory, processing capacity, and/or the like such that the device is configured to receive a higher bit rate (e.g., or lower bit rate) than the first bit rate, then the second bit rate can comprise a bit rate higher (e.g., or lower bit rate) than the first bit rate. At step912, transmission of the first content transmission can be ended based on the comparison of the parameter (e.g., number of the plurality of devices accessing, requesting, and/or receiving the first content transmission) to the threshold or other analysis. For example, the transmission of the first content transmission can be ended for one or more first downstream transmission groups. In some scenarios, one or more second downstream transmission groups can continue to receive the first content transmission. In an exemplary aspect, the methods and systems can be implemented on a computer1001as illustrated inFIG.10and described below. By way of example, the content device102ofFIG.1can be a computer as illustrated inFIG.10. As another example, the first device202, second device214, and/or third device220ofFIG.2can be a computer as illustrated inFIG.10. Similarly, the methods and systems disclosed can utilize one or more computers to perform one or more functions in one or more locations.FIG.10is a block diagram illustrating an exemplary operating environment for performing the disclosed methods. This exemplary operating environment is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. 
Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment. The present methods and systems can be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like. The processing of the disclosed methods and systems can be performed by software components. The disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices. Generally, program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media including memory storage devices. Further, one skilled in the art will appreciate that the systems and methods disclosed herein can be implemented via a general-purpose computing device in the form of a computer1001. 
The components of the computer1001can comprise, but are not limited to, one or more processors1003, a system memory1012, and a system bus1013that couples various system components including the one or more processors1003to the system memory1012. In one aspect, the system can utilize parallel computing. The system bus1013represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card Industry Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like. The system bus1013, and all buses specified in this description, can also be implemented over a wired or wireless network connection and each of the subsystems, including the one or more processors1003, a mass storage device1004, an operating system1005, transmission software1006, transmission data1007, a network adapter1008, system memory1012, an Input/Output Interface1010, a display adapter1009, a display device1011, and a human machine interface1002, can be contained within one or more remote computing devices1014a,b,cat physically separate locations, connected through buses of this form, in effect implementing a fully distributed system. The computer1001typically comprises a variety of computer readable media. Exemplary readable media can be any available media that is accessible by the computer1001and comprises, for example and not meant to be limiting, both volatile and non-volatile media, removable and non-removable media.
The system memory1012comprises computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory1012typically contains data such as transmission data1007and/or program modules such as operating system1005and transmission software1006that are immediately accessible to and/or are presently operated on by the one or more processors1003. In another aspect, the computer1001can also comprise other removable/non-removable, volatile/non-volatile computer storage media. By way of example,FIG.10illustrates a mass storage device1004which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computer1001. For example and not meant to be limiting, a mass storage device1004can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like. Optionally, any number of program modules can be stored on the mass storage device1004, including by way of example, an operating system1005and transmission software1006. Each of the operating system1005and transmission software1006(or some combination thereof) can comprise elements of the programming and the transmission software1006. Transmission data1007can also be stored on the mass storage device1004. Transmission data1007can be stored in any of one or more databases known in the art. Examples of such databases comprise, DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, mySQL, PostgreSQL, and the like. The databases can be centralized or distributed across multiple systems. 
In another aspect, the user can enter commands and information into the computer1001via an input device (not shown). Examples of such input devices comprise, but are not limited to, a keyboard, pointing device (e.g., a “mouse”), a microphone, a joystick, a scanner, tactile input devices such as gloves, and other body coverings, and the like. These and other input devices can be connected to the one or more processors1003via a human machine interface1002that is coupled to the system bus1013, but can be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 Port (also known as a Firewire port), a serial port, or a universal serial bus (USB). In yet another aspect, a display device1011can also be connected to the system bus1013via an interface, such as a display adapter1009. It is contemplated that the computer1001can have more than one display adapter1009and the computer1001can have more than one display device1011. For example, a display device can be a monitor, an LCD (Liquid Crystal Display), or a projector. In addition to the display device1011, other output peripheral devices can comprise components such as speakers (not shown) and a printer (not shown) which can be connected to the computer1001via Input/Output Interface1010. Any step and/or result of the methods can be output in any form to an output device. Such output can be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display device1011and computer1001can be part of one device, or separate devices. The computer1001can operate in a networked environment using logical connections to one or more remote computing devices1014a,b,c. By way of example, a remote computing device can be a personal computer, portable computer, smartphone, a server, a router, a network computer, a peer device or other common network node, and so on. 
Logical connections between the computer1001and a remote computing device1014a,b,ccan be made via a network1015, such as a local area network (LAN) and/or a general wide area network (WAN). Such network connections can be through a network adapter1008. A network adapter1008can be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, and the Internet. For purposes of illustration, application programs and other executable program components such as the operating system1005are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer1001, and are executed by the data processor(s) of the computer. An implementation of transmission software1006can be stored on or transmitted across some form of computer readable media. Any of the disclosed methods can be performed by computer readable instructions embodied on computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example and not meant to be limiting, computer readable media can comprise “computer storage media” and “communications media.” “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Exemplary computer storage media comprises, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
The methods and systems can employ Artificial Intelligence techniques such as machine learning and iterative learning. Examples of such techniques include, but are not limited to, expert systems, case based reasoning, Bayesian networks, behavior based AI, neural networks, fuzzy systems, evolutionary computation (e.g. genetic algorithms), swarm intelligence (e.g. ant algorithms), and hybrid intelligent systems (e.g. Expert inference rules generated through a neural network or production rules from statistical learning). While the methods and systems have been described in connection with preferred embodiments and specific examples, it is not intended that the scope be limited to the particular embodiments set forth, as the embodiments herein are intended in all respects to be illustrative rather than restrictive. Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; the number or type of embodiments described in the specification. It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope or spirit. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims. 
11943290 DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Systems and/or methods described herein may provide an asynchronous distributed de-duplication algorithm for replicated storage clusters that provides availability, liveness and consistency guarantees for immutable objects. Implementations described herein may use the underlying replication layer of a distributed multi-master data replication system to replicate a content addressable index (also referred to herein as a “global index”) between different storage clusters. Each object of the global index may have a unique content handle (e.g., a hash value or digital signature). In implementations described herein, the removal process of redundant replicas may keep at least one replica alive. Exemplary Network Configuration FIG.1is a diagram of an exemplary system100in which systems and methods described herein may be implemented. System100may include clients110-1through110-N (referred to collectively as clients110, and individually as client110) and storage clusters120-1through120-M (referred to collectively as storage clusters120, and individually as storage cluster120) connected via a network130. Storage clusters120may form a file system140(as shown by the dotted line inFIG.1). Network130may include one or more networks, such as a local area network (LAN), a wide area network (WAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), an intranet, the Internet, a similar or dissimilar network, or a combination of networks. Clients110and storage clusters120may connect to network130via wired and/or wireless connections. 
Clients110may include one or more types of devices, such as a personal computer, a wireless telephone, a personal digital assistant (PDA), a laptop, or another type of communication device, and/or a thread or process running on one of these devices. In one implementation, a client110includes, or is linked to, an application on whose behalf client110communicates with storage cluster120to read or modify (e.g., write) file data. Storage cluster120may include one or more server devices, or other types of computation or communication devices, that may store, process, search, and/or provide information in a manner described herein. In one implementation, storage cluster120may include one or more servers (e.g., computer systems and/or applications) capable of maintaining a large-scale, random read/write-access data store for files. The data store of storage cluster120may permit an indexing system to quickly update portions of an index if a change occurs. The data store of storage cluster120may include one or more tables (e.g., a document table that may include one row per uniform resource locator (URL), auxiliary tables keyed by values other than URLs, etc.). In one example, storage cluster120may be included in a distributed storage system (e.g., a “Bigtable” as set forth in Chang et al., “Bigtable: A Distributed Storage System for Structured Data,”Proc. of the7th OSDI, pp. 205-218 (November 2006)) for managing structured data (e.g., a random-access storage cluster of documents) that may be designed to scale to a very large size (e.g., petabytes of data across thousands of servers). Although not shown inFIG.1, system100may include a variety of other components, such as one or more dedicated consumer servers or hubs. A consumer server, for example, may store a read-only copy of a data store from one or more storage clusters120for access by clients110. 
A hub, for example, may store a read-only copy of a data store from one or more storage clusters120for distribution to one or more consumer servers. Exemplary Storage Cluster Configuration FIG.2is a diagram of an exemplary configuration of the file system140. As shown inFIG.2, file system140may include storage clusters120-1,120-2,120-3, and120-4. In one implementation, file system140may be a distributed multi-master data replication system, where each of storage clusters120-1,120-2,120-3, and120-4may act as a master server for the other storage clusters. In file system140, data may be replicated across storage clusters120-1,120-2,120-3, and120-4(e.g., in multiple geographical locations) to increase data availability and reduce network distance from clients (e.g., clients110). Generally, distributed objects and references may be dynamically created, mutated, cloned and deleted in different storage clusters120and an underlying data replication layer (not shown) maintains the write-order fidelity to ensure that all storage clusters120will end up with the same version of data. Thus, the data replication layer respects the order of writes to the same replica for a single object. A global index of all of the objects in the distributed multi-master data replication system may be associated with each storage cluster120. Each stored object may be listed by a unique content handle (such as a hash value, digital signature, etc.) in the global index. Selected storage clusters may each be assigned to be responsible for a distinct range of the content handles in the global index. For example, a single storage cluster120may be responsible for de-duplication of objects associated with particular content handles. Changes to the global index made by one storage cluster may be replicated to other storage clusters. 
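The content-handle scheme described above can be sketched in a few lines. SHA-256 is an illustrative choice of hash (the text requires only "a hash value or digital signature"), and the function name is hypothetical:

```python
# A minimal sketch of deriving a content handle, assuming SHA-256; the
# hash choice and function name are illustrative, not from the patent.
import hashlib

def content_handle(object_bytes: bytes) -> str:
    """Return a unique, content-derived handle for an immutable object."""
    return hashlib.sha256(object_bytes).hexdigest()

# Identical objects yield identical handles, so duplicate replicas in
# different storage clusters share one global-index entry.
assert content_handle(b"same file") == content_handle(b"same file")
assert content_handle(b"same file") != content_handle(b"other file")
```

Because the handle is derived purely from content, any cluster can compute it independently and the replicated index converges on one entry per distinct object.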
AlthoughFIG.2shows exemplary functional components of file system140, in other implementations, file system140may contain fewer, additional, different, or differently arranged components than depicted inFIG.2. In still other implementations, one or more components of file system140may perform one or more tasks described as being performed by one or more other components of file system140. FIG.3is a diagram of exemplary components of storage cluster120. Storage cluster120may include a bus310, a processor320, a main memory330, a read-only memory (ROM)340, a storage device350, an input device360, an output device370, and a communication interface380. Bus310may include one or more conductors that permit communication among the components of storage cluster120. Processor320may include any type of processor or microprocessor that may interpret and execute instructions. Main memory330may include a random access memory (RAM) or another type of dynamic storage device that may store information and instructions for execution by processor320. ROM340may include a ROM device or another type of static storage device that may store static information and instructions for use by processor320. Storage device350may include a magnetic and/or optical recording medium and its corresponding drive. For example, storage device350may include one or more local disks355that provide persistent storage. In one implementation, storage cluster120may maintain metadata, for objects stored in file system140, within one or more computer-readable mediums, such as main memory330and/or storage device350. For example, storage cluster120may store a global index within storage device350for all the objects stored within a distributed multi-master data replication system. Input device360may include one or more mechanisms that permit an operator to input information to storage cluster120, such as a keyboard, a keypad, a button, a mouse, a pen, etc. 
Output device370may include one or more mechanisms that output information to the operator, including a display, a light emitting diode (LED), etc. Communication interface380may include any transceiver-like mechanism that enables storage cluster120to communicate with other devices and/or systems. For example, communication interface380may include mechanisms for communicating with other storage clusters120and/or clients110. FIG.4illustrates a functional block diagram of storage cluster120. As shown inFIG.4, storage cluster120may include data store410and de-duplication logic420. In one implementation, as illustrated inFIG.4, data store410may be provided within storage cluster120. In other implementations, some or all of data store410may be stored within one or more other devices of system100in communication with storage cluster120, such as external memory devices or devices associated with an indexing system (not shown). Data store410may include a replicated index store412and a local object store414. Replicated index store412may be included as part of the replication layer of the distributed multi-master data replication system. Replicated index store412may store information associated with the global index. At least a portion of replicated index store412may be replicated on multiple storage clusters120. The number of replicas for each replicated index store412may be user-configurable. Local object store414may store objects locally within storage cluster120. Local object store414may include files, such as images or videos uploaded by clients (e.g., clients110). De-duplication logic420may include logic to remove redundant replicas from storage clusters within the distributed multi-master data replication system (e.g., storage clusters120-1,120-2,120-3, and120-4). De-duplication logic420for each participating storage cluster may be assigned to be responsible for a particular section of the global index. 
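The assignment of index sections to clusters might look like the following sketch. Partitioning on the leading hex digits of the content handle is an assumption made for illustration; the text specifies only that each participating cluster is responsible for a distinct range of content handles:

```python
# Hypothetical sketch of assigning each storage cluster a distinct range
# of content handles; bucketing on the leading hex digits is an assumed
# partitioning scheme, not one prescribed by the patent.
def responsible_cluster(handle: str, cluster_ids: list) -> str:
    """Return the one cluster allowed to run destructive operations
    (e.g., replica deletion) for this content handle."""
    bucket = int(handle[:4], 16) % len(cluster_ids)
    return cluster_ids[bucket]

# The mapping is deterministic, so every cluster agrees on which single
# cluster owns a given handle without extra coordination.
assert responsible_cluster("0000abcd", ["XX", "YY", "ZZ"]) == "XX"
```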
For example, de-duplication logic420may be assigned to a particular range of content handles for the global index. Thus, only one storage cluster within the distributed multi-master data replication system may be able to perform destructive operations (e.g., deletion of replicas) on a replicated object within the system. To facilitate de-duplication, records may be generated by de-duplication logic420and appended to a portion of the global index associated with a particular content handle. Records may include, for example, a “Data” designator for initiating a live replica, a “DeleteRequest” designator for indicating an ongoing delete request for a replica, and a “Deduped” designator for indicating a replica that has been selected for de-duplication. Record formats and uses are described in more detail below. AlthoughFIG.4shows exemplary functional components of storage cluster120, in other implementations, storage cluster120may contain fewer, additional, different, or differently arranged functional components than depicted inFIG.4. In still other implementations, one or more functional components of storage cluster120may perform one or more other tasks described as being performed by one or more other functional components. Exemplary Record Structure FIG.5provides an illustration of an exemplary record structure500for a de-duplication designation record that may be written to the global index in an exemplary implementation. The de-duplication designation record may be associated in the global index with a particular content handle of an object replica. As shown inFIG.5, record structure500may include storage cluster identifier (“ID”) section510, a storage location section520, and designation section530. Storage cluster identification section510may include a unique identification (e.g., “Cluster ID”) for the storage cluster120that is storing the object replica for which the record is being written. 
Location section520may include an address for the location of the replica within storage cluster120that is identified by storage cluster identification section510. Designation section530may include, for example, a “Data” designator, a “DeleteRequest” designator, or a “Deduped” designator. Record structure500may be listed in the form of “ClusterID:Location:Designation.” For example, a record for a replica may be added to the global index by storage cluster120-1with the record “01:234523/2000:DeleteRequest,” where “01” is the cluster ID for storage cluster120-1, “234523/2000” is the location, within storage cluster120-1at which the replica is stored, and “DeleteRequest” is the designator. A record for another replica of the same object in storage cluster120-2may be “02:234544/1000:Data,” where “02” is the cluster ID for storage cluster120-2, “234544/1000” is the location within storage cluster120-2, and “Data” is the designator. Exemplary Process Flows FIGS.6A and6Bare flowcharts of exemplary processes for managing client-initiated upload/delete operations.FIG.6Adepicts a flowchart for an exemplary process600of uploading an object from a client.FIG.6Bdepicts a flowchart for an exemplary process650of removing an object deleted by a client. In one implementation, processes600and650may be performed by one of storage clusters120. Processes600and650may be implemented in response to client (e.g., client110) activities. For particular examples of processes600and650described below, reference may be made to storage cluster120-1of file system140, where storage cluster120-1includes a cluster ID of “01.” Referring toFIG.6A, process600may begin when an uploaded file is received from a client (block610). For example, storage cluster120-1may receive a new file from one of clients110. The uploaded file may be stored (block620) and a “Data” designator for the uploaded file may be written to the global index (block630). 
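The "ClusterID:Location:Designation" format of record structure 500 can be modeled directly; the type and helper names below are illustrative:

```python
# Sketch of record structure 500 ("ClusterID:Location:Designation");
# the NamedTuple and helper names are illustrative.
from typing import NamedTuple

class IndexRecord(NamedTuple):
    cluster_id: str    # section 510: cluster storing the replica
    location: str      # section 520: replica address within that cluster
    designation: str   # section 530: "Data", "DeleteRequest", or "Deduped"

def parse_record(text: str) -> IndexRecord:
    cluster_id, location, designation = text.split(":")
    return IndexRecord(cluster_id, location, designation)

def format_record(record: IndexRecord) -> str:
    return ":".join(record)

record = parse_record("01:234523/2000:DeleteRequest")
assert record.cluster_id == "01" and record.designation == "DeleteRequest"
assert format_record(record) == "01:234523/2000:DeleteRequest"
```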
For example, storage cluster120-1may store the uploaded file in a memory (e.g., storage device350) and add a content handle for the object to the global index. Storage cluster120-1may also write a data record (e.g., “01:Location:Data”) to the replicated global index addressed by the content handle of the object. Referring toFIG.6B, process650may begin when a notice of a deleted file is received (block660). For example, storage cluster120-1may receive an indication that one of clients110has deleted a file. A delete request may be initiated (block670) and a “DeleteRequest” designator for the deleted file may be written to the global index (block680). For example, storage cluster120-1may initiate a delete request to asynchronously remove the deleted file from file system140. Storage cluster120-1may also write a “DeleteRequest” record (e.g., “01:Location:DeleteRequest”) to the replicated global index addressed by the content handle of the object. FIG.7is a flowchart of an exemplary process700for performing de-duplication in a distributed multi-master data replication system (e.g., file system140). In one implementation, process700may be performed by one of storage clusters120. In another implementation, some or all of process700may be performed by another device or a group of devices, including or excluding storage cluster120. Process700may be implemented periodically in each storage cluster120and may include a scan of all or a portion of the objects in the storage cluster120. For particular examples of process700described below, reference may be made to storage clusters120-1and120-2of file system140, where storage cluster120-1includes a cluster ID of “01” and storage cluster120-2includes a cluster ID of “02.” As illustrated inFIG.7, process700may begin with conducting a scan of the global index (block710). For example, storage cluster120-1(using, e.g., de-duplication logic420) may conduct a scan of all or a portion of the objects listed in the global index. 
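Processes 600 and 650 amount to appending "Data" and "DeleteRequest" records to the index entry for an object's content handle. A minimal sketch, with plain in-memory dicts standing in for the replicated global index and the local object store (an illustrative simplification):

```python
# Sketch of processes 600 and 650: an upload stores the object and
# appends a "Data" record; a client delete appends a "DeleteRequest"
# record. The dicts standing in for the replicated global index and the
# local object store are assumptions for illustration.
import hashlib

def upload(index, local_store, cluster_id, location, data):
    """Process 600: store the uploaded object and write its Data record."""
    handle = hashlib.sha256(data).hexdigest()
    local_store[location] = data
    index.setdefault(handle, []).append(f"{cluster_id}:{location}:Data")
    return handle

def client_delete(index, handle, cluster_id, location):
    """Process 650: record an ongoing delete request for the replica."""
    index[handle].append(f"{cluster_id}:{location}:DeleteRequest")

index, store = {}, {}
handle = upload(index, store, "01", "234523/2000", b"uploaded file")
client_delete(index, handle, "01", "234523/2000")
assert index[handle] == ["01:234523/2000:Data",
                         "01:234523/2000:DeleteRequest"]
```

Note that the delete is asynchronous: the client's delete only appends a record, and the actual removal happens later when the responsible cluster's scan encounters the "DeleteRequest" designator.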
The scan may identify, for example, multiple replicas and/or objects marked for deletion. It may be determined if a delete request is encountered (block720). For example, storage cluster120-1may encounter an object in the global index that includes a delete request designator (e.g., “02:Location:DeleteRequest”) from another storage cluster (e.g., from storage cluster120-2). If it is determined that a delete request is encountered (block720—YES), then the delete request may be processed (block730). For example, storage cluster120-1may process the delete request as described in more detail with respect toFIG.8. If it is determined that a delete request is not encountered (block720—NO), then it may be determined if redundant replicas exist (block740). Redundant replicas may be replicated objects in different locations that have no outstanding delete requests for the object. For example, storage cluster120-1may identify multiple replicas for the same object that correspond to a content handle for which storage cluster120-1is responsible. The multiple replicas may be stored, for example, in different storage clusters (e.g., storage cluster120-1and storage cluster120-2) or in different locations within the same storage cluster. If it is determined that redundant replicas exist (block740—YES), then the redundant replica(s) may be removed (block750). For example, storage cluster120-1may remove the redundant replica(s) as described in more detail with respect toFIG.9. If it is determined that redundant replicas do not exist (block740—NO), then the process may return to block710, where another scan of the global index may be conducted (block710). FIG.8illustrates exemplary operations associated with the processing of a delete request of block730ofFIG.7. A delete request may be encountered for an object (block810). 
For example, a scan being conducted by storage cluster120-1may identify a content handle in the global index with a delete request designator previously written by storage cluster120-1to delete a replica in a certain storage cluster (e.g., “02:Location:DeleteRequest”). Assuming that storage cluster120-1is responsible for the content handle, storage cluster120-1may apply operations to determine if the replica can now be de-duplicated. It may be determined if a de-duplication designator exists (block820). For example, storage cluster120-1may review other records in the global index associated with the content handle to determine if a de-duplication designator exists (e.g., “02:Location:Deduped”). If it is determined that a de-duplication designator exists (block820—YES), then the replica and the related records in the global index may be de-duplicated (block830). For example, storage cluster120-1may initiate a delete request to delete the replica in storage cluster120-2(if any) and delete any records (e.g., “02:Location:*”, where “*” may be any designator) from the global index that relate to the content handle for the deleted replica. If it is determined that a de-duplication designator does not exist (block820—NO), then it may be determined if another live replica exists (block840). For example, storage cluster120-1may review the content handle for the global index to determine whether another live replica exists for the object. The global index may include, for example, a data record for that content handle from another storage cluster (e.g., “03:Location:Data”). If another live replica exists (block840—YES), then the replica may be de-duplicated as described above with respect to block830. If another live replica does not exist (block840—NO), then it may be determined if all replicas have delete requests (block850). 
For example, storage cluster120-1may review the content handle for the global index to determine whether all the replicas associated with the content handle have an outstanding delete request (e.g., “*:*:DeleteRequest”, where “*” may be any ClusterID and any location, respectively). If it is determined that all replicas have delete requests (block850—YES), then the replica may be de-duplicated as described above with respect to block830. If it is determined that all replicas do not have delete requests (block850—NO), then the object may be copied from a storage cluster that initiated a delete request to a different storage cluster and the global index may be updated (block860). For example, in response to the record “02:Location:DeleteRequest,” storage cluster120-1may copy the object from storage cluster120-2to another storage cluster120-3for which there is a de-duplication record (e.g., “03:Location:Deduped”) and no outstanding delete request. Storage cluster120-1may delete the previous de-duplication record (e.g., “03:Location:Deduped”) associated with the replica and write a data designator (e.g., “03:Location:Data”) to the corresponding content handle of the object in the global index. FIG.9illustrates exemplary operations associated with the removing of duplicate references of block750ofFIG.7. Multiple replicas with no delete requests may be identified (block910). For example, storage cluster120-1may review the global index and identify two or more replicas that have no outstanding delete requests corresponding to a content handle for which storage cluster120-1is responsible. Criteria to determine replica(s) to be de-duplicated may be applied (block920). For example, storage cluster120-1may apply criteria to de-duplicate the redundant replica that may be stored within storage cluster120-1. 
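The decision logic of blocks 820-860 above can be summarized as a single function over the records for one content handle. The return values, record parsing, and function name are illustrative assumptions:

```python
# Sketch of blocks 820-860: decide the fate of a pending
# "ClusterID:Location:DeleteRequest" record. Names and return values
# are illustrative, not from the patent.
def process_delete_request(records, cluster_id, location):
    target = f"{cluster_id}:{location}"
    # Block 820: a matching "Deduped" record means the replica was
    # already selected for de-duplication, so it and its records can go.
    if f"{target}:Deduped" in records:
        return "deduplicate"
    # Block 840: another live replica (a "Data" record elsewhere with no
    # outstanding delete request) also permits removal.
    for record in records:
        cid, loc, designation = record.split(":")
        if (designation == "Data" and (cid, loc) != (cluster_id, location)
                and f"{cid}:{loc}:DeleteRequest" not in records):
            return "deduplicate"
    # Block 850: if every replica has an outstanding delete request, the
    # object is being deleted everywhere and this replica can go too.
    replicas = {tuple(r.split(":")[:2]) for r in records}
    if all(f"{cid}:{loc}:DeleteRequest" in records
           for cid, loc in replicas):
        return "deduplicate"
    # Block 860: otherwise, copy the object to a cluster holding only a
    # "Deduped" record before removing this replica.
    return "copy"

records = ["XX:Location02:Data", "YY:Location02:Data",
           "YY:Location02:DeleteRequest"]
assert process_delete_request(records, "YY", "Location02") == "deduplicate"
```

The "copy" branch is what keeps at least one replica alive: a replica is never destroyed until a live copy is guaranteed elsewhere or every replica is slated for deletion.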
The criteria to de-duplicate redundant replicas may be based on a variety of factors, such as geographic proximity of the replicas, available storage capacity at a storage cluster, or other factors. Storage cluster120-1(e.g., using de-duplication logic420) may apply the criteria to the two or more replicas that have no outstanding delete requests identified above. In some implementations, multiple replicas may be identified to be de-duplicated. In other implementations, storage cluster120-1may leave more than one live replica (e.g., a replica not marked for de-duplication). The global index may be updated to designate de-duplicated replica(s) as “Deduped” (block930). For example, for each de-duplicated replica, storage cluster120-1may delete the previous data record (e.g., “02:Location:Data”) associated with the replica and write a de-duplication designator (e.g., “02:Location:Deduped”) to the corresponding content handle in the global index. De-duplication of the redundant replicas may be accomplished using de-duplication messages that are replicated as a part of the global index. The replicas marked for de-duplication may be stored within storage cluster120-1or within another storage cluster (e.g., storage cluster120-2,120-3,120-4, etc.). In one implementation, storage cluster120-1may delete locally-stored replicas and the corresponding “01:Location:Data” record from the global index and add “01:Location:Deduped” to the global index. Storage cluster120-1may also initiate delete messages, using the replicated global index, to delete replicas stored in other clusters. FIG.10provides a flowchart of an exemplary process1000for optimizing bandwidth consumption and reducing latency in a distributed multi-master data replication system (e.g., file system140). In one implementation, process1000may be performed by one of storage clusters120. 
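The redundant-replica removal of blocks 910-930 can be sketched as follows. Keeping the lexicographically first live replica is a stand-in for the patent's criteria (geographic proximity, available capacity, and so on), and the function name is illustrative:

```python
# Sketch of blocks 910-930: among replicas with no outstanding delete
# request, keep one live copy and flip the others' "Data" records to
# "Deduped". The keep-first rule is an assumed stand-in for the real
# selection criteria.
def deduplicate_redundant(records):
    live = sorted(r[:-len(":Data")] for r in records
                  if r.endswith(":Data")
                  and r[:-len(":Data")] + ":DeleteRequest" not in records)
    if len(live) < 2:
        return records  # nothing redundant (block 740, NO branch)
    redundant = set(live[1:])  # keep live[0] as the surviving replica
    kept = [r for r in records
            if not (r.endswith(":Data")
                    and r[:-len(":Data")] in redundant)]
    return kept + [f"{x}:Deduped" for x in sorted(redundant)]

index_entry = ["XX:Location01:Data", "YY:Location01:Data"]
assert deduplicate_redundant(index_entry) == ["XX:Location01:Data",
                                              "YY:Location01:Deduped"]
```

Because the "Deduped" record replaces the "Data" record in the replicated index, every cluster learns of the removal through normal index replication rather than a separate message channel.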
In another implementation, some or all of process1000may be performed by another device or group of devices, including or excluding storage cluster120. For particular examples of process1000described below, reference may be made to storage cluster120-1of file system140, where the storage cluster120-1includes a cluster ID of “01.” As illustrated inFIG.10, process1000may begin with receiving a request for an object (block1010). For example, storage cluster120-1may receive a request from a client (e.g., client110-1) to obtain an object. Object locations may be looked up in the global index (block1020). For example, storage cluster120-1may look up the replica location(s) for the object in the replicated global index using the content handle of the object. The “best” replica location may be identified (block1030). For example, assuming that more than one replica is available, storage cluster120-1may determine the “best” replica to retrieve to minimize network resources. For example, the “best” replica may be the replica that has the closest geographic location to storage cluster120-1. In other implementations, the “best” replica may be based on a combination of available network connectivity, geographic location, and/or other criteria. Thus, in some implementations, the “best” replica for the object may be stored locally within storage cluster120-1. The object may be retrieved from the identified location (block1040). For example, storage cluster120-1may request the “best” replica from the closest available storage cluster and receive the replica to satisfy the client request. Storage cluster120-1may then send the replica to the client. EXAMPLES FIG.11provides a portion1100of an exemplary global index according to an implementation described herein. The index may include, among other information, a content handle column1110and a De-duplication designation record column1120. 
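Process 1000 reduces to filtering the index entry for live ("Data") replicas and choosing the cheapest one. The per-cluster cost table below is a hypothetical stand-in for geographic proximity and network connectivity:

```python
# Sketch of process 1000: look up replica locations for a content handle
# (block 1020) and pick the "best" one (block 1030). The cost table is a
# hypothetical proxy for proximity/connectivity.
def best_replica(index, handle, cost):
    """Return (cluster_id, location) of the cheapest live replica."""
    live = [tuple(r.split(":")[:2]) for r in index.get(handle, [])
            if r.endswith(":Data")]
    return min(live, key=lambda replica: cost.get(replica[0], float("inf")))

index = {"Handle11": ["XX:Location01:Data", "YY:Location01:Data"]}
# From a requester near cluster YY, YY is the closer source.
assert best_replica(index, "Handle11", {"XX": 10, "YY": 2}) == ("YY",
                                                                "Location01")
```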
Assume, in exemplary index portion1100, a distributed multi-master data replication system includes three storage clusters, XX, YY, and ZZ. A de-duplication algorithm may run periodically in each of storage clusters XX, YY, and ZZ and may scan all or a portion of the global index. Also, records (e.g., Data, DeleteRequest, and Deduped) may be written by one of storage clusters XX, YY, or ZZ to the global index associated with a particular object content handle. Modifications to the global index may be replicated to all other participating clusters (e.g., the remaining of storage clusters XX, YY, and ZZ). As shown inFIG.11, index portion1100includes content handles and associated delete designation records for four objects. “Handle11” has records indicating replicas are stored at storage cluster XX (“XX:Location01:Data”) and storage cluster YY (“YY:Location01:Data”), respectively. “Handle21” has a record indicating a replica is stored at storage cluster XX (“XX:Location02:Data”) and another replica at storage cluster YY has an ongoing delete request (“YY:Location02:DeleteRequest”). “Handle31” has records indicating replicas are stored at storage cluster YY (“YY:Location03:Data”) and storage cluster ZZ (“ZZ:Location01:Data”), respectively. “Handle31” also has two records indicating the replicas have ongoing delete requests at storage cluster YY (“YY:Location03:DeleteRequest”) and storage cluster ZZ (“ZZ:Location01:DeleteRequest”). “Handle41” has records indicating a replica is stored at storage cluster YY (“YY:Location04:Data”) and a record indicating the replica with an ongoing delete request at storage cluster YY (“YY:Location04:DeleteRequest”). Handle41 also has one record indicating de-duplication of a replica has occurred (“ZZ:Location02:Deduped”). The de-duplication algorithm used by the storage clusters can operate using guidelines consistent with the principles described herein. 
Assume storage cluster XX is assigned responsibility for the portion of the global index including “Handle11,” “Handle21,” “Handle31,” and “Handle41.” When an object is fully uploaded in a storage cluster, the storage cluster may write a data record (e.g., “ClusterID:Location:Data”) to the replicated global index addressed by the content handle of the object. For example, “XX:Location01:Data” and “YY:Location01:Data” illustrate data records for replicas of “Handle11.” Also, “XX:Location02:Data” illustrates a data record for a replica of “Handle21.” Similar data records can be seen for “Handle31” and “Handle41.” When an object is requested in a storage cluster, the storage cluster may look up the replica locations in the replicated global index using the content handle of the object and fetch the replica from the “best” (e.g., closest) cluster. For example, assuming an object corresponding to “Handle11” is requested at storage cluster ZZ and that storage cluster YY is closer to storage cluster ZZ than is storage cluster XX, storage cluster ZZ may request the object replica corresponding to “Handle11” from storage cluster YY. When an object is deleted in a storage cluster, the storage cluster may write “ClusterID:Location:DeleteRequest” to the replicated global index addressed by the content handle of the object. For example, “YY:Location02:DeleteRequest” illustrates a record for a deleted replica of “Handle21” in storage cluster YY. Similarly, “YY:Location03:DeleteRequest” and “ZZ:Location01:DeleteRequest” illustrate records for deleted replicas of “Handle31” for storage clusters YY and ZZ, respectively. If the scan in a storage cluster encounters multiple replicas that have no outstanding delete requests corresponding to a content handle the storage cluster is responsible for, the storage cluster may delete redundant replicas of the object (possibly leaving more than one live replica).
For each deleted replica in another storage cluster, the storage cluster may delete the data record and write a de-duplication record. For example, the scan in storage cluster XX may identify that “Handle11” has records indicating replicas are stored at storage cluster XX (“XX:Location01:Data”) and storage cluster YY (“YY:Location01:Data”), respectively. Based on criteria provided for removing redundant references, storage cluster XX may initiate deletion of the replica at storage cluster YY. Storage cluster XX may delete the record “YY:Location01:Data” shown inFIG.11and write “YY:Location01:Deduped” instead. If the scan in storage cluster XX encounters a delete request (e.g., “ClusterID:Location:DeleteRequest”) for a replica in another storage cluster (e.g., storage cluster YY or ZZ) corresponding to a content handle that storage cluster XX is responsible for, storage cluster XX may apply the following analysis. If there is a “Deduped” record for the same storage cluster and location as the delete request, if there exists another live replica of the object, or if all replicas have outstanding delete requests, the storage cluster XX can delete the replica of the object in storage cluster YY or ZZ (if any) and delete the records “YY:Location:*” or “ZZ:Location:*.” For example, the replica for “Handle21” in storage cluster YY and the record “YY:Location02:DeleteRequest” may be deleted by storage cluster XX since another live object (indicated by the record “XX:Location02:Data”) exists. Similarly, the replica for “Handle31” in storage cluster YY and the record “YY:Location03:DeleteRequest” may be deleted by storage cluster XX since both replicas in storage cluster YY and storage cluster ZZ have outstanding delete requests.
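The redundant-replica step of the scan can be sketched as follows, assuming the “ClusterID:Location:RecordType” strings above. Which replica survives is a policy choice left open by the description; this sketch simply keeps the first. Names are illustrative.

```python
def dedupe_redundant(records, keep=1):
    """Scan step: for a handle with multiple live replicas and no
    outstanding delete requests, replace redundant Data records with
    Deduped records, keeping `keep` live replica(s)."""
    types = [r.rsplit(":", 1)[1] for r in records]
    if "DeleteRequest" in types:
        return records  # only handles with no outstanding delete requests
    data = [r for r in records if r.endswith(":Data")]
    survivors = data[:keep]
    deduped = [r.replace(":Data", ":Deduped") for r in data[keep:]]
    others = [r for r in records if not r.endswith(":Data")]
    return survivors + deduped + others
```

Applied to the “Handle11” records, this keeps “XX:Location01:Data” and rewrites “YY:Location01:Data” as “YY:Location01:Deduped,” matching the example in the text.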
If storage cluster XX cannot delete the replica of the object in storage cluster YY or ZZ (e.g., there is not a “Deduped” record or another live replica of the object, and all replicas do not have outstanding delete requests), storage cluster XX can copy the object from YY or ZZ to another storage cluster for which there is a de-duplication record and no outstanding delete request, deleting the de-duplication record and writing a data record. For example, the replica for “Handle41” in storage cluster YY (“YY:Location04:DeleteRequest”) may trigger storage cluster XX to copy the object associated with “Handle41” to storage cluster ZZ. Storage cluster XX may update the global index to change “ZZ:Location02:Deduped” to “ZZ:Location02:Data.” The correctness of the algorithm is straightforward as all deletion operations on the object are performed only by the scan process in the storage cluster responsible for its content handle. The algorithm also transparently deals with multiple object replicas in the same cluster that have different locations (e.g. XX:Location1 and XX:Location2). CONCLUSION Systems and/or methods described herein may store a global index of objects in a distributed data replication system and replicate the global index and some of the objects throughout the distributed data replication system. A storage cluster may be assigned as the responsible entity for de-duplication within a particular subset of the global index. The storage cluster may conduct a scan of the subset of the global index and identify redundant replicas based on the scan. The storage cluster may de-duplicate the redundant replicas stored locally or in a remote storage cluster. The foregoing description of implementations provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. 
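The delete-request analysis above can be sketched in one function. The check ordering is an assumption chosen so that the “Handle41” example copies rather than deletes (a Deduped record elsewhere still references the object); the sketch also resolves the pending delete request in the same step, a simplification of the description.

```python
def _parse(record):
    cluster, location, rtype = record.split(":")
    return cluster, location, rtype

def handle_delete_request(records, request):
    """Scan step: process one "Cluster:Location:DeleteRequest" for a
    handle's record list and return the updated record list."""
    cluster, location, _ = _parse(request)
    parsed = [_parse(r) for r in records]
    requested = {(c, l) for c, l, t in parsed if t == "DeleteRequest"}
    live = [(c, l) for c, l, t in parsed
            if t == "Data" and (c, l) not in requested]
    deduped = [(c, l) for c, l, t in parsed if t == "Deduped"]
    all_requested = not live  # every replica has an outstanding request
    if (cluster, location) in deduped or live or (all_requested and not deduped):
        # safe: delete the replica and every "Cluster:Location:*" record
        return [r for r in records if _parse(r)[:2] != (cluster, location)]
    # otherwise copy the object to a de-duplicated cluster first: its
    # Deduped record becomes a Data record, then the requested replica's
    # records are dropped
    target = deduped[0]
    kept = [r for r in records
            if _parse(r)[:2] not in ((cluster, location), target)]
    return kept + ["%s:%s:Data" % target]
```

Run against the FIG.11examples, “Handle21” keeps its live XX replica, “Handle31” drops the YY replica (all replicas have delete requests), and “Handle41” ends with the object re-materialized at ZZ.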
For example, in another implementation a synchronous version of the de-duplication algorithm may be used in which different storage clusters communicate directly rather than using the replication layer within a distributed data replication system. Also, while series of blocks have been described with regard toFIGS.6A-10, the order of the blocks may be modified in other implementations. Further, non-dependent blocks may be performed in parallel. It will be apparent that embodiments, as described herein, may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement embodiments described herein is not limiting of the invention. Thus, the operation and behavior of the embodiments were described without reference to the specific software code—it being understood that software and control hardware may be designed to implement the embodiments based on the description herein. Further, certain implementations described herein may be implemented as “logic” or a “component” that performs one or more functions. This logic or component may include hardware, such as a processor, microprocessor, an application specific integrated circuit or a field programmable gate array, or a combination of hardware and software (e.g., software executed by a processor). It should be emphasized that the term “comprises” and/or “comprising” when used in this specification is taken to specify the presence of stated features, integers, steps, or components, but does not preclude the presence or addition of one or more other features, integers, steps, components, or groups thereof. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of the invention. 
In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on,” as used herein is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
Like reference numerals are used to designate like parts in the accompanying drawings. DETAILED DESCRIPTION The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples. When elements are referred to as being “connected” or “coupled,” the elements can be directly connected or coupled together or one or more intervening elements may also be present. In contrast, when elements are referred to as being “directly connected” or “directly coupled,” there are no intervening elements present. The subject matter may be embodied as devices, systems, methods, and/or computer program products. Accordingly, some or all of the subject matter may be embodied in hardware and/or in software (including firmware, resident software, micro-code, state machines, gate arrays, etc.) Furthermore, the subject matter may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable or computer-readable medium may be for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and may be accessed by an instruction execution system. Note that the computer-usable or computer-readable medium can be paper or other suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other suitable medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. This is distinct from computer storage media. The term “modulated data signal” can be defined as a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above-mentioned should also be included within the scope of computer-readable media, but not within computer storage media.
When the subject matter is embodied in the general context of computer-executable instructions, the embodiment may comprise program modules, executed by one or more systems, computers, or other devices. Generally, program modules include routines, programs, objects, components, data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. FIG.1is a high level block diagram illustrating components of a file synchronization system100. The file synchronization system100includes a sync endpoint110and a sync endpoint150. The sync endpoint110is connected with a sync database120and is associated with a file system130. Likewise the sync endpoint150is connected with a sync database160and is associated with a file system170. The sync endpoint110includes a file sync provider111, a sync metadata component112, a data receiver113, a change updater114and an orchestrator115. Sync endpoint150includes a file sync provider151, an applier component152, a conflict resolver/detector153, a sync applier target154, and a sync metadata component155. For purposes of this discussion the components are arranged in an upload scenario from endpoint110to endpoint150. Before discussing the specific components of a file sync provider111or151, the different types of participants that can provide data will be discussed. A participant is a location where information from a data source is retrieved. A participant could be anything from a web service, to a laptop, to a USB thumb drive. Based on the capabilities of the particular device, the way that a provider integrates synchronization will vary. At the very least, the device is capable of programmatically returning information when requested. 
Ultimately, what needs to be determined is if the device can enable information to be stored and manipulated either on the existing device or within the current data store, and allow applications to be executed directly from the device. It is important to distinguish the types of participants to know if the participant will be able to store any state information required by the provider, and if it is possible to execute the provider directly from the device. Ideally, the participant model is generic. As such, a full participant could be configured to be either a partial or simple participant. Full participants are devices that allow developers to create applications and new data stores directly on the device. A laptop or a Smartphone are examples of full participants because new applications can be executed directly from the device and new data stores can be created to persist information if required. Partial participants are devices that have the ability to store data either in the existing data store or another data store on the device. These devices, however, do not have the ability to launch executables directly from the device. Some examples of these participants are thumb drives or SD Cards. These devices act like a hard drive where information can be created, updated or deleted. However, they do not typically give an interface that allows applications to be executed on them directly. Simple participants are devices that are only capable of providing information when requested. These devices cannot store or manipulate new data and are unable to support the creation of new applications. RSS Feeds and web services provided by an external organization such as Amazon or EBay are both examples of simple participants. 
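The three participant types reduce to two capability questions: can the device store sync state, and can it execute provider code? A minimal sketch of that taxonomy, with all names as illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Participant:
    name: str
    can_store_state: bool       # can persist provider/sync state
    can_execute_provider: bool  # can run the provider directly

# Example devices drawn from the discussion above.
FULL = Participant("laptop", True, True)
PARTIAL = Participant("usb-thumb-drive", True, False)
SIMPLE = Participant("rss-feed", False, False)

def participant_kind(p):
    """Classify a participant from its two capability flags."""
    if p.can_store_state and p.can_execute_provider:
        return "full"
    return "partial" if p.can_store_state else "simple"
```

Because a full participant has both capabilities, it can always be configured down to act as a partial or simple participant, as the text notes.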
These organizations may give the ability to execute or call web services and get results back, however, they do not give the ability to create data stores for a particular user, and they also do not give the ability to create applications to be executed within their web servers. The file sync provider111and provider151are similar components found both on the sync endpoint110and the sync endpoint150. The file sync provider can be used to synchronize files and folders in many different file systems such as NTFS, FAT, or SMB file systems. Further, the directories to synchronize can be local or remote. They do not have to be of the same file system. An application can use static filters to exclude or include files either by listing them explicitly or by using wildcard characters (such as *.txt). Or the application can set filters that exclude whole subfolders. An application can also register to receive notification of file synchronization progress. The orchestrator115is a component of the system100that is configured to initiate and control a sync session between two endpoints or participants. The orchestrator communicates with both providers111and151to start the synchronization process and reports back to the progress of the synchronization. The actual processes used by the orchestrator are well known in the synchronization process and any process can be used by the orchestrator115. The change updater114is a component of the system100that identifies local changes to the file system that did not occur through the sync since the last time the change updater114ran. The detection/identification of a change can be made by simply comparing the timestamps associated with a corresponding last sync time. Other approaches and methods can be used for determining changes that have been made in a namespace. The sync databases120and160are a component of the system100that stores metadata about the files in the file system. 
The sync databases120and160provide metadata about particular files that are to be synced between the client and the server. These databases may also be referred to as a metadata store. The sync database120,160provides the ability to store information about the file system and the objects within that file system with respect to state and change information. The metadata for a file system can be broken down into five components (concurrency tracking properties): versions, knowledge, tick count, replica ID and tombstones. For each item that is being synchronized, a small amount of information is stored that describes where and when the item was changed. This metadata is composed of two versions: a creation version and an update version. A version is composed of two components: a tick count assigned by the data store and the replica ID for the data store. As items are updated, the tick count is incremented by the data store and the new current tick count is applied to that item. The replica ID is a unique value that identifies a particular data store. The creation version is the same as the update version when the item is created. Subsequent updates to the item modify the update version. That is the creation version remains constant while the update version changes. There are two primary ways that versioning can be implemented. The first is referred to as in line tracking. In this method change tracking information for an item is updated as the change is made. In the case of a database, for example, a trigger may be used to update a change tracking table immediately after a row is updated. The second method is referred to as asynchronous tracking. In this method, there is an external process that runs and scans for changes. Any updates found are added to the version information. This process may be part of a scheduled process or it may be executed prior to synchronization. 
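The version metadata described above — a (tick count, replica ID) pair for the creation version and another for the update version, with in-line tracking bumping the tick on each change — can be sketched as follows. Class and field names are illustrative assumptions.

```python
class DataStore:
    """Minimal sketch of per-item concurrency tracking metadata."""

    def __init__(self, replica_id):
        self.replica_id = replica_id  # unique ID for this data store
        self.tick = 0                 # store-assigned tick count
        self.items = {}               # item id -> {"created", "updated"}

    def _next_version(self):
        self.tick += 1
        return (self.tick, self.replica_id)

    def create(self, item_id):
        # creation version equals update version when the item is created
        ver = self._next_version()
        self.items[item_id] = {"created": ver, "updated": ver}

    def update(self, item_id):
        # in-line tracking: the creation version stays constant while
        # the update version advances with the store's tick count
        self.items[item_id]["updated"] = self._next_version()
```

After one create and one update on replica “A,” the item carries creation version (1, “A”) and update version (2, “A”).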
This process is typically used when there are no internal mechanisms to automatically update version information when items are updated (for example, when there is no way to inject logic in the update pipeline). A common way to check for changes is to store the state of an item and compare it to its current state. For example, it might check to see if the last-write-time or file size had changed since the last update. Of course other methods for versioning can be used as well. All change-tracking must occur at least at the level of items. In other words, every item must have an independent version. In the case of file synchronization an item will likely be the file, but it may be other items which can be synchronized, such as a directory. More granular tracking may be desirable in some scenarios as it reduces the potential for data conflicts (two users updating the same item on different replicas). The downside is that it increases the amount of change-tracking information stored. Another concept is the notion of knowledge. Knowledge is a compact representation of changes that the replica is aware of. As version information is updated, so is the knowledge for the data store. Providers such as providers111and151use replica knowledge to enumerate changes (determine which changes another replica is not aware of), and to detect conflicts (determine which operations were made without knowledge of each other). Each replica should also maintain tombstone information for each of the items that are deleted. This is important because when synchronization is executed, if the item is no longer there, the provider will have no way of telling that this item has been deleted and therefore cannot propagate the change to other providers. A tombstone can contain the following information: a global ID, an update version, and a creation version.
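The stored-state comparison described above (e.g., last-write time and file size) can be sketched as follows: the provider keeps a snapshot of each item's state from the previous scan and reports any item whose current state differs, while items missing from the current scan are the ones that need tombstones. Names and the state-tuple shape are illustrative assumptions.

```python
def detect_changes(previous, current):
    """previous/current: dict of path -> (last_write_time, size).
    Returns (changed_or_new, deleted) as sorted lists of paths."""
    changed = [p for p, state in current.items()
               if previous.get(p) != state]
    deleted = [p for p in previous if p not in current]  # tombstone these
    return sorted(changed), sorted(deleted)
```

A new path, or any path whose (time, size) pair differs from the snapshot, is reported as changed; a path present only in the snapshot is reported as deleted.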
Because the number of tombstones will grow over time, some implementations may create a process to clean up this store after a period of time in order to save space. That is these deleted files are removed from the list of files that are maintained with metadata information. In order to prevent this from happening the system can implement a method for catching these files. The method starts out by first identifying if the condition has been met, where there is a possibility of a lost tombstone. The replicas maintain another copy of knowledge, which can be considered or referred to as ‘Forgotten Knowledge’. When tombstones are removed/cleaned up from a database, the forgotten knowledge is set/updated. This forgotten knowledge may keep track of what tick counts have been cleaned up through. This can provide a hint as to when tombstones may have been lost. Then, if sync does not happen for a while (a replica becomes stale), the forgotten knowledge helps detect that a replica may be stale. Stale replicas can then initiate a ‘full enumeration sync session’. This full enumeration is a time intensive and expensive sync session whereby all files metadata is transferred between the participants of the sync session. The applier152is a component of the system that applies the specific changes that are indicated as being needed to complete the sync process. These changes are the changes that were noted by the change updater114in the sync process based on the metadata that is associated with each of the files. Depending on the direction of the sync process (upload or download) the applier152will operate on the corresponding sync endpoint. InFIG.1the illustration is of the uploading process where client110is uploading its changes to the client150. Conflict resolver153resolves detected conflicts between a data file that has already been stored and a purported updated version of the data file that is received as part of the syncing process. 
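The forgotten-knowledge mechanism above can be sketched as: when tombstones up to some tick are purged, the replica records that tick as forgotten knowledge; a partner whose last sync point lies below that tick may have missed deletes and must fall back to a full enumeration session. All names, and the use of a bare tick count for forgotten knowledge, are illustrative assumptions.

```python
def cleanup_tombstones(tombstones, forgotten, up_to_tick):
    """Purge tombstones with tick <= up_to_tick; return the advanced
    forgotten-knowledge tick recording what has been cleaned up."""
    for item_id in [i for i, tick in tombstones.items() if tick <= up_to_tick]:
        del tombstones[item_id]
    return max(forgotten, up_to_tick)

def needs_full_enumeration(partner_sync_tick, forgotten):
    # the partner last synced before the purge point, so tombstones it
    # never saw may have been lost: it is stale
    return partner_sync_tick < forgotten
```

A replica that synced at tick 7 against forgotten knowledge of 5 proceeds normally; one that synced at tick 4 triggers the expensive full enumeration session.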
Fundamentally, a conflict occurs if a change is made to the same item on two replicas between synchronization sessions. Conflicts specifically occur when the source knowledge does not contain the destination version for an item (it is understood that the destination knowledge does not contain any of the source versions sent). If the version is contained in the destination's knowledge then the change is considered obsolete. Replicas are free to implement a variety of policies for the resolution of items in conflict across the synchronization community. In some approaches each of the replicas makes the same resolution regardless of where the conflict occurred or where it was detected. The following are some examples of commonly used resolution policies: Source wins: changes made by the local replica always win in the event of a conflict. Destination wins: the remote replica always wins. Specified replica ID wins: no matter who changes an item, the replica with the designated ID always wins. Last-writer wins: based on the assumption that all replicas are trusted to make changes and that wall clocks are synchronized, the last writer to the file is allowed to win. Merge: in the event of two duplicate items in conflict, the system merges the information from one file into the other. Log conflict: in this approach the system chooses to simply log or defer the conflict. The sync target applier154is a component of the system that applies the indicated changes following the resolution of any conflicts as determined by the conflict resolver. The specific changes are split into two groups. The first group is the actual data that was changed to the file. This information is processed through the applier154and provided to the file system170which causes the appropriate change to be made to the underlying file. The second set of changes are those changes that are made to the sync metadata.
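The single-winner resolution policies listed above can be sketched as one chooser over two conflicting updates, each carrying its replica ID and wall-clock time. The policy names, tuple shape, and function signature are illustrative assumptions (merge and log/defer are omitted since they do not pick a single winner).

```python
def resolve(policy, source, destination, designated_id=None):
    """source/destination: (replica_id, wall_clock_time) of the change."""
    if policy == "source-wins":
        return source
    if policy == "destination-wins":
        return destination
    if policy == "designated-wins":
        return source if source[0] == designated_id else destination
    if policy == "last-writer-wins":
        # assumes wall clocks are synchronized, as the text notes
        return max(source, destination, key=lambda change: change[1])
    raise ValueError(f"unhandled policy: {policy}")
```

Because every replica runs the same deterministic chooser, each makes the same resolution regardless of where the conflict was detected.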
These are applied to the sync database160through the metadata component155where the metadata about the particular files are stored. Hosted platforms often implement many instances of a particular service or scale units. Each of the instances of the service may be associated with a particular client or a particular subscription to the service. Traditional hosted platforms can in response to increased loads add additional resources to the instances. However, the additional resources cause an increase in all of the resources that make up the service and not the individual components that make of the service. That is all components of the service are increased as opposed to only the individual components that need more resources. Services represent independent units of lifecycle and management in the platform, and can be deployed and serviced in complete isolation from other services. Communication between services is possible only via public endpoints, whereas communication between roles of services can be done over public, and internal/protected endpoints. From networking point of view each service has single load balanced virtual IP (VIP). Each VIP is a separate entity for detecting and protecting against distributed denial of services (DDoS) attacks. In order to avoid requiring non-default HTTP ports (80,443) remapping by the customers for communicating to different roles endpoints within a single service one approach utilizes separate services exposing endpoints on the default ports. A second approach utilizes a frontdoor service routing requests on the default ports to the roles within a single service based on a received uniform resource locator (URL). Services and roles in the present platform both include logically related groups of components. Separate services are created when their functions or deployment are significantly decoupled from one another, or when functionality of dependent services is leveraged. 
FIG.2is a block diagram of a platform200implementing services and roles according to one approach. The services include a subscription service220, a core service210, a monitoring service230, analytics service240, a metadata store250, a management data store260, a file store280and runtime state store270. However, in other approaches additional services can be added to the platform depending on the needs and desires of the particular end user. Client devices290connect to the platform to access these services. The core service210implements a number of different roles within the service. These roles include a frontdoor role211, a management role212, a sync/recall role213, a background processing role214, a data access discovery role215, and a backup/restore role216. It should be noted thatFIG.2illustrates a single instance of the core service210and each role within the core service210. However, in the present platform the core service210can exist on any number of nodes and these nodes can be distributed at different locations around the globe or within a data center. A request for any particular role can be serviced by any one of these particular instances of the node. Further, none of the roles presented herein is tied to a particular instance of a file store280or metadata table. Even further, it should be noted that depending on needs of the platform certain roles can be expanded to be on more nodes without the need to scale out the remaining roles as well. The frontdoor role211is in one approach configured to implement a thin frontdoor based on a gatekeeper pattern. A gatekeeper pattern is designed to protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service. It validates and sanitizes requests, and passes requests and data between them. This provides an additional layer of security, and can limit the attack surface of the system.
The frontdoor role211performs the validation of the incoming requests and routes those requests to the management role212, sync role213, data access role215or the backup/restore role216based on the content of the request. The frontdoor role211or sync role213also queues requests for long running tasks to the background processing role214in a background task queue. The frontdoor role211in some approaches also implements additional security measures or approaches to enhance the protection of the client data and the overall security of the system. The frontdoor role211will interact with a security library218to implement the additional security. The security library218can implement a security mechanism such as mutual authentication, SSL/TLS encryption, RMS data encryption, shared secret, distributed denial of service (DDoS) protection, and shared access signature. Other security mechanisms can be employed. The management role212is in one approach configured to provide provisioning, management, and configuration capabilities to the other services and roles in the platform. The management role212provides service endpoints for administrative access to the core service and the ability to modify the service through command inputs. The associated data for the management role212is stored in the management data store260. The sync/recall role, referred to herein as the sync role213, is a component of the platform that is configured to serve the synchronization and file recall foreground requests from the sync and recall clients, (e.g. client devices290) that are provided to it from the frontdoor role211in a stateless manner. The client device290makes a sync request to the frontdoor role211which then provides the request to the sync role213. The sync role213interfaces with a load balancer that balances the request for efficiency and to ensure that the response is returned prior to a predetermined timeout time. 
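The gatekeeper-style frontdoor routing described above can be sketched as: validate the incoming request, dispatch it by content to the management, sync/recall, data access discovery, or backup/restore role, and queue long-running work to the background task queue instead. The route table, request kinds, and string names are all illustrative assumptions.

```python
# Assumed mapping from request kind to the role that serves it.
ROLE_ROUTES = {
    "manage": "management-role",
    "sync": "sync-recall-role",
    "recall": "sync-recall-role",
    "discover": "data-access-discovery-role",
    "restore": "backup-restore-role",
}
# Long-running tasks go to the background task queue, not a foreground role.
LONG_RUNNING = {"backup", "change-detection"}

def route_request(request):
    """Gatekeeper: validate, then dispatch by request content."""
    kind = request.get("kind")
    if kind in LONG_RUNNING:
        return ("background-task-queue", kind)
    if kind not in ROLE_ROUTES:
        raise ValueError("rejected by gatekeeper")  # validation failed
    return (ROLE_ROUTES[kind], kind)
```

Keeping the frontdoor this thin is what the gatekeeper pattern buys: the broker only validates and forwards, so its attack surface stays small.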
The sync role213during the sync accesses data from the metadata store250, the file store280, and a runtime state store270that are located on the background of the platform. There can be any number of sync roles213operating on the platform at a given time. This allows for efficient processing of sync requests. Any one of these sync roles213is capable of handling the request. As such a request from a client device290could go to different instances of the sync role213at different times. As mentioned previously the sync role213is a stateless role with respect to the file service and file stores280. The core service further includes a load balancer that implements load balancing of the frontdoor requests based on a stateless sync/recall processing as well as an efficient caching system. If care is not taken, over time each instance of the sync role213will end up caching data for every request from client devices290that connect or communicate with it. That is, every partnership will result in additional caching. Thus, the load balancer implements a caching policy that keeps the sync role213from having its CPU and memory utilization exceed a threshold limit and also ensures that the internal data structures stay below a threshold level. The load balancer routes, in one approach, requests using a round robin policy. That is, the next sync role213for the service that has not had a request is assigned to process a request. The load balancer can also employ heuristics to optimize the location of the cache used locally and the resource usage. For example requests for the sync role213can be routed based on a hash of tenant or sync folders that provide affinity to the request without introducing statefulness to the request. The background processing role214is a component of the system that handles long running tasks, as well as offloaded tasks from the sync role213that the sync role213offloads in an attempt to balance its workload.
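The hash-affinity heuristic above can be sketched as: choose one of N stateless sync-role instances by a stable hash of the tenant or sync-folder identifier, so repeat requests for the same partnership tend to land on an instance with a warm cache, without making any instance stateful. The hash choice and names are illustrative assumptions.

```python
import hashlib

def pick_sync_role(tenant_id, num_roles):
    """Stable hash-based affinity: same tenant -> same role instance."""
    digest = hashlib.sha256(tenant_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_roles
```

Because the mapping is a pure function of the identifier, any instance could still serve any request (statelessness is preserved); affinity only improves cache locality.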
These are tasks that take a significantly long period of time and whose operation, if done in the foreground, could impact the overall performance of the system. Long-running tasks can include tasks such as change detection using enumeration, back-up and restore of systems, etc. The background processing role 214 receives from the frontdoor role 211 the various tasks to be performed through the background task queue. The background task queue implements sharding and priority queues for managing the tasks that are given to the background processing role 214. It uses performance metrics related to the throughput of the task. The priority queues are used primarily for requests/tasks that require low latency, such as file recall. The results created by the background processing role 214 are reported to the calling client device 290 asynchronously through a notification to the device. The data access discovery role is a component of the system that provides location and data access capability to the platform. This can include providing for secure access to the data. This secure access can be achieved using a REST interface and shared access signature keys. The backup/restore role 216 is a component of the system that allows for the maintaining of recovery data that can recover a client device 290 in the event of a disaster, client device replacement, data loss, or other failure. Backup data can be provided from the device to the system using this role. Recovery of the client device 290 will cause the stored data to be pushed to the client device 290 to recover it. Monitoring service 230 is a component of the system that provides service status, diagnostics, and troubleshooting capabilities as a combined view of last mile, outside-in (active), and inside-out (passive) monitoring of the system components. The underlying hosted platform provides monitoring of the platform infrastructure (such as a datacenter and a network) and platform services.
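The priority-queue behavior of the background task queue can be sketched briefly. The two-level priority scheme and task names below are assumptions for illustration; the essential behavior is that low-latency tasks such as file recall are dequeued ahead of long-running tasks such as enumeration or backup, while tasks of equal priority keep FIFO order.

```python
# Minimal sketch of a background task queue with priorities, assuming a
# two-level priority scheme (the actual queue also implements sharding
# and throughput-based metrics, which are omitted here).
import heapq
import itertools

class BackgroundTaskQueue:
    HIGH, NORMAL = 0, 1

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO order within a priority

    def enqueue(self, task: str, priority: int = NORMAL) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), task))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]
```

A file-recall task enqueued after an enumeration task still comes out first, which is the low-latency property the priority queues exist to provide.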
Additional diagnostics and troubleshooting can be handled by the monitoring service 230 and are executed in the background. Analytics service 240 is a component of the system that allows for telemetry analysis of the system. Specifically, the analytics service 240 can provide a portal through which the administrator can view business and operational analytics capabilities. This can allow the administrator to make data-driven decisions about live site or business aspects of the service. The analytics service 240 receives data from multiple data sources for post-processing, reporting, and machine learning. These data sources assist in generating the analysis. Metadata store 250 is a component of the system that handles the metadata for both the syncing process and the files themselves. The metadata store 250 implements replica and item metadata storage, secondary indexes, locking, snapshot isolation, and garbage collection. The secondary indexes support query patterns for a variety of sync scenarios, such as range queries. Locking is provided to ensure that only a single writer at a time can access a particular file or a particular replica where the file resides. These accesses occur when processing change batches or during enumeration of a particular namespace 281. Snapshot isolation consumes committed data and prevents garbage collection until the various endpoints have consumed the data that has been committed. The metadata store 250 also provides cross-table consistency. Knowledge and item data must be committed together. This ensures that the full picture for a particular replica is known: that is, whether the replica has changed or not and what the state of the replica is. The management data store 260 is a component of the system that manages the placement of the data within the file store 280 and corresponding namespace 281, as well as any other data provided by the client device 290 for the purposes of management.
As the file store 280 is shared among a variety of different users and customers, each namespace 281 must be kept separate from the other namespaces 281. The management data store 260 maintains a table for each namespace 281-1, 281-2, 281-N (collectively 281) that is managed by the hosting system. Each table represents the configuration for the particular tenant's replicas and the namespace 281 for each replica stored in the file store 280. This configuration ensures that the sync represents the correct configuration of the file store 280 and that the metadata also reflects this configuration. The file store 280 is a component of the system where the actual data for the namespace 281 resides. The file store 280 can store the data in containers. Each user has a corresponding container in the file store 280 that corresponds to the sync folder maintained in the management data store 260. These user containers may be shared with a number of different users and devices as well. Access to the container may occur from multiple endpoints at the same or nearly the same time. A single container is maintained for the generic share. Again, the generic share corresponds to the sync folder on the management data store 260. Unlike traditional file stores in a hosted environment, the users and devices can write directly to the corresponding containers without having to go through the sync role 213 to perform these reads and writes. The various endpoints (users and devices) are provided with information that allows them to know the actual location on the file store 280 where the data resides, such as a uniform resource identifier (URI) or uniform naming convention (UNC) path. Previous approaches have required the use of an intermediary to access the file store 280, as the location of the file store 280 was not known precisely to the device. However, access to the file store 280 for a particular container or namespace 281 can still be done through the sync role 213 or other core service 210 roles as traditionally done.
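Handing endpoints a direct location in the file store typically pairs with a signed, expiring URI so that direct access does not bypass access control. The sketch below is a generic HMAC-signed URI, not the actual Azure shared access signature format; the URI layout, parameter names, and key handling are assumptions for illustration.

```python
# Generic sketch of an expiring signed URI for direct file store access.
# This is NOT the real Azure SAS format; the query layout and parameter
# names ("expiry", "sig") are illustrative assumptions.
import hashlib
import hmac
import time

def make_signed_uri(base_uri: str, key: bytes, valid_for_s: int = 3600,
                    now=None) -> str:
    """Sign a container/file URI so a client can access it directly
    until the expiry time, without routing through the sync role."""
    expiry = (now if now is not None else int(time.time())) + valid_for_s
    payload = f"{base_uri}?expiry={expiry}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&sig={sig}"

def verify_signed_uri(uri: str, key: bytes, now=None) -> bool:
    payload, _, sig = uri.rpartition("&sig=")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered URI or wrong key
    expiry = int(payload.rpartition("expiry=")[2])
    return (now if now is not None else int(time.time())) < expiry
```

The signature binds the expiry into the URI, so a client can neither extend its access window nor retarget the URI at a different container.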
Thus, depending on the capabilities of the application or device, the file may be accessed through either method. As such, legacy applications are able to use the file store 280 without modification. The data that is stored in the file store 280 and the particular containers is stored in a stateless manner. That is, the client manages any transient state necessary for any client interactions with the file store 280. The file store 280 does not maintain any of this information with respect to this transient state of the data in its own system. Before a file namespace 281 can be synchronized to the cloud endpoint, storage must be allocated or provisioned for the files, directories, and metadata. The present approach provisions a single file share or container for each sync namespace 281. In this way multiple namespaces 281 can be hosted on the cloud, but each namespace 281 is able to remain separated from the others. In one approach the file share is an Azure File Share. However, other versions and types of file shares can be used. A file share is a unit of file storage that represents the root of a hierarchical namespace 281 of folders and files. The share can be accessed through an application programming interface (API), such as the Azure File REST API, and also through protocols, such as the CIFS/SMB protocol. By mapping a sync namespace 281 to a file share, a number of advantages can be recognized. First, the file share allows for direct sync-to-share namespace 281 root mapping. Other provisioning options, such as user-to-share mapping or tenant-to-share mapping, require that the individual sync namespaces 281 be carved out underneath a file share root. Second is snapshot isolation. The present approach leverages the file share-level snapshot feature of the hosting service. This supports the ability to create and maintain an efficient copy of the state of the share at a single point in time.
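The one-share-per-namespace provisioning rule, with its explicit (management console) and implicit (first sync attempt) variants, can be illustrated with a small sketch. The in-memory store and path scheme are assumptions; the real share would be created through the hosting service's API.

```python
# Illustrative provisioning logic: one file share per sync namespace keeps
# each tenant's namespace isolated from the others. The in-memory dict and
# "/shares/..." path scheme stand in for the hosting service's real API.

class ShareProvisioner:
    def __init__(self):
        self._shares = {}  # namespace id -> share root

    def provision_explicit(self, namespace_id: str) -> str:
        """Explicit provisioning via a management console action."""
        if namespace_id in self._shares:
            raise ValueError(f"share already exists for {namespace_id}")
        self._shares[namespace_id] = f"/shares/{namespace_id}"
        return self._shares[namespace_id]

    def provision_implicit(self, namespace_id: str) -> str:
        """Implicit provisioning: the first sync attempt creates the
        share; subsequent attempts reuse the existing one."""
        return self._shares.setdefault(namespace_id, f"/shares/{namespace_id}")
```

Either path ends in the same state: exactly one share rooted at the namespace, which is what enables the direct sync-to-share root mapping described above.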
This is important for supporting backup-restore, migration, high-availability, and other functionality where a consistent view of the namespace 281 is desired. Third is security isolation. The present approach leverages a shared access signature (SAS) authorization feature of the host service. This supports an option of controlling access to the share at the root level on a per-namespace 281 basis. Share-level access control can be in place of or in addition to finer-grained access control at the file or folder level. The hosting system can implement two different approaches for determining when to provision the file share for a particular namespace 281. One approach is to use explicit provisioning through a management console to create a namespace 281 sync partnership with a specific user's namespace 281. A second approach is to implicitly provision the namespace 281 upon the first attempt to sync with the namespace 281. Once the sync namespace 281 has been provisioned with a file share, the namespace 281 can be synchronized between a participant and the cloud endpoint. The sync solution uses a file synchronization protocol between the two parties (endpoints) of the sync partnership. The process of synchronization can follow the process discussed above with respect to FIG. 1. However, it should be noted that the protocol involves the exchange of metadata about the state of the files/folders inside the namespace 281 on each of the endpoints, followed by one or more upload and download sessions where file and folder metadata and data are transferred and created on each endpoint until the state of the namespace 281 on each endpoint matches. In the case where the files have changed on both sides of the sync partnership since the last sync session, conflicts are detected, which may result in one or both sets of changes being retained. The runtime state store 270 is a component of the system that maintains the state of the files and a sync status of the files.
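The conflict-detection rule described (both sides changed since the last sync session) can be sketched by comparing per-item versions against the last synced state. Representing endpoint state as name-to-version maps is a simplifying assumption; the actual protocol exchanges richer metadata.

```python
# Sketch of the described conflict rule: an item is in conflict only if
# BOTH endpoints changed it since the last sync, and changed it
# differently. Name -> version maps are an illustrative simplification.

def detect_conflicts(local: dict, remote: dict, last_synced: dict) -> list:
    conflicts = []
    for name in set(local) & set(remote):
        base = last_synced.get(name)
        if (local[name] != base and remote[name] != base
                and local[name] != remote[name]):
            conflicts.append(name)
    return sorted(conflicts)
```

Items changed on only one side are ordinary uploads or downloads, and items changed identically on both sides already agree, so neither case is reported as a conflict.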
The runtime state store 270 enables the particular sync role 213 to remain stateless with respect to the file system and the file store 280. When the particular sync role 213 needs state to perform a particular task, the runtime state store 270 provides the needed state relationship for the sync process to continue. Because the state is maintained away from the sync role 213, any sync role 213 can perform any process of the sync. The client devices 290 are any devices that can connect to the sync role 213 for the purposes of syncing their data with the data hosted and stored on the file store 280. The client devices 290 can include servers located on premises, mobile phones, laptops, tablets, or any other device that interfaces with the file store 280 or another instance of the core service 210. Further, devices can also be virtual versions of the devices, where the device is hosted on another platform. The client devices 290 can interact and write to the file store 280 directly or can go through the sync role 213 to access the file store 280 and the particular containers contained therein. Each client device 290 also has its own version of the sync engine 292. This sync engine is the gateway for the client device 290 to initiate a sync upload or download with the sync role 213. From the client's perspective the sync process is no different than in systems where the only way to access data is through the intermediary. File sync activity in the present system can be periodic and driven by schedules, or can be driven by on-demand directives from the client endpoints of the sync partnership. File access activity can occur at any time, as the client devices 290 can directly access the file store without having to use the intermediary sync role to access the cloud-based files. FIG. 3 is a flow diagram illustrating a process for syncing files between a client device 290 and a remote file service according to one illustrative embodiment.
The synchronization process begins when the client device 290 requests a synchronization session with the core service 210. This is illustrated at step 310. In some approaches the request is generated by the client device 290. In other approaches the request is generated by the core service 210; in this approach the core service 210 sends a message to the client device 290 instructing the client device 290 to make a sync request. The sync requests can be on-demand sync requests or they can be periodic sync requests. The timing of the periodic sync requests can be set by an administrator to ensure the consistency of the data across all of the sync clients. For example, a periodic sync request may be made every day or every hour depending on the level of activity in the corresponding files. The synchronization request is received by the core service 210 and is provided to the frontdoor role 211 of the core service 210 to determine if the request can be processed. This is illustrated at step 320. Again, as discussed earlier, the frontdoor role 211 does not have direct access to the corresponding files in the file store, has limited access to any persistent storage of the host system, and can load balance the requests that are received. At this step in the process the frontdoor role 211 implements its gatekeeper pattern in protecting the exposure of the client's data. The frontdoor role 211 verifies that the request from the client is a proper request and contains the proper credentials. If the request does not meet the requirements for access to the underlying data store or service, the frontdoor role 211 does not process the request any further. If the request does meet the requirements for access, the frontdoor role 211 routes the request to the correct service role and to the shards to support the resource affinity necessary to maintain a stateless synchronization. The frontdoor role 211 analyzes the request and determines whether the particular request is for a long-running task.
This is illustrated at step 330. Long-running tasks are tasks, such as enumeration or back-up and restore of systems, that use a significant amount of resources or whose execution exceeds a predetermined threshold amount of time to complete. These long-running tasks are sent to the background task queue to be processed from there. The frontdoor role 211 may at this time attempt to determine the size of the request by sending the request to a sync role 213 for determination of the size of the particular request. The sync role 213 can read from the metadata store 250 to determine what files have changed. Based on the number of files that have changed, the frontdoor role 211 can determine whether the particular sync request is a long-running request or a normal request. In some instances the frontdoor can determine on its own whether the request is for a long- or short-running task. For example, if a sync request is for a particular file or a particular folder, then the frontdoor role 211 could determine that the task is a short-running task, whereas if the request is for a series of folders or directories, then the frontdoor role 211 could determine that the request is a long-running task. In other instances, the frontdoor role 211 simply passes the sync request to the sync role 213 and allows the sync role to determine whether the requests are short running or long running, and whether they should be processed by the sync role 213 or passed to the background task processing role 214. The background task processing role receives the long-running tasks from the background task queue that was populated by the frontdoor role 211. This is illustrated at step 335. The background task processing role takes the next task in the queue and determines if the request or task needs to be sharded. Sharding is a type of database partitioning that separates very large databases into smaller, faster, more easily managed parts called data shards.
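The classification heuristic described above can be sketched in a few lines. The request scope values and the change-count threshold are assumed, tunable illustrations; the rule is simply that single-file/single-folder requests are short running unless the metadata store reports many changed files, while broader scopes go to the background queue.

```python
# Illustrative long/short classification heuristic for sync requests.
# The "scope" field and the threshold value are assumptions for the
# sketch; the real frontdoor may also consult a sync role for sizing.

LONG_RUNNING_CHANGE_THRESHOLD = 1000  # assumed tunable limit

def is_long_running(request: dict, changed_file_count: int = 0) -> bool:
    scope = request.get("scope", "file")
    if scope in ("file", "folder"):
        # Narrow scope: short running unless many files changed.
        return changed_file_count > LONG_RUNNING_CHANGE_THRESHOLD
    # Broad scopes (directory trees, enumeration, backup, restore).
    return True
```

A request judged long running is enqueued for the background processing role; anything else is dispatched directly to a sync role.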
In this instance the file store 280 has been divided into horizontal partitions or shards. Each shard has the same schema, but holds its own distinct subset of the data. A shard is a data store in its own right, running on a different instance of the file store 280's storage nodes. In this instance the request will be split into different parts to reach the appropriate portions of the file store 280 where the underlying data is located. The background task processing role will then process the request against the corresponding file store 280 to retrieve or update the data contained there. The client device 290 will be updated on the status of these background tasks asynchronously by the background task processing role. In one approach the background processing role 214 processes the request and stores results in the runtime state store 270. The frontdoor 211 can retrieve the result of the operation from the runtime state store 270, so that the response can be provided to the client device 290. In some cases, the frontdoor 211 checks the runtime state for a small period of time, to see if the task completes in a medium (on the order of 30 seconds) amount of time, so it can return the result immediately to the client. This is useful in situations where the heuristic for identifying that a task is long running is wrong, and it actually executes quickly. In other cases, when the frontdoor 211 has waited long enough, it returns a 'pending' result back to the client, with a URL representing the pending result. The client 290 can continue waiting for the result by polling the pending result URL. The frontdoor 211, when receiving a request on this pending result URL, can check the runtime state store 270 to see if the task has reported a result. If one is found, the result of the long-running task is returned to the client 290. The further processing of the long-running task is similar to the processing of a short-running task and follows steps 350-370 below.
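The pending-result pattern above can be condensed into a small sketch. The in-memory runtime state store, the response shapes, and the `/results/...` URL scheme are illustrative assumptions; the behavior shown is the one described: return the result if it has been reported, otherwise keep checking for the medium-length window, and after that hand back a pending URL for the client to poll.

```python
# Sketch of the frontdoor's pending-result handling for background tasks.
# The dict-backed store, response dicts, and URL scheme are assumed for
# illustration only.

class RuntimeStateStore:
    def __init__(self):
        self._results = {}

    def report(self, task_id, result):
        """Called by the background processing role when a task finishes."""
        self._results[task_id] = result

    def poll(self, task_id):
        return self._results.get(task_id)

def respond(store, task_id, waited_long_enough: bool):
    result = store.poll(task_id)
    if result is not None:
        # Task finished (possibly faster than the heuristic predicted).
        return {"status": "done", "result": result}
    if waited_long_enough:
        # Give the client a URL to poll instead of holding the connection.
        return {"status": "pending", "poll_url": f"/results/{task_id}"}
    return {"status": "wait"}  # keep checking within the ~30 s window
```

A later request on the poll URL runs the same check, so a frontdoor instance other than the original one can serve it; the state lives only in the runtime state store.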
If the request is determined to be a short-running task, the sync request is passed to a sync role 213 for processing. This is illustrated at step 340. The frontdoor role 211 can select the particular sync role 213 to receive the request based on a number of factors. When multiple nodes of the sync role 213 exist, the frontdoor role 211 may simply choose the sync node based on a round-robin approach. That is, for example, if the first sync role 213 node was selected previously, the second sync node in the group of nodes would be selected for the next sync request. In other approaches the frontdoor role 211, in conjunction with the load balancer, may look at the loads currently experienced on the nodes and the size of the sync requests and select a low-usage sync role 213 node for the processing of the request. The frontdoor role 211 load balances the request and then sends the request to the selected sync role 213. In some approaches, due to the size of the sync request or the speed needed for the request to be serviced, the sync request may be broken apart or sharded into several batches by the frontdoor role 211. Each of these batches may be sent to a different sync role 213 for processing. In some approaches a request may be sent to the background processing role 214 instead. The sync role 213 receives the request for the synchronization from the frontdoor role 211 and begins to process the synchronization request. This is illustrated at step 350. At this point the sync role 213 needs to determine what files have changed and therefore which files will require synchronization. The sync role 213 builds or identifies a batch of files to be uploaded or downloaded from the file store 280 based on the changes. This information as to which files are to be modified by the sync process is provided back to the frontdoor role 211, which provides this information to the client device 290. In some approaches the sync role 213 can provide this information directly back to the client device 290.
Included with the files that are to be modified, either by upload or download, the sync role 213 may provide the universal resource identifier such that the client device 290 can read from or write directly to the file store 280 as part of the sync process. In this way any sync role 213 can handle the requests for the sync process, as it does not need to maintain state with the files during the process. For an upload sync, the sync role 213 causes a staging folder or area to be created in the file store 280. The staging folder is a holding area where the newer versions of the files to be synced are temporarily held until the sync role 213 can commit the files to the file store 280 through the syncing process. In some approaches the sync role 213 can determine that the particular request that was sent to it will exceed a predetermined threshold of resource usage. In this approach the sync role 213 can redirect the request to the background processing role 214 for processing. In other approaches the sync role 213 can shard the request itself and send the shards to other sync roles. The client device 290 receives the batch information indicating what files to upload or download to or from the file store 280 and transfers the files indicated in the batch. This is illustrated at step 360. At this step, and depending on whether the sync is an upload or a download, the file system for the client device 290 either uploads the files to the staging folder in the file store 280 or downloads the corresponding files from the file store 280. This upload/download of the files is performed directly with the file store 280, or through the use of a different file transfer protocol, and not through the core service 210 or the sync role 213. In this way the particular roles in the core service 210 are not required to maintain state with the files themselves during this process. State is maintained only with the file store 280. When files are uploaded to the file store 280, they are uploaded to a staging folder in the file store 280.
Each batch that was created by the sync role 213 may have its own staging area. Once the files are uploaded to the staging area, or the download is completed, the client device 290 sends a message to the sync role 213 indicating that the upload/download has been completed by the client device 290. In some approaches the files in the staging folder and the corresponding versions of the files in the file store 280 are not locked from reading and writing during the upload process. If a file in the file store 280 changes before the sync can be done, such as from another device accessing the file store 280 through the direct access feature, that file will not be synced or committed at this time, but may be held back until a later sync. The sync role 213 responds to the indication that the upload/download has been completed by committing the changes. This is illustrated at step 370. For a download, the sync role 213 provides change batches to the client, allowing the client to download the file content and apply the changes to its local file store and local metadata store. In some approaches the sync role 213 commits the changes by updating the sync request to completed, and in some approaches by updating the metadata store 250 to indicate that a particular client has been updated with this information. With an upload, the sync role 213 causes the files in the staging area to overwrite or replace the corresponding files in the file store 280, as well as updating the metadata store 250. The sync role 213 causes the file in the file store 280 to be renamed to a temporary file, and then the file in the staging area is renamed in the file store 280 as the new version of the file. This allows for the files to be updated and the batch to be processed even if the particular servicing sync node were to experience a failure during the sync process, as any sync node can pick up the files from the staging folder and continue the synchronization process.
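The rename-based commit step described above can be sketched against a toy file store. Modeling the store as a path-to-bytes dict and the `.tmp` suffix are assumptions for illustration; the point is the order of operations: move the live file aside, promote the staged copy, then discard the old version, so a sync node that fails mid-commit leaves either the staged file or the promoted file in place for another node to finish from.

```python
# Sketch of the staged-upload commit via renames. The dict-of-bytes file
# store and the ".tmp" naming are illustrative assumptions.

def commit_staged(store: dict, staged_path: str, final_path: str) -> None:
    if staged_path not in store:
        raise FileNotFoundError(staged_path)
    if final_path in store:
        # Rename the current version aside as a temporary file.
        store[final_path + ".tmp"] = store.pop(final_path)
    # Rename (promote) the staged file as the new version.
    store[final_path] = store.pop(staged_path)
    # Drop the old version once the swap has completed.
    store.pop(final_path + ".tmp", None)
```

Because the staged copy persists until the promotion succeeds, any sync node can retry the commit after a failure; no node-local state is required.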
FIG. 4 illustrates a component diagram of a computing device according to one embodiment. The computing device 400 can be utilized to implement one or more computing devices, computer processes, or software modules described herein. In one example, the computing device 400 can be utilized to process calculations, execute instructions, and receive and transmit digital signals. In another example, the computing device 400 can be utilized to process calculations, execute instructions, receive and transmit digital signals, receive and transmit search queries and hypertext, and compile computer code, as required by the system of the present embodiments. Further, the computing device 400 can be a distributed computing device where components of the computing device 400 are located on different computing devices that are connected to each other through a network or other forms of connections. Additionally, the computing device 400 can be a cloud-based computing device. The computing device 400 can be any general or special purpose computer now known or to become known capable of performing the steps and/or performing the functions described herein, either in software, hardware, firmware, or a combination thereof. In its most basic configuration, computing device 400 typically includes at least one central processing unit (CPU) 402 and memory 404. Depending on the exact configuration and type of computing device, memory 404 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. Additionally, computing device 400 may also have additional features/functionality. For example, computing device 400 may include multiple CPUs. The described methods may be executed in any manner by any processing unit in computing device 400. For example, the described process may be executed by multiple CPUs in parallel. Computing device 400 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape.
Such additional storage is illustrated in FIG. 4 by storage 406. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Memory 404 and storage 406 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 400. Any such computer storage media may be part of computing device 400. Computing device 400 may also contain communications device(s) 412 that allow the device to communicate with other devices. Communications device(s) 412 is an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. The term computer-readable media as used herein includes both computer storage media and communication media. The described methods may be encoded in any computer-readable media in any form, such as data, computer-executable instructions, and the like.
Computing device 400 may also have input device(s) 410 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 408 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length. Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or distributively process the software by executing some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
11943292

While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. DETAILED DESCRIPTION Aspects of the present disclosure relate to distributed system workload management, and, more specifically, to user tenancy management in distributed systems. An open source container environment may host multiple tenants; various mechanisms may be employed to manage hosting multiple tenants. In accordance with some embodiments of the present disclosure, an environment may share one application between multiple tenants, thereby reducing the need for duplication of application instances for each tenant that may use such an application. In accordance with some embodiments of the present disclosure, a controller-based application may be adopted by tenants. In accordance with some embodiments of the present disclosure, a controller may be scaled up or down based on tenant need and/or requirements; replications may be generated and/or terminated based on tenant demand. In some embodiments, one replication may be used per tenant; in some embodiments, each such replication may be a cluster for a specific tenant and may be referred to as a tenant cluster. In some embodiments of the present disclosure, information may be injected into replications. For example, a pod mutation webhook may inject application programming interface (API) information into a replication (e.g., a related tenant cluster). In some embodiments of the present disclosure, generation and/or termination of a replication may be done manually, automatically, on demand, and/or on a schedule.
For example, a super cluster may automatically generate a new replication every day at a specific time to perform a specific task and terminate the replication upon the completion of that task. For example, a super cluster may automatically generate a scheduled replication and also generate another replication because a user logged into a system and requested a replication. In accordance with the present disclosure, an on-demand approach may be used. For example, a tenant cluster administrator may require a functionality supported by a controller; the functionality may already have been manually installed in the super cluster. In some embodiments, the functionality may have been installed in the super cluster by generating a custom resource (CR). In some embodiments, the required functionality may be dismissed by deleting the CR. Deleting the CR may result in any resources that the CR was using once again becoming available to the super cluster for redeployment for another application. In accordance with the present disclosure, an automatic approach may be used. For example, a user may engage a super cluster for a tenancy, and the super cluster may automatically generate a new tenancy for the user. In some embodiments, the super cluster may check every registered custom resource definition (CRD) to identify which, if any, are tenant sensitive. In some embodiments, the super cluster may determine which, if any, registered CRDs may scale up a related controller replication. The super cluster may identify a tenant sensitive, scalable CRD and use that CRD to automatically generate a new replication. A system in accordance with the present disclosure may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include registering a custom resource definition for a tenant with a host and scaling a controller for the custom resource definition.
The operations may include generating a replication using the custom resource definition, injecting information into the replication, and syncing a status of the custom resource definition between the host and the tenant. In some embodiments of the present disclosure, the operations may include notifying the host of requirements for the tenant. In some embodiments, the host may generate the replication using the requirements. In some embodiments of the present disclosure, the information may be injected into the replication using a pod webhook. In some embodiments of the present disclosure, an event-driven autoscaling tool scales the controller. In some embodiments, the event-driven autoscaling tool may be an open-source event driven autoscaler (EDA) such as a Kubernetes® event-driven autoscaler (KEDA). In some embodiments of the present disclosure, the operations may include enhancing the custom resource definition for a sensitivity of the tenant. In some embodiments of the present disclosure, the operations may include enabling communication between the replication and a server. In some embodiments of the present disclosure, the information may be application programming interface information. FIG.1illustrates a super cluster system100in accordance with some embodiments of the present disclosure. The super cluster system100includes a target controller120, a syncer130, and a tenant cluster140. The super cluster system100includes a pod mutation webhook112in communication with a replication cluster122in the target controller120. The pod mutation webhook112may watch the super cluster system100for information to inject into the replication cluster122. For example, the pod mutation webhook112may watch for a change in a custom resource specification and/or directions from a control node (e.g., via an API server) and inject any new, additional, and/or changed tenant cluster information into the replication cluster122.
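The injection performed by a pod mutation webhook can be sketched as building a JSONPatch over the replication's pod definition. The sketch below is a minimal illustration, not the actual implementation: the function names, the patch paths, and the idea of carrying the tenant API address as a host alias are assumptions layered on the description above.

```python
import base64
import json

def build_pod_patch(pod_spec, tenant_api_ip, tenant_api_host, credential_secret):
    """Build a JSONPatch injecting tenant cluster information into a pod.

    Hypothetical sketch: adds a HostAlias so the replication can reach the
    tenant API server, and replaces the secret holding the credential token.
    """
    patch = []
    # Adopt a HostAlias to enable watching the tenant cluster's API.
    if "hostAliases" not in pod_spec:
        patch.append({
            "op": "add",
            "path": "/spec/hostAliases",
            "value": [{"ip": tenant_api_ip, "hostnames": [tenant_api_host]}],
        })
    # Update the credential token bond by replacing the secret referenced
    # in the pod definition (the token itself is stored in that secret).
    for i, vol in enumerate(pod_spec.get("volumes", [])):
        if vol.get("secret"):
            patch.append({
                "op": "replace",
                "path": f"/spec/volumes/{i}/secret/secretName",
                "value": credential_secret,
            })
    return patch

def admission_response(uid, patch):
    # Shape of a mutating-webhook AdmissionReview response carrying the patch.
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```

A webhook structured this way only touches the fields it needs, so the rest of the replication's pod definition passes through unchanged.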
The pod mutation webhook112may perform one or more tasks. For example, the pod mutation webhook112may adopt one or more HostAliases; the pod mutation webhook112may adopt HostAliases, for example, to change, update, or otherwise alter components of the host cluster and/or the replication cluster122of the target controller120. The pod mutation webhook112may adopt HostAliases to enable watching the API of the tenant cluster140. In some embodiments, the pod mutation webhook112may update a credential token bond for the pod. In some embodiments, the credential token may be stored in a secret such that updating the credential token bond can be achieved by replacing the secret in the pod definition. The super cluster system100includes an event-driven autoscaler114. The event-driven autoscaler114may scale the target controller120to meet a resource request or dismiss a resource that is no longer in use. The replication cluster122is shown in dashed lines to indicate that it was newly added to the super cluster system100. The replication cluster122may be a transient component, that is, generated on demand and dismissed when no longer in use. The event-driven autoscaler114is shown in direct communication with the replication cluster122. In some embodiments, the event-driven autoscaler114may be in indirect communication with the replication cluster122, in direct communication with the target controller120, and/or in indirect communication with the target controller120. The scaling (e.g., whether and how much to scale a controller and/or replication up or down) may depend on one or more manual and/or automatic triggers. For example, a new user may notify the super cluster system100of a desire for a new tenancy, and the event-driven autoscaler114may generate a new replication cluster122to satisfy the request.
In another example, a user may complete the task the replication cluster122was generated for, and the event-driven autoscaler114may identify the task completion and dismiss the replication cluster122. The target controller120is in communication with a tenant cluster140. The tenant cluster140includes a controller requirement142for a custom resource (CR) in communication with a syncer130. The controller requirement142may be a CRD. The controller requirement142may communicate one or more needs of the tenant cluster140to the super cluster; for example, the controller requirement142may notify a host cluster that the tenant cluster140is in need of a certain controller (e.g., target controller120). The controller requirement142may communicate with a syncer130. The syncer130may watch for CR related information and/or updates in the tenant cluster140and/or the hosting super cluster. The syncer130may, upon discovering a change (e.g., a new resource request or use requirement), sync the tenant cluster140to the host cluster. The tenant cluster140may generate a new controller requirement150for a new or requested custom resource (CR). The new controller requirement150may communicate with a controller116and the event-driven autoscaler114. FIG.2depicts a cluster service request diagram200in accordance with some embodiments of the present disclosure. The cluster service request diagram200includes a super cluster202receiving and processing a request from a tenant cluster212. The tenant cluster212may make214a CRD. The CRD may be, for example, a CR controller requirement (e.g., new controller requirement150ofFIG.1). The tenant cluster212may communicate the CRD to the super cluster202to notify the super cluster202of a required controller service. The super cluster202may use a syncer (e.g., syncer130ofFIG.1) to sync222the CRD from the tenant cluster212.
In some embodiments, the syncer may relay to the tenant cluster212that the communication has been received, the status of a resource request, and/or similar information. The super cluster202may find224a target custom controller. In some embodiments, a syncer (e.g., syncer130ofFIG.1) and/or a controller (e.g., controller116ofFIG.1) may be used to find the target controller. In some embodiments, the controller may be targeted because it is related to the tenant cluster and/or the CRD (e.g., the controller is recognized as having capabilities that match the CRD requirements). The super cluster202may send226the information (e.g., the CRD requirements) to the CR. The super cluster202may scale228the target controller. In some embodiments, an event-driven autoscaler (e.g., the event-driven autoscaler114ofFIG.1or a KEDA) may be used to scale228the target controller. The super cluster202may generate232a new replication and inject234tenant cluster information into the new replication. The information may include, for example, API information such as the tenant cluster API address and/or credential information. In some embodiments, a pod webhook (e.g., the pod mutation webhook112ofFIG.1) may be used to inject234the tenant cluster information into the new replication. The super cluster202may sync236the target controller-related CRD to the tenant cluster212and update238the status of the CRD. In some embodiments, a controller (e.g., controller116ofFIG.1) may be used to update238the status of the CRD. The super cluster202may sync242the CRD status with the tenant cluster212. In some embodiments, a controller (e.g., controller116ofFIG.1) may be used to sync242the status of the CRD with the tenant cluster212. The super cluster202and/or tenant cluster212may then identify244that the service is ready such that the readied service may be used. 
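The request sequence of FIG.2 (sync 222, find 224, send 226, scale 228, generate 232, inject 234, update 238, sync 242, ready 244) can be walked through in a few lines. The sketch below is purely illustrative; the dictionary-based registry, field names, and matching rule are assumptions, not the disclosed implementation.

```python
def handle_crd_request(super_cluster, tenant, crd):
    """Hypothetical walk-through of the FIG.2 cluster service request flow."""
    super_cluster["synced_crds"].append(crd)                 # sync 222 the CRD
    target = next(c for c in super_cluster["controllers"]    # find 224 the target
                  if c["serves"] == crd["kind"])             #   custom controller
    target["cr_info"] = crd["requirements"]                  # send 226 info to the CR
    while target["replicas"] < crd["requirements"]["replicas"]:
        target["replicas"] += 1                              # scale 228 iteratively
    replication = {"tenant": tenant["name"]}                 # generate 232 replication
    replication["api"] = tenant["api_address"]               # inject 234 tenant API info
    crd["status"] = "Ready"                                  # update 238 CRD status
    tenant["crd_status"] = crd["status"]                     # sync 242 status to tenant
    return replication                                       # service ready 244
```

In a real deployment each of these steps would be an asynchronous reconciliation rather than a single function call, but the ordering of responsibilities stays the same.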
In some embodiments, achieving the appropriate amount of resources may require multiple iterations; thus, a loop237may be used to repeatedly scale228the target controller up until an objective scale is achieved. For example, an objective may be to scale up228a controller to be able to sustain three resources identified as necessary in the CRD. FIG.3illustrates a cluster service termination diagram300in accordance with some embodiments of the present disclosure. The cluster service termination diagram300includes a super cluster302receiving and processing a request from a tenant cluster312. The tenant cluster312may identify that a CRD is no longer needed. The tenant cluster312may relay a request to the super cluster302to delete314the CRD and/or otherwise notify the super cluster302that the CRD is no longer needed. Deleting a CRD may, for example, free up resources which may be reallocated elsewhere. The super cluster302may sync322the request to delete the CRD. In some embodiments, a syncer (e.g., syncer130ofFIG.1) may be used to sync322the deletion request from the tenant cluster312to the super cluster302. The super cluster302may scale down328the target controller. In some embodiments, an event-driven autoscaler (e.g., the event-driven autoscaler114ofFIG.1or a KEDA) may be used to scale down328the target controller. In some embodiments, the scale down328of the target controller may take multiple iterations; thus, a loop329may be used to repeatedly scale down328the target controller until an objective scale is achieved. For example, an objective may be to scale down328all related controllers and/or related controller functionalities. The super cluster302may remove334the target controller related CRD from the tenant cluster312. In some embodiments, the syncer (e.g., syncer130ofFIG.1) may be used to remove334the target controller related CRD from the tenant cluster312.
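The FIG.3 termination path (sync 322, iterative scale down 328 via loop 329, remove 334) can be sketched in the same spirit. The data shapes and names below are illustrative assumptions only.

```python
def handle_crd_deletion(super_cluster, tenant, crd):
    """Hypothetical walk-through of the FIG.3 cluster service termination flow."""
    super_cluster["synced_crds"].remove(crd)              # sync 322 the deletion request
    target = next(c for c in super_cluster["controllers"]
                  if c["serves"] == crd["kind"])
    while target["replicas"] > 0:                         # scale down 328, repeated via
        target["replicas"] -= 1                           #   loop 329 until objective scale
    tenant.pop("crd_status", None)                        # remove 334 the related CRD
    return target["replicas"]                             # resources freed for reallocation
```

Once the loop reaches the objective scale (here, zero replicas), the resources the CR was using are again available to the super cluster for redeployment.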
FIG.4depicts an open source container environment400in accordance with some embodiments of the present disclosure. The open source container environment400includes a command line tool402and a custom resource specification404in communication with a cluster410. In some embodiments, the cluster410may be a super cluster. Specifically, the command line tool402and the custom resource specification404are in communication with an API server422in a control node420on the cluster410. The API server422is in communication with a custom resource430on the cluster410as well as an operator440on the cluster410. The operator440manages and runs a custom controller442. The custom controller442monitors the custom resource430and may reconcile the custom resource430to the custom resource specification404as appropriate. In some embodiments of the present disclosure, each custom controller registered may have at least one CRD. In some embodiments of the present disclosure, custom controllers may be used; in some embodiments, the custom controllers may be scanned and listed. For example, the custom controllers may be scanned and listed for preparing initial functionalities for a new replication for a new tenant. A custom controller may be made tenant aware. For example, to make a custom controller tenant aware, one or more extended fields may be appended to the CRD; in some embodiments of the present disclosure, two extended fields may be used. The extended field(s) appended to the CRD may be read by the API server. For example, the extended fields may include a tenant-sensitive field and a controller-selector field. In some embodiments, a tenant-sensitive field may be set to a default value of false. If the value is set to true and the application for the current tenant matches the selector, controller scaling may be triggered and performed. Thus, a new control plane may be generated for a new tenant.
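The two extended fields and the scan for tenant-sensitive CRDs might look like the sketch below. The concrete field names (`tenantSensitive`, `controllerSelector`) and the dictionary layout are assumptions about the schema; only the field semantics come from the description.

```python
# Hypothetical CRD carrying the two extended fields described above.
POLICY_CRD = {
    "metadata": {"name": "policies.open-cluster-management.policy"},
    "spec": {
        "tenantSensitive": True,  # default would be False
        # default would be None (null), meaning no selection
        "controllerSelector": {"matchLabels": {"app": "policy-controller"}},
    },
}

def tenant_sensitive_crds(registered_crds):
    """Check every registered CRD and keep the tenant-sensitive, scalable ones.

    These are the CRDs whose related controller replication should be scaled
    up (yielding a new control plane) when a new tenant is generated.
    """
    return [
        crd for crd in registered_crds
        if crd["spec"].get("tenantSensitive", False)
        and crd["spec"].get("controllerSelector") is not None
    ]
```

A CRD with the defaults (false / null) is simply skipped by the scan, so adding the fields is backward compatible with controllers that are not tenant aware.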
In some embodiments, a controller-selector field may be set to a default value of null wherein null results in no selection. The controller-selector field may use labels for deployments and/or logic expressions. For example, a CRD named “Policy” may have a group called “open-cluster-management.policy” that may be tenant sensitive. When a new tenant is generated, the add-on application “Policy controller” of advanced cluster management (ACM) may be enabled for the new tenant. In some embodiments, the add-on application may be automatically enabled. A computer implemented method in accordance with the present disclosure may include registering a custom resource definition for a tenant with a host and scaling a controller for the custom resource definition. The method may include generating a replication using the custom resource definition, injecting information into the replication, and syncing a status of the custom resource definition between the host and the tenant. In some embodiments of the present disclosure, the method may further include notifying the host of requirements for the tenant. In some embodiments, the host may generate the replication using the requirements. In some embodiments of the present disclosure, the information may be injected into the replication using a pod webhook. In some embodiments of the present disclosure, an event-driven autoscaling tool may scale the controller. In some embodiments of the present disclosure, the method may include enhancing the custom resource definition for a sensitivity of the tenant. In some embodiments of the present disclosure, the method may include enabling communication between the replication and a server. In some embodiments of the present disclosure, the information may be application programming interface information. FIG.5illustrates a computer-implemented method500in accordance with some embodiments of the present disclosure.
The method500may be performed in an open-source container environment (e.g., the super cluster system100ofFIG.1). The method500includes registering510a CRD. In some embodiments, the CRD may be registered, for example, by a tenant cluster (e.g., tenant cluster212ofFIG.2) with a host cluster (e.g., super cluster202). The method500includes scaling540a controller. In some embodiments, the controller may be a target controller that may be scaled with an event-driven autoscaler (e.g., the event-driven autoscaler114ofFIG.1or a KEDA). The method500includes generating550a replication. In some embodiments, the replication may be generated by a host cluster (e.g., super cluster202ofFIG.2). The method500includes injecting560information into the replication. The information injected into the replication may include, for example, API information such as the tenant cluster API address and/or credential information. In some embodiments, a host cluster (e.g., super cluster202ofFIG.2) and/or a pod webhook (e.g., the pod mutation webhook112ofFIG.1) may be used to inject the tenant cluster information into the new replication. The method500includes syncing580the status of the CRD. The CRD status may be synced between a host cluster and a tenant cluster such that the tenant cluster is notified that the service is ready. In some embodiments, the syncing580may be done by a host cluster (e.g., super cluster202ofFIG.2); in some embodiments, the host cluster may use a controller (e.g., controller116ofFIG.1) to sync the status of the CRD. In some embodiments, the tenant cluster (e.g., tenant cluster212) may monitor the status of the CRD and sync580the status of the CRD when a change occurs. FIG.6depicts a computer-implemented method600in accordance with some embodiments of the present disclosure. The method600may be performed in an open-source container environment (e.g., the super cluster system100ofFIG.1). The method600includes registering610a CRD. 
In some embodiments, the CRD may be registered, for example, by a tenant cluster (e.g., tenant cluster212ofFIG.2) with a host cluster (e.g., super cluster202). The method600includes notifying620the host cluster of tenancy requirements of a CRD. A tenant cluster (e.g., tenant cluster140) may notify the host cluster (e.g., super cluster202) of any tenancy requirements via a command line tool (e.g., command line tool402) and/or a custom resource specification (e.g., a custom resource specification404) in communication with a cluster (e.g., cluster410) such as a super cluster (e.g., super cluster202). The method600includes enhancing630the CRD definition for tenant sensitivity. In some embodiments, a controller (e.g., a custom controller) may be made tenant aware. For example, to make a custom controller tenant aware, one or more extended fields may be appended to the CRD; the extended field(s) appended to the CRD may be read by the API server. In some embodiments, the extended fields may include, for example, a tenant-sensitive field and a controller-selector field. In some embodiments, a tenant-sensitive field may be set to a default value of false: if the value is set to true and the application for the current tenant matches the selector, controller scaling may be triggered and performed; thus, a new control plane may be generated for a new tenant. For example, a CRD may be tenant sensitive; when a new tenant is generated, an add-on application of an ACM may be enabled for the new tenant. The method600includes scaling640a controller. In some embodiments, the controller may be a target controller that may be scaled with an event-driven autoscaler (e.g., the event-driven autoscaler114ofFIG.1or a KEDA). The method600includes generating650a replication. In some embodiments, the replication may be generated by a host cluster (e.g., super cluster202ofFIG.2). The method600includes injecting660information into the replication.
The information injected into the replication may include, for example, API information such as the tenant cluster API address and/or credential information. In some embodiments, a host cluster (e.g., super cluster202ofFIG.2) and/or a pod webhook (e.g., the pod mutation webhook112ofFIG.1) may be used to inject the tenant cluster information into the new replication. The method600includes enabling670communication between the replication and the server. Enabling670communication between a replication and a server may include, for example, appending one or more fields (e.g., a tenant-sensitive field and/or a controller-selector field) to the CRD which may be read by an API server. Enabling670communication between a replication and a server may include, for example, syncing a target controller related CRD (e.g., controller requirement150) to a tenant cluster (e.g., tenant cluster140). The method600includes syncing680the status of the CRD. The CRD status may be synced between a host cluster and a tenant cluster such that the tenant cluster is notified that the service is ready. In some embodiments, the syncing680may be done by a host cluster (e.g., super cluster202ofFIG.2); in some embodiments, the host cluster may use a controller (e.g., controller116ofFIG.1) to sync the status of the CRD. In some embodiments, the tenant cluster (e.g., tenant cluster212) may monitor the status of the CRD and sync680the status of the CRD when a change occurs. A computer program product in accordance with the present disclosure may include a computer readable storage medium having program instructions embodied therewith. The program instructions may be executable by a processor to cause the processor to perform a function. The function may include registering a custom resource definition for a tenant with a host and scaling a controller for the custom resource definition.
The function may include generating a replication using the custom resource definition, injecting information into the replication, and syncing a status of the custom resource definition between the host and the tenant. In some embodiments of the present disclosure, the function may include notifying the host of requirements for the tenant. In some embodiments, the host may generate the replication using the requirements. In some embodiments of the present disclosure, the information may be injected into the replication using a pod webhook. In some embodiments of the present disclosure, an event-driven autoscaling tool scales the controller. In some embodiments of the present disclosure, the function may include enhancing the custom resource definition for a sensitivity of the tenant. In some embodiments of the present disclosure, the function may include enabling communication between the replication and a server. In some embodiments of the present disclosure, the information may be application programming interface information. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail).
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software which may include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and the consumer possibly has limited control of select networking components (e.g., host firewalls). Deployment models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and/or compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. 
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. FIG.7illustrates a cloud computing environment710in accordance with embodiments of the present disclosure. As shown, cloud computing environment710includes one or more cloud computing nodes700with which local computing devices used by cloud consumers such as, for example, personal digital assistant (PDA) or cellular telephone700A, desktop computer700B, laptop computer700C, and/or automobile computer system700N may communicate. Nodes700may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as private, community, public, or hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment710to offer infrastructure, platforms, and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices700A-N shown inFIG.7are intended to be illustrative only and that computing nodes700and cloud computing environment710can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). FIG.8illustrates abstraction model layers800provided by cloud computing environment710(FIG.7) in accordance with embodiments of the present disclosure. 
It should be understood in advance that the components, layers, and functions shown inFIG.8are intended to be illustrative only and embodiments of the disclosure are not limited thereto. As depicted below, the following layers and corresponding functions are provided. Hardware and software layer815includes hardware and software components. Examples of hardware components include: mainframes802; RISC (Reduced Instruction Set Computer) architecture-based servers804; servers806; blade servers808; storage devices811; and networks and networking components812. In some embodiments, software components include network application server software814and database software816. Virtualization layer820provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers822; virtual storage824; virtual networks826, including virtual private networks; virtual applications and operating systems828; and virtual clients830. In one example, management layer840may provide the functions described below. Resource provisioning842provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing844provide cost tracking as resources are utilized within the cloud computing environment as well as billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks as well as protection for data and other resources. User portal846provides access to the cloud computing environment for consumers and system administrators. Service level management848provides cloud computing resource allocation and management such that required service levels are met.
Service level agreement (SLA) planning and fulfillment850provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer860provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation862; software development and lifecycle management864; virtual classroom education delivery866; data analytics processing868; transaction processing870; and extend controller for multitenancy872. FIG.9illustrates a high-level block diagram of an example computer system901that may be used in implementing one or more of the methods, tools, and modules, and any related functions, described herein (e.g., using one or more processor circuits or computer processors of the computer) in accordance with embodiments of the present disclosure. In some embodiments, the major components of the computer system901may comprise a processor902with one or more central processing units (CPUs)902A,902B,902C, and902D, a memory subsystem904, a terminal interface912, a storage interface916, an I/O (Input/Output) device interface914, and a network interface918, all of which may be communicatively coupled, directly or indirectly, for inter-component communication via a memory bus903, an I/O bus908, and an I/O bus interface unit910. The computer system901may contain one or more general-purpose programmable CPUs902A,902B,902C, and902D, herein generically referred to as the CPU902. In some embodiments, the computer system901may contain multiple processors typical of a relatively large system; however, in other embodiments, the computer system901may alternatively be a single CPU system. Each CPU902may execute instructions stored in the memory subsystem904and may include one or more levels of on-board cache. 
System memory904may include computer system readable media in the form of volatile memory, such as random access memory (RAM)922or cache memory924. Computer system901may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system926can be provided for reading from and writing to a non-removable, non-volatile magnetic media, such as a “hard drive.” Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), or an optical disk drive for reading from or writing to a removable, non-volatile optical disc such as a CD-ROM, DVD-ROM, or other optical media can be provided. In addition, memory904can include flash memory, e.g., a flash memory stick drive or a flash drive. Memory devices can be connected to memory bus903by one or more data media interfaces. The memory904may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments. One or more programs/utilities928, each having at least one set of program modules930, may be stored in memory904. The programs/utilities928may include a hypervisor (also referred to as a virtual machine monitor), one or more operating systems, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data, or some combination thereof, may include an implementation of a networking environment. Programs928and/or program modules930generally perform the functions or methodologies of various embodiments. 
Although the memory bus 903 is shown in FIG. 9 as a single bus structure providing a direct communication path among the CPUs 902, the memory subsystem 904, and the I/O bus interface 910, the memory bus 903 may, in some embodiments, include multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star, or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the I/O bus interface 910 and the I/O bus 908 are shown as single respective units, the computer system 901 may, in some embodiments, contain multiple I/O bus interface units 910, multiple I/O buses 908, or both. Further, while multiple I/O interface units 910 are shown, which separate the I/O bus 908 from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices may be connected directly to one or more system I/O buses 908. In some embodiments, the computer system 901 may be a multi-user mainframe computer system, a single-user system, a server computer, or similar device that has little or no direct user interface but receives requests from other computer systems (clients). Further, in some embodiments, the computer system 901 may be implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smartphone, network switches or routers, or any other appropriate type of electronic device. It is noted that FIG. 9 is intended to depict the representative major components of an exemplary computer system 901. In some embodiments, however, individual components may have greater or lesser complexity than as represented in FIG. 9, components other than or in addition to those shown in FIG. 9 may be present, and the number, type, and configuration of such components may vary.
The present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, or other transmission media (e.g., light pulses passing through a fiber-optic cable) or electrical signals transmitted through a wire. 
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Although the present disclosure has been described in terms of specific embodiments, it is anticipated that alterations and modifications thereof will become apparent to those skilled in the art.
The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Therefore, it is intended that the following claims be interpreted as covering all such alterations and modifications as fall within the true spirit and scope of the disclosure.
11943293 | DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for restoring a storage system from a replication target in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations. System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as “client devices” herein) may be embodied, for example, as a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160. The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B. The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like.
Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage array 102A and 102B may include one or more storage array controllers 110A-D (also referred to as “controller” herein). A storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth. Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters.
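Two of the storage tasks named above, RAID-like redundancy and compression, can be illustrated with a minimal Python sketch. This is not the disclosed controller's implementation; every class and method name below is invented for illustration only:

```python
import zlib

class MirroredController:
    """Toy storage array controller sketch: compresses incoming data and
    mirrors it to two drives (a RAID-1-like redundancy operation).
    All names are illustrative, not taken from the disclosure."""

    def __init__(self):
        self.drives = [{}, {}]  # two drives, mapping block address -> compressed bytes

    def write(self, addr, data):
        compressed = zlib.compress(data)
        for drive in self.drives:   # redundancy: mirror to every drive
            drive[addr] = compressed
        return len(compressed)

    def read(self, addr):
        # Any surviving mirror can serve the read.
        for drive in self.drives:
            if addr in drive:
                return zlib.decompress(drive[addr])
        raise KeyError(addr)

ctrl = MirroredController()
ctrl.write(0x10, b"hello" * 100)
recovered = ctrl.read(0x10)
```

A real controller would of course also handle parity schemes, encryption, and failure reporting; the sketch only shows the write-path shape of mirroring plus compression.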
Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160. In implementations, storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a “storage resource” herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like.
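The NVRAM-as-write-buffer idea above can be sketched as follows; the class and its methods are hypothetical illustrations of the staging-then-destaging pattern, not the patented mechanism:

```python
class NvramBufferedArray:
    """Sketch of NVRAM write buffering: writes land in fast NVRAM and are
    acknowledged immediately, then destaged to the slower storage drives
    in the background. Names are illustrative only."""

    def __init__(self):
        self.nvram = {}    # fast, battery/capacitor-backed staging area
        self.drives = {}   # slower persistent storage drives

    def write(self, addr, data):
        self.nvram[addr] = data   # fast path: acknowledge once NVRAM holds it
        return "ack"

    def destage(self):
        # Background flush of buffered writes to the storage drives.
        self.drives.update(self.nvram)
        self.nvram.clear()

    def read(self, addr):
        # NVRAM holds the freshest copy until it is destaged.
        return self.nvram.get(addr, self.drives.get(addr))

arr = NvramBufferedArray()
ack = arr.write(7, "data-v1")
before = arr.read(7)
arr.destage()
after = arr.read(7)
```

The latency benefit described in the text comes from the `write` path returning as soon as the NVRAM copy exists, rather than waiting on the drives.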
In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F. In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disk, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers 110A-D may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F. The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110A-D.
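The per-block control information described above (failure state, P/E cycle count) is exactly what a wear-leveling write placement decision consumes. A minimal sketch, with invented field and function names, might look like this:

```python
from dataclasses import dataclass

@dataclass
class BlockControlInfo:
    """Per-memory-block control information of the kind the text
    describes; the field names here are invented for illustration."""
    pe_cycles: int = 0   # program-erase cycles performed on this block
    failed: bool = False # block has failed and must not be written

def pick_block_for_write(blocks):
    # Skip failed blocks, then prefer the least-worn block (wear leveling).
    candidates = [i for i, b in enumerate(blocks) if not b.failed]
    return min(candidates, key=lambda i: blocks[i].pe_cycles)

blocks = [BlockControlInfo(pe_cycles=120),
          BlockControlInfo(pe_cycles=15),
          BlockControlInfo(pe_cycles=3, failed=True)]
chosen = pick_block_for_write(blocks)
```

Here the failed block is excluded even though it is the least worn, so the write is directed to the healthy block with the fewest P/E cycles.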
The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F. In implementations, storage array controllers 110A-D may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171A-F.
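The scan described above, in which the drive inspects a portion of each memory block for the tagging identifier, reduces to a simple prefix check in sketch form. The tag value and function name below are hypothetical:

```python
CONTROL_TAG = b"CTRL"  # hypothetical identifier marking control-info blocks

def find_control_blocks(blocks):
    """Scan a portion (here: the prefix) of each memory block and return
    the indices of the blocks that hold control information."""
    return [i for i, blk in enumerate(blocks) if blk.startswith(CONTROL_TAG)]

drive_blocks = [
    b"user data ...",
    CONTROL_TAG + b"{pe_cycles: 120, failed: [7]}",
    b"more user data ...",
    CONTROL_TAG + b"{boot code location}",
]
locations = find_control_blocks(drive_blocks)
```

The list of locations plays the role of the response message the drive sends back to the storage array controller, which can then read the control information directly, including any redundant copies.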
Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F. In other implementations, the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F). A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array 102A-B may implement two or more storage array controllers 110A-D. For example, storage array 102A may include storage array controllers 110A and storage array controllers 110B. At a given instance, a single storage array controller 110A-D (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller.
For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right. The status of storage array controllers 110A-D may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a second controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage arrays 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
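The primary/secondary designation and its status change can be sketched in a few lines. The class, method names, and the swap-based failover below are illustrative assumptions, not the disclosed failover protocol:

```python
class ArrayController:
    """Toy model of a storage array controller with a status designation."""

    def __init__(self, name, status):
        self.name = name
        self.status = status  # "primary" or "secondary"

    def can_alter_persistent_storage(self):
        # Only the primary holds the right to alter data in the
        # persistent storage resource.
        return self.status == "primary"

def fail_over(primary, secondary):
    """Swap designations, e.g. when the current primary must step down."""
    primary.status, secondary.status = "secondary", "primary"

a = ArrayController("110A", "primary")
b = ArrayController("110B", "secondary")
fail_over(a, b)
```

After the swap, controller 110B holds the write permission and 110A is denied it, mirroring the status change the text describes.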
In implementations, storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110A-D may be coupled to the midplane via one or more data communication links, and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links. The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110A-D described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101. Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like.
More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), network processor, or the like. The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high-speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus.
In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller. In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes.
For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system, as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system.
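The relocate-then-erase process above, relocating the "first data" and discarding the "second data", is the shape of a garbage-collection pass over allocation units. A minimal sketch, with invented names and no claim to match the disclosed implementation:

```python
def garbage_collect(units, live):
    """Toy garbage collection over allocation units (erase blocks).
    `units` maps a unit id to the data items it holds; `live` is the set
    of items still referenced ("first data"). Live items are rewritten
    into a fresh unit, then the old units are erased and marked available."""
    fresh = []
    erased = []
    for uid in list(units):
        # Rewrite the retained ("first") data to a new location.
        fresh.extend(item for item in units[uid] if item in live)
        # The no-longer-used ("second") data is dropped with the erase;
        # the emptied unit is now available for subsequent data.
        units[uid] = []
        erased.append(uid)
    return fresh, erased

units = {"eb0": ["a", "b"], "eb1": ["c", "d"]}
fresh, erased = garbage_collect(units, live={"a", "c"})
```

Because the operating system holds the direct address-to-erase-block map, it can run this pass itself, with no second, drive-internal copy of the same relocation work.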
In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers, each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations. System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system 117 includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device 118 with separately addressable fast write storage. System 117 may include a storage controller 119. In one embodiment, storage controller 119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system 117 includes flash memory devices (e.g., including flash memory devices 120a-n), operatively coupled to various channels of the storage device controller 119. Flash memory devices 120a-n may be presented to the controller 119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119A-D to program and retrieve various aspects of the Flash.
In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor. Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. 
It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. 
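The power-failure path described above can be sketched as follows: operations to the addressable fast-write logical device land in RAM, and on loss of external power the device controller writes that content to Flash for long-term persistence. This is an illustrative in-memory model under stated assumptions; `FastWriteDevice` and its methods are hypothetical names.

```python
# Sketch of the addressable fast-write path. RAM stands in for the volatile
# fast-write region; flash stands in for persistent storage. Hypothetical.

class FastWriteDevice:
    def __init__(self):
        self.ram = {}       # addressable fast-write content (volatile)
        self.flash = {}     # long-term persistent storage

    def store(self, addr, data):
        """Operations to store into the device are directed into RAM."""
        self.ram[addr] = data

    def on_power_loss(self):
        """Runs on stored energy: flush fast-write content to Flash."""
        self.flash.update(self.ram)
        self.ram.clear()

dev = FastWriteDevice()
dev.store(0x100, b"journal-entry")  # fast-write commit, RAM-speed latency
dev.on_power_loss()                 # controller detects loss of power
```

The design point is that writes acknowledge at RAM speed while the stored energy device guarantees the flush can always complete.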
In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n. The stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a fourth example system124for data storage in accordance with some implementations. In one embodiment, system124includes storage controllers125a,125b. In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices119a,119band119c,119d, respectively. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124.
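The energy-aware capacity adjustment described above, where the fast-write region shrinks as the stored energy component ages, can be sketched as a simple sizing rule. The function name, the margin, and the per-megabyte flush cost are all illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sizing rule: the addressable fast-write capacity is shrunk so
# that the currently measured stored energy can always flush it to Flash.

def safe_fastwrite_capacity(stored_energy_j, joules_per_mb_flush, margin_pct=80):
    """Fast-write capacity (MB) that can be written safely on power loss."""
    usable = stored_energy_j * margin_pct // 100   # keep a safety margin
    return usable // joules_per_mb_flush

# As an aging capacitor's measured energy decreases, capacity is reduced.
full = safe_fastwrite_capacity(1000, 2)   # new stored energy device
aged = safe_fastwrite_capacity(500, 2)    # after partial-discharge tests
```

The partial-discharge measurements mentioned in the text would supply the `stored_energy_j` input to a rule of this kind.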
Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b.
In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one controller125ato another controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. 
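The reservation or exclusion primitive described above, by which one storage controller can fence the other from a shared storage device, can be sketched as follows. This is an in-memory illustration under stated assumptions; `DeviceReservation`, the controller IDs, and the method names are hypothetical, not any real device's command set.

```python
# Sketch of a reservation/exclusion primitive a Dual PCI storage device
# could expose to a pair of highly available storage controllers.

class DeviceReservation:
    def __init__(self):
        self.holder = None

    def acquire(self, controller_id):
        """Take the reservation if it is free or already held by the caller."""
        if self.holder in (None, controller_id):
            self.holder = controller_id
            return True
        return False

    def preempt(self, controller_id):
        """Forcibly take over, e.g., after detecting a failed peer or a
        broken interconnect between the two controllers."""
        self.holder = controller_id

    def check_io(self, controller_id):
        """The device rejects I/O from a controller without the reservation."""
        return self.holder == controller_id

res = DeviceReservation()
res.acquire("ctrl-A")
ok_b = res.acquire("ctrl-B")   # blocked while ctrl-A holds the reservation
res.preempt("ctrl-B")          # ctrl-B fences a misbehaving ctrl-A
```

Real systems implement comparable semantics with persistent reservations so that fencing survives resets; this sketch only shows the access-control logic.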
Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. 
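The redundancy schemes described above can be illustrated with the simplest possible erasure code: single XOR parity across the data shards of a stripe, which tolerates the loss of any one storage device. Real systems typically use Reed-Solomon or similar codes to tolerate multiple failures; this sketch shows only the principle, and the function names are illustrative.

```python
# Minimal single-parity stripe: XOR parity lets the stripe survive the loss
# of any one shard (e.g., one storage device). Illustrative only.

def make_stripe(shards):
    """Append an XOR parity shard to a list of equal-length data shards."""
    parity = shards[0]
    for s in shards[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return shards + [parity]

def recover(stripe, lost_index):
    """Rebuild one lost shard by XOR-ing all the survivors together."""
    survivors = [s for i, s in enumerate(stripe) if i != lost_index]
    out = survivors[0]
    for s in survivors[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

stripe = make_stripe([b"\x01\x02", b"\x04\x08", b"\x10\x20"])
rebuilt = recover(stripe, lost_index=1)   # rebuild the second data shard
```

Layering such codes within erase blocks, across Flash devices, and across storage devices, as the text describes, allows recovery from several independent failure modes.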
The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments.
In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus, however, other technologies such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. 
One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’), that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage.
The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. 
In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes.
For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis. FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. 
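The self-configured stripe width described above, as wide as possible subject to surviving the loss of up to one or up to two storage units, reduces to a small calculation. The function name and the return convention are illustrative assumptions.

```python
# Sketch: choose the widest stripe the cluster can support while tolerating
# the configured number of storage unit losses. Names are hypothetical.

def stripe_geometry(available_units, tolerated_failures):
    """Return (data_shards, parity_shards) for the widest feasible stripe."""
    data = available_units - tolerated_failures
    if data < 1:
        raise ValueError("not enough units for the required redundancy")
    return data, tolerated_failures

# A fourteen-slot chassis surviving loss of up to two units:
# twelve data shards plus two parity shards per stripe.
geometry = stripe_geometry(14, 2)
```

Wider stripes improve storage efficiency (here 12/14 of raw capacity holds data), which is why the system stripes as wide as the redundancy requirement allows.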
In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. 
Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. 
This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment.
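The two-stage location operation described above, entity identifier to authority identifier by hash and bit mask, then authority identifier to storage unit by explicit mapping, can be sketched as follows. The constant, function names, and unit naming scheme are hypothetical; a cryptographic hash is used here only to make the stage-one result repeatable across processes.

```python
# Two-stage lookup sketch. Stage one: hash plus bit mask maps an entity ID
# to an authority ID. Stage two: an explicit map takes the authority ID to
# a particular non-volatile solid state storage unit. Hypothetical names.

import hashlib

NUM_AUTHORITIES = 8   # power of two, so a bit mask selects an authority

def authority_for(entity_id):
    """Stage one: repeatable for the same entity ID on every node."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return digest[0] & (NUM_AUTHORITIES - 1)

def storage_unit_for(authority_id, authority_map):
    """Stage two: explicit mapping from authority to storage unit."""
    return authority_map[authority_id]

# Explicit stage-two map; here authorities are spread over four units.
authority_map = {a: "nvss-%d" % (a % 4) for a in range(NUM_AUTHORITIES)}
aid = authority_for("inode:4711")
unit = storage_unit_for(aid, authority_map)
```

Because stage one is a pure function of the entity identifier, any node repeating the calculation reliably reaches the same authority, while stage two can be reassigned as the reachable set of units changes.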
The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. 
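The shard-and-reassemble path described above can be sketched in miniature: the host CPU breaks write data into shards for the various storage units, and on read it gathers the shards back in stripe order. Parity shards and error correction are omitted here (the erasure coding discussion covers them); all names are illustrative.

```python
# Sketch of the write shard / read reassemble path. Parity handling is
# omitted for brevity; names are hypothetical.

def shard(data, num_units):
    """Break up write data into num_units roughly equal shards."""
    size = -(-len(data) // num_units)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(num_units)]

def reassemble(shards):
    """Rejoin shards read back from the units, in stripe order."""
    return b"".join(shards)

units = {}                                    # unit index -> stored shard
for i, piece in enumerate(shard(b"hello world!", 3)):
    units[i] = piece                          # transmit each shard to its unit

restored = reassemble([units[i] for i in sorted(units)])
```

In the full system the same path also computes parity shards, verifies and corrects errors on read, and may be offloaded to the non-volatile solid state storage itself.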
For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored.
Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata is stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. 
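The hierarchical allocation described above, in which each storage unit is assigned a slice of a very large logical address space and can then hand out addresses from its slice with no cross-unit synchronization, can be sketched as follows. The class and field names are illustrative assumptions, and the 128-bit space matches the identifier size mentioned in the text.

```python
ADDRESS_BITS = 128

class StorageUnitAllocator:
    """One allocator per storage unit; ranges are disjoint by construction."""

    def __init__(self, unit_index: int, total_units: int):
        span = (1 << ADDRESS_BITS) // total_units
        self.start = unit_index * span
        self.end = self.start + span      # exclusive upper bound
        self.next_free = self.start

    def allocate(self) -> int:
        """Hand out the next address from this unit's private range.

        No coordination with other units is required, because no other
        unit can ever allocate from this range.
        """
        if self.next_free >= self.end:
            raise RuntimeError("address range exhausted")
        addr = self.next_free
        self.next_free += 1
        return addr
```

Because the space is effectively infinite relative to the system's lifetime, exhaustion of a range is a theoretical rather than practical concern, which is what makes the synchronization-free allocation safe.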
In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any surjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to arrive at the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines.
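The key property above is that every storage node, given the same reachable node set, arrives at the same placement calculation without communicating. A minimal sketch of that idea, using rendezvous (highest-random-weight) hashing as a stand-in for the CRUSH-related function the text mentions, looks like the following; the function and node names are assumptions for illustration.

```python
import hashlib

def owner_for_authority(authority_id: int, reachable_nodes: list[str]) -> str:
    """Deterministically pick the owning node for an authority.

    Every caller that supplies the same reachable node set computes the
    same answer, so nodes can locate an authority with no coordination.
    """
    def weight(node: str) -> int:
        digest = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
        return int.from_bytes(digest, "big")

    return max(reachable_nodes, key=weight)
```

When a node becomes unreachable, re-running the same function over the surviving set yields a new, equally deterministic placement, which is what allows later finding or relocating an authority after membership changes.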
Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. 
When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted.
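The two persistence paths described above, replicated NVRAM first for latency-sensitive messages with later destaging to NAND, versus direct NAND writes for background operations, can be modeled with the following simplified in-memory sketch. The class and method names are assumptions, not terms from the patent.

```python
class MessageStore:
    """Toy model of tiered message persistence: replicated NVRAM + NAND."""

    def __init__(self, nvram_replicas: int = 3):
        self.nvram = [[] for _ in range(nvram_replicas)]  # replicated staging area
        self.nand = []                                    # durable backing store

    def persist(self, message: str, latency_sensitive: bool) -> None:
        if latency_sensitive:
            # Fast path: replicate into every NVRAM copy before acknowledging.
            for replica in self.nvram:
                replica.append(message)
        else:
            # Background path: write straight to NAND, skipping NVRAM.
            self.nand.append(message)

    def destage(self) -> None:
        """Later, move staged NVRAM contents down to NAND and clear NVRAM."""
        for message in self.nvram[0]:
            self.nand.append(message)
        for replica in self.nvram:
            replica.clear()
```

The split reflects the durability/latency trade-off in the text: client requests get a low-latency, replicated acknowledgment, while rebalancing traffic tolerates the slower direct NAND write.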
This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturer, hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from.
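The virtualization described above, where clients address stable virtual identifiers and a remappable table binds each identifier to the current physical component, can be sketched as a small mapping structure. All names here are illustrative assumptions; the point is only that replacing hardware re-points table entries without ever changing a client-visible address.

```python
class VirtualAddressMap:
    """Stable virtual addresses over replaceable physical components."""

    def __init__(self):
        self._table = {}  # virtual address -> current hardware identifier

    def bind(self, virtual_addr: str, hardware_id: str) -> None:
        self._table[virtual_addr] = hardware_id

    def resolve(self, virtual_addr: str) -> str:
        return self._table[virtual_addr]

    def replace_component(self, old_hw: str, new_hw: str) -> None:
        """Re-point every virtual address on a failed or retired component.

        The virtual addresses themselves never change, so clients are
        undisturbed, i.e., a non-disruptive replacement.
        """
        for vaddr, hw in self._table.items():
            if hw == old_hw:
                self._table[vaddr] = new_hw
```

A monitoring system predicting a component failure could call `replace_component` proactively, moving the component out of the critical path before it fails, as the text describes.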
Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222. 
Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. 
In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written to as spools (e.g., spool regions). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller.
Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g., partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. 
In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. 
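The triple-mirroring discipline above, where every update is staged in NVRAM partitions on three separate blades and retired from NVRAM only after it reaches flash, can be sketched as follows. Blade and method names are illustrative assumptions; the sketch shows only the staging-and-retirement bookkeeping, not the parity/RAID protection of the flash tier.

```python
class TripleMirroredLog:
    """Stage updates in NVRAM on three blades until committed to flash."""

    def __init__(self, blades: list[str]):
        assert len(blades) >= 3
        # One NVRAM partition per blade; three copies of every staged update.
        self.partitions = {b: [] for b in blades[:3]}
        self.flash = []

    def stage_update(self, update: str) -> None:
        """Write the update in triplicate before acknowledging it."""
        for partition in self.partitions.values():
            partition.append(update)

    def commit_to_flash(self, update: str) -> None:
        """Once the update is on flash, the NVRAM copies can be retired."""
        self.flash.append(update)
        for partition in self.partitions.values():
            partition.remove(update)
```

With three NVRAM copies on separate blades, any two blades can fail between staging and commit without losing the update, matching the two-blade failure tolerance claimed in the text.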
Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. 
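The rebalancing step above, where installing a blade migrates some authorities onto it and removing a blade redistributes its authorities among the survivors, can be sketched with a simple deterministic reassignment. A round-robin spread is an assumed stand-in for the system's actual placement algorithm; the property being illustrated is that authority identifiers are stable while their blade assignments change.

```python
def rebalance(assignment: dict[int, str], blades: list[str]) -> dict[int, str]:
    """Spread authority IDs round-robin across the current blade set.

    Authorities keep their unique identifiers (the dict keys); only the
    blade each one runs on (the dict values) changes, mirroring the fact
    that partitions follow authority IDs rather than blades.
    """
    ordered = sorted(assignment)  # stable authority order for determinism
    return {auth: blades[i % len(blades)] for i, auth in enumerate(ordered)}
```

Running `rebalance` after a blade joins puts load on the newcomer; running it after a blade is removed reassigns that blade's authorities to the remaining blades, from which they resume their original functions.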
In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS' environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules.
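The garbage collection described at the start of the passage above, reclaiming space held by data that clients have made obsolete through overwrites, can be sketched per authority, since disjoint partitions let each authority collect its own space without distributed locking. The record layout (key, version, payload) is an assumption for illustration.

```python
def garbage_collect(
    partition: list[tuple[str, int, bytes]],
) -> list[tuple[str, int, bytes]]:
    """Keep only the newest version of each key in one authority's partition.

    Older versions were made obsolete by client overwrites; dropping them
    reclaims their space. No coordination with other authorities is needed
    because no other authority can hold records for this partition.
    """
    newest: dict[str, tuple[int, bytes]] = {}
    for key, version, payload in partition:
        if key not in newest or version > newest[key][0]:
            newest[key] = (version, payload)
    return [(k, v, p) for k, (v, p) in sorted(newest.items())]
```

In a log-structured layout the surviving records would be rewritten into fresh segments and the old segments erased, but the selection logic is the part the disjoint-partition argument applies to.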
Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation dynamically changing passwords. 
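The ACL model described above, a per-object list of permissions specifying which users or processes may perform which operations, can be sketched minimally as follows. The entry structure (principal paired with a set of allowed operations) is an assumed simplification of real ACL formats.

```python
def is_allowed(
    acl: list[tuple[str, set[str]]],
    user: str,
    operation: str,
) -> bool:
    """Return True if some ACL entry grants `operation` to `user`.

    The ACL is the list of permissions attached to an object; access is
    denied unless an entry explicitly grants the requested operation.
    """
    return any(principal == user and operation in ops
               for principal, ops in acl)
```

Real implementations add deny entries, group principals, and evaluation order rules, but the default-deny check above is the core of the access decision.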
FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways.
In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. 
For example, the cloud services provider302may be configured to provide services through the implementation of an infrastructure as a service (‘IaaS’) service model, through the implementation of a platform as a service (‘PaaS’) service model, through the implementation of a software as a service (‘SaaS’) service model, through the implementation of an authentication as a service (‘AaaS’) service model, through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306, and so on. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. In still alternative embodiments, the cloud services provider302may be embodied as a mix of a private and public cloud services with a hybrid cloud deployment. 
Although not explicitly depicted inFIG.3A, readers will appreciate that a vast amount of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on-premise with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage array306and remote, cloud-based storage that is utilized by the storage array306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment.
Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with transmitting sensitive data to the cloud services provider302over data communications networks. In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model, eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive vast amounts of telemetry data phoned home by the storage system306.
Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed for a vast array of purposes including, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms. 
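One of the analyses described above, predicting when the storage system will run out of a resource, can be sketched with a simple linear extrapolation over phoned-home capacity samples. The function name, sample format, and toy numbers below are invented for illustration; production analytics would use far richer models.

```python
# Illustrative sketch (not the provider's actual analytics): given capacity
# samples phoned home over time, linearly extrapolate when the storage
# system will run out of space. Data and units are toy values.

def days_until_full(samples, capacity_bytes):
    """samples: list of (day, used_bytes), assumed roughly linear growth."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (d1 - d0)          # bytes consumed per day
    if rate <= 0:
        return None                        # usage flat or shrinking
    return (capacity_bytes - u1) / rate    # days remaining after last sample

telemetry = [(0, 40), (10, 50), (20, 60)]  # toy usage samples
print(days_until_full(telemetry, 100))     # 40 remaining / 1 per day -> 40.0
```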
For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other form of storage resources, including any combination of resources described herein. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of SCM. SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols.
In fact, the network protocols used for SSDs in all-flash arrays can include NVMe over Ethernet (RoCE, NVMe/TCP), NVMe over Fibre Channel (NVMe/FC), NVMe over InfiniBand, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable and fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices.
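The object-storage model described above (data, a variable amount of metadata, and a globally unique identifier per object) can be sketched as follows. The class and method names are invented for the example; only the structure of an object mirrors the description.

```python
# Sketch of the object-storage model: each stored object carries the data
# itself, arbitrary metadata, and a globally unique identifier. Names are
# illustrative, not a specific product's API.

import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data, **metadata):
        oid = str(uuid.uuid4())            # globally unique identifier
        self._objects[oid] = (bytes(data), dict(metadata))
        return oid

    def get(self, oid):
        """Returns (data, metadata) for the given identifier."""
        return self._objects[oid]

store = ObjectStore()
oid = store.put(b"payload", content_type="application/octet-stream")
data, meta = store.get(oid)
assert data == b"payload"
assert meta["content_type"] == "application/octet-stream"
```

Note the contrast with block storage: objects are addressed by identifier rather than by position, which is what allows the metadata to travel with the data.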
In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC network, FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos. 
The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs. The processing resources312may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform a vast array of tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems.
Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways. Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. 
For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways. For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure. In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318, the cloud-based storage system318may be used to provide storage services to users of the cloud-based storage system318through the use of solid-state storage, and so on. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326.
In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data received from the users of the cloud-based storage system318to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318and providing such data to users of the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322. Consider an example in which the cloud computing environment316is embodied as AWS and the cloud computing instances are embodied as EC2 instances. 
In such an example, the cloud computing instance320that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance322that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event where the cloud computing instance322that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance322that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance320that formerly operated as the primary controller may be terminated. Readers will appreciate that in alternative embodiments, the cloud computing instance320that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance322that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). 
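The double-failover sequence described above can be sketched as a small role-reassignment function. Everything here is illustrative: the fleet is a plain dict, and the replacement instance name is a hypothetical stand-in for a newly provisioned large instance.

```python
# Sketch of the double failover described above, with invented names: the
# small secondary takes over as primary, a new large instance is spun up and
# assumes the primary role, and the stand-in demotes back to secondary.

def double_failover(fleet):
    """fleet: dict role -> instance name. Returns (new fleet, terminated)."""
    # 1) first failover: the former secondary becomes the primary
    fleet["primary"], old_primary = fleet.pop("secondary"), fleet["primary"]
    # 2) second failover: a large replacement assumes the primary role and
    #    the stand-in returns to the secondary role
    replacement = "large-instance-new"      # hypothetical new instance
    fleet["primary"], fleet["secondary"] = replacement, fleet["primary"]
    # the original primary is terminated
    return fleet, old_primary

fleet, terminated = double_failover({"primary": "large-A",
                                     "secondary": "small-B"})
assert fleet == {"primary": "large-instance-new", "secondary": "small-B"}
assert terminated == "large-A"
```

The point of the two-step dance is that the cheap standby restores availability immediately, while full performance is restored only once the large replacement is ready.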
Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance320operates as the primary controller and the second cloud computing instance322operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318are divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340ndepicted inFIG.3Cmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications. The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on.
In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340ncan present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block-storage342,344,346that is offered by the cloud computing environment316. The block-storage342,344,346that is offered by the cloud computing environment316may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance340a, a second EBS volume may be coupled to a second cloud computing instance340b, and a third EBS volume may be coupled to a third cloud computing instance340n.
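The software daemon's drive-emulation role can be sketched as a class that answers storage-device commands while backing the data with instance-local storage. The two-command protocol and all names below are invented for illustration; real controllers would speak iSCSI, NVMe over TCP, or a custom protocol as noted above.

```python
# Sketch of the software daemon described above: it answers WRITE/READ
# commands the way a physical SSD would, backed by the instance's local
# storage. The command set here is a toy, not a real wire protocol.

class VirtualDriveDaemon:
    """Presents an instance's local storage as if it were a physical drive."""
    def __init__(self):
        self.local_storage = {}            # stands in for instance-local SSDs

    def handle(self, command, lba, data=None):
        if command == "WRITE":
            self.local_storage[lba] = data
            return "OK"
        if command == "READ":
            return self.local_storage[lba]
        raise ValueError(f"unsupported command: {command}")

daemon = VirtualDriveDaemon()
assert daemon.handle("WRITE", 0, b"abc") == "OK"
assert daemon.handle("READ", 0) == b"abc"
```

Because the daemon accepts the same commands a storage device would, the controller code above it can remain unchanged from the physical-array case.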
In such an example, the block-storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n. In an alternative embodiment, rather than using the block-storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In the example depicted inFIG.3C, the cloud computing instances340a,340b,340nwith local storage330,334,338may be utilized, by cloud computing instances320,322that support the execution of the storage controller application324,326to service I/O operations that are directed to the cloud-based storage system318. Consider an example in which a first cloud computing instance320that is executing the storage controller application324is operating as the primary controller. In such an example, the first cloud computing instance320that is executing the storage controller application324may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system318from users of the cloud-based storage system318. 
In such an example, the first cloud computing instance320that is executing the storage controller application324may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Readers will appreciate that when a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to not only write the data to its own local storage330,334,338resources and any appropriate block-storage342,344,346that are offered by the cloud computing environment316, but the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance340a,340b,340n.
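The write fan-out described above (local storage, attached block storage, and cloud-based object storage all receiving the data) can be sketched as follows; the three stores are modeled as plain dicts and every name is illustrative.

```python
# Sketch of the daemon's write path described above: each incoming write is
# persisted to instance-local storage, to the attached block volume, and to
# the cloud-based object store. Dicts stand in for all three tiers.

class FanOutDaemon:
    def __init__(self):
        self.local = {}           # instance-local SSDs (fast reads)
        self.block = {}           # attached block storage (NVRAM-like role)
        self.object_store = {}    # cloud object storage (durability)

    def write(self, key, data):
        for store in (self.local, self.block, self.object_store):
            store[key] = data     # same payload lands in every tier

d = FanOutDaemon()
d.write("blk-9", b"data")
assert d.local["blk-9"] == d.block["blk-9"] == d.object_store["blk-9"] == b"data"
```

The redundancy is deliberate: the local and block copies serve low-latency reads while the object-store copy provides the durability discussed below.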
In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. Readers will appreciate that, as described above, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318. While the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. In order to address this, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. Consider an example in which data is written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nin 1 MB blocks. In such an example, assume that a user of the cloud-based storage system318issues a request to write data that, after being compressed and deduplicated by the storage controller application324,326results in the need to write 5 MB of data. 
In such an example, writing the data to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nis relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage348, 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage348, 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage348, and so on. As such, in some embodiments, each object that is written to the cloud-based object storage348may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage348may be incorporated into the cloud-based storage system318to increase the durability of the cloud-based storage system318. Continuing with the example described above where the cloud computing instances340a,340b,340nare EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. 
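The packaging step in the example above (a 5 MB write becoming five equally-sized objects, each carrying its data followed by associated metadata) can be sketched directly. The metadata format below is invented for illustration.

```python
# Sketch of the block-to-object packaging described above: the write is split
# into 1 MB blocks and each block becomes one object whose body is the data
# followed by a small metadata trailer. The trailer format is a toy.

BLOCK_SIZE = 1024 * 1024  # 1 MB

def package_into_objects(data, block_size=BLOCK_SIZE):
    objects = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        metadata = f"offset={i};length={len(block)}".encode()
        objects.append(block + metadata)   # data first, metadata after
    return objects

objs = package_into_objects(b"\x00" * (5 * BLOCK_SIZE))
assert len(objs) == 5                      # a 5 MB write -> five objects
```

Keeping every object the same size, as the text notes, simplifies both bookkeeping and later rehydration.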
As such, relying on the cloud computing instances340a,340b,340nwith local storage330,334,338as the only source of persistent data storage in the cloud-based storage system318may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system318may result in a storage system that is not sufficiently durable. Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system318depicted inFIG.3Cnot only stores data in S3 but the cloud-based storage system318also stores data in local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, such that read operations can be serviced from local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, thereby reducing read latency when users of the cloud-based storage system318attempt to read data from the cloud-based storage system318. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n.
In such embodiments, the local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. As described above, when the cloud computing instances340a,340b,340nwith local storage330,334,338are embodied as EC2 instances, the cloud computing instances340a,340b,340nwith local storage330,334,338are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance340a,340b,340nwith local storage330,334,338. As such, one or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. 
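The caching behavior described above (local and block storage serving as a cache over an object store that holds everything) can be sketched as a read path with a fallback. All names are invented; the second element of the returned tuple just reports which tier served the read.

```python
# Sketch of the cache-over-object-store read path described above: reads are
# served from the local cache when possible and fall back to the (complete
# but slower) object store only on a miss. Names are illustrative.

class CachedReader:
    def __init__(self, object_store):
        self.cache = {}                    # local + block storage, as a cache
        self.object_store = object_store   # holds all data, always complete

    def read(self, key):
        if key in self.cache:
            return self.cache[key], "cache"
        data = self.object_store[key]      # slower path, never misses
        self.cache[key] = data             # populate for subsequent reads
        return data, "object-store"

reader = CachedReader({"k": b"v"})
assert reader.read("k") == (b"v", "object-store")   # first read: cache miss
assert reader.read("k") == (b"v", "cache")          # second read: cache hit
```

A policy layer, as the text notes, would decide which keys are worth keeping in the cache when it cannot hold everything.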
In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances340a,340b,340nwith local storage330,334,338failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage348. Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage348such that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage348as quickly as possible. Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage348, less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318.
The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318in order to more rapidly pull data from the cloud-based object storage348and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system318. In such embodiments, once the data stored by the cloud-based storage system318has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system318have written to the cloud-based storage system318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage348, a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system318have written to the cloud-based storage system318and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage348in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only created 1,000 replacement cloud computing instances.
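The parallel rehydration scheme described above, in which each replacement instance retrieves a distinct chunk of the dataset, can be sketched as follows. The function names are hypothetical, a dictionary stands in for the cloud-based object storage, and threads simulate the replacement instances:

```python
from concurrent.futures import ThreadPoolExecutor

def rehydrate(object_store, keys, num_instances):
    """Hypothetical sketch: split the dataset's keys into num_instances
    roughly equal chunks and let each (simulated) replacement instance
    pull its distinct chunk in parallel."""
    chunks = [keys[i::num_instances] for i in range(num_instances)]

    def pull(chunk):
        # each instance retrieves only its distinct portion of the data
        return {k: object_store[k] for k in chunk}

    with ThreadPoolExecutor(max_workers=num_instances) as pool:
        partials = list(pool.map(pull, chunks))

    # consolidation step: merge the partial results back together,
    # after which the excess instances could be terminated
    restored = {}
    for part in partials:
        restored.update(part)
    return restored

# Toy example: 10 objects rehydrated by 5 simulated instances
store = {f"obj-{i}": i for i in range(10)}
restored = rehydrate(store, sorted(store), num_instances=5)
assert restored == store
```

The consolidation loop mirrors the step in which data is gathered into a subset of the newly created instances before the excess instances are terminated.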
In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system318via communications with one or more of the cloud computing instances320,322that each are used to support the execution of a storage controller application324,326, via monitoring communications between cloud computing instances320,322,340a,340b,340n, via monitoring communications between cloud computing instances320,322,340a,340b,340nand the cloud-based object storage348, or in some other way. In such an example, assume that the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system318. In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller.
Likewise, if the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system318, an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances340a,340b,340nhas reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances340a,340b,340n, such that data stored in an already existing cloud computing instance340a,340b,340ncan be migrated to the one or more new cloud computing instances and the already existing cloud computing instance340a,340b,340ncan be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances. Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system318may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated.
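A predetermined set of scaling rules such as those described above can be sketched as a simple decision function. The thresholds and action names below are illustrative assumptions, not taken from any real monitoring module:

```python
def scaling_action(controller_cpu_pct, local_storage_util_pct,
                   high=0.95, low=0.30):
    """Hypothetical rule set for a monitoring module: compare observed
    utilization against predetermined thresholds and emit the scaling
    actions the module would take."""
    actions = []
    # controller sizing: replace the primary controller instance
    if controller_cpu_pct > high:
        actions.append("replace-controller-with-larger-instance")
    elif controller_cpu_pct < low:
        actions.append("replace-controller-with-smaller-instance")
    # local storage pool sizing: grow or consolidate the pool
    if local_storage_util_pct > high:
        actions.append("add-cloud-computing-instances-with-local-storage")
    elif local_storage_util_pct < low:
        actions.append("consolidate-and-terminate-instances")
    return actions

assert scaling_action(0.97, 0.50) == ["replace-controller-with-larger-instance"]
assert "add-cloud-computing-instances-with-local-storage" in scaling_action(0.50, 0.96)
```

A production rule set would of course weigh cost, migration time, and predicted load rather than raw thresholds alone, as the following paragraph notes.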
In fact, the monitoring module may not only take into account the current state of the cloud-based storage system318, but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system318may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system318may be dynamically scaled, the cloud-based storage system318may even operate in a way that is more dynamic. Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system318described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage). Because the cloud-based storage system318described here can always ‘add’ additional storage, the cloud-based storage system318can make more intelligent decisions regarding when to perform garbage collection. For example, the cloud-based storage system318may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system318falls below a certain level.
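The combination of an IOPS-gated garbage collection policy with an observed quiet window (such as the 10 PM to 6 AM example above) can be sketched as follows; the threshold and window values are illustrative assumptions:

```python
from datetime import time

def should_garbage_collect(current_iops, now, iops_threshold=1000,
                           quiet_start=time(22, 0), quiet_end=time(6, 0)):
    """Hypothetical policy: because the cloud-based storage system can
    always add capacity, garbage collection can wait for a lightly
    loaded period instead of being forced by a nearly full pool."""
    # the observed quiet window wraps past midnight
    in_quiet_window = now >= quiet_start or now <= quiet_end
    # run GC only when both predictive and current-load conditions hold
    return in_quiet_window and current_iops < iops_threshold

assert should_garbage_collect(200, time(23, 30)) is True    # quiet window, light load
assert should_garbage_collect(200, time(14, 0)) is False    # busy daytime hours
assert should_garbage_collect(5000, time(23, 30)) is False  # quiet hours, heavy load
```

The same gating pattern extends naturally to the other deferrable system-level functions mentioned below, such as deduplication and compression.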
In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system318is not constrained in the same way that traditional storage systems are constrained. Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a single time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services that may not include all of the components depicted inFIG.3C. In some embodiments, especially in embodiments where the cloud-based object storage348resources are embodied as Amazon S3, the cloud-based storage system318may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. 
This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the overwritten version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of the new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number. In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object—and without actually reading the data from S3. The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data. In fact, because the cloud-based storage system318does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. 
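The never-overwrite, sequence-number scheme described above can be sketched with in-memory stand-ins: a dictionary keyed by (key, sequence number) plays the role of S3, and a second dictionary plays the role of the database that persists the latest sequence numbers. All class and method names here are hypothetical:

```python
class SequencedObjectStore:
    """Hypothetical sketch: objects are never overwritten; each write of
    a logical key creates a new object tagged with a monotonically
    increasing sequence number, and the latest sequence number per key
    is persisted in a separate catalog."""

    def __init__(self):
        self.objects = {}   # (key, seq) -> data; stand-in for S3
        self.catalog = {}   # key -> latest seq; stand-in for the database

    def write(self, key, data):
        seq = self.catalog.get(key, 0) + 1
        self.objects[(key, seq)] = data    # new object; old one untouched
        self.catalog[key] = seq            # record the most recent version
        return seq

    def is_current(self, key, seq):
        # validity check by sequence number alone: no need to read
        # (or decrypt) the object data itself
        return self.catalog.get(key) == seq

store = SequencedObjectStore()
s1 = store.write("vol1/block7", b"v1")
s2 = store.write("vol1/block7", b"v2")    # logical 'overwrite' -> new object
assert not store.is_current("vol1/block7", s1)
assert store.is_current("vol1/block7", s2)
assert store.objects[("vol1/block7", s1)] == b"v1"  # old object still present
```

During rehydration, a module would consult only the catalog to confirm it is pulling the most recent version of each piece of data, sidestepping the eventual-consistency window on reads of overwritten objects.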
For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system.
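Once backups have been classified as captured before or after the infection, the restore-point selection described above reduces to picking the newest pre-infection snapshot. A minimal sketch, with hypothetical names and timestamps as plain integers:

```python
def latest_clean_backup(backups, infection_time):
    """Hypothetical sketch: given backups as (timestamp, snapshot_id)
    pairs and the estimated time at which malware infected the system,
    pick the most recent backup captured strictly before the infection."""
    clean = [(ts, sid) for ts, sid in backups if ts < infection_time]
    if not clean:
        return None                       # no safe restore point exists
    return max(clean)[1]                  # newest pre-infection snapshot

backups = [(100, "snap-a"), (200, "snap-b"), (300, "snap-c")]
assert latest_clean_backup(backups, infection_time=250) == "snap-b"
assert latest_clean_backup(backups, infection_time=50) is None
```

In practice the infection time would itself come from the scanning heuristics described above (suspect subnets, suspect users, content fingerprints) rather than being known exactly.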
In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time. Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage systems described above may be useful for supporting various types of software applications. For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. The storage systems described above may operate to support a wide variety of applications. 
In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation.
Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples.
Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others. Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™ which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Data is the heart of modern AI and deep learning algorithms.
Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full-scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating the model, including using a holdback portion of the data not used in training in order to measure model accuracy on the held-out data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data.
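The five-stage lifecycle above can be sketched as a toy end-to-end pipeline. This is a deliberately simplified illustration, with hypothetical function names and a trivial stand-in for the training step, not a real training workflow:

```python
def run_pipeline(raw_records, label_fn, train_fn, holdback_ratio=0.2):
    """Hypothetical sketch of the lifecycle described above: ingest,
    clean/label, split, train, and evaluate on a held-back portion
    never seen during training."""
    # 1) ingest: store raw data as-is
    raw = list(raw_records)
    # 2) clean and transform, linking each sample to its label
    labeled = [(r.strip().lower(), label_fn(r)) for r in raw if r.strip()]
    # 3-4) split off a holdback set and train on the remainder
    split = int(len(labeled) * (1 - holdback_ratio))
    train_set, holdback = labeled[:split], labeled[split:]
    model = train_fn(train_set)
    # 5) evaluate accuracy on the holdback portion only
    correct = sum(1 for x, y in holdback if model(x) == y)
    accuracy = correct / len(holdback) if holdback else 0.0
    return model, accuracy

# Toy example: the label is the first character of each record, and the
# 'trained model' is a stub that simply reads that character back out
records = [f"{c}{i}" for i in range(10) for c in "ab"]
model, acc = run_pipeline(records,
                          label_fn=lambda r: r[0],
                          train_fn=lambda ts: (lambda x: x[0]))
assert acc == 1.0
```

In a real deployment, each stage would read from and write to the same shared data hub, which is precisely the coordination point the surrounding text argues for.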
Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. 
The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data.
Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing. Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers.
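The hash-based off-chain storage approach described at the start of this passage, in which only the hash of a large piece of data is embedded in a transaction, can be sketched as follows. The function names are hypothetical and a dictionary stands in for the off-chain store:

```python
import hashlib

def make_transaction(payload, off_chain_store):
    """Hypothetical sketch of off-chain storage: the data itself goes to
    an off-chain store, while only its hash is embedded in the on-chain
    transaction."""
    digest = hashlib.sha256(payload).hexdigest()
    off_chain_store[digest] = payload          # bulky data stays off-chain
    return {"data_hash": digest}               # compact on-chain record

def verify_transaction(tx, off_chain_store):
    # integrity check: recompute the hash of the off-chain data and
    # compare it with the hash recorded on-chain
    payload = off_chain_store[tx["data_hash"]]
    return hashlib.sha256(payload).hexdigest() == tx["data_hash"]

store = {}
tx = make_transaction(b"large document contents", store)
assert verify_transaction(tx, store)
store[tx["data_hash"]] = b"tampered"          # simulate off-chain tampering
assert not verify_transaction(tx, store)
```

Because the on-chain record commits to the hash, any tampering with the off-chain copy is detectable, even though the chain never stores the data itself.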
In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available—including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible. Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. 
The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network.
Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing—so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (not only from a privacy, security, or financial perspective, but also because of the sheer volume of data that is involved) to send the data to the cloud. As such, many tasks that rely on data processing, storage, or communications may be better served by platforms that include edge solutions such as the storage systems described above. The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on.
As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. 
For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load.
The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various “things” such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things.
Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. 
Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states. A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure.
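The qubit arithmetic above (2 states for one qubit, 4 for a pair, 8 for three, up to 2^n) can be checked with a short sketch; the normalization check at the end simply verifies that the squared amplitudes of an equal superposition sum to one.

```python
# The passage's arithmetic: n qubits span a superposition of up to 2**n
# basis states, which a classical simulator must track as 2**n amplitudes.
import math

def num_basis_states(n_qubits):
    return 2 ** n_qubits

assert num_basis_states(1) == 2   # one qubit: |0> and |1>
assert num_basis_states(2) == 4   # a pair of qubits: 4 states
assert num_basis_states(3) == 8   # three qubits: 8 states

# An equal superposition over n qubits gives each basis state amplitude
# 1/sqrt(2**n); the squared amplitudes must sum to 1.
n = 10
amp = 1 / math.sqrt(2 ** n)
total_probability = (amp ** 2) * num_basis_states(n)
```

The exponential growth of the amplitude vector is exactly why classical simulation of quantum computers becomes infeasible beyond a few dozen qubits.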
Such FPGA-accelerated servers may reside near (e.g., in the same data center) the storage systems described above or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory.
FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambda) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers.
The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. For further explanation, FIG. 3D illustrates an exemplary computing device 350 that may be specifically configured to perform one or more of the processes described herein. As shown in FIG. 3D, computing device 350 may include a communication interface 352, a processor 354, a storage device 356, and an input/output (“I/O”) module 358 communicatively connected one to another via a communication infrastructure 360. While an exemplary computing device 350 is shown in FIG. 3D, the components illustrated in FIG. 3D are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device 350 shown in FIG. 3D will now be described in additional detail. Communication interface 352 may be configured to communicate with one or more computing devices. Examples of communication interface 352 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor 354 generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 354 may perform operations by executing computer-executable instructions 362 (e.g., an application, software, code, and/or other executable data instance) stored in storage device 356.
Storage device 356 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device 356 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 356. For example, data representative of computer-executable instructions 362 configured to direct processor 354 to perform any of the operations described herein may be stored within storage device 356. In some examples, data may be arranged in one or more databases residing within storage device 356. I/O module 358 may include one or more I/O modules configured to receive user input and provide user output. I/O module 358 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module 358 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module 358 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module 358 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device 350.
For further explanation, FIG. 4A sets forth a block diagram of an example system for end-to-end encryption in a storage system according to some embodiments of the present disclosure. The term ‘end-to-end encryption’ as it is used in this specification generally refers to a model in which a storage system receives encrypted data, stores encrypted data, and returns encrypted data. In messaging, ‘end-to-end encryption’ further restricts encrypted messages from being decrypted at any point between source and target. In slight contrast, the storage systems configured for end-to-end encryption in accordance with embodiments of the present disclosure may, at times, decrypt data that was received encrypted for various purposes such as garbage collection and deduplication. However, when decrypted, the decryption is internal to the storage system and is not accessible by entities external to the storage system. In the example of FIG. 4A, a first storage system 430 is configured to replicate to a paired second storage system 432. In the example system, the path between the first and second storage system is an encrypted and authenticated link. Further, the second storage system may be configured to ‘prove’ to the first storage system that the second storage system has access to appropriate information such as keys (through an API, a key manager, or some other access means) for encryption and decryption. The first storage system may receive a write operation for a block of data from a client (not shown). The client, when transmitting the write to the first storage system 402 for storage in the first datastore 403, may encrypt the block with a local key. A replication server 410 may request the block from a source and the source 402 may provide an identifier of the local key, an initialization vector, and the encrypted block to the replication server. The replication server 410 may decrypt the block utilizing the local key and compress or deduplicate the block of data.
Alternatively, the source 402 may decrypt the block, perform the data reduction and send along the key ID to the replication server. Such decryption and compression may result in metadata describing the re-encryption details for the block. Once the data reduction is performed, the replication server 410 may translate the identifier of the local key to a global key, or to a key identifier for a global key, by querying a key manager 404. The key manager 404 is coupled to key storage 406 which may store mappings of global keys or key identifiers to local keys or key identifiers. The method of replication described above, in which the replication server requests a block from a source, is but one possible method among many for delivering a block from one storage system to another. Some embodiments of replication, for example, may operate by sending blocks from one storage system to another. In snapshot-based replication, a first storage system may detect differences between an already transferred snapshot and a new snapshot, and send the blocks for those differences, without a back channel request. The replication server then encrypts the data-reduced block utilizing the global key mapped to the local key and transmits the re-encryption metadata, an initialization vector (or the like), and the UUID for the global key to a second storage system 432. A replication client 416 of the second storage system 432 receives the transmission, decrypts the block utilizing the initialization vector, metadata and the global key. The replication client may then query a key manager 422 and its associated key storage 424 for a local key ID mapped to the global key UUID. The local key, in this example, is local to the second storage system rather than the same local key utilized by the first storage system. The replication client then transmits, to a target 426, the data reduced block and a local key identifier.
The target 426 encrypts the data reduced block utilizing the local key associated with the local key identifier (or other key in other embodiments) and stores the encrypted, compressed block in the target datastore 428. Although depicted here as two separate key managers that are part of the source and target system respectively, readers will recognize that the key manager may be a single entity, accessible by both storage systems separately over a network and/or through an API. For further explanation, FIG. 4B sets forth a flow chart illustrating an example method of end-to-end encryption in a storage system configured for replication. The method of FIG. 4B includes receiving 434 from a client an encrypted block, the block encrypted utilizing a client-shared key. The source storage system may then decrypt 436 the block utilizing the client-shared key. In the example of FIG. 4B, such decryption may generate 438 re-encryption metadata describing re-encryption details for the block. The method of FIG. 4B also includes performing 440 data reduction of the unencrypted block. Such data reduction may include compression or de-duplication. Such data reduction may be optional. The method of FIG. 4B also includes encrypting 442 the data reduced block utilizing an offload key. An offload key is a key that may be utilized by both a source and a target for encryption and decryption. Such an offload key may be accessed in a variety of manners. The method of FIG. 4B also includes transmitting 444 the encrypted data reduced block, an initialization vector, and the re-encryption metadata to a target storage system for replication. An initialization vector is utilized in an encryption or decryption algorithm to perturb the algorithm so that the input data is not encrypted identically to other data. The method of FIG. 4B also includes receiving 446 the encrypted data reduced block at the target storage system and decrypting 446 the data reduced block utilizing the offload key and the initialization vector.
The method of FIG. 4B continues by re-encrypting 448 the block utilizing a target local key. In some embodiments, prior to transmission of the encrypted data reduced block, the source storage system may receive proof from the target storage system of access to encryption keys. Such proof may indicate to the source that the target is capable of participating in end-to-end encryption during replication. The examples of FIG. 4A and FIG. 4B depict re-encryption by the source storage system utilizing the UUID key or a global key prior to transmission to a target. In some embodiments, however, the re-encryption does not occur. Instead, the UUID is utilized by the target storage system when servicing a read to a snapshot or volume (the replicated and encrypted data). Full asynchronous replication support requires that replication continue to work even when the target does not fully support all the features on the source. Such features include the end-to-end encryption techniques described above. Such targets are referred to here as ‘Non-E2EE-Ready’. Non-E2EE-Ready targets include, among others, E2EE-capable storage systems that cannot prove key access, E2EE-capable storage systems without authenticated, secure transport, and Non-E2EE-capable storage systems. For further explanation, an additional model for end-to-end encryption in a storage system is described here. This model starts with a storage system that encrypts its stored data, and with a representation of an encrypted dataset to and from a host, where the encryption associated with the internal data and the external representation are encrypted separately, such as using different encryption keys. Data received by an array from a host (e.g., as a write) using a known key is decrypted, then deduplicated and compressed, and that resulting data is encrypted using some key which may or may not have a relationship with the original key used in transfer to the array from the host.
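The decrypt-reduce-re-encrypt flow of FIG. 4B described above can be sketched end to end in a few lines. A keyed XOR keystream stands in for a real cipher such as AES, and every key, IV, and variable name here is an illustrative assumption rather than a detail of the disclosed systems.

```python
# Sketch of the FIG. 4B flow: decrypt the client-encrypted block, data-reduce
# it, re-encrypt under a shared offload key with a fresh IV for transmission,
# then decrypt at the target and re-encrypt under a target-local key.
# The hash-derived XOR keystream below is a toy stand-in for AES.
import hashlib, os, zlib

def keystream_xor(key, iv, data):
    # Symmetric toy cipher: XOR data with a hash-derived keystream.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

client_key, offload_key, target_key = os.urandom(32), os.urandom(32), os.urandom(32)

# Source side: receive the client-encrypted block (step 434), decrypt it
# (step 436), and perform data reduction (step 440).
plaintext = b"replicated block contents " * 20
client_iv = os.urandom(16)
wire_block = keystream_xor(client_key, client_iv, plaintext)
decrypted = keystream_xor(client_key, client_iv, wire_block)
reduced = zlib.compress(decrypted)

# Re-encrypt the reduced block under the offload key (step 442) and
# "transmit" it with its IV (step 444).
offload_iv = os.urandom(16)
transmitted = keystream_xor(offload_key, offload_iv, reduced)

# Target side: decrypt with the offload key (step 446), then re-encrypt
# with a key local to the target (step 448) before storing.
received = keystream_xor(offload_key, offload_iv, transmitted)
target_iv = os.urandom(16)
stored = keystream_xor(target_key, target_iv, received)

# A later read decrypts and decompresses back to the original plaintext.
assert zlib.decompress(keystream_xor(target_key, target_iv, stored)) == plaintext
```

Note that the plaintext is exposed only inside the two storage systems, never on the wire: the transmitted bytes are encrypted under the offload key, and the stored bytes under the target-local key.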
Data transferred from an array to a host (e.g., a read) is read from the array, decrypted, uncompressed, and re-encrypted using the appropriate host key. An IV (initialization vector, a.k.a. a seed, salt, XTS tweak, or other term) may be associated with a particular write (such as based on the logical offset of a block or some other factor not related to the content of the write itself) and may be used along with the general encryption key as part of decrypting the data received by the array. In general, this IV must be remembered and used to re-encrypt the data on a later read request. These IVs are used to ensure that two writes of identical unencrypted content do not generally result in identical encrypted content, which is useful to avoid security issues from certain kinds of pattern analysis (such as making inferences from noticing that two separately stored blocks are identical). Initialization vector is a term for setting the initial state of the encryption state engine to a particular set of values rather than having them start with, say, a zeroed state. Encryption state engines generally ensure that all subsequent encrypted bytes vary in a statistically evenly distributed way, if started with different initialized states. Instead of varying the encryption using an initialization vector, the key itself could be altered to achieve the same end result, for example by XORing the logical offset into a 256 bit AES key (further use of IV or the term initialization vector should be presumed to include alternative means of varying the encryption in a deterministic way). Correct decryption of a piece of encrypted data generally requires knowing the correct key as well as the original initialization vector and using that same set of values to load the initial state of the decryption engine.
In a block-based storage system, it often makes sense that each block (e.g., every 512 byte multiple offset and 512 byte block written to a volume) be encrypted using a specific key-initialization-vector combination. That way, as long as writes are an even multiple of 512 bytes on 512-byte logical address boundaries, then reads that are issued against any 512-byte logical address boundary for any potentially different length that is a multiple of 512 bytes will encrypt and decrypt consistently based on the agreed key-IV combination. The storage system is generally expected to return the same data that was written (presuming a scheme isn't being used that alters the encryption of the transferred data) so if a block written by a host to the storage system was encrypted by the host prior to transfer based on one key and initialization vector combination, then a later read of that block should generally ensure the data transferred to the host is encrypted with the same key and initialization vector combination (which ensures that the data is identical to what was written). In a storage system that can deduplicate and compress the unencrypted data before then storing it encrypted, the key and initialization vector used for encryption on transfer back to the host could be computable (such as from the volume and a volume block address) or could be recorded when the data is written. Deduplication itself generally further requires that the key and initialization vector not be recorded with the multiply referenced data, but rather that it be recorded in a reference to the multiply referenced data. For example, in a store that organizes blocks by their content (their hash value), a volume can be considered a list of volume block offsets that reference blocks stored with a particular hash value. 
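The two per-block variation schemes discussed above, deriving a deterministic IV from the volume and 512-byte block offset, or folding the logical offset into the 256-bit key itself by XOR, can be sketched as follows. Both are content-independent, so any later read of the same logical block reproduces the same key-IV combination. The derivation functions and names are hypothetical illustrations, not the disclosed implementation.

```python
# Sketch of per-block encryption variation: a deterministic IV computed
# from (volume, offset), and the alternative of XORing the logical offset
# into the 256-bit key. Both names and derivations are illustrative.
import hashlib

BLOCK = 512

def iv_for_block(volume_id, byte_offset):
    assert byte_offset % BLOCK == 0, "writes align to 512-byte boundaries"
    lba = byte_offset // BLOCK
    return hashlib.sha256(volume_id + lba.to_bytes(8, "big")).digest()[:16]

def key_for_block(base_key_256, byte_offset):
    # Alternative scheme: XOR the logical block address into the key.
    pad = (byte_offset // BLOCK).to_bytes(32, "big")
    return bytes(k ^ p for k, p in zip(base_key_256, pad))

# The same volume/offset pair always yields the same combination ...
assert iv_for_block(b"vol-A", 1024) == iv_for_block(b"vol-A", 1024)
# ... while adjacent blocks, and the same offset on another volume, differ.
assert iv_for_block(b"vol-A", 1024) != iv_for_block(b"vol-A", 1536)
assert iv_for_block(b"vol-A", 1024) != iv_for_block(b"vol-B", 1024)
assert key_for_block(b"\x00" * 32, 1024) != key_for_block(b"\x00" * 32, 1536)
```

Because the combination is computable from the logical address alone, a read of any 512-byte-aligned range can regenerate the right key-IV pair per block without consulting stored metadata.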
In this case, the list of these references should generally include the key and the initialization vector information rather than storing it in the hash-value based store organized by block content. Commonly in security-conscious environments, a storage system will not itself permanently store keys, but instead an external key server of some kind will store keys. In such cases, the storage system may store, internally, some kind of key identifier that can be communicated to an external key server. As a result, the key/initialization-vector combination stored along with one of the references described in the previous paragraph may instead be a key identifier combined with an initialization vector. If key identifiers are large, the storage system may instead store a list of key identifiers indexed by some small value (such as a simple integer index) along with these references rather than a complete key identifier. In replication, data is encrypted on the source storage system, and is encrypted on the target storage system, and may be encrypted in transit. There are many possible embodiments in which all of this fits together. For example, in one embodiment data may be replicated to a storage system that isn't part of a trust relationship with the host, but where that storage system is expected to be able to service later read and write requests. In that case, the data may be replicated based on the volume's host encryption (presuming that encryption isn't dependent on the specific host or means of access). A trust relationship could be established, however, such that if a source (for example, a client host) trusts storage system A then the source trusts connected storage system B. 
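The data-structure point above, that key information belongs in the per-volume references rather than in the content-organized block store, and that a small integer index can stand in for a large key identifier, might be sketched like this. All class and field names are hypothetical.

```python
# Data-structure sketch: blocks live in a content-addressed store keyed by
# hash (with no key information attached), while each volume reference
# carries a compact key-identifier index plus the IV needed for that
# particular logical block. Names are illustrative.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # content hash -> block bytes (no key info here)
        self.key_ids = []   # small integer index -> full key identifier
        self.volumes = {}   # volume name -> {offset: (hash, key_idx, iv)}

    def register_key(self, key_identifier):
        self.key_ids.append(key_identifier)
        return len(self.key_ids) - 1   # compact index stored in references

    def write(self, volume, offset, data, key_idx, iv):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # duplicate content stored once
        self.volumes.setdefault(volume, {})[offset] = (digest, key_idx, iv)

store = DedupStore()
idx = store.register_key("kms://keys/volume-key-0017")  # hypothetical key ID
store.write("vol-A", 0, b"same payload", idx, b"iv-0")
store.write("vol-B", 0, b"same payload", idx, b"iv-1")

# One stored copy of the content; two references, each with its own IV.
assert len(store.blocks) == 1
assert store.volumes["vol-A"][0][2] != store.volumes["vol-B"][0][2]
```

Keeping the key index and IV in the reference is what lets a multiply referenced block decrypt correctly for each volume that points at it.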
In that case, the data could be transferred either as unencrypted data (likely using encrypted communications), or the data could be transferred using the storage system's internal encryption model with the associated key and initialization vector information needed for decryption by the target storage system. If unencrypted data is transferred, the source storage system may first require that the target storage system prove it has access to the keys needed for encrypting and decrypting a dataset (such as a volume), such as by signing something with the encryption key, or obtaining a signed certificate from the key server authorizing the target storage system. A third model could transfer compressed and deduplicated data, such as based on the source storage system's internal compressed and deduplicated content and metadata (but not necessarily based on that), to be received and stored by the target storage system along with necessary key identifiers and initialization vectors needed for decryption, but where the target storage system may not have the trust relationship needed with a key server to actually obtain the keys needed for decryption. In this case, the target storage system could be, for example, an external file or object store such as a simple NFS server or an object cloud storage service. Keys will eventually be needed to be able to use that stored data. If the original source storage system itself recovers, say, lost data from the target system essentially as a form of recovery from backup, then it may well have the necessary information (or the necessary relationship to a key server) to make sense of the stored deduplicated and compressed data.
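The proof-of-key-access step mentioned above ("signing something with the encryption key") might take the shape of a simple challenge-response: the source sends a random nonce and the target returns a keyed MAC over it. The HMAC construction and protocol shape here are one hedged possibility, not the disclosed mechanism.

```python
# Sketch of a key-access proof: the source challenges the target with a
# nonce; the target signs it with the dataset key; the source verifies.
# The use of HMAC and these names are assumptions for illustration.
import hashlib, hmac, os

dataset_key = os.urandom(32)

def prove_key_access(key, challenge):
    # Target side: demonstrate possession of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_proof(key, challenge, proof):
    # Source side: recompute the MAC and compare in constant time.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

challenge = os.urandom(16)
proof = prove_key_access(dataset_key, challenge)
assert verify_proof(dataset_key, challenge, proof)
assert not verify_proof(os.urandom(32), challenge, proof)  # wrong key fails
```

A certificate signed by the key server, as the text also suggests, would serve the same purpose while keeping the dataset key itself off the wire entirely.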
Stored encrypted data and metadata could instead be restored by or rehydrated into or simply used as the content for some alternate storage system, such as, for example, by a replacement for the original storage system, or by an alternate storage system at a different location, or to make a usable copy of the stored dataset on a different storage system, or possibly by some set of virtual storage system controllers or virtual storage system compute or virtual drive components instantiated for this purpose, such as in cloud platform infrastructure. In doing so, these might need to be granted secured access to the necessary decryption keys or to the key server, as well as to the keys needed to re-encrypt data in transfer to any host systems. In this model, a further possibility is that the original storage system can decrypt and then re-encrypt its stored data and metadata using a different set of encryption keys than it uses itself, thus preserving the deduplication and compression relationships and ensuring that the content is transmitted and externally stored encrypted, while avoiding the use of its own keys for those external interactions. In this case, the storage system or virtual storage system controllers which restore, rehydrate, or otherwise access the data will need access to the keys used to store the data (and possibly the keys needed to re-encrypt data for transfer to hosts), but do not need access to the keys the original storage system used for its own internal encryption. As a yet further model, a storage system restoring or rehydrating such content could further re-encrypt it for its own internal storage, so that the external storage encryption keys are not re-used by any storage system that actually operates on copies of the same content.
It should be noted that these models for storing compressed and deduplicated content encrypted, and for transferring encrypted compressed and deduplicated content between storage systems or to and from external stores of various kinds, do not themselves depend on interactions with hosts being encrypted. Adding in host encryption ensures that data is always represented outside of the storage system in an encrypted form, so that any host only sees an encrypted form of any content, just as can be the case with external exchanges of encrypted forms of the internally compressed and deduplicated content; but although these ideas can be combined, they don't need to be combined. Further, any combination of these ideas can further be combined with transport layer encryption, which would ensure that, say, interconnect links between a host and a storage system with an internally encrypted data store were always encrypted, whether or not the dataset's content was represented to the host in encrypted form. Storage systems could also segregate datasets in such a way that different datasets are internally encrypted separately, using separate keys or collections of encryption keys for these segregated datasets. An example of such a dataset is one of several tenants stored in a storage system, where tenants are not allowed to leak data between each other, even within the internally formatted content of a storage system, and such that knowing the keys associated with one tenant's internally stored content is not sufficient to decrypt the content associated with another tenant's internally stored content. In such cases, deduplication may operate only within one of these segregated (e.g., tenant) datasets, and any combination of data segments and metadata that are stored together within an encrypted underlying data segment might be limited to combining data segments and metadata from a single tenant.
There are some models for internal storage that do allow deduplication to operate across tenants with little data leakage. One such model encrypts each block of stored data using a distinct key derived deterministically from the block's content, such as by using a secure hash of the block's content. Each block with the same secure hash can be stored together, irrespective of dataset. Metadata would then record the location and the key derived from the block's content, and that metadata would then be encrypted based on the tenant's encryption keys. In this model, any tenant can decrypt and read its own metadata, can locate any blocks encrypted by a block's derived key, and can decrypt each block because it knows the block's key. A tenant which never stored a block with that content would, however, lack the metadata needed to decrypt it. This results in a tiny bit of data leakage, in that one tenant might be able to know that some of its blocks are not unique only to it; but, properly implemented, it might not be able to know which other tenant currently references a block or how that block fits together with other data for any other tenant. A special case of this per-dataset internal encryption uses per-dataset encryption keys for the host format as well as for data stored compressed and deduplicated by a storage system. In that case, the keys could be from the same set, as an example, even if the re-encrypted internal data and metadata isn't the same as the encrypted dataset exchanged with host systems. Alternately, the external and internal representations of the dataset might not share the same keys but might share a relationship to a key management server, as alternate representations and keys for a dataset as represented to a key management server.
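The content-derived-key model just described (sometimes called convergent encryption) can be sketched as follows. This is a minimal illustration, not a storage system's actual layout: the SHAKE-256 keystream cipher is a toy stand-in for a real block cipher, and all names are assumptions. The encryption is deterministic on purpose, so identical blocks from any tenant encrypt to identical ciphertext and deduplicate, while the derived key travels only inside the tenant's own (tenant-encrypted) metadata.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHAKE-256 keystream derived from
    the key. Deliberately deterministic here; the same call both
    encrypts and decrypts."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

SHARED_BLOCKS = {}  # locator -> ciphertext, shared across all tenants

def store_block(plaintext: bytes):
    # Per-block key derived deterministically from the block's content,
    # so any tenant writing the same content produces the same
    # ciphertext and cross-tenant deduplication works.
    block_key = hashlib.sha256(b"key|" + plaintext).digest()
    locator = hashlib.sha256(b"loc|" + plaintext).digest()
    SHARED_BLOCKS.setdefault(locator, keystream_xor(block_key, plaintext))
    # (locator, block_key) would be recorded in the tenant's own
    # metadata, itself encrypted under the tenant's keys.
    return locator, block_key

def read_block(locator: bytes, block_key: bytes) -> bytes:
    # Only a tenant whose metadata holds the derived key can decrypt
    # the shared block; other tenants lack the key, not the ciphertext.
    return keystream_xor(block_key, SHARED_BLOCKS[locator])
```

A tenant that never stored a block with this content has no metadata entry carrying `block_key`, which is the property the model relies on.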
For example, a tenant for a storage system might be represented to a key management server as a dataset requiring keys for internal and external encryption, as well as possibly for external storage or external transfer of that dataset (or snapshots or copies of that dataset) to external storage or external storage systems. Meanwhile, a different tenant might be represented to the same or even a different key management server as a different dataset requiring its own separate keys for internal and external encryption, as well as possibly for external storage or external transfer of that dataset. Symmetric synchronous replication may well replicate the original received data from a write, with each storage system separately decrypting and re-encrypting content received from or returned to the host. If two (or more) storage systems are synchronously replicating between each other such that each can receive read and write requests for a dataset stored and synchronously replicated between the storage systems, and if the storage systems can determine that they trust each other for transferring keys or key identifiers and initialization vectors for logically stored blocks, then the storage systems can serve the same datasets to the same sets of client hosts with the external block encryption intact. Otherwise, all the other techniques described herein can apply, with the storage systems possibly exchanging data in an internally encrypted form, possibly exchanging blocks in plain text (possibly over an encrypted link), possibly exchanging data as encrypted blocks by deduplication hash, or possibly using a distinct block encryption for transfer, with each storage system then separately decrypting, compressing, and possibly deduplicating blocks internally to each of the separate replicating storage systems.
In other embodiments, the storage systems could exchange the original encrypted form of any blocks as received from the client host, and each storage system may separately decrypt the blocks if the storage system has access to the keys, such as through separate relationships with key management servers. In such embodiments, the storage systems may separately compress and possibly deduplicate those blocks before internally encrypting and storing them. Such a cluster of symmetric synchronously replicating storage systems could also serve as a collective source for replication to a target for non-symmetric synchronous, nearly synchronous, or asynchronous or snapshot-based secured and encrypted replication as described elsewhere in this specification. Any model where a source storage system can transmit compressed, deduplicated, and encrypted data to be stored elsewhere can support a target that simply lacks the knowledge for decryption. This could, for example, be used when storing a storage system's content (or a sequence of snapshots of a storage system's content) into files or objects in a separate file or object storage server or storage service, including based on cloud platforms. Only when that data needs to be accessed by some host in the future (or otherwise is needed in unencrypted form or in a form where it needs to be re-encrypted using an alternate key or based on an alternate re-construction of the data) are keys needed. For example, the original source storage system, or a replacement for the original source storage system, could already have or be provided with the keys needed to decrypt that stored data and to perhaps then re-encrypt it for host read and write transactions. Alternately, a set of cloud storage system controllers used to receive replicated data might not need security keys for ingesting replicated data, including within virtual drives and object stores. 
The storage system controllers, or a new set of storage system controllers, could be provided with the necessary key server relationship to obtain keys only at some future time when this is needed. A further model preserves the compressed, deduplicated, and encrypted data of the source, but instead of storing that data as-is, further encrypts that data, such that the data stored externally requires knowledge of the keys used for externally storing the data as well as the keys and/or key identifiers needed to reconstruct the uncompressed and unencrypted data represented by the original compressed, deduplicated, and encrypted data of the source. Further combinations are possible, such as re-encrypting the compressed, deduplicated, and encrypted data as compressed and deduplicated data that is then encrypted with a different key for use with an external store of that content, but where all of that encrypted data and metadata is further encrypted for actual storage. In that case, nothing of substance (not even metadata analysis) can be made of the stored data without the keys used to store all that encrypted data and metadata, yet the data is still stored as compressed and deduplicated. Even with that, the keys used by a specific storage system to encrypt its own internal data need not be shared with whatever system eventually provides access to that data, allowing an additional level of protection for the keys needed to finally decrypt the underlying content. Note that all of these ideas associated with keeping encrypted internal content always in an encrypted state for any replication transfers, or when storing data externally to the storage system, work regardless of host-based encryption of volume data that is recognized by the storage system.
In an alternate description of a variant of one of these models, at least four components may be required: a host, a storage system with local storage (this could also be a virtual storage system), external storage connected through a regular protocol (for example, S3 or NFS), and a rehydrator, where a rehydrator can be a system that can run to reconstruct the stored data (this could be, for example, a storage system that can read the external storage, or it could be a virtual storage system controller that can read the external storage). Optionally, there may also be a key management server. In such an example, a host stores encrypted data to the storage system. The storage system decrypts it with a key or keys that have been shared somehow (commonly through the key management server, but perhaps through an API exposed by the array), referred to as host-shared keys. This generates decrypted data along with some metadata for how to re-encrypt (such as a set of key identifiers and per-block key variations such as initialization vectors). This decrypted data then optionally goes through data reduction such as compression and deduplication. The storage system will subsequently take its reduced representation of the data (such as the data in a snapshot) and encrypt it based on one or more keys (which can be referred to as offload keys) before writing this to the external storage. The rehydrator may need access to both the offload keys (to decrypt the reduced representation) and the host-shared keys (to re-encrypt the data for return to the host). This may require that the rehydrator be granted rights to retrieve keys from a key management server, or the keys can be provided to the rehydrator by some other system that has them or that itself has access to the key management server. In some cases, the rehydrator might be a separate component which reconstructs a dataset and delivers it to a storage system, in which case it might make sense for the storage system to provide it with the necessary keys.
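The offload and rehydrate flow just described might look like the following sketch. A toy SHAKE-256 keystream cipher stands in for real encryption, `zlib` stands in for the storage system's data reduction, and the key names follow the paragraph's terms; everything else (function names, key handling in module globals) is an assumption for illustration.

```python
import hashlib
import os
import zlib

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher standing in for, e.g., AES-CTR: XOR with a
    SHAKE-256 keystream derived from key || iv."""
    stream = hashlib.shake_256(key + iv).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

HOST_SHARED_KEY = os.urandom(32)  # shared via a key management server
OFFLOAD_KEY = os.urandom(32)      # used only for the external store

def offload(host_ciphertext: bytes, iv: bytes) -> bytes:
    """Storage system: decrypt with the host-shared key, reduce, then
    encrypt with the offload key before writing to external storage
    (e.g., an S3 object or NFS file)."""
    plaintext = keystream_xor(HOST_SHARED_KEY, iv, host_ciphertext)
    reduced = zlib.compress(plaintext)  # stand-in for compression/dedup
    return keystream_xor(OFFLOAD_KEY, iv, reduced)

def rehydrate(external_object: bytes, iv: bytes) -> bytes:
    """Rehydrator: needs BOTH keys -- the offload key to recover the
    reduced representation, and the host-shared key to re-encrypt the
    data for return to the host."""
    reduced = keystream_xor(OFFLOAD_KEY, iv, external_object)
    return keystream_xor(HOST_SHARED_KEY, iv, zlib.decompress(reduced))
```

Note that the external object is useless to a party holding only one of the two keys, which is the separation the four-component model relies on.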
In other embodiments, multiple paths with different keys may be utilized. In such an example, instead of a dataset (e.g., a volume) being encrypted to and from all hosts with a single dataset key, a separate dataset encryption key may be used for interactions over different paths to the same host, or different keys may be used for interactions with different hosts that are accessing the same dataset. Such sharing can either be concurrent (such as with a clustered file system) or sequenced (such as when one host, or set of hosts, creates or manipulates a dataset and then later another host, or set of hosts, further operates on the dataset, perhaps with a different set of keys). Since the encryption keys the storage system uses internally can differ from the encryption keys used for host interactions, such changes are practical as long as there is a means of properly determining the differing sets of keys. This model further creates the possibility, in replication or snapshot copy scenarios, that hosts which access a replica or perhaps a snapshot, such as a rehydrated snapshot, can use different keys, which may be useful, for example, in providing access to a copy of data in a test and dev environment from a subset of data in a production environment, where the test and dev environment is never provided access to the keys used in the production environment. Means of communicating a key or a key identifier between a host and a storage system could be based on a communicated exchange of some kind (e.g., a special SCSI request), or could be based on separate exchanges with a key server, possibly using a shared understanding of the storage system's identifiers when interacting with the key server, or the host could write a key or key identifier into a dataset in some recognized way. For example, a specific block address of a volume could be used, or the key could be stored in an MBR or GPT/EFI partition format.
In the case of GPT/EFI, the unique identifier associated with the block device, as stored in the GPT/EFI header, or the unique identifier associated with a partition, could be used in exchanges with the key management server. A host accessing a clone of a dataset (or even a synchronous replica of a dataset) could further write a separate key identifier into an already existing dataset to change which keys are used for further encryption or for decrypting to the new host. Alternately, one host could interact with the storage system (such as by writing to a location or header, or by interacting through an extended SCSI operation) to alter the keys or key identifiers used for later interactions or for interactions from some other host, for example as part of configuring for a dataset being shared out from a production environment to a test and dev environment. Additionally, the host could interact with the storage system for key cycling, as part of ensuring that the same keys are not used for an excessive period of time. This key cycling can be very fast, as opposed to rewriting all the data with a new key. If the storage system also does key cycling internally, such as during gradual rebuilds of datasets over time, then this can ensure that no data encryption will use keys for excessive periods of time, but with very low, if any, disruption to use of the dataset. The example methods described in greater detail below relate to embodiments where data is encrypted and decrypted. For ease of explanation, data is described as being encrypted using an encryption key and data is often described as being decrypted using the same encryption key, as is the case with symmetric encryption. Readers will appreciate, however, that other methods for encrypting data and decrypting data may also be utilized.
For example, asymmetric encryption (a.k.a. public-key encryption) may be utilized to encrypt and decrypt data, where two different, but logically linked, keys may be utilized. As such, embodiments that are described below in which a first actor encrypts data using a particular key and a second actor decrypts data using the same particular key may be modified to incorporate asymmetric encryption techniques. For further explanation,FIG.5Asets forth a flow chart illustrating an example method of replicating data to a storage system that has an inferred trust relationship with a client in accordance with some embodiments of the present disclosure. As will be described in greater detail below, an inferred trust relationship exists between a first storage system508and a second storage system526. In some embodiments described herein, a first storage system508is trusted by a client to decrypt datasets which a client stores as encrypted datasets. The first storage system508can use techniques described below to determine that a second storage system526is also trusted because the second storage system526can prove that it has the same means to decrypt the data. As a result, the first storage system508can transmit internal representations of the dataset to the second storage system526, with the understanding that the second storage system526has the ability to serve requests to read the data at some point in the future. The example method depicted inFIG.5Aillustrates an example in which data is replicated between a first storage system508and a second storage system526, although readers will appreciate that in other embodiments replication may be carried out between more than two storage systems. Each of the storage systems508,526depicted inFIG.5Amay be similar to the storage systems described above, and may include combinations of the components described above or variants of the components described above.
The example method depicted inFIG.5Aincludes receiving510, by a first storage system508from a computing device502associated with a request504to write data, data506encrypted using a first encryption key. The computing device502that is associated with the request504to write data may be embodied, for example, as a server that is executing a software application that utilizes the first storage system508to store and retrieve data, as virtualized computer hardware that is executing a software application that utilizes the first storage system508to store and retrieve data, or in some other way. As part of an effort to protect such data, however, the computing device502may be configured to encrypt the data using a first encryption key prior to sending the data to the first storage system508. As such, even if data communications between the computing device502and the first storage system508were intercepted, snooped, or otherwise compromised, the data itself could not be accessed without the first encryption key. In the example method depicted inFIG.5A, the data may be encrypted using any of the techniques described above. For example, each write operation may be associated with a different initialization vector, where the initialization vector is based on the logical offset of a block that data is to be written to, or some other factor not related to the content of the write itself. In such a way, two writes of identical unencrypted content do not generally result in identical encrypted content, which is useful to avoid security issues from certain kinds of pattern analysis, such as making inferences from noticing that two separately stored blocks are identical. Alternatively, instead of using an initialization vector, the key itself could be altered to achieve the same end result, for example, by modifying the encryption key to be the output of applying an XOR operation that takes the logical offset of a block and the encryption key as inputs.
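Both variations just described, deriving the initialization vector from the block's logical position and folding the offset into the key itself, can be sketched as follows. The derivation functions are illustrative assumptions, not a mandated scheme; the point is only that the variation depends on where the block lives, not on its content.

```python
import hashlib

def iv_for_offset(volume_id: bytes, logical_offset: int) -> bytes:
    """Derive a per-block initialization vector from the block's logical
    position rather than its content, so identical plaintext written to
    two different offsets yields different ciphertext."""
    material = volume_id + logical_offset.to_bytes(8, "big")
    return hashlib.sha256(material).digest()[:16]

def key_for_offset(base_key: bytes, logical_offset: int) -> bytes:
    """Alternative: vary the key itself by XORing the logical offset of
    the block into the base encryption key."""
    off = logical_offset.to_bytes(8, "big").rjust(len(base_key), b"\x00")
    return bytes(a ^ b for a, b in zip(base_key, off))
```

Because both derivations are deterministic, a later read can recompute the same IV or varied key from the volume and block address alone, without recording anything per block.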
Readers will appreciate that other techniques may be utilized for varying the encryption key in a deterministic way. The example method depicted inFIG.5Aillustrates an embodiment in which the first storage system508receives510data506encrypted using a first encryption key from a computing device502as part of a request504to write data that is issued by the computing device502. Readers will appreciate that, in other embodiments, the first storage system508may receive510data506that has been encrypted using a first encryption key from the computing device502outside of the context of a request504to write data that is issued by the computing device502(e.g., the storage system or some other intermediary may be configured to poll the computing device502for data), so long as the data506that is sent by or retrieved from the computing device502has been encrypted using a first encryption key. The example method depicted inFIG.5Aalso includes decrypting512, by the first storage system508, the encrypted data506using the first encryption key, thereby producing decrypted data514as illustrated inFIG.5A. In order to decrypt512the encrypted data506, the first storage system will need access to the first encryption key (or related key in the case of asymmetric encryption), and possibly any initialization vector or similar information, that was utilized by the computing device502to encrypt the data506. The example method depicted inFIG.5Aalso includes encrypting516, by the first storage system508, the decrypted data514using a second encryption key, thereby producing encrypted data518as illustrated inFIG.5A. In such an example, prior to encrypting516the decrypted data514using a second encryption key, the first storage system508may perform various data reduction techniques such as deduplicating the data and compressing the data, at which point the resultant data may be encrypted516using the second encryption key. 
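The ingest path just described (decrypt the host's data with the first key, deduplicate, then store) might look like the following sketch, which also records the host's initialization vector per logical block so a later read can return exactly the bytes the host wrote. This is a minimal illustration under stated assumptions: a toy SHAKE-256 keystream cipher stands in for the host's real cipher, and the internal re-encryption with the second key and the key-identifier indirection discussed earlier are omitted for brevity.

```python
import hashlib
import os

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher standing in for the host's block cipher."""
    stream = hashlib.shake_256(key + iv).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

class DedupVolume:
    def __init__(self, host_key: bytes):
        self.host_key = host_key
        self.blocks = {}  # content hash -> plaintext block (no key/IV here)
        self.refs = {}    # logical offset -> (content hash, iv)

    def write(self, offset: int, host_ciphertext: bytes, iv: bytes):
        # Decrypt on ingest so deduplication sees the unencrypted content.
        plaintext = keystream_xor(self.host_key, iv, host_ciphertext)
        h = hashlib.sha256(plaintext).digest()
        self.blocks.setdefault(h, plaintext)
        # The encryption metadata lives in the per-offset reference,
        # not in the content-organized block store.
        self.refs[offset] = (h, iv)

    def read(self, offset: int) -> bytes:
        h, iv = self.refs[offset]
        # Re-encrypt with the recorded IV so the host receives exactly
        # the ciphertext it originally wrote.
        return keystream_xor(self.host_key, iv, self.blocks[h])
```

Two writes of identical content under different IVs arrive as different ciphertexts yet deduplicate to one stored block, while each read still reproduces the requester's original bytes.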
The example method depicted inFIG.5Aalso includes storing520, on the first storage system508, the data518encrypted using the second encryption key. Readers will appreciate that the second encryption key may be a key that is only known by the first storage system508, or known by storage systems that are trusted by the first storage system508, such as the second storage system526. As such, data that is exchanged between the first storage system508and any external computing device (such as computing device502inFIG.5A) is encrypted using a different encryption key (i.e., the first encryption key) than is used to encrypt data that is stored within the first storage system508itself. Because different encryption keys are used, even if the first encryption key was somehow obtained (e.g., via an attack on the computing device502), the first encryption key would be useless in terms of gaining access to the data as it is stored on the first storage system508. In addition to storing520the data518that has been encrypted using the second encryption key, the first storage system508may also store, or at least be able to recreate, information that the computing device502utilized to encrypt the data prior to transmitting encrypted data to the first storage system508. Readers will appreciate that the first storage system508may generally be expected to return the same data that was written by the computing device502(presuming a scheme isn't being used that alters the encryption of the transferred data), so if a block written by the computing device502to the first storage system508was encrypted by the computing device502prior to transfer based, for example, on an encryption key and initialization vector combination, then a later read of that data should generally ensure that the data transferred to the computing device502is encrypted with the same encryption key and initialization vector combination (which ensures that the data is identical to what was written).
This may differ in a storage system that can deduplicate and compress the unencrypted data before then storing it encrypted. In such an example, the encryption key and initialization vector used for encryption on transfer back to the computing device502could be computable (such as from the volume and a volume block address) or could be recorded when the data is written. Deduplication itself generally further requires that the encryption key and initialization vector not be recorded with the multiply referenced data, but rather that it be recorded in a reference to the multiply referenced data. For example, in a store that organizes blocks by their content (such as indexed by their hash value), a volume can be considered a list of volume block offsets that reference blocks stored with a particular hash value. In this case, the list of these references should generally include the necessary metadata to determine the encryption key and the initialization vector information rather than storing it in the hash-value based store organized by block content. The example method depicted inFIG.5Aalso includes sending522, from the first storage system508to the second storage system526, the data518. In the example method depicted inFIG.5A, the data that is sent522from the first storage system508to the second storage system526is encrypted using the second encryption key, illustrated inFIG.5Aas encrypted data518. As such, data that is exchanged between the first storage system508and any external computing device (such as computing device502inFIG.5A) is encrypted using a different encryption key (i.e., the first encryption key) than is used to encrypt data that is exchanged between the first storage system508and the second storage system526. 
Because different encryption keys are used, even if the first encryption key was somehow obtained (e.g., via an attack on the computing device502), the first encryption key would be useless in terms of gaining access to the data as it is transferred between the storage systems508,526. Although the example method depicted inFIG.5Arelates to an embodiment where the first storage system508sends522the data518to the second storage system526, in other embodiments data may flow between the storage systems in other ways. For example, RDMA or RDMA-like technologies may be used such that the second storage system526essentially reads the data518from the first storage system508, the data518may flow through an intermediary, or data that is originally stored in the first storage system508may ultimately reside on the second storage system526in some other way. In the example method depicted inFIG.5A, the first storage system508has determined that the second storage system526is trusted by the computing device. The first storage system508may have determined that the second storage system526is trusted by the computing device by determining that the second storage system526can prove that it has the same means as the first storage system508to decrypt data that has been received from the computing device and the same (or functionally equivalent) means to re-encrypt data to return back to the computing device. As a result, the first storage system508can transmit internal representations of the dataset to the second storage system526, with the understanding that the second storage system526has the ability to serve requests to read the data at some point in the future. The example method depicted inFIG.5Aalso includes servicing524, by the second storage system526, an input/output (‘I/O’) operation directed to the data. 
In order for the second storage system526to be capable of servicing524an I/O operation that is directed to the data, the second storage system526may have retained the data518that was sent522from the first storage system508to the second storage system526. In such an example, the second storage system526may store the data as encrypted by the first storage system508. Alternatively, the second storage system526may decrypt the data as sent522from the first storage system508, re-encrypt the data using a different encryption key than was used by the first storage system508, and store the data as encrypted by the second storage system526. In other embodiments, the data may ultimately be persistently stored on the second storage system526in some other way. Readers will appreciate that the second storage system526may service524an I/O operation directed to the data at any point in time, including after a replicated snapshot is turned into a read-write dataset some time after the snapshot was replicated, as part of a symmetric synchronous replication solution, and so on. Servicing524, by the second storage system526, an I/O operation directed to the data may be carried out in a variety of ways as will be described in greater detail below, potentially in different ways for different types of I/O operations and in different ways in dependence upon which particular entity issued the I/O operation. For example, the second storage system526may receive a read operation from an external computing device such as computing device502that is depicted inFIG.5A. Prior to sending the data to the external computing device, however, the data may be encrypted using an encryption key that is known to the external computing device. If a read operation was received from computing device502, for example, the second storage system526may encrypt the data using the first encryption key as part of servicing a read operation that is issued by the computing device502. 
Alternatively, the second storage system526may receive a read operation from the first storage system508, for example, in response to the first storage system508losing some portion of the data. Prior to sending the data to the first storage system508, however, the data may be encrypted using an encryption key that is known to the first storage system508. If a read operation was received from the first storage system508, for example, the second storage system526may encrypt the data using the second encryption key as part of servicing a read operation that is issued by the first storage system508. In yet another example, the second storage system526may receive a read operation from a storage system that is not illustrated inFIG.5A, for example, in response to the first storage system508becoming unavailable and a replacement storage system being brought up as a replacement for the first storage system. Prior to sending the data to the replacement storage system, however, the data may be encrypted using an encryption key (e.g., the second encryption key) that was known by the first storage system508such that the content of the replacement storage system can mirror the content of the first storage system508that became unavailable. Readers will appreciate that other examples may exist, for example, where the second storage system526is used to migrate data away from the first storage system as part of a rebalancing effort, where I/O operations are directed to the second storage system526for load balancing reasons, and so on. For further explanation,FIG.5Bsets forth a flow chart illustrating an additional example method of replicating data using inferred trust in accordance with some embodiments of the present disclosure. 
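The per-requester re-encryption choices described above can be sketched as a simple dispatch: the second storage system encrypts a block for transfer using whichever key it shares with the requester. This is illustrative only; the requester labels, module-global keys, and toy SHAKE-256 cipher are all assumptions standing in for a real key-management and cipher layer.

```python
import hashlib
import os

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher standing in for a real block cipher."""
    stream = hashlib.shake_256(key + iv).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

FIRST_KEY = os.urandom(32)   # shared with the computing device
SECOND_KEY = os.urandom(32)  # shared among trusted storage systems

def serve_read(plaintext_block: bytes, iv: bytes, requester: str) -> bytes:
    """Encrypt a block for transfer using the key the second storage
    system shares with the requester (requester labels are illustrative)."""
    if requester == "computing-device":
        # A host read is returned in the first-key form the host expects.
        return keystream_xor(FIRST_KEY, iv, plaintext_block)
    if requester in ("first-storage-system", "replacement-storage-system"):
        # A peer or replacement storage system gets the second-key form,
        # so its content can mirror the original system's content.
        return keystream_xor(SECOND_KEY, iv, plaintext_block)
    raise PermissionError("no shared key with requester")
```

The same stored block thus leaves the system in different encrypted forms depending on who asked, without the requester ever seeing a key it does not already hold.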
The example method depicted inFIG.5Bis similar to the example method depicted inFIG.5A, as the example method depicted inFIG.5Balso includes receiving510data506encrypted using a first encryption key, decrypting512the encrypted data506using the first encryption key, encrypting516the decrypted data514using a second encryption key, storing520the data518encrypted using the second encryption key, sending522the data518from the first storage system508to the second storage system526, and servicing524an I/O operation directed to the data. The example depicted inFIG.5Billustrates an embodiment, however, where some of the steps referenced in the preceding sentence are carried out in a different order than was described with reference toFIG.5A. Readers will appreciate that, unless explicitly stated, no particular ordering of any of the steps described herein is required. In the example method depicted inFIG.5B, the data that is sent from the first storage system508to the second storage system526is unencrypted, which is depicted inFIG.5Bas decrypted data514. In such an example, once the unencrypted data is received by the second storage system526, the second storage system526may be configured to encrypt the data with a third encryption key prior to persistently storing the data on the second storage system526. Regardless of whether the data that is transmitted between the first storage system508and the second storage system526is encrypted by the first storage system508prior to transmission or the data is not encrypted by the first storage system508prior to transmission, secure data communications between the first storage system508and the second storage system526may be utilized. For example, data communications between the first storage system508and the second storage system526may utilize a variety of secure data transmission techniques, including those that encrypt data across the wire. 
The example method depicted inFIG.5Balso includes determining528, by the first storage system508, that the second storage system526has access to the first encryption key. The first storage system508may need to determine528that the second storage system526has access to the first encryption key, as well as any initialization vector or similar information, to ensure that the second storage system526can properly service I/O operations directed to the data, including sending data back to a computing device502,533that matches the data as written. The initialization vector (or similar information) may be supplied to the second storage system526by the first storage system508, the initialization vector may be computed by the second storage system526from known information such as a logical block number, or such information may otherwise be computed or provided to the second storage system526. The first storage system508may determine528that the second storage system526has access to the first encryption key, for example, by having the second storage system526sign something with the encryption key, by obtaining a signed certificate from the key server authorizing the second storage system526, or in some other way. Readers will appreciate that first storage system508determining528that the second storage system526has access to the first encryption key may serve as a means for a first storage system508to determine that a second storage system526is within a domain of trust. In such an example, when a first storage system508can infer that it can trust a second storage system526, a trust relationship as described earlier is enabled by proving that the second storage system526has access to the same encryption key as the first storage system508for decrypting and re-encrypting data received from and returned to the computing device. 
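One simple way to implement the idea above of having the second storage system "sign something with the encryption key" is a nonce-based challenge-response using an HMAC keyed by the shared encryption key. The sketch below is illustrative only (the function names are not from this disclosure), and a real deployment might instead rely on a key server issuing signed certificates, as also described above:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """First storage system generates a random nonce as a challenge."""
    return os.urandom(32)

def sign_challenge(encryption_key: bytes, challenge: bytes) -> bytes:
    """Second storage system proves key possession by computing a MAC
    over the challenge, keyed by the shared encryption key."""
    return hmac.new(encryption_key, challenge, hashlib.sha256).digest()

def verify_response(encryption_key: bytes, challenge: bytes, response: bytes) -> bool:
    """First storage system recomputes the MAC and compares in constant
    time; success implies the peer holds the same key, so the peer can
    be inferred to be within the domain of trust."""
    expected = hmac.new(encryption_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A peer holding the shared key verifies; a peer with a different key does not.
shared_key = os.urandom(32)
challenge = issue_challenge()
assert verify_response(shared_key, challenge, sign_challenge(shared_key, challenge))
assert not verify_response(shared_key, challenge, sign_challenge(os.urandom(32), challenge))
```

Note that the key itself never crosses the wire; only the nonce and the MAC do, which is what makes this usable as an inference of trust.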
Readers will appreciate that in accordance with embodiments of the present disclosure, any techniques for proving access to the same encryption key may be utilized, including using zero-knowledge proof techniques or zero-knowledge protocols. Alternatively, the encryption key could also be used as part of establishing secure communications between the storage systems508,526, such that all communications between the storage systems508,526are encrypted using that same encryption key. For further explanation,FIG.5Csets forth a flow chart illustrating an additional example method of replicating data using inferred trust in accordance with some embodiments of the present disclosure. The example method depicted inFIG.5Cis similar to the example methods depicted inFIGS.5A and5B, as the example method depicted inFIG.5Calso includes receiving510data506encrypted using a first encryption key, decrypting512the encrypted data506using the first encryption key, encrypting516the decrypted data514using a second encryption key, storing520the data518encrypted using the second encryption key, sending522the data518from the first storage system508to the second storage system526, and servicing524an I/O operation directed to the data. The example method depicted inFIG.5Calso includes storing532, on the second storage system526, the data encrypted using a third encryption key. In such an example, the third encryption key may be utilized by and known only by the second storage system526, such that gaining access to any of the other encryption keys will not enable access to the data that is stored on the second storage system526. In such an example, the data that was received from the first storage system508may be decrypted, if needed, and subsequently encrypted using the third encryption key. Alternatively, the data as received from the first storage system508may be encrypted with the third encryption key and stored532within the second storage system526. 
In the example method depicted inFIG.5C, servicing524, by the second storage system526, the I/O operation directed to the data can include sending535, from the second storage system526to a computing device533associated with a request to read the data, the data506encrypted using the first encryption key. Readers will appreciate that the data may first need to be decrypted (with the third encryption key or second encryption key, as appropriate) prior to being encrypted with the first encryption key and sent534to the computing device533. Although the computing device502that initially caused the data to be stored on the first storage system508is depicted as being distinct from the computing device533that reads the data from the second storage system526, in other embodiments, the same computing device may cause the data to be stored on the first storage system508and to be read from the second storage system526. Although the examples described above relate to embodiments where the second storage system526services524an I/O operation that is directed to the data, in other embodiments the first storage system508may service524an I/O operation that is directed to the data in much the same way. For example, servicing524, by the first storage system508, an I/O operation directed to the data may be carried out in a variety of ways as described above, potentially in different ways for different types of I/O operations and in different ways in dependence upon which particular entity issued the I/O operation. For example, the first storage system508may receive a read operation from an external computing device such as computing device502that is depicted inFIG.5A. Prior to sending the data to the external computing device502, however, the data may be encrypted using an encryption key (e.g., the first encryption key) that is known to the external computing device502.
For further explanation,FIG.6Asets forth a flow chart illustrating an example method of restoring a storage system from a replication target in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6Aillustrates an example in which data is replicated between a first storage system608and a second storage system622, although readers will appreciate that in other embodiments replication may be carried out between more than two storage systems. Each of the storage systems608,622depicted inFIG.6Amay be similar to the storage systems described above, and may include combinations of the components described above or variants of the components described above. The example method depicted inFIG.6Aincludes receiving610, by a first storage system608from a computing device602, data606to be stored on the first storage system608. The data606that is to be stored on the first storage system608may be received, for example, as part of a request604to write the data606. As part of an effort to protect such data, however, the computing device602may be configured to encrypt the data using a first data encryption key prior to sending the data to the first storage system608. As such, even if data communications between the computing device602and the first storage system608were intercepted, snooped, or otherwise compromised, the data itself could not be accessed without the first encryption key. In the example method depicted inFIG.6A, the data may be encrypted using any of the techniques described above. For example, each write operation may be associated with a different initialization vector, where the initialization vector is based on the logical offset of a block that data is to be written to, or some other factor not related to the content of the write itself.
In such a way, two writes of identical unencrypted content do not generally result in identical encrypted content, which is useful to avoid security issues from certain kinds of pattern analysis, such as making inferences from noticing that two separately stored blocks are identical. Alternatively, instead of using an initialization vector, the key itself could be altered to achieve the same end result, for example, by modifying the encryption key to be the output of applying an XOR operation that takes the logical offset of a block and the encryption key as inputs. Readers will appreciate that other techniques may be utilized for varying the encryption key in a deterministic way. The example method depicted inFIG.6Aillustrates an embodiment in which the first storage system608receives610data606encrypted using a first encryption key from a computing device602as part of a request604to write data that is issued by the computing device602. Readers will appreciate that, in other embodiments, the first storage system608may receive610data606that has been encrypted using a first encryption key from the computing device602outside of the context of a request604to write data that is issued by the computing device602(e.g., the storage system or some other intermediary may be configured to poll the computing device602for data). The example method depicted inFIG.6Aalso includes reducing612, by the first storage system608, the data606using one or more data reduction techniques. Reducing612the data606using one or more data reduction techniques may be carried out, for example, by the first storage system608deduplicating the data606against other data stored in the first storage system608, by the first storage system608deduplicating the data606against data that is stored in other storage systems that (along with the first storage system608) are used as a deduplication pool, or in some other way to reduce the amount of duplicated data that is retained.
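The per-block initialization-vector scheme and the XOR-based key-variation alternative described above can be sketched as follows. The cipher here is a toy counter-mode construction built from SHA-256 so the example is self-contained; it is illustrative only, and a real storage system would use a vetted cipher such as AES-XTS or AES-GCM:

```python
import hashlib

def keystream_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """Toy counter-mode cipher: XOR the plaintext with a SHA-256-derived
    keystream. Applying it twice with the same key/IV decrypts."""
    stream = bytearray()
    for counter in range((len(plaintext) + 31) // 32):
        stream.extend(hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest())
    return bytes(p ^ k for p, k in zip(plaintext, stream))

def iv_for_block(logical_offset: int) -> bytes:
    """Derive the IV from the logical block offset, not from the data,
    so identical plaintext at different offsets encrypts differently."""
    return logical_offset.to_bytes(16, "big")

def key_for_block(key: bytes, logical_offset: int) -> bytes:
    """Alternative from the passage: vary the key itself by XOR-ing it
    with the logical offset instead of using an IV."""
    offset_bytes = logical_offset.to_bytes(len(key), "big")
    return bytes(k ^ o for k, o in zip(key, offset_bytes))

key = b"\x01" * 32
block = b"identical plaintext block content"

# The same content written at two offsets yields different ciphertexts,
# defeating the pattern analysis described above.
c0 = keystream_encrypt(key, iv_for_block(0), block)
c1 = keystream_encrypt(key, iv_for_block(1), block)
assert c0 != c1

# Decryption reuses the deterministically derived IV.
assert keystream_encrypt(key, iv_for_block(0), c0) == block
```

Because the IV (or varied key) is computed from the logical offset, a second storage system can re-derive it from known information without it ever being transmitted, as noted earlier in the discussion ofFIG.5B.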
Likewise, reducing612the data606using one or more data reduction techniques may be carried out by compressing the data606such that the non-duplicated data that remains after deduplicating the data606gets compressed using one or more compression algorithms. Through the use of such data reduction techniques, including combinations of multiple data reduction techniques, reduced data614may be created, where the reduced data614can be embodied as the resultant data that is produced by applying the data reduction techniques to the data606that was received610by the first storage system608. Readers will appreciate that although the example method depicted inFIG.6Arelates to an embodiment where the first storage system608itself performs the data reduction techniques to produce the reduced data614, in other embodiments other computing devices may assist in the process of applying data reduction techniques to the data606. The example method depicted inFIG.6Aalso includes sending616, from the first storage system608to the second storage system622, the reduced data618. In the example method depicted inFIG.6A, the reduced data618that is transmitted from the first storage system608to the second storage system622is encrypted. The reduced data618that is transmitted from the first storage system608to the second storage system622may be encrypted, for example, using an encryption key that the first storage system608uses to encrypt data that is stored on the first storage system608, where the encryption key that the first storage system608uses to encrypt data that is stored on the first storage system608is different than an encryption key that was utilized to encrypt data that was sent from the computing device602to the first storage system608. 
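The deduplicate-then-compress pipeline described above can be sketched as a content-addressed block store: duplicate blocks are detected by content hash and stored once, and only the unique blocks are compressed. The function names and fixed-size-block assumption are illustrative, not from the disclosure:

```python
import hashlib
import zlib

def reduce_blocks(blocks, store=None):
    """Deduplicate blocks by content hash, then compress the unique
    blocks. Returns (manifest, store): the manifest holds one hash per
    logical block, the store maps hash -> compressed unique block."""
    store = {} if store is None else store
    manifest = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # new content: compress and keep it
            store[digest] = zlib.compress(block)
        manifest.append(digest)          # a duplicate adds only a reference
    return manifest, store

def restore_blocks(manifest, store):
    """Reverse the reduction: decompress each referenced unique block."""
    return [zlib.decompress(store[digest]) for digest in manifest]

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # third block duplicates the first
manifest, store = reduce_blocks(blocks)
assert len(store) == 2                              # only two unique blocks retained
assert restore_blocks(manifest, store) == blocks    # lossless round trip
```

Passing a shared `store` across calls models the deduplication pool mentioned above, where data is deduplicated against blocks already held by other storage systems.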
Alternatively, the reduced data618that is transmitted from the first storage system608to the second storage system622may be encrypted using an encryption key that the first storage system608uses for transmitting data to the second storage system622, where such an encryption key is different than both: 1) the encryption key that the first storage system608uses to encrypt data that is stored on the first storage system608, and 2) the encryption key that was utilized to encrypt data that was sent from the computing device602to the first storage system608. As such, the potential exposure of any internal encryption key that the first storage system608uses to encrypt data that is stored on the first storage system608can be avoided as such an encryption key is not utilized to encrypt data that is sent from the first storage system608to another storage system or computing device. The example method depicted inFIG.6Aalso includes retrieving620, by the first storage system608from the second storage system622, the reduced data618. The first storage system608may retrieve620the reduced data618from the second storage system622, for example, in response to some data loss on the first storage system608. For example, if one or more computing devices within the first storage system608become unavailable, or data that was stored within the first storage system608becomes unavailable for some other reason, the first storage system608may retrieve620such data from the second storage system622as the second storage system622can essentially operate as a backup appliance for the first storage system608. In this case, the target second storage system622could be, for example, an external file or object store such as a simple NFS server or an object cloud storage service. 
Although encryption keys will eventually be needed to be able to use that stored data, if the original source storage system itself recovers lost data from the target system essentially as a form of recovery from backup, then it may well have the necessary information (or the necessary relationship to a key server) to make sense of the encrypted reduced data that was stored on the second storage system622. In the example method depicted inFIG.6A, the reduced data618that is transmitted from the second storage system622to the first storage system608is encrypted. In such an example, the reduced data618that is transmitted from the second storage system622to the first storage system608may be encrypted using the same encryption key that was utilized when sending616the reduced data618from the first storage system608to the second storage system622, such that the first storage system608receives (upon retrieval) data that is identical to the data that it previously sent to the second storage system622. Readers will appreciate that although the reduced data618that is transmitted from the second storage system622to the first storage system608may be encrypted using the same encryption key that was utilized when sending616the reduced data618from the first storage system608to the second storage system622, different encryption keys may be utilized by the second storage system622after it initially receives the reduced data618from the first storage system608and before it sends the reduced data618back to the first storage system608. 
For example, the second storage system622may make use of its own internal encryption keys, such that after receiving the encrypted reduced data618from the first storage system608, the second storage system622essentially decrypts the encrypted reduced data618received from the first storage system608, encrypts the decrypted reduced data using its own internal encryption key, and stores the reduced data that is encrypted using its own internal encryption key. Likewise, prior to sending the reduced data back to the first storage system608, the second storage system622can decrypt the stored reduced data that is encrypted using its own internal encryption key, encrypt the decrypted reduced data with the encryption key that was utilized by the first storage system608, and transmit the encrypted reduced data618that is encrypted using the encryption key that was utilized by the first storage system608to the first storage system608. For further explanation,FIG.6Bsets forth a flow chart illustrating an additional example method of restoring a storage system from a replication target in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6Bis similar to the example method depicted inFIG.6A, as the example method depicted inFIG.6Balso includes receiving610data606to be stored on the first storage system608, reducing612the data606using one or more data reduction techniques, sending616the reduced data618to the second storage system622, and retrieving620the reduced data618from the second storage system622. In the example method depicted inFIG.6B, the data606that is received by the first storage system608from the computing device602may be encrypted using a first encryption key, as described above. As such, the first storage system608may first decrypt the data606using a first encryption key prior to encrypting the data606with a different encryption key and storing the data within the first storage system608, as described above. 
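The decrypt-and-re-encrypt flow at the replication target described above, and its reversal when the data is sent back, can be sketched as follows. The XOR-keystream cipher is again a self-contained toy standing in for a real cipher, and the key names are illustrative:

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (XOR with a SHA-256 keystream); applying it
    twice with the same key recovers the input."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

def rekey(data: bytes, old_key: bytes, new_key: bytes) -> bytes:
    """Decrypt with the key used on the wire, then re-encrypt with the
    target system's own internal key before persisting."""
    return xor_keystream(new_key, xor_keystream(old_key, data))

transfer_key = b"t" * 32   # key used between the two storage systems
internal_key = b"i" * 32   # key known only to the second storage system
plaintext = b"reduced data sent from the first storage system"

on_the_wire = xor_keystream(transfer_key, plaintext)     # as sent616
at_rest = rekey(on_the_wire, transfer_key, internal_key) # as stored on the target

# Returning the data reverses the process: internal key off, transfer key on,
# so the first storage system receives exactly what it originally sent.
returned = rekey(at_rest, internal_key, transfer_key)
assert returned == on_the_wire
assert xor_keystream(transfer_key, returned) == plaintext
```

The point of the two `rekey` calls is that the internal key never leaves the target system, while the bytes exchanged on the wire are identical in both directions.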
The example method depicted inFIG.6Balso includes sending624, from the first storage system608to a computing device602associated with a request to read the data, the data626encrypted using the first encryption key. The first storage system608may send624the data626encrypted using the first encryption key to the computing device602in response to receiving a read operation from the computing device602. In such an example, because the data may be stored on the first storage system608in an encrypted form using an encryption key that is different than the first encryption key, the first storage system608may decrypt the data as stored on the first storage system608, re-encrypt the data using the first encryption key, and subsequently send624the data626encrypted using the first encryption key to the computing device602. Readers will appreciate that read operations that are received from other computing devices may be serviced in a similar manner. For further explanation,FIG.6Csets forth a flow chart illustrating an additional example method of restoring a storage system from a replication target in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6Cis similar to the example methods depicted inFIGS.6A and6B, as the example method depicted inFIG.6Calso includes receiving610data606to be stored on the first storage system608, reducing612the data606using one or more data reduction techniques, sending616the reduced data618to the second storage system622, and retrieving620the reduced data618from the second storage system622. The example method depicted inFIG.6Calso includes encrypting628the reduced data614using a second encryption key. The reduced data614may be encrypted628using a second encryption key, for example, in order to store the encrypted reduced data618within the first storage system608.
In such an embodiment, the second encryption key may essentially serve as an internal encryption key that the first storage system608utilizes when storing data. As such, even if data communications between the computing device602and the first storage system608were intercepted, snooped, or otherwise compromised and the first encryption key that was used to exchange data between the computing device602and the first storage system608was obtained, the data stored within the first storage system608still could not be accessed without the second encryption key. In an alternative embodiment, the reduced data614may be encrypted628using a second encryption key, for example, prior to sending the encrypted reduced data618to the second storage system622. In such an embodiment, the second encryption key may essentially serve as an encryption key that the first storage system608utilizes when exchanging data with the second storage system. As such, even if data communications between the computing device602and the first storage system608were intercepted, snooped, or otherwise compromised and the first encryption key that was used to exchange data between the computing device602and the first storage system608was obtained, data exchanged between the first storage system608and the second storage system622still could not be accessed without the second encryption key. The example method depicted inFIG.6Calso includes storing630, within the first storage system608, the reduced data614encrypted with a third encryption key. In such an embodiment, a third encryption key may essentially serve as an internal encryption key that the first storage system608utilizes when storing data.
As such, even if data communications between the computing device602and the first storage system608were intercepted, snooped, or otherwise compromised and the first encryption key that was used to exchange data between the computing device602and the first storage system608was obtained, or even if data communications between the first storage system608and the second storage system622were intercepted, snooped, or otherwise compromised and the second encryption key that was used to exchange data between the first storage system608and the second storage system622was obtained, the data stored within the first storage system608still could not be accessed without the third encryption key. Readers will appreciate that in embodiments where data that is exchanged between the first storage system608and the second storage system622is encrypted using a second encryption key and data that is stored630within the first storage system608is encrypted with a third encryption key, various decrypting and re-encrypting steps using different encryption keys may be required to carry out some of the steps described above. For example, if the first storage system608retrieved620the reduced data618from the second storage system622as part of a recovery effort, the reduced data618that was retrieved620from the second storage system622may be encrypted using a second encryption key. As such, the first storage system608may subsequently need to decrypt the encrypted reduced data618using the second encryption key and then encrypt the reduced data using the third encryption key prior to storing630the reduced data614encrypted with a third encryption key within the first storage system608. In some of the embodiments described above, the second storage system622that has received encrypted reduced data618from the first storage system608may not have access to the encryption keys necessary to decrypt the encrypted reduced data618.
In such an example, the second storage system622may store the encrypted reduced data618in the form that it was received. In an alternative embodiment, the encrypted reduced data618could be encrypted using an internal encryption key that is utilized by the second storage system622, without being first decrypted such that the data uses cascading encryption. In either example, the second storage system622will effectively serve as a resource for storing a second copy of the encrypted reduced data618, with no ability to access an unencrypted version of the data. For further explanation,FIG.6Dsets forth a flow chart illustrating an example method of creating a replica of a storage system in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6Dis similar to the example methods depicted inFIGS.6A-6C, as the example method depicted inFIG.6Dalso includes: receiving610, by a first storage system608from a computing device602, data to be stored on the first storage system608; reducing612, by the first storage system608, the data606using one or more data reduction techniques; and sending616, from the first storage system to the second storage system, the reduced data618. The example method depicted inFIG.6Dalso includes sending640, from the second storage system622to a third storage system642, the reduced data (depicted in this example as encrypted reduced data618). The reduced data may be sent640to the third storage system642, for example, in response to determining that the first storage system608has become unavailable. As such, upon having received and stored the reduced data, the third storage system642may effectively serve as a replacement for the first storage system608, at least with respect to the data606that was originally sent to the first storage system608by the computing device602.
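The cascading-encryption alternative described above, in which the target layers its own encryption over ciphertext it cannot decrypt, can be sketched as follows. The toy XOR-keystream cipher is illustrative only and stands in for a real cipher:

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (XOR with a SHA-256 keystream); applying it
    twice with the same key recovers the input."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

first_system_key = b"f" * 32     # key the second system never learns
target_internal_key = b"s" * 32  # the second system's own internal key
plaintext = b"encrypted reduced data from the first storage system"

# Ciphertext arrives from the first system; the target adds its own
# encryption layer on top without decrypting first (cascading encryption).
received = xor_keystream(first_system_key, plaintext)
stored = xor_keystream(target_internal_key, received)

# Recovering the plaintext requires both keys, with the layers removed
# in reverse order of how they were applied.
assert xor_keystream(first_system_key,
                     xor_keystream(target_internal_key, stored)) == plaintext
```

The target thus holds only doubly-encrypted bytes: losing its internal key alone, or the first system's key alone, still does not expose the data.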
In such an example, reads and writes associated with the data606that was originally sent to the first storage system608by the computing device602may be serviced by the third storage system642after the third storage system642has received and stored the reduced data618. In the example method depicted inFIG.6D, the data606that is received by the first storage system608from the computing device602may be encrypted using a first encryption key and the data618that is sent from the first storage system608to the second storage system622may be encrypted using a second encryption key, as is described above. In some embodiments, the data618that is sent640from the second storage system622to the third storage system642may also be encrypted using second encryption key, such that the data that was sent from the first storage system608to the second storage system622is essentially forwarded from the second storage system622to the third storage system642without any decrypting and re-encrypting. Alternatively, the data618that is sent640from the second storage system622to the third storage system642may be encrypted using a third encryption key. In such an example, the second storage system622could decrypt the data that was received from the first storage system608, re-encrypt the data using a new key (e.g., the third encryption key), and send640the re-encrypted data to the third storage system642. For further explanation,FIG.6Esets forth a flow chart illustrating an additional example method of creating a replica of a storage system in accordance with some embodiments of the present disclosure.
The example method depicted inFIG.6Eis similar to the example methods depicted inFIGS.6A-6D, as the example method depicted inFIG.6Ealso includes: receiving610, by a first storage system608from a computing device602, data to be stored on the first storage system608; reducing612, by the first storage system608, the data606using one or more data reduction techniques; and sending616, from the first storage system to the second storage system, the reduced data618. The example method depicted inFIG.6Ealso includes sending644, from the third storage system642to a computing device602associated with a request604to read the data606, the data encrypted using the first encryption key, which is depicted here as encrypted data646. Readers will appreciate that the data may first need to be decrypted by the third storage system642(with the third encryption key or second encryption key, as appropriate) prior to being encrypted with the first encryption key and sent644to a computing device602that is associated with a request604to read the data606. By encrypting the data606with the first encryption key, the data that is returned via the read request matches the data that was originally written to the first storage system608. In such an example, the second storage system622or some other entity may need to initially verify that the third storage system642has access to the first encryption key, as well as any initialization vector or similar information, to ensure that the third storage system642can properly service I/O operations directed to the data, including sending data back to a computing device602that matches the data as written. Verifying that the third storage system642has access to the first encryption key may be carried out, for example, by having the third storage system642sign something with the first encryption key, by obtaining a signed certificate from the key server authorizing the third storage system642, or in some other way. 
The example method depicted inFIG.6Ealso includes detecting648that the first storage system608has become unavailable. Detecting648that the first storage system608has become unavailable may be carried out, for example, through the use of a heartbeat mechanism that periodically sends messages to the first storage system608and detects a failure to receive a response from the first storage system608, by determining that one or more I/O operations directed to the first storage system608have failed to complete, by receiving an error message or similar notification from the first storage system608itself, or in some other way. Although the example depicted inFIG.6Erelates to an embodiment where the second storage system622detects648that the first storage system608has become unavailable, in other embodiments other entities may detect648that the first storage system608has become unavailable. For example, one or more monitoring modules that are executing in a cloud computing environment may detect648that the first storage system608has become unavailable, one or more monitoring modules that are executing on physical hardware that is located in the same data center as the first storage system608may detect648that the first storage system608has become unavailable, and so on. In the example method depicted inFIG.6E, the third storage system642may be created in response to detecting that the first storage system608has become unavailable. The third storage system642may be embodied, for example, as a cloud-based storage system as described above such that creating the third storage system642can be carried out by provisioning all the cloud computing resources that collectively form the cloud-based storage system, as is also described above. In other embodiments, rather than creating a storage system, one or more existing storage systems may be evaluated to identify the storage system that should be utilized to support the dataset that was previously available on the first storage system608.
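The heartbeat-based detection described above can be sketched as a monitor that declares a storage system unavailable after several consecutive missed responses, so that a single dropped message does not trigger failover. The class name and threshold are illustrative assumptions, not from the disclosure:

```python
class HeartbeatMonitor:
    """Track heartbeat responses from a storage system and flag it as
    unavailable after a run of consecutive missed responses."""

    def __init__(self, max_missed: int = 3):
        self.max_missed = max_missed
        self.missed = 0

    def record(self, responded: bool) -> None:
        """Record one heartbeat interval: a response resets the count,
        a missed response extends the run."""
        self.missed = 0 if responded else self.missed + 1

    def unavailable(self) -> bool:
        """The system is presumed unavailable once the run of missed
        heartbeats reaches the configured threshold."""
        return self.missed >= self.max_missed

monitor = HeartbeatMonitor(max_missed=3)
for responded in (True, True, False, False, False):  # three missed beats in a row
    monitor.record(responded)
assert monitor.unavailable()

monitor.record(True)                                 # one response resets the run
assert not monitor.unavailable()
```

The same monitor could be driven by a cloud-hosted monitoring module or one running in the same data center, as the passage notes; only the source of the `record` calls would differ.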
Determining which storage system, from amongst a plurality of storage systems, should be utilized to support the dataset that was previously available on the first storage system608may be carried out, for example, in dependence upon the amount of available storage or available I/O processing capabilities for each storage system such that those storage systems that are able to support the dataset and I/O operations to such a dataset would be more likely to be selected, in dependence upon the location of each storage system such that storage systems that are more physically proximate to the first storage system608would be more likely to be selected, in dependence upon the characteristics of each storage system such that storage systems that are most similar to the first storage system608would be more likely to be selected, or in other ways that may take many factors into consideration. In such a way, one or more modules (including modules that may be executing in a cloud computing environment) may detect that a first storage system has become unavailable; identify a second storage system that contains data that was stored on the first storage system; identify a replacement storage system; and instruct the second storage system to send, to the replacement storage system, the data that was stored on the first storage system, wherein the data that is sent to the replacement storage system is encrypted. As an alternative to the one or more modules instructing the second storage system to send, to the replacement storage system, the data that was stored on the first storage system, the replacement storage system may be configured or instructed to retrieve such data from the second storage system. In these examples, identifying a replacement storage system can include creating the replacement storage system or alternatively identifying the replacement storage system from amongst a plurality of storage systems using one or more selection criteria.
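The multi-factor selection described above can be sketched as a scoring function over candidate systems: capacity and I/O headroom are hard requirements, while proximity to the failed system and similarity of characteristics add preference weight. The field names and weights are illustrative assumptions, not from the disclosure:

```python
def score_candidate(candidate, dataset_size, required_iops, source_location, source_model):
    """Score a candidate replacement storage system; weights are illustrative."""
    if (candidate["free_capacity"] < dataset_size
            or candidate["spare_iops"] < required_iops):
        return float("-inf")                               # cannot host the dataset at all
    score = candidate["free_capacity"] / dataset_size      # capacity headroom
    score += candidate["spare_iops"] / required_iops       # I/O headroom
    if candidate["location"] == source_location:
        score += 10.0                                      # prefer physically proximate systems
    if candidate["model"] == source_model:
        score += 5.0                                       # prefer similar characteristics
    return score

def pick_replacement(candidates, dataset_size, required_iops, source_location, source_model):
    """Choose the highest-scoring candidate as the replacement system."""
    return max(candidates, key=lambda c: score_candidate(
        c, dataset_size, required_iops, source_location, source_model))

candidates = [
    {"name": "A", "free_capacity": 50, "spare_iops": 10_000, "location": "dc1", "model": "m1"},
    {"name": "B", "free_capacity": 60, "spare_iops": 12_000, "location": "dc2", "model": "m2"},
    {"name": "C", "free_capacity": 5,  "spare_iops": 90_000, "location": "dc1", "model": "m1"},
]
# C lacks capacity; A wins over B on proximity and similarity to the failed system.
best = pick_replacement(candidates, dataset_size=10, required_iops=5_000,
                        source_location="dc1", source_model="m1")
assert best["name"] == "A"
```

A real orchestrator would likely weight these factors from policy, but the structure (hard feasibility check, then weighted preferences) matches the selection logic outlined above.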
In the example method depicted inFIG.6E, the encrypted reduced data618that is sent from the first storage system608to the second storage system622may be encrypted with a different encryption key than is used to encrypt the encrypted reduced data that is stored on the second storage system622. As such, the second storage system622may decrypt the data that is received from the first storage system608and re-encrypt the data prior to storing the data within the second storage system622. Likewise, the second storage system622may send data to the third storage system642using an encryption key that is different than any of the encryption keys that were used in any of the other data transfers described above (e.g., data transfer from the computing device to the first storage system, data transfer from the first storage system to the second storage system), depicted here as encrypted reduced data649. Although many of the embodiments described above relate to embodiments where data reduction is preserved, frequently by the first storage system608performing one or more data reduction techniques to data that was received from a host computing device and then sending the encrypted reduced data618to the second storage system622, in other embodiments the first storage system608may send encrypted data that has not been reduced to the second storage system622. In such an example, the second storage system622may then apply data reduction techniques itself, which may or may not be preserved when sending the data to a third storage system. Readers will appreciate that combinations of such embodiments (e.g., the first storage system608sends encrypted unreduced data to the second storage system622and the second storage system622subsequently sends encrypted reduced data to a third storage system) are within the scope of the present disclosure. 
For further explanation,FIG.6Fsets forth a flow chart illustrating an additional example method of creating a replica of a storage system in accordance with some embodiments of the present disclosure. The example method depicted inFIG.6Fis similar to the example methods depicted inFIGS.6A-6E, as the example method depicted inFIG.6Falso includes: receiving610, by a first storage system608from a computing device602, data to be stored on the first storage system608; and reducing612, by the first storage system608, the data606using one or more data reduction techniques. In the example method depicted inFIG.6F, rather than sending the reduced data to the second storage system, the first storage system sends650encrypted data656to the second storage system622, where the encrypted data656does not preserve the data reduction techniques that the first storage system608applied to the data606. In such an example, the first storage system608may still reduce612the data606using one or more data reduction techniques, however, to reduce the amount of data that is stored on the first storage system608. In the example method depicted inFIG.6F, the second storage system622also reduces652the data that was received from the first storage system608using one or more data reduction techniques. For example, the second storage system622may deduplicate the data against data that is stored on the second storage system622, the second storage system622may compress the data, or the second storage system622may perform any of the data reduction techniques described above. Readers will appreciate that prior to reducing652the data using one or more data reduction techniques, the second storage system622may need to decrypt the encrypted data656that was received from the first storage system608. After reducing652such decrypted data, the second storage system622may encrypt the reduced data prior to persistently storing the reduced data within the second storage system622. 
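The second storage system's reduce step on already-decrypted data might look like the following sketch, using fixed-size chunk-hash deduplication plus zlib compression as the reduction techniques. The chunk size, structures, and function names are illustrative assumptions, not the disclosure's method:

```python
import hashlib
import zlib

def reduce_and_store(plaintext: bytes, store: dict, chunk_size: int = 8):
    """Deduplicate fixed-size chunks against what the store already holds,
    compressing each new chunk; returns the chunk references."""
    refs = []
    for i in range(0, len(plaintext), chunk_size):
        chunk = plaintext[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:                  # dedup miss: store a new chunk
            store[h] = zlib.compress(chunk)
        refs.append(h)                      # dedup hit: reference existing data
    return refs

def restore(refs, store) -> bytes:
    return b"".join(zlib.decompress(store[h]) for h in refs)
```

In the flow of FIG.6F, the output of such a reduction would then be encrypted with the second storage system's own key before being persisted.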
The example method depicted inFIG.6Falso includes sending654, from the second storage system622to the third storage system642, encrypted data658. The encrypted data658that is depicted inFIG.6Fhas not been reduced, although in other embodiments the second storage system622may send encrypted reduced data to the third storage system642. In this example, the encrypted data658may be encrypted with the same key that the second storage system622uses to encrypt data that is stored within the second storage system, or with a different encryption key. In fact, the encryption key that is used to create the encrypted data658that is sent654to the third storage system642may be different than any encryption key used by the computing device602or the first storage system608. In the example method depicted inFIG.6F, the third storage system642reduces660the data that was received from the second storage system622using one or more data reduction techniques. For example, the third storage system642may deduplicate the data against data that is stored on the third storage system642, the third storage system642may compress the data, or the third storage system642may perform any of the data reduction techniques described above. Readers will appreciate that prior to reducing660the data using one or more data reduction techniques, third storage system642may need to decrypt the encrypted data658that was received from the second storage system622. After reducing660such decrypted data, third storage system642may encrypt the reduced data prior to persistently storing the reduced data within the third storage system642. Readers will appreciate that in the examples described above, although a first encryption key, a second encryption key, and a third encryption key are described, each of the three encryption keys may be embodied as a combination of an encryption key and one or more initialization vectors, as described above. 
Likewise, each of the three encryption keys may be embodied as an encryption key that has been modified in some deterministic way, such as using an XOR operation and logical offset, as described above. Furthermore, each of the three encryption keys (including any input used to generate or modify an encryption key) may be retrieved from an external resource such as, for example, a key server. Combinations of such embodiments may also be utilized in accordance with some embodiments of the present disclosure. Readers will further appreciate that in the examples described above, where data is encrypted with a particular key, such encryption may be separate from any communications level encryption that is used in an effort to facilitate secure communications between the systems. That is, the encryption may be done regardless of whether or not secure data communications techniques will be utilized. Readers will appreciate that in security-conscious environments, a storage system may not itself permanently store encryption keys, but instead an external key server of some kind will store the encryption keys. In such cases, the storage system may store, internally, some kind of key identifier that can be communicated to an external key server. As a result, the key/initialization-vector combination stored along with one of the references described above may instead be a key identifier combined with an initialization vector. If key identifiers are large, the storage system may instead store a list of key identifiers indexed by some small value (such as a simple integer index) along with these references rather than a complete key identifier.

FIG.7Ais a diagram of a storage system with multiple tenant datasets that supports end-to-end encryption in accordance with some embodiments of the present disclosure. The example ofFIG.7Aincludes a host702coupled to a storage system704. The storage system may be implemented with components similar to those described above.
The storage system704includes a first tenant dataset706and a second tenant dataset708. The term tenant dataset refers to a dataset that is generally associated with a defined set of one or more applications with various levels of accessibility being prohibited for any applications not included in that defined set. Such a prohibition may be enforced via an explicit policy or, in other embodiments, due to each tenant having separate structures, tables and other metadata so that no sharing would occur. InFIG.7A, the host702may execute two different applications: a first host application702A and a second host application702B. The first host application702A may be associated with the first tenant dataset706while the second host application702B may be associated with only the second tenant dataset708. In such an example, the first host application702A may be authorized to access the first tenant dataset but prohibited from accessing the second tenant dataset708. In addition to restrictions on access, different tenant datasets may also be restricted from data leakage within the storage system704itself. That is, data from one tenant dataset may be restricted from various combinations of data and metadata with data from other tenant datasets. In one example, deduplication between such tenant datasets may be completely prohibited such that no data leakage occurs between the datasets and in other examples deduplication may be restricted so that some data leakage may occur but tenants are restricted from significant knowledge regarding the other tenants' datasets. In a system that supports end-to-end encryption (like those described above), various techniques may be employed to support encryption within the storage system and support multi-tenancy. 
For further explanation, therefore,FIG.7Bsets forth a flow chart illustrating an example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which deduplication is prohibited in accordance with some embodiments of the present disclosure. The method ofFIG.7Bincludes performing (710) deduplication on a first tenant dataset706. The first tenant dataset706includes data encrypted using a first storage system encryption key714. The term ‘storage system encryption key’ as used here refers to an encryption key used by the storage system for encryption of data stored on the storage system. Such storage system encryption keys are in contrast to host encryption keys which are keys utilized by a host application to encrypt data prior to transmitting that data to a storage system for storage. Readers of skill in the art will recognize that in some embodiments, the host encryption key and the storage system encryption key may be the same or may be accessible from a key manager using the same identifier. All encryption keys referred to here may also include an initialization vector, a seed, salt, another method of setting an initial state of an encryption state engine, or identifiers of the same. In the method ofFIG.7B, performing (710) deduplication on a first tenant dataset is carried out only within the first tenant dataset. That is, data of the first tenant dataset is not compared to data of another tenant dataset. Such deduplication may be in-line deduplication in which data of the first tenant dataset is compared to data to be written to the first tenant dataset. In other embodiments, the deduplication may be carried out in-place dynamically, or upon a predefined schedule, comparing data within the tenant dataset itself. The method ofFIG.7Balso includes performing712deduplication on a second tenant dataset708. The second tenant dataset708includes data encrypted using a second storage system encryption key716.
In the method ofFIG.7Bdeduplication712and710is prohibited from being performed between the first and second tenant datasets. The term ‘prohibited’ refers to a general policy applied to the tenant datasets rather than an action carried out by the storage system. To enforce the policy, the storage system is configured in a manner so as not to perform deduplication between the first and second datasets. Deduplication performed on the second tenant dataset occurs only with respect to data of the second tenant dataset. There is no data leakage between the two datasets. Such deduplication can be in-line or in-place. Datasets within a storage system are associated with metadata of a variety of types. In some embodiments, the metadata may be included in the dataset as such, and in others, the metadata may be separate and utilized primarily by the storage system itself. Deduplication of metadata is often carried out, but for multi-tenancy that requires no data leakage, metadata associated with one tenant dataset may be prohibited from being deduplicated with metadata from another tenant dataset. To that end,FIG.7Csets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which deduplication is prohibited in accordance with some embodiments of the present disclosure. The method ofFIG.7Cis similar to the method ofFIG.7Bin that the method ofFIG.7Calso includes performing710,712deduplication on a first tenant dataset706and a second tenant dataset708, where such deduplication is prohibited between the datasets. 
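The prohibition on cross-tenant deduplication amounts to keeping a separate deduplication table per tenant, so that matches are only ever found within a single tenant's dataset. A hypothetical sketch (class, method, and field names are illustrative):

```python
import hashlib

class TenantDedupStore:
    def __init__(self):
        self._blocks = []   # physical block storage
        self._tables = {}   # tenant -> {content hash: block location}

    def write(self, tenant: str, block: bytes) -> int:
        """Deduplicate only within the named tenant's dataset, so that no
        data leakage occurs between tenants."""
        table = self._tables.setdefault(tenant, {})
        h = hashlib.sha256(block).hexdigest()
        if h in table:
            return table[h]          # dedup hit, within this tenant only
        loc = len(self._blocks)
        self._blocks.append(block)   # identical data in another tenant's
        table[h] = loc               # dataset is stored again, separately
        return loc
```

The same partitioning applies to the metadata deduplication discussed next: keeping per-tenant tables for metadata prevents one tenant's metadata from ever referencing another's.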
The method ofFIG.7Cdiffers from the method ofFIG.7B, however, in that in the method ofFIG.7C, performing710deduplication on the first tenant dataset706includes performing718deduplication on metadata associated with the first tenant dataset and performing712deduplication on the second tenant dataset708includes performing720deduplication on metadata associated with the second tenant dataset. In the method ofFIG.7C, deduplication is prohibited from being performed between the metadata of the first and the second datasets. For further explanation,FIG.7Dsets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which deduplication is prohibited in accordance with some embodiments of the present disclosure. The method ofFIG.7Dis similar to the method ofFIG.7Bin that the method ofFIG.7Dalso includes performing710,712deduplication on a first tenant dataset706and a second tenant dataset708, where such deduplication is prohibited between the datasets. The method ofFIG.7Ddiffers from that ofFIG.7B, however, in that the method ofFIG.7Dincludes receiving722a write request of data to be stored in the first tenant dataset, where the data is encrypted by the host with a first host encryption key. In some embodiments, the write request may include a volume identifier and offset as well as the data to be stored. The storage system, using various metadata mapping tables, may determine from the volume identifier the tenant dataset within which the data is to be stored. The method ofFIG.7Dalso includes storing724the data in the first tenant dataset706. 
In the method ofFIG.7D, storing724the data in the first tenant dataset706includes decrypting726the data utilizing the first host encryption key, performing728data reduction on the unencrypted data; encrypting730the data utilizing the first storage system encryption key; and storing732the data encrypted with the first storage system encryption key in the first tenant dataset. The storage system704may obtain the first host encryption key in a variety of manners such as by accessing a key manager as described above, by generating the key through a predefined algorithm utilizing tenant dataset identifiers, or in other ways. As mentioned above, the key may also include an IV, salt, seed, or may identify an algorithm to calculate an initial state for a decryption algorithm. Once the key is obtained, the storage system may decrypt726the data utilizing the first host encryption key. In the method ofFIG.7D, in decrypting726the data, the storage system may generate re-encryption information for use in re-encrypting the data for return to the host702upon a later read of that data. Such re-encryption information may include one or more key identifiers for the encryption keys to use to re-encrypt the data as well as per-block key variations such as initialization vectors or other methods to re-encrypt the data. In some embodiments, the re-encryption information includes the first host encryption key (or an identifier of the first host encryption key) and an initialization vector for use in re-encrypting the data. In some embodiments, the re-encryption information specifies a method of calculating the first host encryption key and an initialization vector for use in re-encrypting the data. In the method ofFIG.7D, performing728data reduction on the unencrypted data may include a variety of data reduction techniques. For example, data compression and/or data compaction may be carried out. In-line deduplication710may be performed on the unencrypted data as well.
That is, when performing728data reduction on the unencrypted data, the storage system may also compare the unencrypted data to data stored in the first tenant dataset706. If data in the first tenant dataset matches the unencrypted data, the storage system's metadata may be updated with references to the matching data without processing the unencrypted data further—that is, without encrypting730and otherwise storing732the unencrypted data. Upon receiving734a write request of data to be stored in the second tenant dataset, where such data is encrypted by the host702with a second host encryption key, the storage system704in the example ofFIG.7Dmay carry out similar steps to those described above with respect to processing a write request of data to be stored in the first tenant dataset. For example, the method ofFIG.7Dalso includes storing736the data in the second tenant dataset, which in turn may include: decrypting738the data utilizing the second host encryption key (which may include generating738A re-encryption information describing details of re-encrypting data for return to the host upon a later read request of the data), performing740data reduction on the unencrypted data; encrypting742the data utilizing the second storage system encryption key; and storing744the data encrypted with the second storage system encryption key in the second tenant dataset. Readers of skill in the art will recognize that although the write requests for both tenant datasets are described here as being received from a single host702, such write requests may be received from any number of different hosts. Further, the term ‘host’ here may refer to either a host application or a host computing platform that supports execution of such a host application. In fact, throughout the remainder of this specification, when a single host is referenced, it is noted that multiple hosts may also be employed and such hosts may also be synonymous with host applications.
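The write path enumerated above (decrypt with the host key, record re-encryption information, reduce, encrypt with the storage system key, store) can be sketched end to end. The XOR keystream below is a toy stand-in for a real cipher such as AES-CTR, zlib compression stands in for the unspecified reduction techniques, and the re-encryption-information layout is a hypothetical example:

```python
import hashlib
import zlib

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Toy stream cipher standing in for e.g. AES-CTR; illustration only.
    out = bytearray()
    i = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + iv + i.to_bytes(8, "big")).digest())
        i += 1
    return bytes(a ^ b for a, b in zip(data, out))

def handle_write(host_ct, host_key_id, host_key, host_iv, sys_key, sys_iv):
    plain = keystream_xor(host_key, host_iv, host_ct)        # decrypt host encryption
    reencrypt_info = {"key_id": host_key_id, "iv": host_iv}  # kept for later reads
    reduced = zlib.compress(plain)                           # data reduction
    stored = keystream_xor(sys_key, sys_iv, reduced)         # storage system encryption
    return stored, reencrypt_info
```

Storing only a key identifier in the re-encryption information, rather than the host key itself, matches the earlier observation that security-conscious deployments keep actual keys on an external key server.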
For further explanation,FIG.7Esets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which deduplication is prohibited in accordance with some embodiments of the present disclosure. The method ofFIG.7Eis similar to the method ofFIG.7Bin that the method ofFIG.7Ealso includes performing710,712deduplication on a first tenant dataset706and a second tenant dataset708, where such deduplication is prohibited between the datasets. The method ofFIG.7Ediffers from the method ofFIG.7B, however, in that the method ofFIG.7Eincludes receiving746a read request for data stored in the first tenant data set706; decrypting748the data from the first tenant data set utilizing the first storage system encryption key; and re-encrypting750the data utilizing the first host encryption key and re-encryption information. Such a read request may include a volume and offset or other identifier of the data. The storage system may utilize the volume and offset along with various mapping tables to determine the data's tenant dataset and a location within storage of the data. The storage system may then decrypt the data utilizing the first storage system encryption key. Along with decrypting the data, the storage system may also decompress the data if the data was previously compressed. Prior to returning the data to the requesting host702, the storage system may re-encrypt the data utilizing the appropriate host encryption key. Such a key may be calculated based on details set forth in re-encryption information generated in a previous decryption and storage of the data, the key itself (and any IV or other deterministic perturbance method) may have been included in the re-encryption information, or an identifier of the key may have been included in the re-encryption information and may be utilized to retrieve a key from a key manager.
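The read path ofFIG.7Emirrors the write path: decrypt with the storage system key, undo the data reduction, then re-encrypt under the host key recovered via the re-encryption information. A sketch under the same toy-cipher assumption as before (the `key_lookup` mapping stands in for a key manager, and all names are illustrative):

```python
import hashlib
import zlib

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Toy stream cipher standing in for e.g. AES-CTR; illustration only.
    out = bytearray()
    i = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + iv + i.to_bytes(8, "big")).digest())
        i += 1
    return bytes(a ^ b for a, b in zip(data, out))

def handle_read(stored, sys_key, sys_iv, reencrypt_info, key_lookup):
    reduced = keystream_xor(sys_key, sys_iv, stored)     # storage-system decrypt
    plain = zlib.decompress(reduced)                     # undo data reduction
    host_key = key_lookup[reencrypt_info["key_id"]]      # e.g. via a key manager
    return keystream_xor(host_key, reencrypt_info["iv"], plain)  # host re-encrypt
```

The host then decrypts the returned data with its own key, so the plaintext never leaves the storage system unencrypted.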
In the example ofFIG.7E, the storage system may carry out similar steps upon a read request for data stored in the second tenant dataset. Such steps may include: receiving752a read request for data stored in the second tenant data set; decrypting754the data from the second tenant data set utilizing the second storage system encryption key; and re-encrypting756the data utilizing the second host encryption key and re-encryption information. One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality.

As mentioned above, some implementations of multi-tenancy may prohibit any data leakage between tenant datasets while others may restrict, but not altogether prohibit, data leakage. To that end,FIG.7Fsets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which a level of deduplication is allowed in accordance with some embodiments of the present disclosure. The method ofFIG.7Fincludes encrypting758a first block774of data using a block encryption key derived from content of the first block of data and storing760the encrypted block of data.
In the method ofFIG.7Fthe first block of data is included in a first tenant dataset706. The term ‘block encryption key’ refers to an encryption key derived from the content of a block of data itself. In one example, the block encryption key may be a secure hash of the data block. The method ofFIG.7Falso includes deriving762, from content of a second block of data, a matching encryption key. In the method ofFIG.7F, the second block of data is included in a second tenant dataset708. The term ‘matching encryption key’ refers to a block encryption key that matches another block encryption key. In the case of a secure hash, for example, the hash of the second block would match the hash of the first block. As such, the blocks of data may be considered duplicates and candidates for deduplication. Because the blocks are for different tenant datasets, however, data leakage is to be reduced if possible. To that end, the method ofFIG.7Falso includes recording764, in metadata, an association of a location of the encrypted block of data and the matching encryption key, encrypting766a first copy770of the metadata with a first tenant dataset encryption key and encrypting768a second copy772of the metadata with a second tenant dataset encryption key. In this way, a host application with the first tenant dataset encryption key may have the ability to decrypt only one copy of the metadata, identify the location of the encrypted block from that metadata, and, because of its inclusion in the metadata, recognize that the block has been deduplicated. The host application with the first tenant dataset encryption key, however, may not identify which other dataset, if any, includes the same data. Further, the host application with the first tenant dataset encryption key is not capable of determining the number of references to the same data. The same is true for a host application with the second tenant dataset encryption key.
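The scheme ofFIG.7Fresembles what is elsewhere called convergent encryption: because the block key is derived from the block's own content, identical blocks in different tenant datasets derive matching keys, and one stored copy can serve both, while each tenant receives its own separately encrypted copy of the metadata. A hypothetical sketch, with SHA-256 standing in for the disclosed secure hash, a toy XOR keystream standing in for a real cipher, and an illustrative metadata layout:

```python
import hashlib
import json

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Toy stream cipher standing in for e.g. AES-CTR; illustration only.
    out = bytearray()
    i = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + iv + i.to_bytes(8, "big")).digest())
        i += 1
    return bytes(a ^ b for a, b in zip(data, out))

def block_key(block: bytes) -> bytes:
    """Block encryption key derived from the block's own content, so a
    matching block derives a matching key regardless of tenant."""
    return hashlib.sha256(block).digest()

def record_dedup(location, key, tenant_keys, iv):
    """Record the (location, matching key) association once, then encrypt
    one copy of that metadata per tenant with each tenant's dataset key."""
    record = json.dumps({"location": location, "key": key.hex()}).encode()
    return {t: keystream_xor(tk, iv, record) for t, tk in tenant_keys.items()}
```

Each tenant can decrypt only its own metadata copy, which reveals the block's location and that it was deduplicated, but not which other dataset holds the matching block.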
In this manner, while a very small amount of knowledge may be inferred about other tenant datasets, the knowledge may not be definitive and does not identify the other datasets that contain the same data block. The deduplication performed here may be carried out in-line upon a write of a block or in-place, dynamically by comparing hashes of blocks between the two datasets.

For further explanation,FIG.7Gsets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which a level of deduplication is allowed in accordance with some embodiments of the present disclosure. The method ofFIG.7Gis similar to that ofFIG.7Fand includes: encrypting758a first block774of data using a block encryption key derived from content of the first block of data, where the first block of data is included in a first tenant dataset706; storing760the encrypted block of data; deriving762, from content of a second block of data, a matching encryption key, where the second block of data is included in a second tenant dataset; recording764, in metadata, an association of a location of the encrypted block of data and the matching encryption key; encrypting766a first copy770of the metadata with a first tenant dataset encryption key; and encrypting768a second copy772of the metadata with a second tenant dataset encryption key. The method ofFIG.7Gdiffers from that ofFIG.7Fin that the method includes receiving776a read request identifying the first block774of data. Such a read request may include a volume and offset or other identifier. The storage system may utilize that volume and offset to determine from system metadata the data's tenant dataset and whether the data has been deduplicated. If the data has not been deduplicated, the metadata may also contain a mapping of a location of data within storage and the data may be read from that location.
Alternatively, each tenant dataset's copy of metadata may include all the hashes for all data blocks of the dataset regardless of whether the data has been deduplicated, where the hash is mapped to a storage location. In such an embodiment, upon each read, the storage system may identify, from the volume and offset, the dataset for the data to be read and decrypt the associated copy of metadata to determine a location of the block of data. That is, the storage system then decrypts778the first copy770of the metadata using the first tenant dataset encryption key and identifies780, from the metadata, the location of the matching block of data. The storage system may then provide the block data as a response to the read request to the requesting host702. As part of such a response, the storage system may encrypt the block of data using a host encryption key as described above. The method ofFIG.7Galso includes similar steps for the second block of data including: receiving782a read request identifying the second block of data, decrypting784the second copy of the metadata using the second tenant dataset encryption key, and identifying786, from the metadata, the location of the matching block of data.

For further explanation,FIG.7Hsets forth a flow chart illustrating another example method of end-to-end encryption in a storage system that supports multiple tenant datasets between which a level of deduplication is allowed in accordance with some embodiments of the present disclosure.
The method ofFIG.7His similar to that ofFIG.7Fand includes: encrypting758a first block774of data using a block encryption key derived from content of the first block of data, where the first block of data is included in a first tenant dataset706; storing760the encrypted block of data; deriving762, from content of a second block of data, a matching encryption key, where the second block of data is included in a second tenant dataset; recording764, in metadata, an association of a location of the encrypted block of data and the matching encryption key; encrypting766a first copy770of the metadata with a first tenant dataset encryption key; and encrypting768a second copy772of the metadata with a second tenant dataset encryption key. FIG.7Hdiffers fromFIG.7Fin that the method ofFIG.7Hincludes steps for storing data in the first and second tenant datasets utilizing host encryption keys. These steps are the same as those set forth above inFIG.7Dand include: receiving722a write request of data to be stored in the first tenant dataset, wherein the data is encrypted by the host with a first host encryption key; and storing724the data in the first tenant dataset, including: decrypting726the data utilizing the first host encryption key and generating726A re-encryption information; performing728data reduction on the unencrypted data; encrypting730the data utilizing a first storage system encryption key; and storing732the data encrypted with the first storage system encryption key in the first tenant dataset.FIG.7Halso includes: storing736the data in the second tenant dataset, which in turn may include: decrypting738the data utilizing the second host encryption key (which may include generating738A re-encryption information describing details of re-encrypting data for return to the host upon a later read request of the data), performing740data reduction on the unencrypted data; encrypting742the data utilizing the second storage system encryption key; and storing744the data
encrypted with the second storage system encryption key in the second tenant dataset. Readers will note thatFIG.7Hrefers to a storage system encryption key and a block encryption key. The two in some instances may be different keys. In some embodiments, the two may be the same keys. In some embodiments, for example, the storage system704may encrypt730a first block of data to be written to a first tenant dataset706with a first storage system encryption key and then separately encrypt758the first block by generating a secure hash of the block for use in deduplication. In other embodiments, the storage system may perform a single encryption730and utilize the secure hash generated from that encryption for deduplication purposes as well.

In some embodiments, a host may be coupled to a storage system through several different paths. Additionally, multiple hosts may be coupled to the same storage system through different paths and may all access the same dataset. To that end,FIG.8Asets forth a diagram of a multi-path based storage system that supports end-to-end encryption in accordance with some embodiments of the present disclosure. The system ofFIG.8Aincludes a storage system806and two hosts802and804. Each host802and804is coupled, through different paths802A,802B,804A,804B, to the storage system806for accessing a dataset808. The term ‘path’ as used here may refer to any identifiable logical or physical coupling between a host and a storage system. Examples of such paths may include Fibre Channel, NVMe, NVMe over Fabrics, Ethernet, Infiniband, SAS, SCSI, or the like. The set of all paths may further include more than one such type of path. The example ofFIG.8Amay be configured for end-to-end encryption similar to the systems set forth above. The system ofFIG.8Amay also be configured to support multi-path based encryption according to embodiments of the present disclosure.
In some of those embodiments, each path802A,802B,804A,804B may be associated with a separate encryption key. For example, data written to the dataset808by host802along a first path802A is encrypted by a first path key when transmitted to the storage system806. The storage system may store the data in the dataset808encrypted by a storage system key. Upon a read of the data from the dataset808by the first host802, the storage system may decrypt the data with the storage system key and encrypt the data with a path-specific encryption key. The storage system may be configured in a variety of implementations, where, in each different implementation, the storage system uses a different path-specific encryption key on a read of the data. Several of those implementations are described below in greater detail. For further explanation,FIG.8Bsets forth a flow chart illustrating an example method of end-to-end encryption in a storage system that supports multiple paths to access a dataset in accordance with some embodiments of the present disclosure. The method ofFIG.8Bincludes processing800a write request received through a first path and processing801a write request received through a second path. In the method ofFIG.8B, processing800a write request received through a first path includes: receiving810, through a first path, a first write request for first data to be stored in a dataset, where the first data is encrypted with a first encryption key associated with requests received from the first path, and decrypting812the first data utilizing the first encryption key. In contrast to transmission level encryption keys that are utilized to encrypt data communications in flight over a transmission line, the encryption keys associated with a particular path from which requests are received refer to keys that are utilized to encrypt and decrypt the data blocks themselves. 
Methods of communicating a key or a key identifier between a host and a storage system could be based on a communicated exchange of some kind (e.g., a special SCSI request), or could be based on separate exchanges with a key server, possibly using a shared understanding of the storage system's identifiers when interacting with the key server, or the host may write a key or key identifier into a dataset in some recognized way. For example, a specific block address of a volume could be used, or the key could be stored in a master boot record (MBR) or a GUID partition table/extensible firmware interface (‘GPT/EFI’) partition format. In the case of GPT/EFI, the unique identifier associated with the block device, as stored in the GPT/EFI header, or the unique identifier associated with a partition, could be used in exchanges with the key management server. A host accessing a clone of a dataset (or a synchronous replica of a dataset) could further write a separate key identifier into an already existing dataset to change which keys are used for further encryption or for decrypting to the new host. Alternately, one host could interact with the storage system (such as by writing to a location or header, or by interacting through an extended SCSI operation) to alter the keys or key identifiers used for later interactions or for interactions from some other host, for example, as part of configuring for a dataset being shared out from a production environment to a test and development environment. In the method ofFIG.8B, decrypting812the first data utilizing the first encryption key may also include generating re-encryption information describing details of re-encrypting the first data utilizing the first encryption key. After decrypting the first data utilizing the first encryption key (and prior to encrypting814the first data using the storage system encryption key), the method ofFIG.8Bmay include performing data reduction on the first data.
Such data reduction may include deduplication, data compression, data compaction, and the like.

The method of FIG. 8B also includes encrypting 814 the first data using a storage system encryption key and storing 816 the first data in the dataset 808. The ‘storage system encryption key’ in this case is an internal key utilized by the storage system for encrypting data. The steps described above with respect to processing a write request received through a first path are similar to those carried out while processing 801 a write request received through a second path. That is, in the example of FIG. 8B, processing 801 a write request received through a second path includes: receiving 818, by the storage system 806 through a second path, a second write request for second data to be stored in the dataset, where the second data is encrypted with a second encryption key associated with requests received from the second path; decrypting the second data utilizing the second encryption key; encrypting the second data using the storage system encryption key; and storing the second data in the dataset.

Multiple hosts may be configured to access the same dataset 808, such as in an example implementation of a clustered file system. In other embodiments, a single host may be coupled to the storage system with multiple paths. In yet other embodiments, multiple hosts may be coupled to the same storage system through multiple paths. For these various implementations, the encryption keys utilized by any of the hosts to encrypt data that is written to the dataset 808 are path-specific rather than host-specific. To that end, the write requests referred to in the example of FIG. 8B may be received from the same host or different hosts. Said another way: in the example of FIG. 8B, receiving 810 the first write request may include receiving the first write request from a host and receiving 818 the second write request may include receiving the second write request from the same host.
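The write-path steps above (receive 810, decrypt 812, encrypt 814, store 816) can be sketched as follows. This is only an illustrative sketch, not the claimed implementation: the XOR keystream below is a toy stand-in for a real cipher such as AES, and all class, method, and key names are invented for the example.

```python
import hashlib

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher derived from SHA-256 (stand-in for a real
    # cipher). XOR is its own inverse, so one function both encrypts
    # and decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class StorageSystem:
    def __init__(self, system_key: bytes):
        self.system_key = system_key   # internal storage system encryption key
        self.path_keys = {}            # path id -> path-specific encryption key
        self.dataset = {}              # block address -> ciphertext at rest

    def register_path(self, path_id: str, path_key: bytes) -> None:
        self.path_keys[path_id] = path_key

    def handle_write(self, path_id: str, addr: int, ciphertext: bytes) -> None:
        # 812: decrypt using the key associated with the receiving path.
        plaintext = xor_crypt(self.path_keys[path_id], ciphertext)
        # (data reduction -- dedup, compression, compaction -- would
        # operate on the plaintext at this point)
        # 814: encrypt with the storage system key; 816: store it.
        self.dataset[addr] = xor_crypt(self.system_key, plaintext)
```

In this sketch a host encrypts a block under its path's key before sending it, and the block lands in the dataset encrypted only under the internal storage system key, matching the two-stage flow of FIG. 8B.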
Alternatively, in the method of FIG. 8B, receiving 810 the first write request may include receiving the first write request from a first host and receiving 818 the second write request may include receiving the second write request from a second host.

FIG. 8B generally encompasses path-specific encryption for writes of data to a storage system. Various implementations of path-specific encryption may exist for reads of data from the storage system. FIGS. 8C, 8D, and 8E set forth various example implementations of path-specific encryption for read requests in a storage system that supports multi-path end-to-end encryption according to embodiments of the present disclosure. FIGS. 8C, 8D, and 8E are similar to the example method of FIG. 8B in that each figure also includes: receiving 810, by a storage system 806 through a first path, a first write request for first data to be stored in a dataset, where the first data is encrypted with a first encryption key; decrypting 812 the first data utilizing the first encryption key; encrypting 814 the first data using a storage system encryption key; storing 816 the first data in the dataset; receiving 818, by the storage system through a second path, a second write request for second data to be stored in the dataset, where the second data is encrypted with a second encryption key associated with requests received from the second path; decrypting 820 the second data utilizing the second encryption key; encrypting 822 the second data using the storage system encryption key; and storing 824 the second data in the dataset.

FIG. 8C includes receiving 826, through the first path, a read request for the first data; decrypting 828 the first data utilizing the storage system encryption key; encrypting 830 the first data with the first encryption key associated with requests received from the first path; and returning 832 the encrypted first data through the first path. In this example, data is requested through the same path along which the data was stored (the first path in this example).
Any host may request such data through that path, and the data may be returned along that path, encrypted by the path's associated encryption key. The method of FIG. 8C sets forth an example of a read request for the first data. Readers of skill in the art will recognize that the steps carried out in FIG. 8C for a read request of the first data may be similar to those carried out upon a read request for the second data. For example, the implementation of FIG. 8C may include receiving a read request for the second data through the second path, decrypting the second data utilizing the storage system encryption key, encrypting the second data with the second path encryption key, and returning the encrypted data through the second path.

Although the example of FIG. 8C sets forth data stored along one path being retrieved along the same path, utilizing the same path-specific encryption key on the read request that was utilized on the write request, other variations may be implemented. To that end, FIG. 8D includes receiving 834, through the second path, a request for the first data; decrypting 836 the first data utilizing the storage system encryption key; encrypting 838 the first data with the second encryption key associated with requests received from the second path; and returning 840 the encrypted first data through the second path. In this example, any host that transmits a read request through a path may expect to receive a response through that same path, where the data returned in the response is encrypted with the path's associated encryption key. That is, each path is associated with a different path-specific encryption key that is utilized for encryption in either direction (read or write), regardless of the path originally utilized to write the data to the storage system. Data that was originally stored in response to a write request received along one path may be retrieved through a read request issued through any path.
That is, in the example of FIG. 8D, the first data, which was stored in the storage system as a result of a write request received through the first path, may be retrieved by a host as a response to a read request issued to the storage system through the second path. The key used to encrypt the data returned as a response to the read request is based on the path through which the data is requested and returned.

FIG. 8D sets forth an example of a read request being received through the second path for data originally stored in the dataset by a write request received through the first path. Readers will recognize that this is an example of reading data from one path that was written through another, where the read returns data encrypted by the encryption key of the path upon which the read was transmitted. That is, well within the scope of the implementations set forth in FIG. 8D is an example that includes: receiving, through the first path, a request for the second data; decrypting the second data utilizing the storage system encryption key; encrypting the second data with the first encryption key; and returning the encrypted second data through the first path.

Multiple paths between a host and a storage system are often implemented for the purpose of redundancy. A read request, then, may be issued along one path that subsequently fails or is otherwise inaccessible (due to load balancing, for example). In such a situation, a storage system that supports multi-path end-to-end encryption according to embodiments of the present disclosure may be configured to process the read request in a variety of manners. Several of those implementations are set forth in the example of FIG. 8E.
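The read behavior of FIGS. 8C and 8D, in which data is returned encrypted under the key of whichever path carries the read regardless of which path originally wrote it, can be sketched as below. This is a hedged illustration only: the XOR keystream stands in for a real cipher, and the function and variable names are invented.

```python
import hashlib

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy involutive stream cipher for illustration (not a real cipher).
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def handle_read(dataset: dict, system_key: bytes,
                path_keys: dict, path_id: str, addr: int) -> bytes:
    # 828/836: decrypt the stored block with the storage system key.
    plaintext = xor_crypt(system_key, dataset[addr])
    # 830/838: re-encrypt with the key of the path the read arrived on.
    # The key of the path that originally wrote the block is irrelevant here.
    return xor_crypt(path_keys[path_id], plaintext)
```

A block written through the first path can thus be read through the second path, and the requesting host simply decrypts the response with its own path's key.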
FIG. 8E includes receiving 842, through the first path, a request for the first data; decrypting 844 the first data utilizing the storage system encryption key; detecting 846, by a host, inaccessibility of the first path and reissuing the request for the first data along the second path; encrypting 848A the first data with the second encryption key; and returning 850 the encrypted first data through the second path. In this example, the storage system, upon detecting that the first path is inaccessible, may encrypt the data with the second encryption key and return the data through the second path. That is, the data is encrypted with the key associated with the path upon which the data will be returned rather than the path upon which the data was requested via the read request.

FIG. 8E also includes an alternative in which, rather than encrypting the data with the second encryption key, the data is encrypted 848B with the first encryption key. In such an embodiment, the storage system encrypts the data with the encryption key associated with the path upon which the read request was received, rather than with the encryption key associated with the path upon which the data is returned. The storage system and host may be configured for one embodiment or the other so that, upon receipt of the encrypted data, the host is able to utilize the correct key for decryption. As above, although FIG. 8E sets forth processing of a read request of the first data, similar steps could be carried out with respect to the second data. Likewise, although FIG. 8E sets forth processing of a read request received along a first path which becomes inaccessible, with data returned along the second path, the opposite implementation may also be carried out.

One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof.
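The two failover alternatives of FIG. 8E (encrypt with the return path's key, 848A, or with the original request path's key, 848B) reduce to a single configuration choice that host and storage system must agree on in advance. A hedged sketch with invented names; the XOR keystream again stands in for real encryption.

```python
import hashlib

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy involutive cipher used only for illustration.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_failover_response(plaintext: bytes, path_keys: dict,
                              request_path: str, return_path: str,
                              use_return_path_key: bool = True) -> bytes:
    # 848A: key of the path the data goes back on (the surviving path).
    # 848B: key of the path the request originally arrived on.
    # Host and storage system must be configured consistently so the
    # host knows which key decrypts the response.
    key_path = return_path if use_return_path_key else request_path
    return xor_crypt(path_keys[key_path], plaintext)
```

With `use_return_path_key=True` the host decrypts with the surviving path's key; with `False` it keeps using the key of the path it issued the read on.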
The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
11943294

While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated.

“Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

DETAILED DESCRIPTION

Various embodiments of an object compression system for a storage service of a cloud or provider network are described in this disclosure. In some embodiments, the object compression system may automatically monitor files or objects in various object stores over the lifecycles of the files or objects for a user, identify files or objects to be compressed, compress the identified files or objects, and move the resultant compressed files to object stores in appropriate tiers. Compared to existing storage services, the object compression system may store compressed files rather than the original (uncompressed) files or objects, reduce the storage space for the user, and thus lower the storage cost. In addition, in some embodiments, the storage service may provide different pricing structures and/or access performance for storage in different tiers. For instance, the storage service may charge relatively high storage and/or access fees (e.g., a relatively high $/gigabyte) for files or objects stored in object stores in a standard access tier and relatively low fees (e.g., a lower $/gigabyte) for a less frequent access tier.
Therefore, by moving the compressed files to the less frequently-accessed tier, the compression system may further reduce the costs for the user. In short, by automatically monitoring and compressing objects, the object compression system may provide a user-friendly and cost-efficient solution to manage stored files or objects for users of a remote storage service.

In some embodiments, the object compression system may include a monitoring system, a compression analyzing system, and a compressing and moving system. In some embodiments, the monitoring system may automatically monitor individual ones of a plurality of objects in one or more object stores of the storage service. In some embodiments, the monitoring system may create one or more characteristics associated with individual objects based on the monitoring. For instance, the characteristics may include an access pattern of a user to an object. The access pattern may indicate historically how many times the user accesses the object within one or more previous time periods, a frequency by which the user accesses the object, and the like. In some embodiments, the characteristics may include a type of the object, e.g., a JPEG file, an Excel file, or a binary large object (BLOB), and so on. In some embodiments, the characteristics may further include a content type of the object, which may be indicated by a filename and/or a filename extension of the object. In some embodiments, the characteristics may further include a usage pattern of the object. The usage pattern may represent a workflow of the user with respect to the object, e.g., a set of tasks or actions performed by the user on the object. In some embodiments, the usage pattern may provide supplemental information for the access pattern of the user to the object. In some embodiments, the usage pattern may indicate one or more performance requirements of the user associated with accessing the object, e.g., requirements related to latency and/or throughput.
In some embodiments, the usage pattern may be obtained based on monitoring log file(s) of the object. In some embodiments, the usage pattern may be derived based on, e.g., a type of the object, a content type of the object, a size of the object, and/or usage patterns of one or more other similar objects. In addition, in some embodiments, the characteristics associated with the object may include other information, e.g., an age of the object in the storage service.

In some embodiments, the compression analyzing system may receive, from the monitoring system, the characteristics associated with individual ones of the plurality of objects. In some embodiments, the compression analyzing system may receive one or more other additional features, e.g., a risk tolerance and/or a cost sensitivity associated with compressing the object. In some embodiments, based on the characteristics and/or additional features, the compression analyzing system may generate compression decisions for individual objects using a machine learning model. For instance, the compression decision for an object may identify whether or not to compress the object. In some embodiments, responsive to a determination that the object is to be compressed, the compression analyzing system may also determine an appropriate compression algorithm according to which the object is to be compressed. In some embodiments, the compression analyzing system may provide the compression decisions for individual ones of the plurality of objects to the compressing and moving system. In response, the compressing and moving system may compress the objects and transition the resultant compressed files to appropriate tiers, as needed. For instance, when an object is identified to be compressed by the compression analyzing system, the compressing and moving system may compress the object according to the compression algorithm determined by the compression analyzing system.
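The characteristics the monitoring system gathers can be pictured as a simple record handed to the analyzing system. All field names below are hypothetical, chosen only to mirror the characteristics listed above, and the record format is an assumption rather than anything the disclosure specifies.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectCharacteristics:
    # Age of the object in the storage service, in days.
    age_days: float
    # Access pattern: accesses per week over a recent window.
    access_frequency: float
    # Content type, e.g., inferred from the filename extension.
    content_type: str
    # Object size in bytes.
    size_bytes: int
    # Usage pattern: tasks/actions observed in the object's log files.
    workflow_tasks: list = field(default_factory=list)

    def to_features(self) -> dict:
        # Flatten into a feature dict such as one a machine learning
        # model might consume.
        return {
            "age_days": self.age_days,
            "access_frequency": self.access_frequency,
            "content_type": self.content_type,
            "size_bytes": self.size_bytes,
            "n_workflow_tasks": len(self.workflow_tasks),
        }
```

One record per monitored object would then flow from the monitoring system to the compression analyzing system.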
In some embodiments, the compressing and moving system may store the compressed file at the current location together with the original (uncompressed) object. In some embodiments, the compressing and moving system may transition (or move) the compressed file from the current tier to another location in another tier (e.g., from the current location in a standard access tier to another location in a less frequently-accessed tier), and remove (or delete) the original (uncompressed) object from the storage service, thus saving storage costs for the user.

FIG. 1 shows an object compression system of a storage service, according to some embodiments. In this example, storage service 100 may include object compression system 105. Here, the term “object” may broadly refer to any file or item stored in a storage service of a provider network. For instance, the object may include an image file, an audio or video file, a text file, a spreadsheet, and/or a file which may not necessarily be originally uploaded by the user but instead created by the storage service, e.g., a metadata file and/or an access log file associated with a user-uploaded object. In some embodiments, storage service 100 may be implemented on or across one or more computing devices of a provider network to provide object storage and/or management functions for users. In some embodiments, object compression system 105 may be implemented on or across the computing devices of storage service 100, e.g., offered as a feature of an object-based data storage system of storage service 100. In some embodiments, storage service 100 may include a plurality of objects 125(1)-125(n) stored in one or more object stores in a specific tier, e.g., tier #1. For instance, tier #1 may refer to storage medium or media and associated networking devices and/or infrastructure of storage service 100 which may be designed for users to have standard or regular accesses to stored objects, e.g., objects 125(1)-125(n).
In some embodiments, storage service 100 may include at least another, different tier, e.g., tier #2. For instance, tier #2 may include storage medium or media and associated networking devices and/or infrastructure for storing less-frequently accessed objects. In some embodiments, object stores in the different tiers, e.g., tier #1 vs. tier #2, may provide different access performance. For instance, tier #1 may be implemented based on flash or solid-state drives (SSDs) which may provide fast writing/reading speeds, whilst tier #2 may use less-expensive but slower storage media such as SATA drives, optical disks, tape storage systems, etc. In another example, tier #1 may include networking devices and/or infrastructure having more input/output (I/O) ports and thus provide higher networking speed and/or more networking throughput (or bandwidth) than tier #2. In some embodiments, object stores in the different tiers, e.g., tier #1 vs. tier #2, may be assigned different pricing structures and/or access performance. For instance, storage using object stores in tier #1 may be charged relatively high storage and/or access fees (e.g., relatively high $/gigabyte fees), whilst storage using object stores in tier #2 may require relatively low storage and/or access fees (e.g., relatively low $/gigabyte fees).

In some embodiments, object compression system 105 may include monitoring system 110, compression analyzing system 115, and compressing and moving system 120. In some embodiments, monitoring system 110 may monitor individual ones of the plurality of objects 125(1)-125(n), to obtain various characteristics or features associated with respective objects, throughout the objects' lifecycles. In some embodiments, monitoring system 110 may be configured to automatically monitor individual objects 125(1)-125(n), e.g., according to one or more default settings provided by storage service 100.
In some embodiments, the user may have the option (e.g., via an interface such as a graphic interface, an API, a command line interface, and the like) to specify settings for monitoring system 110 on his/her own, and may also have the option to enable and/or disable object compression system 105, for individual objects 125(1)-125(n).

In some embodiments, monitoring system 110 may monitor an age of an object, e.g., object 125(1). The age of object 125(1) may be defined by object compression system 105 in various ways. For instance, the age may refer to how long object 125(1) has been stored in storage service 100. In another example, the age may refer to how long it has been since the last time the user accessed object 125(1). In some embodiments, the age of object 125(1) may impact a compression decision for object 125(1). For instance, the older object 125(1) is in storage service 100, the more likely object compression system 105 may determine to compress object 125(1). In some embodiments, monitoring system 110 may monitor access, e.g., including one or more access patterns, of the user to object 125(1). For instance, monitoring system 110 may monitor historically a number of accesses of the user to object 125(1) in one or more previous time periods. In another example, monitoring system 110 may monitor a frequency by which the user accessed object 125(1), e.g., an average frequency of access in the last 12 weeks. In some embodiments, the access patterns may also impact the compression decision for object 125(1). For instance, when object 125(1) is less frequently accessed by the user (e.g., below a threshold), it may become more probable for object compression system 105 to determine to compress object 125(1).

In some embodiments, monitoring system 110 may monitor a filename and/or a filename extension of object 125(1). In some embodiments, monitoring system 110 may determine a content type for object 125(1) based at least in part on the filename and/or filename extension of object 125(1).
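The influence of age and access frequency on the decision can be reduced to a toy rule: older and colder objects are better compression candidates. The thresholds below are invented purely for illustration; the disclosure leaves the actual decision to a machine learning model.

```python
def should_compress(age_days: float, accesses_per_week: float,
                    min_age_days: float = 30.0,
                    max_accesses_per_week: float = 1.0) -> bool:
    # Sketch of a rule the analyzer might approximate: compress only
    # objects that are both old enough and infrequently accessed.
    # Threshold values are hypothetical defaults, not from the source.
    return (age_days >= min_age_days
            and accesses_per_week <= max_accesses_per_week)
```

For example, an object stored 90 days ago and touched roughly once every ten weeks would qualify, while a new or hot object would not.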
For instance, some domain-specific datasets may use specific file formats, e.g., Telegraphic Format (TF) or GSE/IMS for seismic data. Therefore, when object 125(1) includes a filename extension of TF or GSE/IMS, monitoring system 110 may accordingly determine that object 125(1) contains a seismic dataset. In some embodiments, the content type of object 125(1) may imply a potential usage pattern by the user to access object 125(1). In some embodiments, the usage pattern may be derived based on other information, e.g., a size of the object and/or usage pattern(s) of other object(s) similar to object 125(1). Here, the term “usage pattern” may broadly refer to a workflow or a set of tasks (or actions) which the user may perform on object 125(1). In some embodiments, the usage pattern may indicate various performance requirements by the user for accessing object 125(1). For instance, when object 125(1) includes a seismic dataset, the access of the user to object 125(1) may not necessarily require a fast speed, but rather a high throughput, because the seismic dataset is generally large in size. In another example, when object 125(1) includes a medical dataset, the user may require fast access with low latency in order to share the information with patients or other colleagues quickly. The performance requirements may impact how object 125(1) shall be compressed, e.g., the selection of a compression algorithm for object 125(1). For example, when object 125(1) contains seismic data, a compression algorithm may be selected to provide a small size for the compressed file to provide a good throughput. Alternatively, if object 125(1) contains medical data, a compression algorithm with fast compression and decompression speeds may be selected to provide low-latency transmission. In some embodiments, the usage pattern may be collected by monitoring system 110 inspecting transaction log file(s) of object 125(1).
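Mapping a filename extension to a content type and then to a compression preference, as in the seismic versus medical example above, might look like the sketch below. The extension table and preference labels are illustrative assumptions, not a catalog the disclosure defines.

```python
import os

# Hypothetical mapping: extension -> (content type, compression preference).
EXTENSION_TABLE = {
    ".tf":   ("seismic", "max-ratio"),    # large datasets: favor throughput
    ".gse":  ("seismic", "max-ratio"),
    ".dcm":  ("medical", "fast-codec"),   # favor low-latency access
    ".jpeg": ("image",   "skip"),         # already compressed
    ".jpg":  ("image",   "skip"),
}

def classify(filename: str) -> tuple:
    # Infer (content type, preference) from the filename extension,
    # falling back to a general-purpose algorithm when unknown.
    ext = os.path.splitext(filename.lower())[1]
    return EXTENSION_TABLE.get(ext, ("unknown", "general-purpose"))
```

A monitoring system could feed the inferred content type to the analyzer as one of the object's characteristics.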
The log file(s) may provide information as to historically how the user has used object 125(1) and/or typical workflows associated with object 125(1). In some embodiments, the usage pattern of object 125(1) may also provide (supplemental) information for the access pattern of the user to object 125(1), e.g., how many times the user has used object 125(1) over a previous time period. In some embodiments, monitoring system 110 may also monitor a size of object 125(1).

In some embodiments, monitoring system 110 may monitor information associated with data lineage between different objects. For instance, the data lineage may indicate that object 125(1), object 125(2), and object 125(3) are all part of a video content, and that object 125(1), object 125(2), and object 125(3) need to be played in sequence, e.g., object 125(1) is an “input” to object 125(2) whilst object 125(3) is an “output” of object 125(2). In some embodiments, the data lineage information may affect how the linked objects, e.g., object 125(1), object 125(2), and object 125(3), are to be compressed (and decompressed). This may be useful, e.g., for compressing audio or video objects including streaming contents.

Referring back to FIG. 1, in some embodiments, monitoring system 110 may provide characteristics 130 associated with individual ones of the plurality of objects 125(1)-125(n) to compression analyzing system 115. As described above, characteristics 130 may include, e.g., an age (e.g., a storage duration and/or a duration since last access), an access pattern (e.g., an access frequency and/or a number of accesses within an interval), a filename or filename extension (which may indicate a content type), a usage pattern (e.g., a workflow which may indicate various access performance requirements), a file size, and the like, for respective objects 125(1)-125(n).
In some embodiments, compression analyzing system 115 may further receive one or more other features 135 via an interface (as shown in FIG. 3), e.g., one or more compression tolerance characteristics including a risk tolerance, a cost sensitivity, and the like. The risk tolerance may broadly refer to a level of risk (e.g., the risk of losing data or information during compression and/or decompression) acceptable when compressing an object, e.g., object 125(1). In some embodiments, the risk tolerance may include a default value provided by object compression system 105. In some embodiments, the risk tolerance may be specified by the user. The risk tolerance may impact the selection of compression algorithms for object 125(1). For instance, for a low risk tolerance, compression analyzing system 115 may choose a compression algorithm which may split object 125(1) into multiple data blocks and compress object 125(1) block-by-block to increase reliability. Conversely, for a high risk tolerance, compression analyzing system 115 may instead choose a compression algorithm to minimize the size of the compressed file for object 125(1).

The cost sensitivity may indicate a level of sensitivity of a user with respect to the storage and/or accessing fees. Similarly, the cost sensitivity may be set to a default value by object compression system 105 or specified by the user. In some embodiments, the cost sensitivity may also impact how object 125(1) may be compressed. For instance, for a high cost sensitivity, compression analyzing system 115 may decide to select a compression algorithm to minimize the size of the compressed file for object 125(1). Conversely, for a low cost sensitivity, compression analyzing system 115 may determine a compression algorithm that may optimally satisfy various performance requirements of the user to access object 125(1).
In some embodiments, compression analyzing system 115 may use machine learning model 140 to predict future access in order to make respective compression decisions 145 for individual ones of the plurality of objects 125(1)-125(n). In some embodiments, machine learning model 140 may be implemented using various machine learning algorithms, e.g., a supervised neural network, an unsupervised neural network, a support vector machine, and the like. In some embodiments, machine learning model 140 of compression analyzing system 115 may receive characteristics 130 associated with individual objects 125(1)-125(n) from monitoring system 110 and additional features 135 such as the risk tolerance and/or cost sensitivity as input to predict the future access for making compression decisions 145 for respective objects 125(1)-125(n). In some embodiments, compression decision 145 for an object, e.g., object 125(1), may indicate whether object 125(1) is to be compressed.

In some embodiments, when object 125(1) is identified for compression, one or more other objects in the same folder and/or object store as object 125(1) may automatically be determined for compression as well. This may be useful for objects for certain use cases or application domains. For instance, when object 125(1) relates to a legal matter and is decided to be compressed for a legal hold, other objects 125(2) and 125(3) in the same folder and/or object store may also need to be compressed for the hold, given that they all relate to the same legal matter. In some embodiments, responsive to a decision that object 125(1) is to be compressed, corresponding compression decision 145 may determine a compression algorithm for compressing object 125(1). In some embodiments, the compression algorithm may be selected from a compression algorithm catalog (not shown) within object compression system 105.
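A machine learning model in this role consumes the monitored characteristics plus the tolerance features and emits a decision together with a confidence score. The tiny logistic scorer below is only a placeholder for whatever trained model such a system would use; the weights, bias, and feature names are all invented for the sketch.

```python
import math

# Invented weights: a positive contribution pushes toward "compress".
WEIGHTS = {"age_days": 0.03, "access_frequency": -1.5, "cost_sensitivity": 0.8}
BIAS = -1.0

def compression_decision(features: dict) -> dict:
    # Linear score over the known features, squashed through a sigmoid
    # to yield a confidence in [0, 1]; missing features default to 0.
    z = BIAS + sum(w * features.get(name, 0.0)
                   for name, w in WEIGHTS.items())
    confidence = 1.0 / (1.0 + math.exp(-z))
    return {"compress": confidence >= 0.5, "confidence": confidence}
```

An old, rarely accessed object from a cost-sensitive user scores well above the 0.5 cutoff, while a young, frequently accessed one scores far below it.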
In some embodiments, the catalog may include compression algorithms authenticated by storage service100of the provider network itself, and/or algorithms submitted by clients of storage service100. For instance, the catalog may be “linked” to an algorithm repository where clients of storage service100may upload various self-identified compression algorithms. In some embodiments, the compression algorithm catalog may include program instructions or executable code for various compression algorithms including, e.g., lossless compression algorithms, lossy compression algorithms, and/or domain-specific algorithms. In some embodiments, storage service100may provide the user the option (e.g., via an interface such as a graphic interface, an API, a command line interface, and the like) to specify his/her own compression algorithm155. As described above, a purpose of object compression system105is to provide automated object monitoring and compression management with minimal required intervention from the client. However, the client may be aware of specific algorithms that can provide superior compression performance. Therefore, it can still be beneficial to allow a client to specify his/her own algorithm to compress his/her objects and data. This may be especially useful for users working with domain-specific objects, because such objects may compress better with some domain-specific compression algorithms. In some embodiments, compression analyzing system115may also provide confidence scores for compression decisions145. For instance, compression decision145may indicate that object125(1) is identified to be compressed with 98% confidence. In some embodiments, compression analyzing system115may act on a compression decision only when the confidence score for a given object is over a threshold. In some embodiments, object compression system105may provide a performance comparison to the user between a system-determined algorithm and a user-specified algorithm. 
In some embodiments, object compression system105may automatically select a compression algorithm from the two for the user. In some embodiments, object compression system105may allow the user to select a compression algorithm from the two, e.g., based on the provided comparison, to be used for compression. As shown inFIG.1, in some embodiments, compression analyzing system115may provide respective compression decisions145for individual objects125(1)-125(n) to compressing and moving system120to perform individual compression decisions for respective objects. In some embodiments, compressing and moving system120may identify one or more objects (e.g., objects125(1)-125(m) out of the plurality of objects125(1)-125(n)) which are decided by compression analyzing system115to be compressed, and compress these identified objects according to their respective compression algorithms (e.g., selected by compression analyzing system115or specified by the user) to create compressed files150(1)-150(m). In some embodiments, compressing and moving system120may store compressed files150(1)-150(m) together with original (uncompressed) objects125(1)-125(m) in their corresponding (current) object stores in tier #1. In some embodiments, after a first time period, compressing and moving system120may copy compressed files150(1)-150(m) to object stores in another tier, e.g., tier #2. In some embodiments, after a second time period, compressing and moving system120may delete compressed files150(1)-150(m) from tier #1 or move compressed files150(1)-150(m) to tier #2 to replace the duplicate copies. In some embodiments, after a third time period, compressing and moving system120may remove or delete original (uncompressed) objects125(1)-125(m) from tier #1, and thus retain only compressed files150(1)-150(m) for objects125(1)-125(m) in tier #2. These staged operations may ensure data reliability and availability for data in objects125(1)-125(m). 
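The staged compress-and-move operations above can be sketched with an in-memory model. This is a hypothetical illustration only: the tier names, key naming convention, and use of zlib as the compression algorithm are assumptions, and the time periods between stages are collapsed into sequential steps.

```python
# Minimal in-memory sketch of the staged lifecycle: compress alongside the
# original in tier 1, copy to tier 2, drop the tier-1 compressed copy, and
# finally drop the original. Dict-of-dicts stands in for the object stores.
import zlib

def staged_lifecycle(tiers: dict, key: str) -> None:
    original = tiers["tier1"][key]
    # Stage 0: compress and keep the compressed file next to the original.
    tiers["tier1"][key + ".z"] = zlib.compress(original)
    # Stage 1 (after a first time period): copy the compressed file to tier 2.
    tiers["tier2"][key + ".z"] = tiers["tier1"][key + ".z"]
    # Stage 2 (after a second time period): delete the tier-1 duplicate.
    del tiers["tier1"][key + ".z"]
    # Stage 3 (after a third time period): delete the original object,
    # retaining only the compressed file in tier 2.
    del tiers["tier1"][key]

tiers = {"tier1": {"obj": b"payload " * 100}, "tier2": {}}
staged_lifecycle(tiers, "obj")
```

Keeping a copy in each tier between stages is what provides the reliability and availability margin the passage describes: at no point is the data held in only one place until the compressed copy is settled in tier 2.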
By storing only compressed files (e.g., compressed files150(1)-150(m)) for objects (e.g., objects125(1)-125(m)), object compression system105may reduce the storage size for the user. By transitioning compressed files from a standard access tier (e.g., tier #1) to a less expensive, less frequent access tier (e.g., tier #2), object compression system105may provide further cost savings for the user of storage service100. Note thatFIG.1shows only two tiers, tier #1 and tier #2, for purposes of illustration. In some embodiments, storage service100may have more than two tiers with various pricing structures and/or access performance (by using various storage medium or media and/or associated networking devices) to accommodate various storage and accessing needs. Accordingly, in some embodiments, compression analyzing system115(using machine learning model140), compressing and moving system120, and/or another component of object compression system105may determine a tier for storing the compressed file for a given object, e.g., based at least in part on characteristics130, the risk tolerance, and/or the cost sensitivity of the object. Also note that the above-described monitoring, compression analyzing, and compressing and moving with regard toFIG.1may be performed at the object level for individual ones of the plurality of objects125(1)-125(n), respectively. In some embodiments, object compression system105may store compression decisions145from compression analyzing system115in one or more files. In some embodiments, compression analyzing system115and/or monitoring system110may access and use the information in these files to assist future compression determinations for other objects. 
For instance, compression analyzing system115or monitoring system110may determine to compress a future object based on previous compression decisions (without necessarily predicting future access to the object using machine learning model140), e.g., when the future object has a similar access pattern, type, content type, and/or usage to one or more previous objects that have been decided to be compressed. This may further increase the efficiency and performance of object compression system105. Referring back toFIG.1, in some embodiments, compressing and moving system120may further include compression validation system160. In some embodiments, a purpose of compression validation system160may be to validate restoration of the original, uncompressed object (e.g., object125(1)) from the corresponding compressed version (e.g., compressed file150(1)). This is to ensure safe and reliable storage of the object, in the compressed version and at a different data store, before deleting the object from storage service100. In some embodiments, the validation performed by compression validation system160may include retrieving the compressed file (from a different data store if needed), decompressing the compressed file to generate an intermediate object, and comparing the intermediate object with the original object to detect any discrepancy. In some embodiments, responsive to the validation, compression validation system160may provide a validation result to compressing and moving system120. When the decompressed file passes the validation, compressing and moving system120may proceed to label the original object as compression-validated and delete the original object at an appropriate time as needed. Conversely, when the decompressed file fails the validation, compressing and moving system120may send the validation result to compression analyzing system115to re-select another compression algorithm to perform the compression. 
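The round-trip validation performed by compression validation system160can be sketched as follows. This is a hypothetical illustration using zlib as a stand-in compression algorithm; the function name and return convention are assumptions.

```python
# Sketch of compression validation: decompress the compressed file into an
# intermediate object and compare it byte-for-byte with the original before
# the original is ever deleted.
import zlib

def validate_compression(original: bytes, compressed: bytes) -> bool:
    try:
        intermediate = zlib.decompress(compressed)
    except zlib.error:
        # Undecodable data fails validation; per the passage, the system
        # would then re-select another compression algorithm.
        return False
    return intermediate == original

obj = b"legal-hold document contents"
ok = validate_compression(obj, zlib.compress(obj))
bad = validate_compression(obj, b"corrupted bytes")
```

A passing result would allow the original to be labeled compression-validated and deleted later; a failing result would be reported back for re-compression.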
As described above, the machine learning model of the object compression system (e.g., machine learning model140of object compression system105inFIG.1) may be implemented based on various machine learning algorithms. For instance, the machine learning model may include a feedforward neural network, a recurrent neural network, a reinforcement learning model, a generative adversarial network, a support vector machine, and the like.FIGS.2A-2Bshow training and testing of an example machine learning model, according to some embodiments. InFIG.2A, in training, machine learning model210may include a supervised neural network. In some embodiments, machine learning model210may receive training dataset205as input. Training dataset205may include, e.g., a plurality of sets of characteristics (e.g., characteristics130as described above inFIG.1) and/or other features (e.g., risk tolerance and/or cost sensitivity135) for a first plurality of objects. For each set of input for an object, training dataset205may also include an expected or labeled output for the object. For instance, the expected output may indicate whether the object is supposed to be compressed and, if so, at least one associated compression algorithm. In some embodiments, the expected output may also include a confidence score. In some embodiments, based on each set of the input, machine learning model210may predict future access to the object to make a compression decision215for the corresponding object. In some embodiments, in each epoch of the training, predicted compression decision215may be compared with the given expected output according to loss function220. The error may be calculated to represent a discrepancy between predicted compression decision215and the expected output. The error may be sent back to machine learning model210to update parameters of machine learning model210, e.g., according to a backpropagation function to minimize the error. 
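The training loop ofFIG.2A(predict, compare against the labeled output via a loss function, and backpropagate the error to update parameters) can be sketched with a toy model. A single-feature logistic unit stands in for machine learning model210; the feature, labels, learning rate, and epoch count are all hypothetical.

```python
# Toy supervised training loop mirroring FIG. 2A. Each epoch: predict a
# compression decision, compute the error against the expected output, and
# update the parameters by gradient descent (the "backpropagation" step).
import math

def train(dataset, epochs=500, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in dataset:          # label: 1 = compress, 0 = keep
            pred = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward pass
            error = pred - label          # gradient of the log-loss
            w -= lr * error * x           # parameter updates to
            b -= lr * error               # minimize the error
    return w, b

# Hypothetical single feature: normalized age since last access.
# Old, untouched objects (high value) are labeled for compression.
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
w, b = train(data)
```

After training on this separable toy dataset, the model predicts "keep" for recently accessed objects and "compress" for stale ones, which is the behavior the labeled outputs encode.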
The above operations may repeat for individual sets of input and expected output in training dataset205for individual ones of the first plurality of objects until the end of the training. InFIG.2B, once trained, machine learning model210may be deployed for testing. In testing, machine learning model210may be applied to testing dataset225including characteristics (e.g., characteristics130as described above inFIG.1) and/or other features (e.g., risk tolerance and/or cost sensitivity135) associated with individual ones of a second plurality of objects to produce compression decisions for the respective objects. As described above, in some embodiments, the compression decision for each object may include a determination as to whether to compress the object. In some embodiments, the compression decision may also include a confidence score for the associated decision. In some embodiments, when the object is determined to be compressed, the compression decision may also determine a compression algorithm. FIG.3shows an example user interface for setting up configurations of an object compression system of a storage service, according to some embodiments. In this example, a user may use user interface300to specify various configurations for an automated object compression management system of an object-based storage system (e.g., object compression system105of storage service100inFIG.1). For instance, user interface300may include two sections305and310. In some embodiments, section305may provide an interface for the user to specify parameters related to object compression decision-making. For instance, the user may use checkbox315to enable or disable the object compression feature for one or more objects. In addition, the user may use slider320to choose a trade-off between compression performance and storage reliability. 
In some embodiments, the selection may be “translated” by the object compression system into one or more compression tolerance characteristics, e.g., risk tolerance, cost sensitivity, and so on. For instance, moving slider320closer to “Performance” may indicate higher risk tolerance and cost sensitivity, whilst moving slider320closer to “Reliability” may indicate lower risk tolerance and cost sensitivity. Moreover, section305may provide checkbox325for a client to specify his/her own compression algorithm. In some embodiments, the client may specify the compression algorithm by selecting an algorithm from a drop-down menu when checking checkbox325. In some embodiments, the client may submit a file containing program instructions or executable code via an upload window after checking checkbox325. In some embodiments, it may be optional for the client to specify an algorithm. Thus, leaving checkbox325unchecked may be deemed the default, which may allow the object compression system to select and use an algorithm to compress the object. By comparison, section310may provide an interface for the user to specify parameters related to file transition (or moving) and clean-up after compression, according to some embodiments. For instance, the user may use dialog box340to specify whether to transition a compressed file from a current tier (e.g., tier #1) to a different tier (e.g., tier #2). In addition, dialog box340may also allow the user to select the tier to which to transition the compressed file (e.g., tier #2). In some embodiments, section310may provide dialog box345for the user to specify a time period according to which to transition the compressed file. In this example, the compressed file is set to be moved to tier #2 after 30 days since creation (or compression). In some embodiments, section310may also provide an option for the user to specify whether to delete the original object and, if so, a time period according to which to delete the object. 
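The “translation” of slider320into the two compression tolerance characteristics can be sketched as a simple mapping. The linear form is an assumption for illustration; the passage only states the direction of the relationship (closer to “Performance” implies higher values, closer to “Reliability” implies lower values).

```python
# Hypothetical translation of the slider position into compression
# tolerance characteristics. 0.0 = full "Reliability", 1.0 = full
# "Performance"; the identity mapping is an illustrative assumption.

def translate_slider(position: float) -> dict:
    if not 0.0 <= position <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    return {"risk_tolerance": position, "cost_sensitivity": position}

performance_end = translate_slider(1.0)   # highest tolerance/sensitivity
reliability_end = translate_slider(0.0)   # lowest tolerance/sensitivity
```

A real system might map the slider through separate, nonlinear curves for the two characteristics; the sketch only captures the monotonic relationship described above.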
In this example, the original object is set to be deleted from the storage service after 60 days since compression. Note thatFIG.3is provided merely as one example of the user interface for the disclosed object compression system of a storage service. In some embodiments, the storage service may provide an interface for the object compression system through other types of interfaces, e.g., a command line interface, an API, and the like. FIG.4shows a high-level flowchart for a method to automatically monitor and compress objects in a storage service, according to some embodiments. In this example, the method may include monitoring access to an object (e.g., object125(1) inFIG.1) stored in a first type of object store (e.g., a data store in tier #1 inFIG.1) to determine one or more characteristics of the object, as indicated by block405. As described above, a plurality of objects (e.g., objects125(1)-125(n) inFIG.1) may be stored in a plurality of object stores in an object-based storage system of a storage service (e.g., storage service100inFIG.1), according to some embodiments. As described above, in some embodiments, the storage service may have object stores organized into multiple tiers, e.g., tier #1 and tier #2, to meet different storage and/or accessing requirements of users. For instance, tier #1 may refer to storage medium or media and associated networking devices and/or infrastructure designed to provide standard accesses of stored objects, whilst tier #2 may refer to storage medium or media and associated networking devices and/or infrastructure for less frequent accesses. In some embodiments, the different tiers may be assigned different pricing structures (e.g., different storage and/or accessing fees) and/or access performance for users of the storage service. In some embodiments, the different tiers of the storage service may be implemented using different types of storage medium or media. 
As described above, a monitoring system of an object compression system of the storage service (e.g., monitoring system110of object compression system105of storage service100inFIG.1) may be used to monitor access to individual ones of the plurality of objects to determine the respective characteristics for the individual objects. In some embodiments, the characteristics for one object (e.g., object125(1) inFIG.1) may indicate a frequency by which the user has historically accessed the object. In some embodiments, the characteristics may include an access pattern indicating a number of accesses by the user to the object within a previous interval. In some embodiments, the characteristics may also include an age (e.g., a storage duration and/or a duration since last access), a filename or filename extension (which may indicate a content type), a usage pattern (e.g., a workflow which may indicate various access performance requirements), a file size, and the like, associated with the object. As indicated by block410, based at least in part on the characteristics of the object, the method may include making a determination of whether to compress the object (e.g., object125(1) inFIG.1) using a machine learning model to predict future access to the object. As described above, this determination may be generated by a compression analyzing system of the object compression system of the storage service (e.g., compression analyzing system115of object compression system105of storage service100inFIG.1), using the machine learning model (e.g., machine learning model140of compression analyzing system115inFIG.1and machine learning model210inFIG.2). In some embodiments, besides the above characteristics, the machine learning model of the compression analyzing system may further receive one or more additional features for the object (e.g., a risk tolerance and/or a cost sensitivity). 
In some embodiments, the compression analyzing system may apply the machine learning model to a combination of the characteristics and additional features of the object to generate the determination of whether to compress the object. In some embodiments, responsive to a determination that the object is to be compressed, the machine learning model may further determine a compression algorithm to compress the object. In some embodiments, the machine learning model may provide a confidence score for the determination of whether to compress the object. Referring back toFIG.4, the method may include, responsive to a determination to compress the object (e.g., object125(1) inFIG.1), generating a compressed version (e.g., compressed file150(1) inFIG.1) of the object, as indicated by block415. In some embodiments, the method may further include determining a compression algorithm and using the compression algorithm to generate the compressed version of the object. In some embodiments, the method may include storing the compressed version of the object in one data store determined from a plurality of data stores, as indicated by block420. In some embodiments, the data store for the compressed version (e.g., compressed file150(1) inFIG.1) of the object (e.g., object125(1) inFIG.1) may be a second data store of a different type from the first data store storing the object. For instance, the first data store for the original, uncompressed object may be within a standard or regular access tier (e.g., tier #1 as described above), whilst the second data store for the compressed file may be within a less-frequent access tier (e.g., tier #2 as described above). In some embodiments, tier #1 and tier #2 may provide different pricing structures and/or access performance. FIG.5shows a high-level flowchart for compressing and transitioning objects in a storage service, according to some embodiments. 
In this example, at least one of a plurality of objects in a storage service may be compressed, according to some embodiments, as indicated by block505. As described above, in some embodiments, the object may be compressed using a compressing and moving system of an object compression system of the storage service (e.g., compressing and moving system120of object compression system105of storage service100inFIG.1). In some embodiments, the compressing and moving system may receive a compression determination from a compression analyzing system of the object compression system (e.g., compression analyzing system115of object compression system105of storage service100inFIG.1). In some embodiments, the compression determination may indicate whether the object is to be compressed and, if so, determine a compression algorithm for the object. In some embodiments, responsive to the determination that the object is to be compressed, the compressing and moving system may compress the object according to the compression algorithm to create at least one compressed file for the at least one object. As indicated by block510, in some embodiments, the compressed file may be stored in the same object store and/or the same storage tier as the original object. As indicated by block515, in some embodiments, a duplicate copy of the compressed file may be created in another object store in another storage tier different from that of the original object. As indicated by block520, in some embodiments, after a time period, access requests to the compressed file may be transitioned to the other object store in the other storage tier. As the compressed file was already stored in the different tier, transitioning from the original location to the different tier may be instantaneous from the perspective of a client application. 
Moreover, as the compressed file is also stored in the current object store, storage savings may be achieved without creating a significant impact upon performance to access the object (as it is still in the current object store), in the event that the object was determined for compression and movement according to an access prediction that turned out to be inaccurate. As indicated by block525, in some embodiments, the compressed file may then be deleted from the current object store. FIG.6shows an example provider network including a storage service, according to some embodiments. InFIG.6, provider network600may be a private or closed system or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to one or more client(s)605. Provider network600may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system700described below with regard toFIG.7), needed to implement and distribute the infrastructure and storage services offered by provider network600. In some embodiments, provider network600may include various computing resources or services to implement various network-based cloud services, e.g., computing service(s)610(e.g., to provide virtual computing), storage service(s)615, and/or other service(s)620(e.g., to provide virtual networking, virtual server(s), etc.). Data storage service(s)615may implement different types of data stores for storing, accessing, and managing data on behalf of client(s)605as a network-based service that enables one or more client(s)605to operate a data storage system in a cloud or network computing environment. 
For example, data storage service(s)615may include various types of database storage services (both relational and non-relational) or data warehouses for storing, querying, and updating data. Such services may be enterprise-class database systems that are scalable and extensible. Queries may be directed to a database or data warehouse in data storage service(s)615that is distributed across multiple physical resources, and the database system may be scaled up or down on an as needed basis. The database system may work effectively with database schemas of various types and/or organizations, in different embodiments. In some embodiments, clients/subscribers may submit queries in a number of ways, e.g., interactively via an SQL interface to the database system. In other embodiments, external applications and programs may submit queries using Open Database Connectivity (ODBC) and/or Java Database Connectivity (JDBC) driver interfaces to the database system. Data storage service(s)615may also include various kinds of object or file data stores for putting, updating, and getting data objects or files, which may include data files of unknown file type. Such data storage service(s)615may be accessed via programmatic interfaces (e.g., APIs) or graphical user interfaces. Data storage service(s)615may provide virtual block-based storage for maintaining data as part of data volumes that can be mounted or accessed similar to local block-based storage devices (e.g., hard disk drives, solid state drives, etc.) and may be accessed utilizing block-based data storage protocols or interfaces, such as internet small computer systems interface (iSCSI). In some embodiments, one or more object compression system(s) (e.g., object compression system105inFIG.1) may be implemented as part of storage service(s)615, as shown inFIG.6. 
In some embodiments, storage service(s)615may use the object compression system(s) to automatically monitor various objects stored in the object stores in storage service(s)615. In some embodiments, the object compression system(s) may determine various characteristics associated with the objects based on the monitoring. In some embodiments, the object compression system(s) may also receive one or more additional features for the objects, e.g., risk tolerance(s) and/or cost sensitivity(ies). In some embodiments, the object compression system(s) may generate determinations of whether to compress the objects using machine learning model(s) (e.g., machine learning model140and210inFIGS.1-2) based on the characteristics and additional features of the objects. In some embodiments, the object compression system(s) may further determine compression algorithms for the objects which may be identified for compression. In some embodiments, the object compression system(s) may compress the identified objects according to the compression algorithms to create corresponding compressed files. In some embodiments, the object compression system(s) may store the compressed files in object stores different from those of the corresponding original (uncompressed) objects. In some embodiments, the object stores for the compressed files and the original objects may be in different tiers of storage service(s)615which may be designed for different types of storage and/or access needs of client(s)605. In some embodiments, the object compression system(s) may retain only the compressed files and delete the original objects from storage service(s)615. 
Generally speaking, client(s)605may encompass any type of client configurable to submit network-based requests to provider network600via network625, including requests for storage services (e.g., a request to create, read, write, obtain, or modify data in data storage service(s)615, requests to specify parameters for object compression system(s) of storage service(s)615(e.g., as shown inFIG.3), etc.). For example, a given client605may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client605may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of storage resources in data storage service(s)615to store and/or access the data to implement various applications. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client605may be an application configured to interact directly with provider network600. In some embodiments, client(s)605may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In various embodiments, network625may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between client(s)605and provider network600. 
For example, network625may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network625may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client605and provider network600may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network625may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client605and the Internet as well as between the Internet and provider network600. It is noted that in some embodiments, client(s)605may communicate with provider network600using a private network rather than the public Internet. FIG.7shows an example computing system to implement the various techniques described herein, according to some embodiments. For example, in one embodiment, object compression system105inFIG.1may be implemented by a computer system, for instance, a computer system as inFIG.7that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. In the illustrated embodiment, computer system700includes one or more processors710coupled to a system memory720via an input/output (I/O) interface730. Computer system700further includes a network interface740coupled to I/O interface730. WhileFIG.7shows computer system700as a single computing device, in various embodiments a computer system700may include one computing device or any number of computing devices configured to work together as a single computer system700. 
In various embodiments, computer system700may be a uniprocessor system including one processor710, or a multiprocessor system including several processors710(e.g., two, four, eight, or another suitable number). Processors710may be any suitable processors capable of executing instructions. For example, in various embodiments, processors710may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors710may commonly, but not necessarily, implement the same ISA. System memory720may be one embodiment of a computer-accessible medium configured to store instructions and data accessible by processor(s)710. In various embodiments, system memory720may be implemented using any non-transitory storage media or memory media, such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system700via I/O interface730. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system700as system memory720or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface740. In the illustrated embodiment, program instructions (e.g., code) and data implementing one or more desired functions, such as the object compression system described above inFIGS.1-6, are shown stored within system memory720as code726and data727. 
In one embodiment, I/O interface730may be configured to coordinate I/O traffic between processor710, system memory720, and any peripheral devices in the device, including network interface740or other peripheral interfaces. In some embodiments, I/O interface730may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory720) into a format suitable for use by another component (e.g., processor710). In some embodiments, I/O interface730may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface730may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface730, such as an interface to system memory720, may be incorporated directly into processor710. Network interface740may be configured to allow data to be exchanged between computer system700and other devices760attached to a network or networks750. In various embodiments, network interface740may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface740may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory720may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1-6. 
Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system700via I/O interface730. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system700as system memory720or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface740. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link. The various systems and methods as illustrated in the figures and described herein represent example embodiments of methods. The systems and methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Although the embodiments above have been described in considerable detail, numerous variations and modifications may be made as would become apparent to those skilled in the art once the above disclosure is fully appreciated. 
It is intended that the following claims be interpreted to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense. | 56,786 |
11943295 | DETAILED DESCRIPTION The present disclosure relates generally to systems and methods for configuring and managing data shipper agents. In general, a data shipper agent is an object/service installed on an edge node of a network. The data shipper agent performs tasks at the edge node level and collects valuable data for use by a service provider platform. One data shipper agent can be implemented on an edge node and configured to collect multiple types of data related to various processes, programs, and applications running in the edge node. An edge node is a network machine or system on which the data shipper agent is residing. Collectively, multiple data shipper agents installed on edge nodes can transmit data from hundreds or thousands of network machines and systems to a service provider system such as Elasticsearch™. In some embodiments, data collected by the data shipper agents can be centralized in the service provider system. A system for configuring and managing data shipper agents can include a graphical user interface (GUI) that allows users to configure a data shipper agent running on one edge node or simultaneously configure multiple data shipper agents on multiple edge nodes through a single application programming interface (API). Managing collection of multiple types of data related to multiple services on an edge node and managing multiple data shipper agents running on multiple edge nodes can be a complicated task. The systems and methods for configuring and managing data shipper agents of the present disclosure can be utilized to alleviate these problems by setting multiple types of data to be collected and multiple tasks to be performed by the data shipper agent with respect to multiple programs, applications, and services running on the edge node. 
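The collection-and-forwarding behavior described above can be illustrated with a minimal sketch. The Python class below is hypothetical (the names `DataShipperAgent`, `collect`, and `ship` do not come from the disclosure) and stands in for an agent that runs several collectors for different data types on one edge node and ships the tagged records to a central store:

```python
# Illustrative sketch of a data shipper agent that gathers several data
# types from an edge node and forwards them in one batch. All names are
# hypothetical; a production agent would read real log files and metrics.

class DataShipperAgent:
    def __init__(self, node_id, collectors):
        self.node_id = node_id          # edge node this agent runs on
        self.collectors = collectors    # data type -> callable returning records

    def collect(self):
        """Run every configured collector and tag records with their type."""
        batch = []
        for data_type, collector in self.collectors.items():
            for record in collector():
                batch.append({"node": self.node_id,
                              "type": data_type,
                              "record": record})
        return batch

    def ship(self, sink):
        """Send the collected batch to the service provider platform."""
        batch = self.collect()
        sink.extend(batch)              # stand-in for an HTTP bulk request
        return len(batch)

# Example usage with toy collectors for two data types.
agent = DataShipperAgent(
    node_id="edge-110A",
    collectors={
        "log": lambda: ["service started", "request handled"],
        "metric": lambda: [{"cpu": 0.42}],
    },
)
central_store = []
shipped = agent.ship(central_store)
print(shipped)                          # 3 records shipped
```

Because a single agent holds a mapping of data types to collectors, adding a new type of data to monitor amounts to registering one more entry rather than installing another tool.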
Moreover, the systems and methods for configuring and managing data shipper agents can allow multiple data shipper agents to be configured by using a specifically configured GUI and/or specifically configured API calls. Specifically, the central management of data shipper agents can provide for automatic reconfiguration of one or more data shipper agents on one or more edge nodes based on the configuration of one data shipper agent. Once the configuration is modified for one data shipper agent running on one edge node, the resulting configuration changes can be used by the system for configuring and managing data shipper agents to automatically reconfigure data shipper agents running on other edge nodes. A method for configuring and managing data shipper agents may commence with receiving a list of one or more data shipper agents installed on one or more edge nodes associated with a user. The method may further include providing a GUI to enable the user to configure the one or more data shipper agents. Upon providing the GUI, selections of configuration parameters associated with at least one of the one or more data shipper agents may be received, via the GUI, from the user. Each of the configuration parameters may represent one or more tasks assigned to the at least one of the one or more data shipper agents. Upon receiving the configuration parameters, a configuration of the at least one of the one or more data shipper agents may be retrieved. The configuration may be reconfigurable through the GUI using an API. Based on the configuration parameters provided by the user, the configuration of the at least one of the one or more data shipper agents may be automatically reconfigured. Advantageously, the data shipper agent provides a unified way to add monitoring for logs, metrics, and other types of data to each edge node. A single data shipper agent makes it easier and faster to deploy monitoring across the entire infrastructure of a user. 
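The single-API reconfiguration flow described above — GUI selections arriving as configuration parameters, the stored configuration being retrieved and updated, and every agent bound to that configuration being affected — can be sketched as follows. This is an illustrative Python sketch under assumed names (`CentralAgentManager`, `register`, `reconfigure`), not the actual implementation:

```python
# Hypothetical sketch of central reconfiguration: one change made through
# the GUI updates the stored configuration and reaches every agent that
# shares it.

class CentralAgentManager:
    def __init__(self):
        self.configs = {}        # config name -> dict of parameters
        self.agents = {}         # agent id -> config name it is bound to

    def register(self, agent_id, config_name, params):
        self.configs.setdefault(config_name, dict(params))
        self.agents[agent_id] = config_name

    def reconfigure(self, config_name, new_params):
        """Single API call: merge new parameters, return affected agents."""
        config = self.configs[config_name]   # retrieve stored configuration
        config.update(new_params)            # apply the GUI selections
        return [a for a, c in self.agents.items() if c == config_name]

manager = CentralAgentManager()
manager.register("agent-112A", "default", {"collect": ["logs"]})
manager.register("agent-112N", "default", {"collect": ["logs"]})

# One change reconfigures both agents.
updated = manager.reconfigure("default", {"collect": ["logs", "metrics"]})
print(sorted(updated))    # ['agent-112A', 'agent-112N']
```

Keying agents by a shared configuration name is what lets a change made for one agent fan out automatically to agents on other edge nodes.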
A unified policy of the data shipper agent makes it easier for the user to add integrations of new data sources for the data shipper agent. The GUI may include a web-based GUI that enables adding and managing integration of services and platforms into data sources monitored by the data shipper agent. Referring now to the drawings,FIG.1is an example schematic diagram of an architecture100within which methods and systems for configuring and managing data shipper agents can be implemented. The architecture100may include a service provider platform102that includes a search engine service104, a data processing pipeline106, and a system200for configuring and managing data shipper agents, also referred to herein as a system200. In general, the service provider platform102is configured to provide one or more services such as search engine or data visualization services based on data collected by data shipper agents. The architecture100also includes edge nodes110A-110N. Each of edge nodes110A-110N can have one of data shipper agents112A-112N running on the edge node. In one example, edge node110A has a data shipper agent112A installed thereon, while edge node110N has a data shipper agent112N installed thereon. In some embodiments, the data shipper agent112A may be installed on the edge node110A and, in a further example embodiment, may be running on a server, such as a cloud server, and may communicate with the edge node110A via an API. In general, the data processing pipeline106ingests data from multiple sources, such as the one or more data shipper agents, simultaneously and transforms the collected data into a format that is utilizable by the system200for configuring and managing data shipper agents and/or the search engine service104. Additionally, the components of the architecture100can communicate through an example network118that can include any public and/or private network. 
In an example embodiment, the network118may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a corporate data network, a data center network, a home data network, a Personal Area Network, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network, a virtual private network, a storage area network, a frame relay connection, an Advanced Intelligent Network connection, a synchronous optical network connection, a digital T1, T3, E1 or E3 line, Digital Data Service connection, Digital Subscriber Line connection, an Ethernet connection, an Integrated Services Digital Network line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an Asynchronous Transfer Mode connection, or a Fiber Distributed Data Interface or Copper Distributed Data Interface connection. Furthermore, communications may also include links to any of a variety of wireless networks, including Wireless Application Protocol, General Packet Radio Service, Global System for Mobile Communication, Code Division Multiple Access or Time Division Multiple Access, cellular phone networks, Global Positioning System, cellular digital packet data, Research in Motion, Limited duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network118can further include or interface with any one or more of a Recommended Standard 232 (RS-232) serial connection, an IEEE-1394 (FireWire) connection, a Fiber Channel connection, an IrDA (infrared) port, a Small Computer Systems Interface connection, a Universal Serial Bus (USB) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking. For context, each data shipper agent can be configured to collect a number of data types from an edge node. 
The data can include log file data, metric data, trace data, network data, event log data, audit data, uptime monitoring data, serverless shipper data, synthetic data, security data, and the like. Generally, the system200is configured to provide a means for central configuration and management of one or more data shipper agents. In various embodiments, the configuration and management are accomplished through a GUI230enabling the configuring and managing of data shipper agents. A user111associated with one or more of edge nodes110A-110N can use the GUI230for configuring and managing one or more data shipper agents running on the edge nodes110A-110N. FIG.2is a block diagram illustrating a system200for configuring and managing data shipper agents, according to an example embodiment. The system200may include a central agent management unit210, a memory220in communication with the central agent management unit210, and a GUI230. In an example embodiment, the operations performed by the central agent management unit210and the GUI230may be performed by a processor and the memory220for storing instructions executable by the processor. Examples of the one or more processors are shown inFIG.8as one or more processors5. The operations performed by each of the central agent management unit210, the memory220, and the GUI230of the system200are described in more detail below with reference toFIG.3. FIG.3shows a process flow diagram of a method300for configuring and managing data shipper agents, according to an example embodiment. In some embodiments, the operations may be combined, performed in parallel, or performed in a different order. The method300may also include additional or fewer operations than those illustrated. 
The method300may be performed by processing logic that may include hardware (e.g., decision making logic, dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. The method300may commence with receiving, by a central agent management unit, a list of one or more data shipper agents at operation310. The one or more data shipper agents may be installed on one or more edge nodes associated with a user. Each data shipper agent of the one or more data shipper agents may be installed on one of the one or more edge nodes. At operation320, the central agent management unit may provide a GUI. The GUI may enable the user to configure the one or more data shipper agents. The method300may continue with receiving, by the central agent management unit, via the GUI, selections of configuration parameters associated with at least one of the one or more data shipper agents from the user at operation330. The configuration parameters may represent one or more tasks assigned to the at least one of the one or more data shipper agents. In an example embodiment, receiving of the selections of the configuration parameters may include receiving selections of one or more policies associated with at least one of the one or more data shipper agents and, for each of the one or more policies, receiving the selections of the configuration parameters. In an example embodiment, the at least one of the one or more data shipper agents may be configured to collect data from the one of the one or more edge nodes and provide the data to a service provider platform. Using the data collected by the at least one of the one or more data shipper agents, the service provider platform may provide one or more services with respect to the one or more edge nodes. 
In an example embodiment, the data collected by the at least one of the one or more data shipper agents include at least one of log file data, metric data, trace data, network data, event log data, audit data, uptime monitoring data, serverless shipper data, synthetic data, security data, custom logs, and so forth. At operation340, the central agent management unit may receive a configuration of the at least one of the one or more data shipper agents. The configuration may be reconfigurable through the GUI using a configuration API. Based on the configuration parameters, the central agent management unit may automatically reconfigure the configuration of the at least one of the one or more data shipper agents at operation350. The data to be collected by the at least one of the one or more data shipper agents may be set in the configuration of the at least one of the one or more data shipper agents. In an example embodiment, the automatic reconfiguration of the configuration of the at least one of the one or more data shipper agents includes setting types of the data to be collected by the at least one of the one or more data shipper agents. In an example embodiment, the method300may further include monitoring a status of the one or more data shipper agents. The status may include one of the following: an enabled status, a disabled status, an error in an operation of the one or more data shipper agents, a version of the one or more data shipper agents, a last activity time, and so forth. Based on the status, a notification may be provided to the user. In an example embodiment, providing of the notification may include prompting the user to change the configuration parameters associated with the at least one of the one or more data shipper agents. In an example embodiment, the central agent management unit may analyze the collected data. Upon determining that there is an issue related to the data, the central agent management unit may notify the user. 
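The status monitoring described above — checking each agent for enabled, disabled, error, or offline states and notifying the user when something needs attention — can be sketched as a small rule: any status outside a healthy set produces a notification prompting the user to revisit that agent's configuration. All names below are hypothetical:

```python
# Illustrative sketch of fleet-wide status monitoring. The status strings
# and the notification text are assumptions, not the disclosure's values.

PROBLEM_STATUSES = {"disabled", "error", "offline"}

def check_fleet(statuses):
    """statuses: agent id -> reported status string."""
    notifications = []
    for agent_id, status in statuses.items():
        if status in PROBLEM_STATUSES:
            notifications.append(
                f"{agent_id}: status '{status}' - review configuration parameters")
    return notifications

alerts = check_fleet({
    "agent-112A": "enabled",
    "agent-112B": "error",
    "agent-112N": "offline",
})
print(len(alerts))   # 2
```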
For example, the central agent management unit may send a notification to inform the user of malware in one of the programs running on the edge node, a security issue related to a service running on the edge node, or performance issues on the edge node, along with reasons why the performance has deteriorated. In further example embodiments, the central agent management unit may provide a health status of an operating system of the edge node or a health status of any program, software piece, service, or process running on the edge node. In some embodiments, the user may address the issues mentioned in the notifications received from the central agent management unit. For example, the user may use the GUI to block malware, stop a suspicious process, or run a deep security check. In an example embodiment, the central agent management unit may provide recommendations to the user as to what data to collect. The central agent management unit may generate the recommendations based on the analysis of the collected data. The recommendations may include advice to protect a system associated with an edge node, collect more granular data, collect data with respect to further processes or services running on the edge node, and so forth. Therefore, the central agent management unit may provide the user with insight into all processes running on the edge node and provide recommendations concerning system performance and security protection. The systems and methods of the present disclosure may relieve the user from the burden of determining which software tool to run in order to collect specific types of data, as well as eliminate the need to configure the software tools by hand by configuring them automatically. Additionally, the systems and methods of the present disclosure allow a less technical user to be in control of all processes running on the edge nodes of the user. 
Specifically, in conventional data monitoring systems, a user typically needs to understand how to run software programs on the backend of an edge node. With the systems and methods of the present disclosure, a user may control the processes running on the edge nodes via a GUI by simply selecting edge nodes, data shipper agents, and configuration parameters of the data shipper agents. In an example embodiment, the GUI may provide a dashboard showing a list of all metrics collected by the data shipper agents from edge nodes for a period of time. The dashboard may further show logs and other types of data collected by the data shipper agents. FIG.4is a block diagram400showing configuring and managing data shipper agents by a user using a GUI of the system for configuring and managing data shipper agents, according to an example embodiment. A GUI230may be provided to a user111associated with an edge node110A. A data shipper agent112A may be running on the edge node110A. The data shipper agent112A may discover and collect data associated with the edge node110A and report the collected data to a central agent management unit210. The central agent management unit210of the system200may act as a single bi-directional point of policy control, administration, interactive queries, and security protections. The user111may use the GUI230to monitor all processes running on the edge node110A and change the configuration of the data shipper agent112A as needed according to the types of data the user needs to be collected and tasks the user needs to perform based on the data. The user111may select data sources430from which the data shipper agent112A needs to collect data. The data sources430may include services435of a service provider platform running on the edge node110A, MySQL440running on the edge node110A, and other services, applications, and processes445running on the edge node110A. 
In an example embodiment, the GUI230may include an uptime user interface (UI)415, an application performance monitoring UI420, and other UIs425. The uptime UI415may be used to monitor a status of edge nodes via HyperText Transfer Protocol (HTTP)/HyperText Transfer Protocol Secure (HTTPS), Transmission Control Protocol (TCP), and Internet Control Message Protocol (ICMP) and explore the status over time, drill into specific monitors, and view a high-level snapshot of a network environment at a selected point in time. The application performance monitoring UI420may be used to automatically collect in-depth performance metrics and errors from applications running on edge nodes. The central agent management unit210may receive a list of data shipper agents installed on edge nodes associated with the user111. In this embodiment, one data shipper agent112A is running on one edge node110A. The central agent management unit210may further store a configuration405of the data shipper agent112A. The user111may select, via the GUI230, configuration parameters410associated with the data shipper agent112A. The configuration parameters410may represent one or more tasks assigned to the data shipper agent112A to be performed with respect to the edge node110A. The central agent management unit210may receive and store the configuration parameters410. In an example embodiment, the central agent management unit210may monitor a status of the data shipper agent112A and recommend one or more further data sources to user111from which the data shipper agent112A needs to collect data. In an example embodiment, the recommendation may be further selected by the central agent management unit210based on the data collected by the data shipper agent112A. Upon selecting the data sources by the user in response to the recommendation, the central agent management unit210may automatically apply changes to the configuration405of the data shipper agent112A based on predetermined rules. 
The central agent management unit210may have an API402for reconfiguring the configuration405of the data shipper agent112A. In general, the central agent management unit210acts as a bi-directional point of policy control, administration, interactive queries, and security protection between the GUI230provided to the user111and the API402enabling the reconfiguration of the data shipper agents. The central agent management unit210provides a secure communication channel between the data shipper agent112A and the API402. The central agent management unit210may further send commands450to the data shipper agent112A or to tasks run by the data shipper agent112A. The command450may include a command to update the tasks executed by the data shipper agent112A. Thus, the user111can configure the data shipper agent via the GUI230, manage the configuration changes using the central agent management unit210, deploy specific configurations to a plurality of data shipper agents, and use the GUI230to investigate any issues during the deployment of the configuration. In an example embodiment, the central agent management unit210may provide, via the GUI230, a status of data shipper agents to the user111, such as an offline status, a successful deployment status, a deployment failure, and so forth. The central agent management unit210may also display a list of tasks currently run by the data shipper agent112A on the edge node110A, schedule deployment of one or more further data shipper agents based on a time schedule, and enable the user111to apply the same configuration to a subset of data shipper agents. In a further example embodiment, the central agent management unit210allows the user111to read the logs collected by the data shipper agent112A via the GUI230, display metrics information associated with the edge node110A running the data shipper agent112A, and provide a link between the running tasks/processes on the data shipper agent112A and the collected monitoring data. 
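The deployment statuses mentioned above (offline, successful deployment, deployment failure) and the ability to apply one configuration to a chosen subset of agents can be sketched as follows. The function names and the simulated transport are hypothetical stand-ins, not the actual API402or commands450:

```python
# Illustrative sketch: push one configuration to a subset of agents and
# record a per-agent deployment status. The simulated failure for one
# agent is an assumption used only to exercise the status handling.

def deploy(config, agent_ids, send):
    """send(agent_id, config) -> True on success; returns a status map."""
    statuses = {}
    for agent_id in agent_ids:
        try:
            ok = send(agent_id, config)
            statuses[agent_id] = "deployed" if ok else "failed"
        except ConnectionError:
            statuses[agent_id] = "offline"
    return statuses

def fake_send(agent_id, config):
    if agent_id == "agent-3":
        raise ConnectionError("unreachable")   # simulate an offline node
    return True

result = deploy({"collect": ["logs"]}, ["agent-1", "agent-2", "agent-3"], fake_send)
print(result)
```

Recording a status per agent, rather than a single pass/fail for the whole rollout, is what lets the GUI surface exactly which edge nodes need investigation after a deployment.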
In an example embodiment, the central agent management unit210may create alerts for the data shipper agent112A based on the updated configuration. In an example embodiment, the central agent management unit210may store a history of the changes for each configuration of data shipper agents and record audit logs for changes and users implementing the changes. The central agent management unit210further allows configuring a role-based access control of actions, data, and targets for the data shipper agents. The central agent management unit210can manage when and how a configuration is pushed to the data shipper agent. In an example embodiment, the central agent management unit210can invoke index lifecycle management via the GUI230for creating policies. Index lifecycle management may be used to automatically manage policies according to performance, resiliency, and retention requirements. In an example embodiment, the central agent management unit210may allow securely managing credentials and Secure Sockets Layer certificates. The communication between the data shipper agent and a software piece (program, process, or service) that needs to be monitored is provided by using an HTTP server. The data shipper agent sends commands to the HTTP server, and the HTTP server establishes a communication channel with the software piece and sends the commands to the software piece. Similarly, the software piece sends the data to the HTTP server, and the HTTP server sends the data received from the software piece to the data shipper agent. In an example embodiment, the edge node on which the software piece is running may establish a connection with a remote HTTP server by using a handshake procedure. If the handshake procedure is not sufficient for the HTTP server to access software pieces of the edge node, a security certificate may be used for communications between the edge node and the HTTP server. 
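The relay pattern described above — the agent talks only to an HTTP server, which forwards commands to the monitored software piece and passes its data back — can be sketched in-process. A real deployment would use actual HTTP requests with TLS and the certificate handling discussed here; the classes below are purely illustrative:

```python
# Illustrative in-process sketch of the agent <-> HTTP server <-> software
# relay. All classes and the sample command/metrics are hypothetical.

class SoftwarePiece:
    """Stand-in for a monitored program, process, or service."""
    def handle(self, command):
        if command == "report-metrics":
            return {"cpu": 0.17, "mem": 0.62}
        return {}

class RelayServer:
    """Stand-in for the HTTP server between the agent and the software."""
    def __init__(self, software):
        self.software = software

    def forward(self, command):
        # Forward the command and return whatever data the software sends.
        return self.software.handle(command)

class Agent:
    def __init__(self, relay):
        self.relay = relay

    def query(self, command):
        # The agent never touches the software piece directly.
        return self.relay.forward(command)

agent = Agent(RelayServer(SoftwarePiece()))
data = agent.query("report-metrics")
print(data)   # {'cpu': 0.17, 'mem': 0.62}
```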
The user of the edge node may need to provide the security certificate to the HTTP server so that the HTTP server can use this security certificate for communications with the edge node and for accessing the software piece running on the edge node. FIG.5is an example GUI500of the system for configuring and managing data shipper agents, according to an example embodiment. The GUI500may provide, to a user, information concerning the health of data shipper agents running on edge nodes of the user. InFIG.5, a group of data shipper agents is shown as fleet505. All data shipper agents can be running the same version in the fleet, and each data shipper agent can be a member of a single fleet only. The user may utilize the GUI to manage and update the configuration of one or a group of data shipper agents of any size (e.g., hundreds or thousands of data shipper agents). Upon selection of fleet505on the GUI500, a list of data shipper agents510running on the edge nodes532may be shown to the user. One of data shipper agents510may be running on each of the edge nodes532. The GUI500may also show a total number515of the data shipper agents510running on the edge node, a total number520of data shipper agents510having an online status, a total number525of data shipper agents510having an offline status, a total number530of data shipper agents510having an error status, and the like. The GUI500may also show, for each data shipper agent510, an edge node532, a status535, a configuration540, a version545, and last activity time550. The user may also search for a data shipper agent using a search line565. When the user wants to reconfigure or manage one or more of the data shipper agents510, the user may select an action button560associated with the data shipper agent that the user wants to reconfigure or manage. In an example embodiment, the configuration language used for generating configurations of data shipper agents can be unified. 
Furthermore, the data storage format for storing data collected by multiple data shipper agents from multiple edge nodes can be unified. Due to the unified format of the configuration and data, the configuration and data stored in a storage may be easily retrievable from the storage for further processing or presenting to the user. FIG.6is an example GUI600of the system for configuring and managing data shipper agents, according to an example embodiment. The GUI600may provide a list of agent policies605to the user. The user may utilize the agent policies605to manage data shipper agents and data to be collected by the data shipper agents. The GUI600can be used to show a name610of an agent policy, last update time615, a total number620of data shipper agents to which the agent policy is applied, a total number625of data sources integrated for each data shipper agent, and an action630to be performed with respect to the agent policy. The user may also search for any needed agent policy using a search line635. FIG.7is an example GUI700of the system for configuring and managing data shipper agents, according to an example embodiment. The GUI700may be opened upon selection of an action button560related to a particular data shipper agent shown inFIG.5or upon selection of an action button630related to a particular agent policy shown inFIG.6. At step 1 (702), the GUI700can enable the user to search for and select a configuration the user wants to change. The default configuration705can be applied to multiple data shipper agents running on different edge nodes associated with the user to facilitate management of the configuration at scale. At step 2 (704), the user may select a data source710from which the data shipper agent needs to collect data. An example data source is Amazon Web Services (AWS) running on the edge node. At step 3 (712), the user may change the settings of the data source by providing a name715and description720of the data source. 
The user may further select configuration parameters associated with the data shipper agent, for example, by selecting data that need to be collected by the data shipper agent. For example, the user may opt to collect logs from AWS instances by selecting a button725(checkbox) and/or to collect metrics from AWS instances by selecting a button730(checkbox). The metrics of AWS instances may include core metrics735, CPU metrics740, entropy metrics745, and so forth. The user may also set a period for collecting the metrics, e.g., 10 seconds. After the user selects a save button750, the GUI700provides the selections made by the user to the central agent management unit. Upon receipt of the configuration parameters corresponding to the selections made by the user, the central agent management unit may automatically change an agent policy of the data shipper agent by reconfiguring the configuration of the data shipper agent in the agent policy. The data shipper agent may check in for the latest updates of the agent policy on a regular basis. In an example embodiment, any number of data shipper agents may have the same configuration, which allows the user to scale the configuration up to data shipper agents on thousands of edge nodes. When the user makes a change to the configuration of the data shipper agent, all other data shipper agents that run on other edge nodes and have the same configuration receive the update to the configuration. Therefore, the user no longer needs to distribute configuration updates manually to each edge node. Thus, configuring and managing data shipper agents using the systems and methods of the present disclosure provides a user with quick visibility into a status of a plurality of data shipper agents running on different edge nodes and enables the user to update agent policies and configurations of the data shipper agents remotely and manage data shipper agents at scale (e.g., manage thousands of data shipper agents). 
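The GUI selections described above — AWS as the data source, log collection enabled, metric collection enabled for selected metric sets, and a 10-second period — could map onto an agent policy along the following lines, which is then pushed to every agent bound to it. The field names are illustrative guesses, not the disclosure's actual policy schema:

```python
# Hedged sketch of an agent policy built from the GUI selections, plus the
# fan-out step that finds every agent bound to that policy. All field
# names and agent ids are hypothetical.

policy = {
    "name": "aws-monitoring",
    "data_source": "aws",
    "inputs": {
        "logs": {"enabled": True},                       # checkbox 725
        "metrics": {                                     # checkbox 730
            "enabled": True,
            "metricsets": ["core", "cpu", "entropy"],    # 735, 740, 745
            "period": "10s",
        },
    },
}

def apply_policy(policy, fleet):
    """Return every agent bound to the policy (by policy name)."""
    updated = []
    for agent_id, bound_policy in fleet.items():
        if bound_policy == policy["name"]:
            updated.append(agent_id)
    return updated

fleet = {"agent-1": "aws-monitoring", "agent-2": "aws-monitoring", "agent-3": "other"}
print(apply_policy(policy, fleet))   # ['agent-1', 'agent-2']
```

Because agents check in for policy updates rather than being configured one by one, a single saved change reaches every bound agent on the next check-in.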
Moreover, due to the deep visibility to a status of data shipper agents provided by the systems and methods of the present disclosure, the user is able to resolve issues as soon as it is discovered that data shipper agents are not running correctly. FIG.8is a diagrammatic representation of an example machine in the form of a computer system800, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be, for example, a base station, a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system1includes a processor or multiple processors5(e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory10and static memory15, which communicate with each other via a bus20. The computer system1may further include a video display35(e.g., a liquid crystal display (LCD)). 
The computer system1may also include an alpha-numeric input device(s)30(e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit37(also referred to as disk drive unit), a signal generation device40(e.g., a speaker), and a network interface device45. The computer system1may further include a data encryption module (not shown) to encrypt data. The drive unit37includes a computer or machine-readable medium50on which is stored one or more sets of instructions and data structures (e.g., instructions55) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions55may also reside, completely or at least partially, within the main memory10and/or within static memory15and/or within the processors5during execution thereof by the computer system1. The main memory10, static memory15, and the processors5may also constitute machine-readable media. The instructions55may further be transmitted or received over a network via the network interface device45utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). While the machine-readable medium50is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. 
The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware. Not all components of the computer system1are required and thus portions of the computer system1can be removed if not needed, such as Input/Output (I/O) devices (e.g., input device(s)30). One skilled in the art will recognize that the Internet service may be configured to provide Internet access to one or more computing devices that are coupled to the Internet service, and that the computing devices may include one or more processors, buses, memory devices, display devices, input/output devices, and the like. Furthermore, those skilled in the art may appreciate that the Internet service may be coupled to one or more databases, repositories, servers, and the like, which may be utilized in order to implement any of the embodiments of the disclosure as described herein. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present technology in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present technology. 
Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the present technology for various embodiments with various modifications as are suited to the particular use contemplated. Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, section, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present invention. 
However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) at various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Furthermore, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “on-demand”) may be occasionally interchangeably used with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) may be interchangeably used with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be interchangeably used with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other. Also, some embodiments may be described in terms of “means for” performing a task or set of tasks. It will be understood that a “means for” may be expressed herein in terms of a structure, such as a processor, a memory, an I/O device such as a camera, or combinations thereof. 
Alternatively, the “means for” may include an algorithm that is descriptive of a function or method step, while in yet other embodiments the “means for” is expressed in terms of a mathematical formula, prose, or as a flow chart or signal diagram. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is noted that the terms “coupled,” “connected”, “connecting,” “electrically connected,” etc., are used interchangeably herein to generally refer to the condition of being electrically/electronically connected. Similarly, a first entity is considered to be in “communication” with a second entity (or entities) when the first entity electrically sends and/or receives (whether through wireline or wireless means) information signals (whether containing data information or non-data/control information) to the second entity regardless of the type (analog or digital) of those signals. It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purpose only, and are not drawn to scale. If any disclosures are incorporated herein by reference and such incorporated disclosures conflict in part and/or in whole with the present disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, the present disclosure controls. 
If such incorporated disclosures conflict in part and/or in whole with one another, then to the extent of conflict, the later-dated disclosure controls. Thus, various embodiments of methods and systems for configuring and managing data shipper agents have been described. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. There are many alternative ways of implementing the present technology. The disclosed examples are illustrative and not restrictive. | 42,728 |
11943296 | DETAILED DESCRIPTION Described herein are systems and methods for workload-based cache compression in a distributed storage system. The storage system can store data in persistent storage devices such as magnetic or solid state disks. Client computer systems can communicate with the storage system, e.g., via a computer network, to store and retrieve data in the storage system. The persistent storage devices used by the storage system can be relatively slow to access in comparison to memory storage such as Random Access Memory (RAM). An in-memory cache can be used to speed up access to data stored on slower persistent storage devices by temporarily storing portions of the data in memory storage that can be accessed more quickly than the persistent storage. Thus, for example, data that is in the cache can be read more quickly than data that is not in the cache. However, the storage capacity of the in-memory cache is ordinarily small compared to the capacity of the persistent storage devices. Further, determining whether to store particular data items in the cache is difficult because future access requests are often unpredictable. Storing more data in the cache can increase the likelihood that requested data is in the cache, so it is desirable to increase the size or capacity of the cache. Cache compression can be used to increase the amount of data that can be stored in a cache by compressing the contents of the cache. For example, data can be compressed upon being stored in the cache and decompressed upon being retrieved. The compression and decompression operations are computationally-intensive and can increase access times, however. Thus, there is a trade-off between reduction in the size of cached data and increased overhead such as increased access time and processor usage caused by compression. 
It is desirable to compress cached data for which the benefit of reduction in the size of cached data outweighs the cost of increased access latency over a period of time. However, this tradeoff can be difficult to evaluate at the time cached data is accessed. Cache compression can be beneficial for workloads that access memory sequentially or with high locality of reference (e.g., data items in a relatively small region are accessed over a period of time). In the case of sequential memory reads, a region of cache memory can be decompressed, and the decompressed memory can be accessed by a sequence of memory read operations, so the decompression overhead can be amortized across the read operations. For example, an application that accesses relatively large chunks of data and uses a substantial amount of the decompressed data can be suitable for use with a compressed cache. Cache compression can be inefficient, however, for workloads that perform numerous small reads in different regions of memory, in a random-access manner. For random-access workloads, compressing or decompressing a cached data item is unlikely to be beneficial because the data item is unlikely to be accessed while the data item is still in the cache. For example, an application that performs random access operations on relatively small chunks of data can be unsuitable for use with a compressed cache. However, in existing storage systems, applications do not ordinarily inform the storage system of their expected access patterns, and determining whether to use cache compression in existing storage systems can be difficult. Aspects of the present disclosure address the above and other deficiencies by providing a cache manager that can, upon identifying an application that has issued a data access request, determine a cache classification that indicates whether the application's data access patterns are suitable for cache compression. 
The cache manager can then direct the data access request to compressed cache memory if the cache classification indicates that the application's data access patterns are suitable for cache compression. “Compressed cache memory” herein shall refer to memory, such as Random Access Memory (RAM), in which data is stored in a compressed form. The cache manager can identify the application based on a virtualized execution image, such as a container image, associated with the application. The cache manager can use a classifier to generate the cache classification based on past access patterns or other information associated with the application. An access pattern that is suitable for cache compression can be, for example, a sequence of memory reads of memory locations having sequential addresses. An access pattern that is not suitable for cache compression can be, for example, a sequence of memory reads of memory locations having addresses that appear random, e.g., are not located within a region of memory of a threshold size. If the cache classification indicates that the application's data access patterns are not suitable for cache compression, then the cache manager can direct the memory access request to non-compressed cache memory. The cache classification can be, for example, a Boolean value that indicates whether cache compression is to be used. Alternatively, the cache classification can be a classification rule that includes one or more cache classification criteria, such as a threshold data size, that the cache manager can evaluate for each memory access request. If the cache classification criteria are satisfied, then the cache manager can direct the memory access request to compressed cache memory. Otherwise, the cache manager can direct the memory access request to non-compressed cache memory. Further, the cache classification can include or be associated with a probability that the application's data access patterns are suitable for cache compression. 
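The two forms of cache classification described above — a fixed Boolean value, or a classification rule whose criteria are evaluated for each memory access request — can be sketched as follows. The threshold-data-size criterion and all names are illustrative assumptions, not taken from the present disclosure.

```python
# Hedged sketch of a cache classification: either a fixed Boolean, or a
# rule with criteria (here, a hypothetical threshold data size) evaluated
# per memory access request.

class CacheClassification:
    def __init__(self, use_compression=None, min_request_bytes=None):
        self.use_compression = use_compression      # fixed Boolean, or None
        self.min_request_bytes = min_request_bytes  # rule criterion, or None

    def applies_to(self, request_bytes):
        # A fixed Boolean short-circuits; otherwise evaluate the criteria
        # against attributes of the request (here, its data size).
        if self.use_compression is not None:
            return self.use_compression
        return request_bytes >= self.min_request_bytes

# Example rule: large reads go to the compressed cache, small reads do not.
rule = CacheClassification(min_request_bytes=64 * 1024)
```

A classification carrying a probability (e.g., a percentage) could extend this sketch by comparing the probability to a cutoff instead of returning a fixed Boolean.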
The probability can be, e.g., a percentage, and can be generated by the classifier. The classifier can use a classification table to map applications or other application-related entities to cache classifications. The classification table can be generated from information such as access patterns or configuration information specifying whether a particular application is to use cache compression. Machine learning techniques can be used to learn to generate the classification for a particular application from access patterns of the application, e.g., whether read operations performed by the application are usually of sequential memory addresses. The application can execute in a virtualized execution environment. The virtualized execution environment can include one or more virtual machines on which the application can execute. Alternatively or additionally, the virtualized execution environment can include one or more containers. The application can execute in one or more containers, and the containers can be controlled by a container manager. Each container can handle infrastructure-related tasks such as deployment of the application for operation. The container can include the application and the application's dependencies, which can include libraries and configuration information. A containerized cluster can be a cluster of physical or virtual computing devices that run containerized applications. The containerized cluster can include entities that represent resources in the cluster, such as virtual machines, nodes representing physical or virtual computing devices, persistent volumes representing physical or virtual storage devices, and networks. A virtualized execution environment can be created from a virtualized execution image. A virtualized execution image can be, e.g., a virtual machine image and/or a container image. For example, a virtual machine can be created from a virtual machine image, which can be a file that contains a virtual disk. 
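One heuristic such a classifier might apply to past access patterns — counting how often a read begins exactly where the previous read ended — can be sketched as follows. The heuristic and its threshold are assumptions for illustration, not taken from the present disclosure.

```python
# Illustrative classifier: decide whether an application's recent reads
# look sequential (suitable for cache compression) or random. Each access
# is a (start_address, size_in_bytes) pair.

def classify_accesses(accesses, threshold=0.8):
    """Return True (suitable for compression) if most reads continue
    directly from where the previous read ended."""
    if len(accesses) < 2:
        return False
    sequential = sum(
        1 for (addr, size), (next_addr, _) in zip(accesses, accesses[1:])
        if addr + size == next_addr
    )
    return sequential / (len(accesses) - 1) >= threshold

# A streaming workload reading 8 KiB blocks back to back:
sequential_trace = [(i * 8192, 8192) for i in range(16)]
# A random-access workload touching scattered 4 KiB blocks:
random_trace = [(97, 4096), (1_000_003, 4096), (52_121, 4096), (9, 4096)]
```

A machine learning approach as mentioned above could replace this hand-written rule, learning the mapping from observed traces to classifications instead.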
The virtual disk can have an installed bootable operating system. As another example, a container can be created from a container image. The application can be identified from metadata included in the virtualized execution image and/or the virtualized execution environment in which the application is running. A virtualized execution manager, such as a cluster manager or container manager, can deploy the virtualized execution image onto a client computer system for subsequent execution. Upon being deployed, the virtualized execution image can be loaded into memory of a client computing device and executed. The cache classification can be generated for a particular application at the application's deployment time by a component of the cache manager (e.g., at a client computing device), so that the classification operation would not have to be performed for each read request. If the cache classification is a rule having one or more associated cache classification criteria, then the criteria can be evaluated at the client computing device for each read request, e.g., using attributes of the request, such as data size, as input to the criteria evaluation. The client can then add a cache compression indicator determined from the cache classification, such as a tag, to the read request. The tag can indicate whether to use compressed cache memory for the read request. The client computing device can send the read request to a storage server, and the storage server can use the tag to determine which type of cache memory, e.g., compressed or non-compressed, to access in response to the read request. The storage server can receive the read request and attempt to retrieve the requested data from the specified type of cache memory, e.g., at a memory address specified in the read request, and send the requested data to the client if the data is present in the specified type of cache memory. 
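The client-side tagging step described above can be sketched as follows; the request fields and tag values are illustrative assumptions, not a wire format from the present disclosure.

```python
# Sketch of client-side tagging: the classification is produced once at
# deployment time, and each read request is tagged with a cache compression
# indicator before being sent to the storage server. Field names are
# hypothetical.

def tag_read_request(request, use_compressed_cache):
    tagged = dict(request)  # leave the caller's request untouched
    # The tag tells the server which cache type to use for this request.
    tagged["cache"] = "compressed" if use_compressed_cache else "non-compressed"
    return tagged

# Classification generated at deployment time for this application image,
# e.g., because the classifier judged its reads to be sequential:
deployment_classification = True

request = {"address": 0x4000, "size": 65536}
tagged = tag_read_request(request, deployment_classification)
```

Because the Boolean was fixed at deployment time, the per-request cost here is just copying the tag; a rule-based classification would instead evaluate its criteria against `request["size"]` at this point.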
If the data is not present in the specified type of cache memory, the storage server can retrieve the requested data from longer-term storage, e.g., disk storage, and send the data to the client. The storage server can additionally determine whether the data is present in each type of cache memory, e.g., compressed and non-compressed, since the application can store data in either type of cache memory. In one example, the application may have multiple different threads of execution, each having a different access pattern. One application thread can perform reads using 8 kilobyte blocks, while another application thread, such as a background thread, can read data in 1 megabyte blocks. Depending on which thread reads a particular block of data, the data can be in either the compressed cache or the non-compressed cache. For such applications, the storage server can attempt to retrieve the requested data from both the compressed cache and the non-compressed cache. In one example, the storage server can attempt to retrieve the requested data from the specified type of cache memory (e.g., compressed), and, if the requested data is not present in the specified type of cache memory, from the other type of cache memory (e.g., non-compressed). In another example, the storage server can attempt to retrieve the requested data from both types of cache memory (e.g., compressed and non-compressed) in parallel. If the data is not present in either type of cache memory, the storage server can retrieve the requested data from longer-term storage. The storage server can then send the requested data to the client. Alternatively, classification of read requests received from the application can be performed by the storage server, in which case the client would not have to perform the classification and tagging of read requests described above. The storage server can use a storage entity specified in the request, such as a data entity identifier, as input to the classifier. 
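The server-side lookup order described above — try the cache type named by the request's tag, fall back to the other cache type, and finally fall back to longer-term storage — can be sketched as follows, with dictionaries standing in for the two caches and the disk.

```python
# Hedged sketch of the server-side read path. An application's threads may
# have placed a block in either cache type, so after a miss in the tagged
# cache the other cache is tried before longer-term storage. All structures
# are illustrative stand-ins.

def serve_read(key, tag, compressed_cache, non_compressed_cache, disk):
    first, second = (
        (compressed_cache, non_compressed_cache)
        if tag == "compressed"
        else (non_compressed_cache, compressed_cache)
    )
    for cache in (first, second):
        if key in cache:
            return cache[key]
    # Cache miss in both types: read from longer-term (disk) storage.
    return disk[key]

compressed = {"block-a": b"AAAA"}
non_compressed = {"block-b": b"BBBB"}
disk = {"block-a": b"AAAA", "block-b": b"BBBB", "block-c": b"CCCC"}
```

In the parallel variant mentioned above, both cache lookups would be issued concurrently instead of sequentially; the fallback to disk is the same.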
The data entity identifier can identify the storage which is accessed by an application, and thus can be used to identify an application. The storage server can generate a cache classification using, for example, a server-side classification mapping table that maps data entities to cache classifications. The server-side classification mapping table can be generated similarly to how the client-side classification mapping table described above is generated. If the classification includes cache classification criteria, the cache manager can evaluate the criteria at the storage server using attributes of the read request as input. The storage server can use the classification to determine whether to access compressed or non-compressed memory for the read request. The storage server can attempt to read the data from the type of cache memory specified by the classification, and send the data to the client if the data is present in the specified type of cache memory. In some embodiments, if the data is not found in the type of cache memory specified by the classification, the storage server can attempt to read the data from the other type of cache memory. The storage server can read data from the other cache type if, for example, the application has multiple threads, each having a different access pattern. Depending on which thread reads a particular block of data, the data can be in either the compressed cache or the non-compressed cache. In another example, the storage server can read data from each type of cache memory in parallel. If the data is not present in the type (or types) of cache memory from which reads are attempted, the storage server can retrieve the requested data from long-term storage and send the data to the client. The systems and methods described herein include technical improvements to data storage technology. 
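A server-side classification mapping table as described above can be sketched as follows; the entity identifiers and table contents are purely illustrative assumptions.

```python
# Illustrative server-side classification: the server maps the data entity
# identifier named in the request to a cache classification, so the client
# does not have to classify and tag requests itself. Entries here are
# hypothetical examples.

server_classification_table = {
    # data entity identifier -> cache type its application's reads suit
    "volume-analytics-01": "compressed",      # sequential scans
    "volume-kvstore-07": "non-compressed",    # small random lookups
}

def classify_on_server(request, default="non-compressed"):
    # Unknown entities default to the non-compressed cache, avoiding
    # compression overhead for workloads with no recorded access pattern.
    return server_classification_table.get(request["entity"], default)
```

As described above, such a table can be populated the same way as the client-side table, e.g., from observed access patterns or configuration information.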
In particular, aspects of the present disclosure may increase the efficiency of memory access operations by accessing compressed cache memory if a request is from an application having an access pattern for which cache compression is suitable. Thus, the cache compression is used if the workload has a pattern that is likely to benefit from cache compression, such as sequential memory accesses. Otherwise, if the workload is not likely to benefit from cache compression, then non-compressed cache memory can be used, and the overhead of cache compression is not incurred unnecessarily. Since the cache manager can select the appropriate type of memory cache, e.g., compressed or non-compressed, using a classification of the application, there is no need to modify the application to provide an access pattern classification to the cache manager. The access pattern classification can be determined by the cache manager at the client (or server) so that the appropriate type of cache memory (compressed or non-compressed) can be selected for each application. Since the classifier can learn which type of cache memory is suitable for each application from past accesses, the cache manager can generate an accurate classification for each application. Using the classification appropriate for each application increases overall storage system performance, since cache compression is used for applications that benefit from it, such as applications that access data sequentially. Further, cache compression is not used for applications that are not expected to benefit from it. Thus, the disclosed technique of cache compression management can reduce the amount of storage space used to cache data when the cost of the increased latency incurred by compression operations is likely to be outweighed by the benefit of reduced data access latency that occurs when requested data is available in the cache. 
Various aspects of the above referenced methods and systems are described in detail herein below by way of examples, rather than by way of limitation. The examples provided below discuss management of cache compression in a container environment. Applications are described as executing in and being identified by their association with container images or data entities of the container environment. In other examples, applications can execute in other environments and be identified by other information, such as application names, process names, executable file names, or the like. Further, data is described as being stored at a storage server that receives data access requests from client computing devices. In other examples, data can be stored in any suitable storage system. Although the cache is described as being a memory and long-term storage is described as being a disk, the cache and long-term storage can be any data storage components of a memory hierarchy in which the cache has lower access latency and smaller capacity than the long-term storage. FIG.1depicts a high-level block diagram of an example computing system in which a cache manager120located at a client computing device102can determine whether to access data for a particular application112in a compressed cache or a non-compressed cache, in accordance with one or more aspects of the present disclosure. A container image104can be stored on a disk or other storage device of the client computing device102. The container image104can include program code instructions that can run in a container manager on the client computing device102. The container image104can also include image metadata106, such as an application name108. The application name108can be, e.g., a text string identifying an executable program binary or other suitable identifier for an application. The container image104can be, for example, a set of layers, one or more of which can form an application image105. 
The application image can include program code and data of the application. The container manager can load the container image104into memory and execute the program code instructions of the application image105in a container110as a running application112. The application112can access data stored at a server computing device140by sending read requests128to the server computing device140via a computer network or other suitable form of communication. The application112can interact with a programming interface of a storage system or data store to create a read request, for example. The server computing device140stores data in persistent storage154, which can be, e.g., a rotational magnetic disk or a solid-state disk. The server computing device140can be a component of a storage system that processes read requests from the programming interface used by the application112, for example. The server computing device140also includes compressed cache memory150and non-compressed cache memory152, each of which can store one or more data items (e.g., blocks, objects, byte sequences, or the like) that are also stored in the persistent storage154. Each cache memory150,152can be, for example, Random Access Memory (RAM), which can be implemented using Dynamic RAM (DRAM) or the like. Each cache memory150,152is ordinarily smaller than and has lower access latency than persistent storage154. Compressed cache memory150stores data in a compressed form, which is ordinarily smaller (e.g., uses less memory space) than the non-compressed form of the same data. The compressed form can be generated by computer program instructions executed on server computing device140or by a memory sub-system, for example. Compression and de-compression can be performed using a Lempel-Ziv compressor or other suitable compression algorithm. The compression algorithm can be implemented using computer program instructions, hardware devices, or a combination thereof. 
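As a concrete illustration of the space saving, Python's `zlib` module — a DEFLATE implementation in the Lempel-Ziv family — can stand in for whatever compressor the storage server employs; the sample data is an assumption chosen to be representative of compressible cached content.

```python
# Minimal illustration of cache compression: data is compressed when stored
# in the compressed cache and decompressed when retrieved. zlib stands in
# for the compressor; the log-like sample data is illustrative.
import zlib

# Cached data is often compressible: logs, JSON, zero-padded blocks, etc.
block = b"timestamp=1700000000 level=INFO msg=request served\n" * 1000

compressed_form = zlib.compress(block)       # stored in the compressed cache
restored = zlib.decompress(compressed_form)  # returned on a cache hit
```

The compressed form occupies a fraction of the original size, so the same cache memory can effectively hold more data, at the cost of compression and decompression latency on each store and retrieval.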
Compression of data can be performed prior to storing the data in the compressed cache memory150. Similarly, decompression of data can be performed subsequent to retrieving the compressed form of the data from the compressed cache memory150. The compression and de-compression operations can be performed by the memory sub-system in response to requests to store and retrieve data in the compressed cache memory150, respectively. Since the compressed form is stored in the compressed cache memory150, the same data item effectively uses less memory space in the compressed cache memory150than in the non-compressed cache memory152. However, because of the compression and de-compression operations, data write and data read operations performed on the compressed cache memory150ordinarily have greater latency than the analogous data write and data read operations performed on the non-compressed cache memory152. Because of these differences between the compressed cache memory150and non-compressed cache memory152in storage capacity and access times, the two different types of cache memory are suitable for different data access patterns. For example, storing a 64 kilobyte (Kbyte) sequence of bytes in the compressed cache memory150may use 32 kilobytes of the memory capacity of the compressed cache memory150if 50% compression is achieved. Compressing and decompressing the data each have an associated latency. The latency of compressing a small amount of data, such as 4 Kbytes, can be nearly as high as the latency of compressing a larger amount of data, such as 64 Kbytes because, for example, larger amounts of data can be more effectively batched together by the memory sub-system. Each compression or de-compression operation also has an overhead that causes additional latency. The compressed cache memory150can be suitable for sequential accesses of larger sequences of data, e.g., a read of each byte in a sequence of 64 Kbytes. 
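The amortization effect described above can be illustrated by comparing one large compression against many small ones; `zlib` is again an assumed stand-in, and the data is chosen for illustration.

```python
# Illustrative comparison of why large sequential chunks suit compression:
# one 64 KiB compression exploits redundancy across the whole block, while
# sixteen independent 4 KiB compressions each pay their own per-operation
# overhead (headers, lost cross-chunk context). zlib is a stand-in.
import zlib

data = bytes(range(256)) * 256          # 64 KiB of repeating content
whole = len(zlib.compress(data))        # one 64 KiB compression
chunked = sum(
    len(zlib.compress(data[i:i + 4096]))  # sixteen 4 KiB compressions
    for i in range(0, len(data), 4096)
)
```

The same per-operation overhead applies to latency: each small compression or decompression pays a fixed cost, which is why random-access workloads reading small blocks gain little from a compressed cache.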
In contrast, the non-compressed cache memory152can be suitable for random cache accesses in which smaller chunks of data are read from non-contiguous memory locations whose accesses are difficult to amortize. For example, reading 64 Kbytes of data in 16 non-contiguous 4 Kbyte blocks can have substantially greater compression and/or de-compression latency than reading a single 64 Kbyte sequence of data. Thus, the non-compressed cache memory152can be suitable for reading the 16 non-contiguous 4 Kbyte blocks in one example. The server computing device140can process each read request128by reading the requested data from compressed cache memory150, non-compressed cache memory152, or, if the data is not in cache memory, from persistent storage154. The server computing device140can determine whether to retrieve the data from compressed cache memory150or non-compressed cache memory152based on information included in the read request128by the client-side cache manager120, such as a tag, as described herein. The server computing device140can also attempt to retrieve the data from each type of cache memory. For example, the server computing device140can attempt to retrieve the data from the type determined based on the information included in the read request128. Alternatively, the server computing device140can attempt to retrieve the data from each type of cache memory in parallel. If the data is not present in either type of cache memory (compressed or non-compressed), then the server computing device140can read the requested data from persistent storage154. The server computing device140can then send the data160that has been read from cache memory150,152or persistent storage to the application112on the client computing device102, and the application112can receive the data160. A client-side cache manager120can generate a cache classification for an application112. 
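The amortization point can be made concrete with a small sketch (assumptions: zlib stands in for the compressor, and the synthetic payload is highly compressible). Compressing one contiguous 64 Kbyte sequence pays the per-operation overhead once; compressing the same bytes as sixteen separate 4 Kbyte chunks pays a fixed per-stream overhead sixteen times and cannot share a compression dictionary across chunks:

```python
import zlib

KB = 1024
data = (b"log-record-" * 8000)[:64 * KB]  # redundant, compressible payload

# One compression operation over the whole 64 Kbyte sequence.
one_stream = len(zlib.compress(data))

# Sixteen separate operations, one per non-contiguous 4 Kbyte chunk;
# each stream repeats the fixed header/checksum overhead and starts
# with an empty compression dictionary.
chunked = sum(len(zlib.compress(data[i:i + 4 * KB]))
              for i in range(0, 64 * KB, 4 * KB))

print(f"single 64K stream: {one_stream} bytes, 16 x 4K streams: {chunked} bytes")
```

The per-operation overhead shows up in size here, but the same effect applies to latency: small scattered accesses repeat the fixed cost, which is why they suit the non-compressed cache.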
The cache classification can indicate whether the data access patterns of the application are expected to be suitable for cache compression. The cache classification can be based on, for example, past accesses (e.g., reads and/or writes) performed by the application112, on characteristics of the application112, which can be provided as configuration information, or a combination of both. The client-side cache manager120can add an indication (e.g., a tag) to the read request128indicating whether the request is to access the compressed cache memory150or, alternatively, the non-compressed cache memory152to request cached data. The client-side cache manager120can then send the read request128to the server computing device140. The client-side cache manager120can be located in the container110or, alternatively, in another component running on client computing device102that can be invoked by the client-side cache manager120. To read a data item from persistent storage154, the application112can invoke or perform an operation that requests data from storage (operation114). The data request operation114can be provided by a programming interface of a storage system, operating system, or other component that generates read requests for a storage system that stores data in persistent storage154on server computing device140. The client-side cache manager120can receive the read request generated by the application112. The client-side cache manager120does not necessarily receive the read request directly from the application112, but can instead receive a read request generated by a programming interface of a storage system, an operating system, a container manager, a device driver, or other component that generates read requests to be sent to a storage system. 
The read request can specify a persistent storage address that identifies the data item to be read, and a number of bytes of data to be read, or any other suitable reference to a data item stored in the persistent storage154. The client-side cache manager120can identify the application112that generated the read request (operation122). Since the identity of the application112is not necessarily provided directly to the client-side cache manager120by the application112, the client-side cache manager120can access the application name108(or other application identity information) in the image metadata106of the container image104. The application name108can be used as the application identity. The client-side cache manager120can generate a cache classification based on the application identity (operation124). The cache classification can indicate whether data accessed by the application is suitable for cache compression. The cache classification can be represented as a value, e.g., “compressed” or “non-compressed”, or a rule. The rule can specify one or more cache classification criteria that can be evaluated for each read request to determine a cache classification value for the particular request, as described below. An example rule is “compressed if data size>16K.” The rule can be evaluated to determine a value of “compressed” if the rule criteria are satisfied (e.g., if the read request specifies a data size>16K) or “non-compressed” if the rule criteria are not satisfied. To generate the cache classification, the client-side cache manager120can invoke an application classifier130, which can search a classification table132for an entry that associates the application identity with a cache classification. The cache classification can be a value such as “compressed” or “non-compressed.” Alternatively, the cache classification can be a classification rule that specifies one or more cache classification criteria and can be evaluated to determine a classification value. 
For example, evaluation of a classification rule can generate a cache classification of “compressed” if the rule's criteria are satisfied, or “non-compressed” if the criteria are not satisfied. The criteria can be, for example, Boolean expressions having variables that can be evaluated in response to receiving a memory read request. The variables can include a data size variable that represents the data size specified by the read request. An example classification table132is shown in the application classifier130. The example classification table132includes three entries, each of which associates an application identity with a cache classification: a first entry associates an application identity “A” with classification value “compressed”, a second entry associates an application identity “B” with classification value “non-compressed”, and a third entry associates an application identity “C” with classification rule “compressed if data size>16 kbytes.” Thus, according to the example table132, an application named “A” is classified as “compressed”, an application named “B” is classified as “non-compressed”, and an application named “C” is classified as “compressed” if, upon evaluation of the classification rule, the read request specifies a data size of greater than 16 Kbytes. Although described as a table in the examples herein, the classification table132can be any suitable data structure that represents a set of associations in which each association associates an application with a cache classification. The client-side cache manager120can generate the cache classification in the container110, e.g., in response to each read request, as shown in operation124. Alternatively or additionally, the cache classification can be generated as part of a container deployment process, in which case generating a cache classification value of “compressed” or “non-compressed” (operation124) can be performed once for the container110(at deployment time). 
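The table lookup and rule evaluation described above might be sketched as follows (the entries mirror the example table132; encoding a rule as a Python callable and defaulting unknown applications to “non-compressed” are illustrative assumptions, not part of the disclosure):

```python
# Each entry maps an application identity to either a fixed
# classification value or a rule evaluated per read request.
# The rule below mirrors "compressed if data size > 16 kbytes".
classification_table = {
    "A": "compressed",
    "B": "non-compressed",
    "C": lambda request: ("compressed"
                          if request["data_size"] > 16 * 1024
                          else "non-compressed"),
}

def classify(app_identity, request):
    """Return 'compressed' or 'non-compressed' for one read request."""
    entry = classification_table.get(app_identity, "non-compressed")
    # A callable entry is a rule evaluated against the request;
    # otherwise the entry is already a classification value.
    return entry(request) if callable(entry) else entry

print(classify("A", {"data_size": 4096}))       # fixed value
print(classify("C", {"data_size": 64 * 1024}))  # rule criteria satisfied
print(classify("C", {"data_size": 4096}))       # rule criteria not satisfied
```

A fixed value can be resolved once at deployment time, while a rule entry must be kept and evaluated per request, matching the two generation strategies described above.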
If the generated cache classification does not include a rule, then the client-side cache manager120can, in response to each read request, access the generated classification value and tag the read request in accordance with the generated classification (operation126, without generating the cache classification at operation124). If the generated cache classification includes a rule, then the client-side cache manager120can, in response to each read request, evaluate the rule and determine a classification value of “compressed” or “non-compressed” in accordance with the result of evaluating the rule (without generating the cache classification at operation124). The classification table132used by the application classifier130can be generated from configuration information specifying particular classifications for particular applications. For example, a configuration parameter can specify that an application “A” is classified as “compressed.” The classification can be specified as “compressed” because, for example, the application is a streaming video or audio application that reads data from persistent storage. The classification can be specified as “non-compressed” because, for example, the application is a transaction processing application that performs short transactions and accesses small data items in a random-access pattern. As an alternative to using the classification table132, the application classifier130can use a machine learning model trained on historical access pattern data for particular applications. The access pattern data can include a determined classification for each particular application. The determined classification can be determined by analyzing the historical access patterns to determine whether the access patterns are sequential accesses or random accesses, for example. 
The historical access pattern data thus includes a set of records, each of which includes an application identity (e.g., an application name), a determined classification of the application, and associated features, such as a data size and/or data location, an application type (e.g., streaming, transaction processing, interactive user interface, and so on), an application version, or the like. The machine learning model can be generated from such historical access pattern data using a training process. The machine learning model can be trained to generate cache classifications from data available at the time a read request is received, such as an application identifier (or data entity identifier) and values of one or more features such as those listed above. The machine learning model can be any suitable machine learning model or heuristic model, such as a Linear Regression, Bayes, Random Forest, Support Vector Classifier, Decision Tree, Gradient Boosted Tree, K-Nearest Neighbors model, or the like. The model can be implemented by one or more neural networks. Upon generating the cache classification, the client-side cache manager120can determine a tag value in accordance with the cache classification, add the tag value to the read request, and send the tagged read request128to the server computing device140(operation126). The tag value can be “compressed” (or a corresponding value, e.g., 1 or true) if the classification is “compressed” or if the classification includes criteria that are satisfied. The tag value can be “non-compressed” (or a corresponding value, e.g., 0 or false) if the classification is “non-compressed” or if the classification includes criteria that are not satisfied. 
The server computing device140can receive the read request128(operation142), identify the cache type associated with the request (operation144), read data from the identified cache type (operation146), and send the data160to the application112on the client computing device102(operation148). The cache type associated with the request can be identified from the tag value attached to the request. Thus, the cache type can be “compressed” or “non-compressed”. If the cache type is “compressed”, the server computing device140can direct the request to the compressed cache memory150, e.g., by requesting the data from the compressed cache memory150using the attributes of the read request (e.g., address and data length). Alternatively, if the cache type is “non-compressed”, the server computing device140can direct the request to the non-compressed cache memory152, e.g., by requesting the data from the non-compressed cache memory152using the attributes of the read request. If the requested data is present in the cache memory to which the request is directed, then the server computing device140can retrieve the requested data from the corresponding cache memory (operation146) and send it to the application112(operation148). Reading data from the compressed cache memory150can cause a compressed form of data to be read from the compressed cache memory and provided as non-compressed data as a result of the read operation. In some embodiments, if the data is not present in the cache memory to which the request is directed, the server computing device140can attempt to read the data from another type of cache memory, e.g., the type of cache to which the request is not directed. The storage server can read data from the other type of cache memory if, for example, the application has multiple threads, each having a different access pattern. Depending on which thread reads a particular block of data, the data can be in either the compressed cache or the non-compressed cache. 
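A minimal sketch of this server-side read path (operations142-148), under the simplifying assumptions that the two cache tiers and persistent storage are in-memory dictionaries keyed by block address and that zlib stands in for the compressor:

```python
import zlib

# The two cache tiers and persistent storage, modeled as dicts
# keyed by block address (a simplification for illustration).
compressed_cache = {}      # address -> zlib-compressed bytes
non_compressed_cache = {}  # address -> raw bytes
persistent_storage = {0: b"block-zero" * 100, 8: b"block-eight" * 100}

def serve_read(request):
    """Serve a tagged read request: try the tagged tier first,
    then the other tier, then persistent storage."""
    addr, tag = request["address"], request["tag"]
    tiers = [("compressed", compressed_cache),
             ("non-compressed", non_compressed_cache)]
    # Order the tiers so the one named by the tag is tried first.
    tiers.sort(key=lambda tier: tier[0] != tag)
    for kind, cache in tiers:
        if addr in cache:
            data = cache[addr]
            # Reading the compressed tier yields non-compressed data.
            return zlib.decompress(data) if kind == "compressed" else data
    return persistent_storage[addr]  # miss in both tiers

compressed_cache[0] = zlib.compress(persistent_storage[0])
print(serve_read({"address": 0, "tag": "compressed"}))  # cache hit
print(serve_read({"address": 8, "tag": "compressed"}))  # from storage
```

The fallback to the other tier models the multi-threaded case described above, where a block may have been cached under either classification depending on which thread last read it.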
If the requested data is not present in the cache memory from which reads are attempted, then the server computing device140can retrieve the requested data from the persistent storage154and send it to the application112. In one example, compression is not performed by the cache manager disclosed herein when storing data in persistent storage154or in non-compressed cache memory152. The application112can receive the requested data160(operation116) and perform other operations using the data160. Similar techniques can also be applied to split writes between compressed and non-compressed write caches. Write caches can be stored in persistent memory, for example. Storing data in the compressed cache memory150can cause the data to be automatically compressed (e.g., by the memory sub-system) to generate a compressed form of the data, and the compressed form of the data to be stored in the compressed cache memory150. Storing data in the non-compressed cache memory152does not cause the data to be compressed, and the data is stored in the non-compressed cache memory152without being compressed. FIG.2depicts a high-level block diagram of an example computing system in which a server-side cache manager230located at a server computing device140can determine whether to access data for a particular data entity in a compressed cache or a non-compressed cache, in accordance with one or more aspects of the present disclosure. As described above with reference toFIG.1, an application112can invoke or perform an operation that requests data from storage (operation114). A container manager220, or a component thereof, in the container110can send the read request to the server computing device140as a read request224(operation222). The container manager220need not add a tag to the read request. The server computing device140can identify the appropriate cache type for the application112using a server-side cache manager230as described below. 
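The asymmetry between the two store paths might be sketched as follows (a simplification in which the caches are dictionaries and zlib models the memory sub-system's transparent compression; the function name is illustrative):

```python
import zlib

compressed_cache = {}      # address -> compressed form
non_compressed_cache = {}  # address -> raw bytes

def cache_store(address, data, classification):
    """Store data in the tier selected by the cache classification.

    Writing to the compressed tier compresses transparently, as the
    memory sub-system would; the non-compressed tier stores raw bytes.
    """
    if classification == "compressed":
        compressed_cache[address] = zlib.compress(data)
    else:
        non_compressed_cache[address] = data

payload = b"frame-" * 2000  # 12,000 bytes of compressible data
cache_store(0, payload, "compressed")
cache_store(1, payload, "non-compressed")
print(len(compressed_cache[0]), "vs", len(non_compressed_cache[1]))
```

The same split can serve as the write path when writes are divided between compressed and non-compressed write caches, as noted above.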
The server-side cache manager230can receive the read request224and identify a data entity (e.g., a volume, file, directory, object, bucket, disk image, or other form of data that can be persistent) on which the requested data is stored (operation232). The data entity can correspond to the application112. The identity of the application112is not necessarily available to the component of the server-side cache manager230that processes the read request224. The server-side cache manager230can then generate a cache classification based on the identity of the data entity (operation234). The cache classification can indicate whether data accessed by the application is suitable for cache compression. For example, the server computing device140can invoke a data entity classifier240, which can search a classification table242for an entry that associates the data entity with a cache classification. The cache classification can be “compressed” or “non-compressed,” or can include a rule that specifies one or more cache classification criteria that can be evaluated for each read request to determine a cache classification for the particular request. The data entity classifier240can use a classification table242to generate a classification for a given data entity. The cache classification can be represented as a classification value, e.g., “compressed” or “non-compressed”, or a classification rule that can be evaluated to determine a classification value, as described above with reference to the application classifier130ofFIG.1. 
An example classification table242shown inFIG.2includes three entries: a first entry associates a data entity identity “DE1” with classification “compressed”, a second entry associates a data entity identity “DE2” with classification “non-compressed”, and a third entry associates a data entity identity “DE3” with classification rule “compressed if data size>16 kbytes.” Thus, according to the example classification table242, a data entity having the identifier “DE1” is classified as “compressed”, a data entity having the identifier “DE2” is classified as “non-compressed”, and a data entity having the identifier “DE3” is classified as “compressed” if the read request specifies a data size of greater than 16 Kbytes. Since the data entity corresponds to an application, the classification effectively represents the classification of an application112that generated the read request224. The classification table242used by the data entity classifier240can be generated similarly to the classification table132used by the application classifier130as described above, except that the values in the application column of the table132can be replaced with the data entity identifier of the data entity associated with the application. This replacement process can be performed at container deployment time, for example. As an alternative to using the classification table242, the data entity classifier240can use a machine learning model trained on historical access pattern data for particular applications, as described above with reference to the classification table132ofFIG.1. Upon generating the cache classification, the server-side cache manager230can identify a cache type associated with the cache classification (operation236). If the cache classification is a classification value (e.g., “compressed” or “non-compressed”), the server-side cache manager230can use the classification value as the cache type. 
If the cache classification is a classification rule, the server-side cache manager230can evaluate the rule. The server-side cache manager230can evaluate the classification rule using information available at the time the read request224is received, such as the address and size of the requested data. For example, the rule can specify that the classification value is “compressed” if an address specified in the read request224is less than a threshold distance (e.g., in bytes) from a previous address specified in a previous read request. As another example, the rule can specify that the classification value is “compressed” if an average size of past read requests received by the server computing device140is greater than a threshold size. The server-side cache manager230can determine whether the identified cache type is “compressed” (operation244). If so, the server-side cache manager230can read data from compressed cache memory150(operation248). Otherwise, if the identified cache type is “not compressed”, the server-side cache manager230can read data from non-compressed cache memory152(operation246). If the requested data is not present in the cache memory that corresponds to the identified cache type, then the server-side cache manager230can read the requested data from the persistent storage154. The server-side cache manager230can then send the data to the application112on the client computing device102as data160(operation148). The application112can receive the data160(operation116) and perform other operations using the data160. In other embodiments, the server-side cache manager230can read data from the identified cache type and, if the data is not present in the identified cache type, read data from the other cache type(s). The server-side cache manager230can read data from the other cache type(s) if, for example, the application has multiple threads, each having a different access pattern. 
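The request-context rules described above might be evaluated as in the following sketch (the thresholds, the state tracked between requests, and the names are illustrative assumptions): a request is classified “compressed” if its address is near the previous request's address, or if the average size of past requests exceeds a threshold.

```python
class RuleState:
    """Tracks the request history the classification rules consult."""
    def __init__(self):
        self.prev_address = None
        self.sizes = []

def classify_request(state, address, size,
                     near_threshold=64 * 1024, avg_size_threshold=16 * 1024):
    """Evaluate the two example rules from the text for one read request."""
    near_previous = (state.prev_address is not None and
                     abs(address - state.prev_address) < near_threshold)
    sizes = state.sizes
    large_on_average = bool(sizes) and sum(sizes) / len(sizes) > avg_size_threshold
    # Record this request for evaluating the next one.
    state.prev_address = address
    state.sizes.append(size)
    return "compressed" if (near_previous or large_on_average) else "non-compressed"

state = RuleState()
print(classify_request(state, address=0, size=4096))                 # no history yet
print(classify_request(state, address=4096, size=4096))              # near previous
print(classify_request(state, address=10 * 1024 * 1024, size=4096))  # far jump
```

Because such rules depend only on information available when the request arrives, they can run on the server without any client-supplied tag.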
Depending on which thread reads a particular block of data, the data can be in either the compressed cache or the non-compressed cache. Thus, the server-side cache manager230can determine whether the identified cache type is “compressed” (operation244). If the cache type identified at operation244is compressed, the server-side cache manager230can attempt to read data from compressed cache memory150(operation248). If the requested data is not present in the compressed cache memory150, then the server-side cache manager230can attempt to read data from the non-compressed cache memory152(operation246). The server-side cache manager230can then send the data read from the compressed or non-compressed cache memory to the application112on the client computing device102as data160(operation148). Otherwise, if the cache type identified at operation244is not “compressed”, the server-side cache manager230can attempt to read data from non-compressed cache memory152(operation246). If the requested data is not present in the non-compressed cache memory152, then the server-side cache manager230can attempt to read data from the compressed cache memory150(operation248). The server-side cache manager230can then send the data read from the compressed or non-compressed cache memory to the application112on the client computing device102as data160(operation148). If the requested data is not present in any of the cache memories, then the server-side cache manager230can read the requested data from the persistent storage154. The server-side cache manager230can then send the data read from persistent storage154to the application112on the client computing device102as data160(operation148). The application112can receive the data160(operation116) and perform other operations using the data160. 
FIG.3depicts a flow diagram of an example method300for generating a request to access data in a compressed cache or a non-compressed cache in accordance with a cache classification of a requesting application, in accordance with one or more aspects of the present disclosure. Method300and each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer device executing the method. In certain implementations, method300may be performed by a single processing thread. Alternatively, method300may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method300may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processes implementing method300may be executed asynchronously with respect to each other. For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term “article of manufacture,” as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. 
In one implementation, method300may be performed by a client computing device102as shown inFIG.1or by an executable code of a host machine (e.g., host operating system or firmware), a virtual machine (e.g., guest operating system or virtual firmware), an external device (e.g., a PCI device), other executable code, or a combination thereof. Method300may be performed by processing devices of a server device or a client device and may begin at block310. At block310, a computing device may receive, in a container environment, a request to access data in a storage system. The request can be a read request (to retrieve data from the storage system), for example. At block320, the computing device may identify, in accordance with a virtualized execution image, such as a virtual machine image, or in accordance with the container execution environment, a container image. For example, the container image can be identified by image information, e.g., metadata, associated with the image. In another example, the container image can be identifiable from its image signature in the container image registry. The application can be identified in view of metadata included in the container image, for example. In one example, the metadata can include one or more of an application name or an application version, either or both of which can be used to identify the application. At block330, the computing device may generate, in accordance with the application, a classification that specifies whether data accessed by the application is suitable for cache compression. The cache classification can be generated by identifying, in a data structure (e.g., a classification mapping table) comprising one or more records, a cache classification record that specifies a stored identifier of the application, wherein the record further specifies a stored cache classification. 
The computing device may generate a classification table comprising one or more records, where each record comprises a particular application identifier and a particular cache classification. The computing device may generate the classification table by determining whether a past data access pattern of a previous execution of a particular application satisfies one or more threshold access pattern criteria. Responsive to determining that the past data access pattern of a previous execution of the particular application satisfies one or more threshold access pattern criteria, the computing device may generate a cache classification record that comprises a particular application identifier identifying the particular application and a particular cache classification specifying that data accessed by the particular application is suitable for cache compression. Further, responsive to determining that the past data access pattern of a previous execution of the particular application does not satisfy one or more threshold access pattern criteria, the computing device may generate a cache classification record that comprises a particular application identifier identifying the particular application and a particular cache classification specifying that data accessed by the particular application is not suitable for cache compression. The computing device may include the generated cache classification record(s) in the classification table. The past data access pattern can include a number N of previous data access operations, e.g., 100 or other suitable number. The previous data access operations can be, for example, read operations. The previous data access operations can be, but are not necessarily, consecutive data access operations. 
The threshold access pattern criteria can include, for example, a threshold storage region size that specifies a size of a storage region in a logical device, in which case the past data access pattern satisfies the threshold access pattern criteria if at least a threshold number T of the data accesses in the past data access pattern reference a respective storage location in a storage region of the specified size. For example, suppose the threshold storage region size is 64 Kbytes, the past data access pattern includes N=100 previous data access operations, and the threshold number T is 80 data accesses. If there is a subset of operations of the past data access pattern such that the subset has at least T=80 data access operations and each of the data access operations in the subset reads data from a respective address that is less than 64 Kbytes distant from each of the addresses read by the other data access operations in the subset, then the past data access pattern satisfies the threshold access pattern criteria, and the past access pattern can be considered a local access pattern in a 64 Kbyte region. As another example, a past access pattern can be considered a local access pattern in a 64 Kbyte region if there is a subset of data access operations in the past access pattern such that each data access operation in the subset accesses data in a 64 Kbyte region. Such a local access pattern can be suitable for cache compression. If the past data access pattern satisfies the threshold access pattern criteria (accesses are within a 64 Kbyte region), the generated record's cache classification can be “compression” (specifying that data accessed by the application is suitable for cache compression), for example. The classification can be further generated from one or more attributes of the application. 
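The locality criterion above (some subset of at least T of the N past accesses all falling within one region of the threshold size) can be checked by sorting the addresses and sliding a window over them, since all pairwise distances in a subset are below the region size exactly when the subset's maximum and minimum addresses are. A sketch (function name and synthetic addresses are hypothetical):

```python
def is_local_pattern(addresses, region_size=64 * 1024, threshold=80):
    """Return True if at least `threshold` of the given accesses fall
    within a single region of `region_size` bytes, found by sliding a
    window over the sorted addresses."""
    ordered = sorted(addresses)
    lo = 0
    for hi in range(len(ordered)):
        # Shrink the window until its address spread fits in one region.
        while ordered[hi] - ordered[lo] >= region_size:
            lo += 1
        if hi - lo + 1 >= threshold:
            return True
    return False

# 90 of the last N=100 accesses land inside one 64 Kbyte region and
# the other 10 are scattered far away, so the T=80 threshold is met.
local = [i * 512 for i in range(90)]              # within 0..45 Kbytes
scattered = [10**9 + i * 10**6 for i in range(10)]
suitable = is_local_pattern(local + scattered)
print("compression" if suitable else "non-compression")
```

Under this check, the example pattern would be recorded with a “compression” classification, matching the worked example in the text.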
The attributes can include an access pattern, which can be, for example, a random access pattern or a sequential access pattern, and/or a type of the application, where the type can be one of streaming or transactional. The attributes of the application can be derived from configuration parameters, for example. For example, if streaming applications are known to be suitable for cache compression, and an attribute of the application indicates that the application is of a streaming type, then the generated record can specify that the record's cache classification is “compression”. Although the access patterns are described as being random access or sequential in the examples herein, other access patterns are possible and can be specified by attributes and used to determine cache classification. For example, an access pattern can be semi-sequential, semi-random, or another hybrid access pattern. Further, although application types are described as being streaming or transactional, other application types are possible and can be used to determine the record's cache classification. Other possible application types are machine learning, user interface, compute intensive, data processing, and so on. Machine learning applications can be mapped to non-compression, for example, if machine learning applications are known to have access patterns suitable for non-compressed caches. User interface and data processing applications can also be mapped to non-compression if such applications are known to have access patterns suitable for non-compressed caches. Compute intensive applications can be mapped to cache compression if compute intensive applications are known to have access patterns suitable for compressed caches. At block340, the computing device may send, to a server of the storage system, a data access request that includes a tag indicating whether cached data is to be accessed in a compressed-memory cache, where the tag is determined in view of the classification. 
If the data access request is a read request, the computing device can subsequently receive a response containing the requested data from the server of the storage system. In some embodiments, if the computing device is unable to determine a classification, it can set the tag to a predetermined classification (e.g., compressed or non-compressed) to provide a hint of which cache to store the data in if the data was not cached prior to sending the request. Thus, the tag can indicate whether cached data is to be stored in the compressed-memory cache or the non-compressed cache. Responsive to completing the operations described herein above with reference to operation340, the method may terminate. FIG.4depicts a flow diagram of an example method400for accessing data in a compressed cache or a non-compressed cache at a server computer system in response to a data access request that does not specify a cache classification, in accordance with one or more aspects of the present disclosure. Method400and each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer device executing the method. In certain implementations, method400may be performed by a single processing thread. Alternatively, method400may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method400may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processes implementing method400may be executed asynchronously with respect to each other. For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. 
However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term “article of manufacture,” as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, method400may be performed by a server computing device140as shown inFIG.1or by an executable code of a host machine (e.g., host operating system or firmware), a virtual machine (e.g., guest operating system or virtual firmware), an external device (e.g., a PCI device), other executable code, or a combination thereof. Method400may be performed by processing devices of a server device or a client device and may begin at block410. At block410, a computing device may receive, at a server computing device, a data access request. The data access request can be, e.g., a data read request or a data write request. If the data access request is a write request, the data access request can include or specify data to be written. At block420, the computing device may identify a data entity associated with the data access request. At block430, the computing device may generate a cache classification value from an identity of the data entity. At block440, the computing device may determine whether the cache classification value corresponds to a compressed cache. 
At block450, the computing device may, responsive to determining that the cache classification value corresponds to a compressed cache, read data from compressed cache memory in accordance with the data access request if the data access request is a read request (or write data to the compressed cache memory in accordance with the data access request if the data access request is a write request). At block460, the computing device may, responsive to determining that the cache classification value does not correspond to a compressed cache, read data from non-compressed cache memory in accordance with the data access request if the data access request is a read request (or write data to the non-compressed cache memory in accordance with the data access request if the data access request is a write request). Responsive to completing the operations described herein above with reference to operation460, the method may, if the data access request is a read request, send the data read from the compressed or non-compressed cache memory to a client computer system as a response to the data access request, and subsequently terminate.

FIG.5depicts a flow diagram of an example method for deploying an application container and generating cache compression classifications at deployment time, in accordance with one or more aspects of the present disclosure. Method500and each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer device executing the method. In certain implementations, method500may be performed by a single processing thread. Alternatively, method500may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method500may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms).
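Returning to method400: blocks 420 through 460 might be sketched as below. The hash-based derivation of the classification value and the dict-backed stand-ins for the two cache memories are assumptions made for illustration; the disclosure does not prescribe a particular derivation rule.

```python
import hashlib

compressed_cache = {}      # stands in for compressed cache memory
non_compressed_cache = {}  # stands in for non-compressed cache memory

def classification_value(entity_id):
    """Generate a cache classification value from the data entity's
    identity (block 430); the hash-based rule is an illustrative assumption."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return "compressed" if digest[0] % 2 == 0 else "non-compressed"

def handle_request(request):
    """Blocks 440-460: route the access to the matching cache."""
    entity_id = request["key"]  # block 420: identify the data entity
    cache = (compressed_cache
             if classification_value(entity_id) == "compressed"
             else non_compressed_cache)
    if request["op"] == "write":
        cache[entity_id] = request["data"]
        return None
    return cache.get(entity_id)  # read result, later sent to the client
```

Because the classification value is derived deterministically from the entity identity, a later read for the same entity is routed to the same cache that received the write.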
Alternatively, the processes implementing method500may be executed asynchronously with respect to each other. For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term “article of manufacture,” as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, method500may be performed by a client computing device102as shown inFIG.1, by a server computing device140as shown inFIG.2, or by an executable code of a host machine (e.g., host operating system or firmware), a virtual machine (e.g., guest operating system or virtual firmware), an external device (e.g., a PCI device), other executable code, or a combination thereof. Method500may be performed by processing devices of a server device or a client device and may begin at block510. At block510, a computing device may receive a request to deploy an application container in a container management environment. At block520, the computing device may identify an application associated with the application container. 
At block530, the computing device may generate, from an identity of the application, a cache classification that specifies whether data accessed by the application is suitable for cache compression. At block540, the computing device may store the cache classification in association with the application or with the application container. Once the application container110is deployed for execution (e.g., subsequent to the completion of method500), the client-side cache manager120can use the stored cache classification to tag and send each read request (operation126), and need not generate the cache classification for each read request. Alternatively or additionally, at block540, the computing device may store the cache classification in association with another entity, such as a data entity, and a system component can use the stored cache classification to determine whether to use a compressed or non-compressed cache. For example, a cache manager in an operating system or hypervisor, or in local storage drivers, can use the stored cache classification to determine whether to use a compressed or non-compressed cache. If the stored cache classification is a classification value, e.g., "compressed" or "non-compressed", then the corresponding tag can correspond to the classification value. Alternatively, if the cache classification is a classification rule, then the client-side cache manager120can evaluate the classification rule in accordance with data values referenced by the rule's criteria, such as a size or address of the requested data. At block550, the computing device may deploy the application container in the container management environment. Responsive to completing the operations described herein above with reference to block550, the method may terminate.

FIG.6depicts a block diagram of a computer system600operating in accordance with one or more aspects of the present disclosure.
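The deployment-time classification of method500 (blocks 510 through 550) might be sketched as follows. The classification table, application names, and storage structure are assumptions chosen for illustration only.

```python
# Hypothetical classification table mapping application identities to
# stored cache classifications.
KNOWN_APPLICATIONS = {
    "video-streamer": "compressed",
    "order-database": "non-compressed",
}

deployed = {}  # container name -> stored cache classification

def deploy_container(container_name, application_name):
    """Blocks 510-550: classify at deployment time and store the result so
    the cache manager need not reclassify on every request."""
    classification = KNOWN_APPLICATIONS.get(application_name, "non-compressed")
    deployed[container_name] = classification  # block 540: store it
    # Block 550: the container would now be handed to the container
    # management environment for deployment (omitted in this sketch).
    return classification
```

After deployment, a client-side cache manager can read the stored classification from `deployed` to tag each request instead of regenerating it.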
Computer system600may be the same or similar to client computing device102ofFIG.1or server computing device140ofFIG.2, and may include one or more processors and one or more memory devices. In the example shown, computer system600may include a data request receiving module610, an application identification module615, a cache classification generation module620, a tag inclusion module630, and a data access request sending module640. Data request receiving module610may enable a processor to receive, in a container environment, a request to access data stored in a storage system. Application identification module615may enable the processor to identify, in view of a container image associated with the request, an application running in the container environment. The application may be identified in view of metadata included in the container image, where the metadata comprises one or more of an application name or an application version. Cache classification generation module620may enable the processor to generate, in view of the application, a cache classification that specifies whether data accessed by the application is suitable for cache compression. To generate the cache classification, cache classification generation module620may enable the processor to identify, in a classification table that includes one or more records, a cache classification record that specifies a stored identifier of the application, wherein the record further specifies a stored cache classification. Tag inclusion module630may enable the processor to include, in the data access request, a tag indicating whether cached data is to be accessed in a compressed-memory cache, wherein the tag is determined in view of the cache classification. Data access request sending module640may enable the processor to send, to a server of the storage system, the data access request.

FIG.7depicts a block diagram of a computer system operating in accordance with one or more aspects of the present disclosure.
In various illustrative examples, computer system700may correspond to client computing device102ofFIG.1or server computing device140ofFIG.2. Computer system700may be included within a data center that supports virtualization. Virtualization within a data center results in a physical system being virtualized using virtual machines to consolidate the data center infrastructure and increase operational efficiencies. A virtual machine (VM) may be a program-based emulation of computer hardware. For example, the VM may operate based on computer architecture and functions of computer hardware resources associated with hard disks or other such memory. The VM may emulate a physical environment, but requests for a hard disk or memory may be managed by a virtualization layer of a computing device to translate these requests to the underlying physical computing hardware resources. This type of virtualization results in multiple VMs sharing physical resources. In certain implementations, computer system700may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system700may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system700may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term “computer” shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein. 
In a further aspect, the computer system700may include a processing device702, a volatile memory704(e.g., random access memory (RAM)), a non-volatile memory706(e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage device716, which may communicate with each other via a bus708. Processing device702may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor). Computer system700may further include a network interface device722. Computer system700also may include a video display unit710(e.g., an LCD), an alphanumeric input device712(e.g., a keyboard), a cursor control device714(e.g., a mouse), and a signal generation device720. Data storage device716may include a non-transitory computer-readable storage medium724on which may be stored instructions726encoding any one or more of the methods or functions described herein, including instructions for implementing method400or500. Instructions726may also reside, completely or partially, within volatile memory704and/or within processing device702during execution thereof by computer system700; hence, volatile memory704and processing device702may also constitute machine-readable storage media.
While computer-readable storage medium724is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media. Other computer system designs and configurations may also be suitable to implement the system and methods described herein. The following examples illustrate various implementations in accordance with one or more aspects of the present disclosure. The methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICS, FPGAs, DSPs or similar devices. In addition, the methods, components, and features may be implemented by firmware modules or functional circuitry within hardware devices. Further, the methods, components, and features may be implemented in any combination of hardware devices and computer program components, or in computer programs. 
Unless specifically stated otherwise, terms such as "determining," "deriving," "encrypting," "creating," "generating," "using," "accessing," "executing," "obtaining," "storing," "transmitting," "providing," "establishing," "receiving," "identifying," "initiating," or the like, refer to actions and processes performed or implemented by computer systems that manipulate and transform data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation. Examples described herein also relate to an apparatus for performing the methods described herein. This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program may be stored in a computer-readable tangible storage medium. The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform method400or500and/or each of its individual functions, routines, subroutines, or operations. Examples of the structure for a variety of these systems are set forth in the description above. The above description is intended to be illustrative, and not restrictive.
Although the present disclosure has been described with reference to specific illustrative examples and implementations, it will be recognized that the present disclosure is not limited to the examples and implementations described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled.
11943297

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Most organizations today are distributed. Such an organization may include mobile users, remote sites, branch offices, home offices, and cloud-hosted assets spread across multiple networks and multiple physical locations. This presents challenges in applying traditional cybersecurity techniques to such distributed assets, as such techniques may require implementation at each location within the organization, and the organization may not control the networks utilized by some assets (e.g., mobile users). One possible approach to providing such cybersecurity involves using traditional cybersecurity appliances hosted in a central location controlled by the organization. In such a configuration, network traffic may need to be backhauled from the organization's assets to the central location, which can be costly. Further, the organization itself may have to implement and manage the traditional cybersecurity appliance, which can lead to additional expense. Another approach involves hosting appliances onsite to secure users and assets within the organization's perimeter and then using a cloud-based service to secure mobile users and remote assets. Although this approach can eliminate data backhaul to the cybersecurity appliances hosted onsite, the organization will have to manage both its internal cybersecurity appliances and the cloud-based service. This may be inefficient since the two systems will likely have two separate administration interfaces. The configurations of the two systems may also have to be synchronized manually, leading to additional expense and the possibility of inconsistent security policies being applied. Another possible approach is an entirely cloud-based cybersecurity system. However, such a system may present problems in terms of data security and fulfilling organizational requirements.
For example, there are regional and country-based requirements, such as Safe Harbor, that require data to stay within a particular location. Cloud-based services generally do not enforce such requirements, as the computing devices that scan and store the organization's data may be scattered across different locations within the cloud network. In addition, cloud networks generally utilize a multi-tenant model, in which customer data processing and storage are generally on shared resources and not isolated from the data of other customers of the cloud network. Because of this, security becomes a major concern. A single, compromised cloud system can expose data from multiple, unrelated customers. Performance may also be a concern in such a cloud system. Since resources are shared between customers, it may be difficult to guarantee performance as rapid changes in demand from one customer can affect other customers that happen to be using the same computing devices within the cloud network. Accordingly, the present disclosure describes an approach to providing cybersecurity to a customer (e.g., an organization) that provides the benefits of a cloud-based system, while ensuring that the customer's data is isolated from the data of other customers in the system. The approach also allows for cybersecurity appliances (e.g., nodes) installed at a customer site to be utilized, and integrates such on-premise appliances into the cloud network. One example method for providing such a system includes assigning a first node in a distributed network to a first customer. The first node is selected from a set of unassigned nodes that are not assigned to any customer. A second node in the distributed network is assigned to a second customer. The second node is also selected from the set of unassigned nodes. Both nodes are configured to only process network traffic associated with the assigned customer.
When network traffic is processed by the nodes, network traffic from the first customer is isolated from network traffic from the second customer, and vice versa. This approach may lead to several advantages. For example, the present techniques may allow an organization to leverage as much or as little of the cloud for network-based cybersecurity as desired, depending on organizational needs. This includes hosting all nodes on-site, using all cloud-based nodes, which require no hosted hardware, or mixing both to form a secure hybrid strategy. The present techniques may also ensure that the same level of network cybersecurity is provided regardless of whether a user or asset is within the organization's perimeter or remote. Because both on-premise and cloud nodes are integrated into the system, a single consolidated view of reporting data and logs may be provided for the entire organization, including local and remote users and assets. Further, by providing operating system-level isolation between customers, the system may alleviate security and privacy issues generally associated with cloud-based systems. The system may also provide on-demand scalability, with the ability to assign additional nodes to a customer in response to increased network traffic. The present approach may also offer the ability to leverage globally available cloud infrastructure to service mobile users as they travel abroad, and may improve speed and performance by servicing remote users using a cloud node that is geographically nearby. In addition, the system may provide a configurable upgrade policy that allows even globally distributed organizations to control when upgrades occur in the cloud, including configuring on-demand upgrades and different upgrade schedules, depending on geographic location. FIG.1is a block diagram of an example computer system100for delivering a distributed network security service providing isolation of customer data. 
As shown, the system100includes a network110controlled by a first customer (customer A) and a network120controlled by a second, different customer (customer B). Networks110and120are in communication with cloud computing system140. The cloud computing system140includes a customer A node container150including nodes152,154, a customer B node container160including nodes162,164, and a set of unassigned nodes170including nodes172,174,176. In operation, network traffic from clients112on the customer A network110is processed by the nodes152,154in the customer A node container150. Network traffic from the clients122on the customer B network120is processed first by the on-premise node124, and then by the nodes162,164in the customer B node container160. As shown, each of the node containers150,160exclusively processes network traffic and stores data associated with its assigned customer (i.e., customer A for node container150, customer B for node container160). In this way, network traffic and data associated with customer A is isolated from network traffic associated with customer B, and vice versa. The cloud computing system140may also assign nodes from the set of unassigned nodes170to either customer A or customer B automatically, such as in response to increased network traffic, node failures, changes to configuration requirements made by the customer, or other events. The cloud computing system140may also receive network traffic from clients on an external network180separate from the customer A network110and the customer B network120. As shown, network traffic from clients182associated with customer A may be processed by nodes within customer A node container150, and network traffic from clients184associated with customer B may be processed by nodes within the customer B node container160. The cloud computing system140may be a distributed system including a plurality of computing devices or "nodes" interconnected by one or more communications networks.
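The per-customer container model just described, with nodes drawn on demand from a shared unassigned pool, can be sketched as below. The class and method names are illustrative assumptions, not part of the disclosure.

```python
class CloudSystem:
    """Sketch of per-customer node containers fed from an unassigned pool."""

    def __init__(self, node_ids):
        self.unassigned = list(node_ids)   # set of unassigned nodes
        self.containers = {}               # customer -> assigned node ids

    def assign_node(self, customer):
        """Move one node from the unassigned pool into the customer's
        container, e.g., in response to increased traffic or a failure."""
        if not self.unassigned:
            raise RuntimeError("no unassigned nodes available")
        node = self.unassigned.pop(0)
        self.containers.setdefault(customer, []).append(node)
        return node

    def route(self, customer, traffic):
        """Traffic is only ever handled by the customer's own nodes."""
        nodes = self.containers.get(customer)
        if not nodes:
            raise RuntimeError(f"no nodes assigned to {customer}")
        return (nodes[0], traffic)
```

Because `route` consults only the requesting customer's container, one customer's traffic can never reach a node assigned to another customer, which is the isolation property described above.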
In some cases, the cloud computing system140may be a system configured to provide cybersecurity services to customers (e.g., customer A, customer B) by processing, storing, analyzing, and/or filtering network traffic provided to it by the customers. For example, customer A may configure network110such that network traffic generated by clients112is routed through the cloud computing system140, such as by configuring the clients112to use cloud computing system140as a proxy server or gateway when accessing external networks such as the Internet. The clients112may then send requests for resources on the Internet to cloud computing system140, where the requests may be processed by nodes152,154assigned by the cloud computing system140to customer A. The operation of these nodes is described in more detail below. In some cases, the network traffic sent from the clients112to the cloud computing system140may be encrypted, such as, for example, using Hypertext Transfer Protocol Secure (HTTPS), Internet Protocol Security (IPSec) tunnels or other Virtual Private Network (VPN) techniques, Layer 2 Medium Access Control (MAC) Address redirection, Generic Routing Encapsulation (GRE), Web Cache Communication Protocol (WCCP), or other techniques. In some cases, the clients112may include a software agent executing locally to forward the network traffic to the cloud computing system140. The cloud computing system140may also receive a copy or mirror of the network traffic from the clients112for processing. The nodes of the cloud computing system140may analyze the network traffic received from the customers, and forward the traffic onto the intended destination, such as a website or other resource on the Internet. The network traffic received from the clients112may include traffic using different communications protocols, such as, for example, Hypertext Transfer Protocol (HTTP), Domain Name System (DNS) protocol, File Transfer Protocol (FTP), or other protocols. 
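As one concrete illustration of the proxy-style redirection mentioned above, a client could be pointed at its assigned cloud node using only the Python standard library. The proxy address is a placeholder assumption; real deployments might instead use the tunneling or redirection techniques listed in the text.

```python
import urllib.request

# Hypothetical address of the cloud web security node assigned to this
# customer; a real deployment would use its published proxy endpoint.
CLOUD_PROXY = "http://cloud-node.example:8080"

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": CLOUD_PROXY, "https": CLOUD_PROXY})
)
# Requests issued through this opener are forwarded to the cloud node for
# processing before reaching the external resource, e.g.:
# opener.open("http://www.example.com/resource")
```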
In some cases, the cloud computing system140may also receive and process network traffic sent from resources on the external network to the clients112, such as webpages, files, or other data sent from servers on the Internet in response to requests by the clients112. The cloud computing system140may also receive customer network traffic from on-premise nodes (e.g.124) located within the customer's network. For example, the web security node124may receive and process network traffic from the clients122at a location inside the customer B network120. After processing the traffic, the web security node124may send the network traffic to the cloud computing system140for additional processing. The web security node124may be configured to communicate with the cloud computing system140using the same techniques described above relative to the clients112. In some cases, the cloud computing system140may be a globally or regionally distributed network, with the nodes and other components of the system located across different geographic areas and connected by high-speed communications networks, such as, for example, optical networks, wireless networks, satellite networks, or other types of networks. In some cases, the components may be connected at least partially over the Internet. The networks connecting the components may utilize different protocols or technologies at different layers in the Open Systems Interconnection (OSI) model, including link layer technologies such as Ethernet, Asynchronous Transfer Mode (ATM), or Synchronous Optical Networking (SONET), network layer technologies such as Internet Protocol (IP), and transport layer technologies such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP).
The components of the cloud computing system140may communicate over these networks using application layer communications protocols, such as, for example, HTTP, FTP, Simple Object Access Protocol (SOAP), Remote Procedure Call (RPC), or using other proprietary or public protocols for application programming interfaces (APIs). The cloud computing system140may also include controller components (not shown) to coordinate the operations of the nodes. The controller components may execute on separate computing devices from the nodes and/or may be resident on the nodes themselves. Customer A network110and customer B network120include clients112and clients122respectively. The clients112,122may be computing devices such as PCs, laptops, tablets, telephones, servers, routers, storage devices or other network enabled computing devices. The networks110,120may be networks operated by the associated customer, such as a wireless or wired network within a particular building or buildings or campuses. In some cases, the networks110,120may be virtual networks, such as a VPN. The networks110,120may utilize one or more communications technologies including but not limited to Ethernet, Wi-Fi (e.g., IEEE 802.11x), WiMAX (IEEE 802.16), Long Term Evolution (LTE), or other technologies. The clients112,122may be computing devices owned or controlled by customer A and customer B, respectively, and may be used by employees of the customers. In some cases, the clients112,122may not be owned or controlled by the customers, such as in the case that the network110,120is a bring your own device (BYOD) network, or an access network such as an Internet service provider (ISP) network. External network180is a network separate from customer A network110and customer B network120. In some cases, external network180may be a public network such as the Internet.
The external network180may also be a network owned or controlled by an organization besides customer A or customer B, such as a corporate network, an ISP access network, a cellular provider network, or other network. As shown inFIG.1, the clients182,184are configured to send network traffic to the cloud computing system140. Accordingly, the network traffic from these external clients182,184may be processed in the same way as traffic originating from the customer networks110,120, and the same benefits, such as the traffic and data isolation described above, may be realized. The cloud computing system140includes nodes152,154,162,164,172,174,176. As described above, nodes are resources within the cloud computing system140configured to process network traffic received from clients. The cloud computing system140may include different types of nodes, such as, for example, web security nodes152,172, reporting nodes154,164,174, and sandbox nodes162,176. The different types of nodes within the cloud computing system140may be configured to perform different functions. For example, web security nodes152,172may be configured to analyze received network traffic and apply network policies to the traffic, such as by selectively blocking, allowing, filtering, or performing other actions on the traffic based on the configuration attributes set by the particular customer to which the particular node is assigned. For example, web security nodes152,172may filter requests for content from the clients112,122, and/or content sent from external resources to the clients112,122. Content matching certain parameters specified by the customer may be filtered, such as, for example, requests to certain domain names or Uniform Resource Locators (URLs), requests for or responses including specific file types, traffic formatted according to certain protocols, traffic from certain users or clients, or other parameters.
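The selective filtering a web security node applies might look like the sketch below, using two of the example parameters from the text (domain names and file types). The policy contents and helper name are assumptions for illustration.

```python
from urllib.parse import urlparse

# Hypothetical customer policy; the blocked domains and file types are
# illustrative stand-ins for customer-specified filtering parameters.
POLICY = {
    "blocked_domains": {"malware.example", "gambling.example"},
    "blocked_extensions": (".exe", ".scr"),
}

def allow(request_url):
    """Return True if a client request passes the customer's policy."""
    parsed = urlparse(request_url)
    if parsed.hostname in POLICY["blocked_domains"]:
        return False  # block requests to disallowed domains
    if parsed.path.endswith(POLICY["blocked_extensions"]):
        return False  # block requests for disallowed file types
    return True
```

A real node would evaluate many more parameters (protocols, users, clients) and would apply the policy of whichever customer the node is assigned to.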
The web security nodes152,172may also identify and log (e.g., store with a reporting node) particular network events, including actual or suspected malware intrusions, actual or suspected network breaches, visits by clients to malicious, unsafe, or inappropriate websites, downloads of malicious, unapproved, or unlicensed software by clients, or other events. The web security nodes152,172may also identify and store behavioral data, such as client or user network activity, network flows, or other data. In some cases, the web security nodes152,172may be configured to provide proxy service to clients of an assigned customer by forwarding requests received from the clients to appropriate external resources, and forwarding responses from the resources back to the clients. Such forwarding may be selective based on the filtering functionality discussed above. Reporting nodes154,164,174may be configured to store network traffic and/or results of analysis by other nodes, and to produce reports based on the stored data for presentation to users or administrators of the cloud computing system140. The reports may include, but are not limited to, drill down reports allowing network activity to be viewed at both specific and high levels, event logs showing network traffic or other events matching particular criteria, real-time dashboards providing views of the current state of a customer's network traffic, incident response dashboards for monitoring issues with the customer's network traffic, and other reports. Sandbox nodes162,176may be configured to execute malicious or potentially malicious software programs in a virtual environment to allow the behavior of the programs to be analyzed without adverse effects to other computing devices external to the sandbox. In some cases, the malicious software programs may be identified by a web security node152,172, such as in a response from an external resource to a request from a client.
In addition to blocking the download of the malicious software program, the web security node152,172may provide the identified malicious software program to a sandbox node162,176for execution and analysis. The cloud computing system140may include other types of nodes not shown in the example configuration ofFIG.1. A risk assessment node may calculate a risk score for identified security events (e.g., intrusions, data exfiltration, denial of service attacks, or other events) in order to allow prioritization of the events based on a level of risk, which may facilitate planning of a remedy or response by the affected organization. For example, the risk assessment node may assign a higher risk score to a data exfiltration involving malicious removal of sensitive data from customer A network110, and assign a lower risk score to an intrusion on the customer A network110that did not access any sensitive data. Such a risk score may be generated based on network traffic received from the clients112, or based on data generated or stored by other nodes in the cloud computing system140. A log indexer node may organize data stored by a reporting node in a specific way to allow it to be accessed quickly, such as by another node within the cloud computing system140, or by a user or administrator of the cloud computing system140through a user interface. The set of unassigned nodes170includes nodes172,174,176that have not been assigned to a particular customer of cloud computing system140. In order to provide more computing resources for a particular customer, the cloud computing system140may select a node from the set of unassigned nodes170, and assign the selected node to a particular customer, thus making it a part of the node container for the particular customer.
In addition, if the cloud computing system140determines that the particular customer no longer needs the additional computing resources provided by the selected node (e.g., because network demand has decreased), the cloud computing system140may de-assign the selected node from the particular customer and return it to the set of unassigned nodes170. In such a case, all customer data may be deleted from the node when it is de-assigned. Nodes in the set of unassigned nodes170may be of particular node types, such as web security nodes172, reporting nodes174, and sandbox nodes176. These nodes may be configured to perform the functions of their particular node type, but may be “blank” in the sense that they do not include configuration data for any particular customer. The cloud computing system140may select a node of a particular type if the particular customer needs more resources of that type. For example, if the cloud computing system140determines that the amount of network traffic from customer A has increased to a level where two web security nodes are required to deliver or maintain a particular level of performance or latency, the cloud computing system140may select a web security node172from the set of unassigned nodes170, and assign it to customer A. Similarly, if the cloud computing system140determines that additional data storage capacity is needed for customer A, the cloud computing system140may select reporting node174from the set of unassigned nodes170and assign it to customer A. As previously discussed, the nodes of the cloud computing system140may be physical computing devices (physical nodes) or virtual machine instances within virtual machine environments executed by physical computing devices (virtual nodes). The cloud computing system140may include both physical nodes and virtual nodes.
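As a rough sketch of the assign/de-assign cycle for the unassigned pool170described above (all class and method names are invented for illustration):

```python
# Hypothetical node pool: typed "blank" nodes are assigned to a customer
# on demand, and wiped of customer data when returned to the pool.
class Node:
    def __init__(self, node_id, node_type):
        self.node_id = node_id
        self.node_type = node_type
        self.customer = None
        self.customer_data = {}

class NodePool:
    def __init__(self, nodes):
        self.unassigned = list(nodes)

    def assign(self, node_type, customer):
        # Select a blank node of the requested type, if one is available.
        for node in self.unassigned:
            if node.node_type == node_type:
                self.unassigned.remove(node)
                node.customer = customer
                return node
        return None  # no blank node of that type available

    def deassign(self, node):
        node.customer_data.clear()  # delete all customer data
        node.customer = None        # node becomes "blank" again
        self.unassigned.append(node)
```

A controller driving this pool would call `assign` when traffic or utilization for a customer rises, and `deassign` when demand falls, mirroring the behavior the text attributes to the cloud computing system140.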
In some cases, nodes in the same node container may be virtual nodes on the same computing device or physical nodes in the same chassis or data center to enable low latency communication between the nodes. For example, web security node152and reporting node154included in the customer A node container150may be virtual machine instances executed by the same physical computing device, thereby enabling the nodes to communicate without involving a network. Web security node152and reporting node154included in the customer A node container150may also be cards or blades connected to a data bus and contained within the same housing or chassis, enabling the nodes to communicate over the data bus. Web security node152and reporting node154included in the customer A node container150may also be computing devices within a data center, enabling the nodes to communicate over a high speed local network implemented in the data center. In some cases, the web security node152and reporting node154may be physical or virtual nodes associated with computing devices in different geographic areas, and may communicate over a network. The web security node152and reporting node154may also be physical or virtual nodes associated with computing devices located on the customer A network110, similar to the configuration shown for web security node124on the customer B network120. The above configurations of the nodes in the cloud computing system140are merely exemplary, and other configurations are contemplated by the present disclosure. In some implementations, each node in the cloud computing system140may be or be executed by a self-contained computing device including all resources it needs to perform its processing tasks. For example, the node may include one or more processors, one or more storage devices, and other computing components, and such components may be utilized only by the node itself (or other nodes executing on the same physical computing device in the case of virtual nodes).
Because sharing of such physical computing components is limited, each node or set of virtual nodes may be self-contained, enabling data associated with the customer assigned to the node or set of virtual nodes to be effectively isolated. FIG.2is a block diagram of an example configuration200of nodes assigned to a particular customer X. As shown, the customer X node container205includes a web security node210, a reporting node220, a sandbox node230, and a risk assessment node240. The components may be configured according to any of the techniques described above relative toFIG.1. As shown, the web security node210receives network traffic204associated with customer X. The web security node210processes the network traffic as described relative toFIG.1. The web security node210provides data based on the received network traffic204to the reporting node220, the sandbox node230, and the risk assessment node240. In some cases, the web security node210may provide data generated based on the customer network traffic204and/or the customer network traffic204itself. In some cases, the web security node210may provide different data to different nodes based on the node type of the receiving node. The nodes220,230,240perform different processing actions on the data received from the web security node210based on their particular node type. Examples of these processing actions are described above in the description ofFIG.1. FIG.3is a swim lane diagram showing a process300for delivering a distributed network security service providing isolation of customer data. The process involves interaction between a customer X302, a customer Y304, a DNS server306, and nodes308,310assigned to customer X and customer Y, respectively. The nodes308,310are included in a cloud computing system, such as that described relative toFIG.1. The DNS server306may also be included in or be separate from the cloud computing system. At320, customer X302sends a DNS query including a domain name.
In some cases, the domain name may be a “virtual” domain name, meaning that the domain name does not refer to a specific server, but instead to any node or server hosting a node that can serve as an entry point into the customer's particular node container. For example, a virtual domain name sent by customer X might include a sub-domain identifying the customer (e.g., “customerx.blah.com”). In some cases, the DNS server may identify a customer sending the request based on other information, such as the originating IP address or MAC address. At325, the DNS server306selects a node assigned to customer X to process the received network traffic. In some cases, this determination may be performed by another component within the system, such as a load balancer configured to distribute traffic among different nodes assigned to customer X. The system may also select the node based on its proximity to the current location of the customer X device that sent the request, such as by geo-locating the device based on the originating address for the request. At330, the DNS server306returns the address of the selected node (308) to customer X. At335, customer X302sends encrypted network traffic to node308, which is assigned to customer X. At340, the node308decrypts and processes the customer X network traffic. At345, customer Y304sends a DNS query including a domain name. In some cases, the domain name may be a “virtual” domain name, meaning that the domain name does not refer to a specific server, but instead to any node or server hosting a node that can serve as an entry point into the customer's particular node container. For example, a virtual domain name sent by customer Y might include a sub-domain identifying the customer (e.g., “customery.blah.com”). In some cases, the DNS server may identify a customer sending the request based on other information, such as the originating IP address or MAC address.
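The virtual-domain-name resolution described above might be sketched as follows; the subdomain convention, the address table, and the load-balancing hook are assumptions for illustration only:

```python
# Hypothetical mapping from customer subdomain to addresses of nodes
# assigned to that customer (as a DNS server or load balancer might hold).
CUSTOMER_NODES = {
    "customerx": ["10.0.1.8"],   # nodes assigned to customer X
    "customery": ["10.0.2.10"],  # nodes assigned to customer Y
}

def resolve_virtual_name(domain, pick=0):
    """Resolve a "virtual" name like customerx.blah.com to an assigned node."""
    sub = domain.split(".", 1)[0]        # e.g. "customerx"
    nodes = CUSTOMER_NODES.get(sub)
    if not nodes:
        raise KeyError("unknown customer subdomain")
    return nodes[pick % len(nodes)]      # a load balancer could vary 'pick'
```

Because the table only ever yields nodes assigned to the requesting customer, traffic is steered to single-tenant entry points, matching the isolation property the swim lane diagram illustrates.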
At350, the DNS server306selects a node assigned to customer Y to process the received network traffic, as described above. At355, the DNS server306returns the address of the selected node (310) to customer Y. At360, customer Y304sends encrypted network traffic to node310, which is assigned to customer Y. At365, the node310decrypts and processes the customer Y network traffic. In some implementations, the cloud computing system may not include the DNS server306, and the customers may be configured to send network traffic directly to a node to which they are assigned, such as by utilizing encrypted tunnels to the assigned nodes. In either case, the network traffic is only decrypted by a node that is dedicated to that particular customer, and thus is kept isolated from network traffic and data from other customers. FIG.4is a swim lane diagram showing a process for assigning and de-assigning a node from a particular customer in a distributed network security system. At410, a controller402determines that customer X requires a new node, such as in response to increased network traffic, increased utilization on existing assigned nodes, or other events. At415, the controller402assigns an unassigned node404to customer X. In response, at420, the node404retrieves configuration data for customer X from other nodes assigned to customer X. In some cases, the node404may receive the configuration data from other nodes of the same type (e.g., other web security nodes if the node404is a web security node). This process results in the configuration data for customer X only being stored at nodes assigned to customer X, thereby ensuring the data isolation previously discussed. At425, the node404processes network traffic received from customer X, as described previously relative toFIG.1. At430, the controller402determines that customer X no longer requires node404, such as in response to decreased network traffic, decreased utilization across nodes assigned to customer X, or other events.
At435, controller402de-assigns node404from customer X, such as by sending a command to the node404over a network. In response, at440, the node404deletes any local data it has stored associated with customer X, and returns to the set of unassigned nodes as a “blank” node. FIG.5is an example user interface500for the distributed network security system. The user interface500may be presented to a user, such as through a web browser, and may receive input from the user, for example in the form of keystrokes or mouse clicks. The user interface500includes an array of visual tiles (e.g.,510,520) each associated with a particular function of the cloud computing system. Before accessing user interface500, the user may have provided login credentials to a multi-tenant authentication system, and a system that presents the user interface500may itself be multi-tenant. When the user activates one of the visual tiles, a request to a node associated with the particular function denoted by the tile is generated. This request is sent to a single-tenant node assigned to the customer with which the user is associated. The single-tenant node may respond with a subsequent user interface (e.g., a webpage to be rendered in the user's browser) allowing the user to access or change data associated with the particular customer. For example, when a user associated with a customer A clicks on the web security tile510, a request may be sent to a web security node assigned to customer A (e.g., web security node152inFIG.1). The web security node may respond to the user with a webpage including configuration or other data associated with customer A. If a user from another customer clicks on the web security tile510, a request would be generated to a different web security node associated with that customer.
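The tile-to-node routing described for the user interface500could be sketched as below; the lookup table and all names are hypothetical:

```python
# Hypothetical mapping from (customer, tile function) to the single-tenant
# node that should handle the request generated by activating the tile.
ASSIGNED_NODES = {
    ("customer_a", "web_security"): "node152",
    ("customer_b", "web_security"): "node124",
}

def route_tile_request(customer, tile):
    """Route a tile activation in the multi-tenant UI to a single-tenant node."""
    node = ASSIGNED_NODES.get((customer, tile))
    if node is None:
        raise LookupError("no node of this type assigned to customer")
    return {"send_to": node, "function": tile}
```

The point of the sketch is the key: the customer identity is part of every lookup, so the shared UI never routes one customer's request to another customer's node.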
In this way, a global, multi-tenant user interface may be implemented to service multiple customers of the cloud computing system, while requests involving customer data are still handled by single-tenant nodes dedicated to that particular customer. FIG.6is a flow chart showing a process600for delivering a distributed network security service providing isolation of customer data. The process600may be performed in the context of any of the systems previously described. At605, a first node in a distributed network is assigned to a first customer. In some cases, the first node is selected from a set of unassigned nodes that are not assigned to any customer. The first node may be assigned to the first customer based on a determination that the first customer requires additional processing resources. In some implementations, the first node is a virtual machine instance executed by a physical computing device. The first node may also be a physical computing device located on a local network controlled by the first customer. The first node may be an administrative node, a web security node, a reporting node, a sandbox node, an uptime node, a risk assessment node, or any other type of node. In some cases, the first node is a web security node, and processing the network traffic associated with the first customer includes applying a network policy to the network traffic. In some implementations, the first node is a reporting node, and processing the network traffic associated with the first customer includes storing data associated with the network traffic of the first customer. At610, a second node in the distributed network is assigned to a second customer, the second node being different than the first node and the second customer being different than the first customer. The second node may be selected from the set of unassigned nodes, and may include all functionality described relative to the first node.
At615, the assigned first node is configured to process network traffic only from the first customer. In some cases, configuring the assigned first node includes receiving, by the assigned first node, configuration information specific to the first customer only from one or more other nodes assigned to the first customer. At620, the assigned second node is configured to process network traffic only from the second customer. At625, the assigned first node processes network traffic associated with the first customer, wherein the network traffic of the first customer is isolated from the network traffic of the second customer. At630, the assigned second node processes network traffic associated with the second customer, wherein the network traffic of the first customer is isolated from the network traffic of the second customer. In some cases, the process600further includes determining that the first customer no longer requires the first node after assigning the first node to the first customer, and de-assigning the first node from the first customer including deleting data associated with the first customer from the first node, and returning the first node to the set of unassigned nodes. The process600may include assigning an additional node to the first customer, wherein the additional node is a virtual machine executed by a physical computing device located on a different network than the first node. The additional node assigned to the first customer may be of the same or a different node type than the first node.
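A minimal sketch of the customer-scoped configuration step (a newly assigned node copying configuration only from same-customer peers of the same type), with illustrative data shapes and names:

```python
# Hypothetical configuration pull: the new node accepts configuration only
# from peers assigned to the same customer and of the same node type.
def pull_configuration(new_node, all_nodes):
    for peer in all_nodes:
        if peer is new_node:
            continue
        if peer["customer"] == new_node["customer"] and peer["type"] == new_node["type"]:
            new_node["config"] = dict(peer["config"])  # copy, never share
            return True
    return False  # no same-customer peer to copy from
```

Because the customer check gates every copy, configuration data for one customer is only ever stored on nodes assigned to that customer, which is the isolation property the process emphasizes.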
In some implementations, the process600includes determining that the first customer requires additional processing resources in a particular geographic location based on at least one request associated with the first customer received from the particular geographic location, wherein assigning the first node to the first customer includes selecting the first node from the set of unassigned nodes based on a proximity of the location of a physical computing device associated with the first node to the particular geographic location. In some cases, the process600includes receiving, from a client associated with the first customer, a request to access a multi-tenant user interface; authenticating the client to the multi-tenant user interface using credentials associated with the first customer; receiving a request to access data associated with the first customer from the client via the multi-tenant user interface; and in response to receiving the request from the client via the multi-tenant user interface, generating a request to the first node assigned to the first customer. FIG.7is a block diagram of computing devices700,750that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers. Computing device700is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device750is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. Additionally, computing device700or750can include Universal Serial Bus (USB) flash drives. The USB flash drives may store operating systems and other applications.
The USB flash drives can include input/output components, such as a wireless transmitter or USB connector that may be inserted into a USB port of another computing device. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. Computing device700includes a processor702, memory704, a storage device706, a high-speed interface708connecting to memory704and high-speed expansion ports710, and a low speed interface712connecting to low speed bus714and storage device706. Each of the components702,704,706,708,710, and712, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor702can process instructions for execution within the computing device700, including instructions stored in the memory704or on the storage device706to display graphical information for a GUI on an external input/output device, such as display716coupled to high speed interface708. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices700may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory704stores information within the computing device700. In one implementation, the memory704is a volatile memory unit or units. In another implementation, the memory704is a non-volatile memory unit or units. The memory704may also be another form of computer-readable medium, such as a magnetic or optical disk. The storage device706is capable of providing mass storage for the computing device700. 
In one implementation, the storage device706may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory704, the storage device706, or memory on processor702. The high speed controller708manages bandwidth-intensive operations for the computing device700, while the low speed controller712manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller708is coupled to memory704, display716(e.g., through a graphics processor or accelerator), and to high-speed expansion ports710, which may accept various expansion cards (not shown). In the implementation, low-speed controller712is coupled to storage device706and low-speed expansion port714. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device700may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server720, or multiple times in a group of such servers. It may also be implemented as part of a rack server system724. In addition, it may be implemented in a personal computer such as a laptop computer722. 
Alternatively, components from computing device700may be combined with other components in a mobile device (not shown), such as device750. Each of such devices may contain one or more of computing device700,750, and an entire system may be made up of multiple computing devices700,750communicating with each other. Computing device750includes a processor752, memory764, an input/output device such as a display754, a communication interface766, and a transceiver768, among other components. The device750may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components750,752,764,754,766, and768, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. The processor752can execute instructions within the computing device750, including instructions stored in the memory764. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. Additionally, the processor may be implemented using any of a number of architectures. For example, the processor752may be a CISC (Complex Instruction Set Computers) processor, a RISC (Reduced Instruction Set Computer) processor, or a MISC (Minimal Instruction Set Computer) processor. The processor may provide, for example, for coordination of the other components of the device750, such as control of user interfaces, applications run by device750, and wireless communication by device750. Processor752may communicate with a user through control interface758and display interface756coupled to a display754. The display754may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface756may comprise appropriate circuitry for driving the display754to present graphical and other information to a user. 
The control interface758may receive commands from a user and convert them for submission to the processor752. In addition, an external interface762may be provided in communication with processor752, so as to enable near area communication of device750with other devices. External interface762may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. The memory764stores information within the computing device750. The memory764can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory774may also be provided and connected to device750through expansion interface772, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory774may provide extra storage space for device750, or may also store applications or other information for device750. Specifically, expansion memory774may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory774may be provided as a security module for device750, and may be programmed with instructions that permit secure use of device750. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
The information carrier is a computer- or machine-readable medium, such as the memory764, expansion memory774, or memory on processor752that may be received, for example, over transceiver768or external interface762. Device750may communicate wirelessly through communication interface766, which may include digital signal processing circuitry where necessary. Communication interface766may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver768. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module770may provide additional navigation- and location-related wireless data to device750, which may be used as appropriate by applications running on device750. Device750may also communicate audibly using audio codec760, which may receive spoken information from a user and convert it to usable digital information. Audio codec760may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device750. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device750. The computing device750may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone780. It may also be implemented as part of a smartphone782, personal digital assistant, or other similar mobile device. 
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. 
Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), peer-to-peer networks (having ad-hoc or static members), grid computing infrastructures, and the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Although a few implementations have been described in detail above, other modifications are possible. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. 
Accordingly, other implementations are within the scope of the following claims. | 49,098 |
11943298 | DETAILED DESCRIPTION In general, a device having a function of performing communication conforming to an Ethernet standard (hereinafter, a communication device) includes a physical layer (PHY) and a medium access controller (MAC) that executes medium access control. In general, a communication device includes a plurality of PHYs and a plurality of MACs corresponding to the PHYs, respectively. The MAC operates based on a clock signal input from the PHY or the like. For example, when the PHY and the MAC are configured to communicate by media independent interface (MII), the MAC transmits and receives data with the PHY connected to the MAC based on the clock signal input from the PHY. Further, in reduced MII (RMII), the MAC transmits and receives data with the PHY using a reference clock signal output by a clock generator prepared separately from the PHY. The communication device here includes an information processing device such as a computer in addition to a router and a switch. Hereinafter, for convenience, a configuration that outputs the clock signal used for the MAC to transmit and receive data with the PHY, such as the PHY in MII and the clock generator in RMII, is referred to as a clock output source. Further, a signal line through which the clock signal used for the MAC to transmit and receive data with the PHY is transmitted is referred to as a clock line. In a communication device, when an external noise superimposed on a communication cable causes a fault in the clock output source, a phenomenon in which the clock line connected to the clock output source is fixed at a certain level (hereinafter, clock fixing phenomenon) may occur. In particular, since there are many noise sources such as actuators in vehicles, noise is more likely to be superimposed on a communication cable than in offices, schools, homes or the like (hereinafter, offices or the like). 
Therefore, in a communication device used in a vehicle (that is, a vehicular communication device), the above-described clock fixing phenomenon is more likely to occur than in a general communication device used in offices or the like. The vehicular communication device is, for example, an electronic control unit (hereinafter referred to as an ECU). In addition, relay devices such as a router and a switching hub are also included in vehicular communication devices. When a clock fixing phenomenon occurs in the vehicular communication device, the MAC connected to the clock line where the clock fixing phenomenon occurs can no longer transmit data to the PHY or receive data output from the PHY. Accordingly, the vehicular communication device can no longer transmit and receive data with other communication devices via the MAC. On the other hand, there are some combinations of in-vehicle ECUs that should share information in real time from the viewpoint of safety. Therefore, there is a demand for releasing the clock fixing phenomenon in vehicular communication devices as soon as possible. In the case of a communication device used in offices or the like (that is, a communication device for the general public), even if a fault occurs due to the clock fixing phenomenon, a user can take measures, such as restarting the communication device manually, for example, by pressing a power button for a long time. However, in-vehicle communication devices such as an ECU and a relay device are stored inside vehicles and are not configured to be easily operated by a user such as a driver. A vehicular communication device according to an aspect of the present disclosure is configured to perform communication in accordance with an Ethernet standard, and includes a medium access controller (MAC), a clock monitor, and a return processor. The MAC is configured to execute medium access control. The clock monitor is configured to monitor a clock signal for operating the MAC. 
The return processor is configured to restart a clock output source that outputs the clock signal, in response to the clock monitor detecting that the clock signal is stopped. According to the above configuration, when the clock signal for operating the MAC is fixed, the clock monitor detects a stop of the clock signal, and the return processor restarts the clock output source. When the clock signal is stopped, it is assumed that the clock output source has a fault due to external noise superimposed on a communication cable. Therefore, restarting the clock output source will release the fixing of a clock line. That is, according to the above configuration, the fixing of the clock signal for operating the MAC can be automatically released. Embodiments of the present disclosure will be described below with reference to the drawings.FIG.1is a diagram showing a configuration example of an in-vehicle communication system100according to the present disclosure. The in-vehicle communication system100is a communication system built in a vehicle. The in-vehicle communication system100according to the present embodiment is configured according to the in-vehicle Ethernet standard. Hereinafter, data communication in accordance with the Ethernet communication protocol is referred to as Ethernet communication. Further, hereinafter, a communication frame refers to a communication frame in accordance with the Ethernet communication protocol (so-called Ethernet frame). The in-vehicle communication system100includes at least one relay device2and a plurality of electronic control units (ECUs)1as nodes. The in-vehicle communication system100shown inFIG.1includes six ECUs1and two relay devices (RELAY)2as an example. When distinguishing each of the two relay devices2from the other, the two relay devices2are described as relay devices2aand2b. Further, when distinguishing each of the six ECUs1from the others, the six ECUs1are described as ECUs1ato1f. 
The ECUs1ato1care connected to the relay device2avia communication cables9, respectively, so as to be able to communicate with each other. The ECUs1dto1fare connected to the relay device2bvia communication cables9, respectively, so as to be able to communicate with each other. The relay device2aand the relay device2bare also connected to each other via a communication cable9so as to be able to communicate with each other. The communication cables9may be twisted pair cables. The number of ECUs1and relay devices2constituting the in-vehicle communication system100is an example, and can be changed as appropriate. Further, the network topology of the in-vehicle communication system100shown inFIG.1is an example and is not limited thereto. The network topology of the in-vehicle communication system100may be a mesh type, a star type, a bus type, a ring type, or the like. The network shape can also be changed as appropriate. As described above, the ECUs1are connected to the relay devices2via the communication cables9. The plurality of ECUs1provide different functions. For example, the ECU1ais an ECU that provides an autonomous driving function (so-called autonomous driving ECU). The ECU1bis an ECU that acquires a program for updating a software of an ECU by wirelessly communicating with an external server and updates the software of the ECU to which the program is applied. The ECU1cis an ECU that provides a function as a gateway when the in-vehicle communication system100is connected to an external tool by wire. The external tool here refers to a tool for updating or rewriting the software of a target ECU by wired communication (so-called reprogramming tool) or a diagnostic tool. The relay devices2can be connected with the ECUs1that provide various functions as nodes. Each of the ECUs1performs transmission and reception of data with another ECU1via the relay device2according to the Ethernet communication protocol. 
Each of the ECUs1directly communicates only with the relay device2. Nodes connected to the relay device2may be nodes other than the ECU1, such as a sensor. The node may be an external tool capable of dynamically changing the connection state to the in-vehicle communication system100by a user or an inspector. The relay device2can also correspond to a node from another point of view. For example, for the relay device2a, the relay device2bcorresponds to one of the nodes connected to the relay device2a. Each of the ECUs1and each of the relay devices2are assigned with unique identification information. The identification information includes a MAC address. Each of the relay devices2is a device that transmits a communication frame received from a certain communication cable9to a communication cable9according to the destination of the communication frame. As shown inFIG.2, each of the relay devices2includes a plurality of PHYs3, a controller (CTRL)4, a microcomputer (MC)5, and a power supply circuit (PS CIR)6. Each of the PHYs3is connected to the communication cable9and provides a physical layer in the OSI reference model. Each of the PHYs3includes a port31electrically connected to the communication cable9. In the present embodiment, as an example, one communication cable9is connected to one PHY3. That is, each of the PHYs3includes one port31to be connected with one of the communication cables9. For example, one PHY3included in the relay device2ais connected to the ECU1avia the communication cable9, and another PHY3included in the relay device2ais connected to the ECU1bvia the communication cable9. In addition, the relay device2aincludes a PHY3connected to the ECU1cvia the communication cable9, and a PHY3connected to the relay device2bvia the communication cable9. The number of PHYs3included in the relay device2corresponds to the number of nodes to which the relay device2can be connected. 
As an example, the relay device2of the present embodiment includes six PHYs3so as to enable Ethernet communication with six nodes at the maximum. As another configuration, the number of PHYs3included in the relay device2may be four or eight. Further, each of the PHYs3may include a plurality of ports31. For example, each of the PHYs3may have two ports31. A unique port number is set for each of the plurality of ports31included in the relay device2. For convenience, when the plurality of ports31included in the relay device2are distinguished, each of the ports31is described as the Kth port using the port number K set to each of the ports31. For example, a first port refers to the port31whose port number is set to 1, and a second port refers to the port31whose port number is set to 2. Generally, the PHY3converts the signal input from the connected communication cable9(hereinafter, the connection cable) into a digital signal that can be processed by the controller4, and outputs the digital signal to the controller4(specifically, a MAC41). Further, the PHY3converts the digital signal input from the controller4into an analog signal, which is an electric signal capable of being transmitted to the communication cable9, and outputs the converted signal to a predetermined communication cable9. In addition to the above-described signal conversion, the PHY3also performs frame coding, serial-parallel conversion, signal waveform conversion, and the like. The PHY3is an integrated circuit (IC) including an analog circuit, that is, a hardware circuit. Each of the PHYs3and the controller4(specifically, MAC41) are configured to communicate with each other according to an MII standard, as will be described later. The controller4is connected to each of the PHYs3and is also connected to the microcomputer5so as to be able to communicate with each other. 
The controller4is programmed to execute functions of a second layer (data link layer) to a third layer (so-called network layer) in the OSI reference model. The controller4includes a plurality of MACs41, a plurality of clock monitors (CL MNT)42, a switch processor (SW PRC)43, and a third layer provider (3L PRV) L3 as functional blocks. Each of the MACs41performs medium access control in the Ethernet communication protocol. The MACs41are prepared for the PHYs3, respectively. That is, the number of MACs41is the same as the number of the PHYs3. The MACs41are each connected to a different one of the PHYs3. Each of the MACs41provides the switch processor43with a communication frame (hereinafter, also referred to as a reception frame) input from the PHY3connected to each of the MACs41. In addition, each of the MACs41outputs the communication frame input from the switch processor43to the PHY3corresponding to each of the MACs41, and transmits the communication frame to the communication cable9. Each of the MACs41performs carrier sense multiple access/collision detection (CSMA/CD) cooperatively with the corresponding PHY3. Each of the MACs41may be configured to provide the functions specified by IEEE 802.3. Each of the clock monitors42is configured to monitor the clock signal input from the PHY3to the MAC41. The clock monitor42is prepared for every MAC41(in other words, every PHY3), for example. That is, the controller4includes the plurality of clock monitors42so as to correspond to the respective MACs41/PHYs3. The details of the clock monitors42will be described later. The switch processor43identifies the PHY3(strictly speaking, the port31) to which the communication frame received from the MAC41is to be transmitted based on the destination MAC address included in the communication frame and an address table. Then, the reception frame is relayed by outputting the communication frame to the MAC41corresponding to the identified PHY3. 
The address table is data indicating the MAC address of the node connected to each PHY3(strictly speaking, each port31). The MAC address for each PHY3is learned by various methods such as learning bridge and address resolution protocol (ARP). A detailed description of the method of generating the address table will be omitted. The controller4may be provided with the function of learning the MAC address of the connection destination for each PHY3(hereinafter, the address table update function), or the microcomputer5may be provided with the address table update function. The third layer provider performs relay processing using an internet protocol (IP) address. In other words, the third layer provider relays communication frames between different networks. The function of the third layer in the OSI reference model may be provided in the microcomputer5. The functional arrangement in the relay device2can be changed as appropriate. The controller4is realized by using, for example, a field-programmable gate array (FPGA). The controller4may be realized by using an application specific integrated circuit (ASIC). Further, the controller4may be realized by using a microprocessor unit (MPU), a central processing unit (CPU), or a graphical processing unit (GPU). The controller4having the above-described functions corresponds to a configuration that operates as a switch (in other words, a switching hub) or a router. The microcomputer5is a computer including a CPU, a read only memory (ROM), a random access memory (RAM), an input-output part (I/O), and a bus line for connecting these components. The ROM stores a program for causing a general-purpose computer to function as the microcomputer5. The microcomputer5provides the functions from a fourth layer to a seventh layer of the OSI reference model by the CPU executing the program stored in the ROM while using the temporary storage function of the RAM. 
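The relay behavior of the switch processor43described above — learning which port each source MAC address was seen on, forwarding by destination MAC, and falling back to flooding when the destination is unknown — can be sketched as follows. This is an illustrative learning-bridge model, not the patent's implementation; the class and method names are hypothetical.

```python
class LearningBridge:
    """Illustrative model of an address-table relay (learning bridge).

    Learns which port each source MAC address was observed on, then
    forwards frames out the learned port for the destination MAC,
    flooding to all other ports while the destination is unknown.
    """

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.address_table = {}  # MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source address is reachable via the ingress port.
        self.address_table[src_mac] = in_port
        # Forward: use the table if the destination is known; otherwise
        # flood to every port except the one the frame arrived on.
        if dst_mac in self.address_table:
            out = self.address_table[dst_mac]
            return [] if out == in_port else [out]
        return [p for p in range(self.num_ports) if p != in_port]
```

For example, the first frame toward an unknown destination is flooded out all other ports; once that destination has sent a frame of its own, subsequent traffic to it leaves only the learned port.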
That is, the microcomputer5includes a fourth layer provider (4L PRV) L4, a fifth layer provider (5L PRV) L5, a sixth layer provider (6L PRV) L6, and a seventh layer provider (7L PRV) L7 corresponding to each layer from the fourth layer to the seventh layer. The fourth layer provider L4 is configured to execute a process as the fourth layer (that is, a transport layer), and executes inter-program communication, data transfer guarantee, and the like. The fifth layer provider L5 is configured to execute a process as the fifth layer (that is, a session layer). The sixth layer provider L6 is configured to execute a process as the sixth layer (that is, a presentation layer). The seventh layer provider L7 is configured to execute a process as the seventh layer (that is, an application layer). Such a configuration corresponds to a configuration in which the fourth to seventh layers are realized by software processing. The storage medium storing the program executed by the CPU is not limited to the ROM; the program may be stored in a non-transitory tangible recording medium. Further, as shown separately inFIG.3, the microcomputer5includes a return processor (RTN PRC)51. The return processor51executes a predetermined clock fixing releasing process based on the notification from the clock monitor42. The details of the operation of the return processor51will be described later. Next, the configuration and operation of the clock monitor42will be described with reference toFIG.3. The MAC41of the present embodiment is configured to transmit and receive data with the PHY3by MII. That is, the PHY3includes a transmission clock output terminal P11, a transmission data input terminal P12, a reception clock output terminal P13, and a reception data output terminal P14. Further, the MAC41includes a transmission clock input terminal P21, a transmission data output terminal P22, a reception clock input terminal P23, and a reception data input terminal P24. 
The MAC41further includes a transmission controller (TX CTRL)411and a reception controller (RX CTRL)412as a configuration for performing communication by MII. The PHY3further includes a reset input terminal P15. The transmission clock output terminal P11is connected to the transmission clock input terminal P21by a signal line, and the transmission data input terminal P12is connected to the transmission data output terminal P22by a signal line. The reception clock output terminal P13is connected to the reception clock input terminal P23by a signal line, and the reception data output terminal P14is connected to the reception data input terminal P24by a signal line. For convenience, the signal line connecting the transmission clock output terminal P11and the transmission clock input terminal P21is referred to as a transmission clock line Ln1. Further, the signal line connecting the reception clock output terminal P13and the reception clock input terminal P23is referred to as the reception clock line Ln2. The MAC41further includes a terminal for outputting a transmission enable signal, a terminal for receiving a signal indicating that valid reception data is being received, and the like (both are not shown). Further, the MAC41may be provided with terminals for inputting or outputting various signals such as an input terminal for a carrier detection signal and an input terminal for a collision detection signal. When the PHY3is operating normally, the PHY3sequentially outputs a transmission clock signal (TX_CLK) of a predetermined frequency (for example, 25 MHz) from the transmission clock output terminal P11. The transmission clock signal output from the transmission clock output terminal P11is input to the transmission clock input terminal P21of the MAC41. The transmission clock signal is a clock for operating the transmission controller411of the MAC41. 
The transmission controller411is configured to execute a process for outputting the communication frame input from the switch processor43to the PHY3. Based on the transmission clock signal being inputted, the transmission controller411outputs the data constituting the communication frame input from the switch processor43to the PHY3by 4 bits at a time. TXD shown inFIG.3represents 4-bit transmission data. The transmission data is output from the transmission data output terminal P22of the MAC41and input to the transmission data input terminal P12of the PHY3. The transmission data input to the PHY3is subjected to processing such as modulation by a transmission circuit (not shown) and transmitted to the communication cable9. When the transmission clock signal is stopped, the transmission controller411stops operating. Therefore, when the output of the transmission clock output terminal P11of a certain PHY3is fixed at a certain level (for example, a high level or a low level), the relay device2can no longer output data to another communication device connected to the PHY3. Further, when the PHY3is operating normally, the PHY3sequentially outputs a reception clock signal (RX_CLK) of a predetermined frequency (for example, 25 MHz) from the reception clock output terminal P13. The reception clock signal output from the reception clock output terminal P13is input to the reception clock input terminal P23of the MAC41. The reception clock signal is a clock for operating the reception controller412of the MAC41. Further, the PHY3receives data input from the communication cable9and outputs the data from the reception data output terminal P14by 4 bits at a time. The reception data (RXD) output from the reception data output terminal P14is input to the reception data input terminal P24of the MAC41. The reception controller412of the MAC41is configured to execute a process for outputting the reception data provided from the PHY3to the switch processor43. 
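The 4-bits-at-a-time transfer described above can be modeled as splitting each frame byte into two nibbles, one driven per clock cycle (25 MHz × 4 bits gives the 100 Mbit/s MII data rate). The patent does not state the nibble ordering; the sketch below assumes the low-order nibble is sent first, as in standard MII, and the function names are illustrative.

```python
def bytes_to_nibbles(frame):
    """Split frame bytes into the 4-bit TXD values driven on successive
    TX_CLK cycles (low-order nibble of each byte first, as in standard MII)."""
    nibbles = []
    for byte in frame:
        nibbles.append(byte & 0x0F)         # low nibble on the first cycle
        nibbles.append((byte >> 4) & 0x0F)  # high nibble on the next cycle
    return nibbles


def nibbles_to_bytes(nibbles):
    """Reassemble received 4-bit RXD values back into frame bytes
    (inverse of bytes_to_nibbles; expects an even number of nibbles)."""
    return bytes(lo | (hi << 4) for lo, hi in zip(nibbles[0::2], nibbles[1::2]))
```

A byte such as 0xAB is thus carried as the nibble 0xB on one TX_CLK cycle followed by 0xA on the next, and the receive side performs the inverse pairing.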
Based on the reception clock signal being inputted, the reception controller412operates and acquires the data input to the reception data input terminal P24. When the reception clock signal is stopped, the reception controller412stops operating. Therefore, when the reception clock signal is stopped, the reception controller412does not receive the data even if the reception data is output from the reception data output terminal P14. Therefore, when the output of the reception clock output terminal P13of a certain PHY3is fixed at a certain level (for example, a high level), the relay device2can no longer receive data from another communication device connected to the PHY3. Hereinafter, for convenience, when the reception clock signal and the transmission clock signal are not distinguished from each other, they are referred to as clock signals. Further, when the transmission clock output terminal P11and the reception clock output terminal P13are not distinguished, they are simply described as clock output terminals. In general, a plurality of types of clock signals such as a system clock are present in the relay device2, but the clock signals here indicate clock signals for operating the transmission controller411and the reception controller412of the MAC41. The PHY3that outputs the clock signal corresponds to the clock output source. The clock monitor42is configured to detect the fixing of the transmission clock line and the reception clock line. The clock monitor42of the present embodiment includes a transmission clock monitor (TX CLK MNT)421and a reception clock monitor (RX CLK MNT)422as a finer configuration. The transmission clock monitor421is configured to receive the output signal of the transmission clock output terminal P11. For example, the transmission clock monitor421is electrically connected to the transmission clock line Ln1. 
The transmission clock monitor421sequentially monitors a voltage applied to the transmission clock line Ln1and detects a stop of the transmission clock signal. Sequentially monitoring the voltage applied to the transmission clock line Ln1corresponds to sequentially monitoring the signal input to the transmission clock input terminal P21of the MAC41and the signal output from the transmission clock output terminal P11of the PHY3. The transmission clock monitor421determines that the PHY3is operating normally based on a periodic input of the pulse signal from the transmission clock output terminal P11. Further, the transmission clock monitor421detects a fault operation of the PHY3based on a fact that the periodic pulse signal is no longer input from the transmission clock output terminal P11. For example, as shown inFIG.4, the transmission clock monitor421determines that the transmission clock line is fixed when a rising edge is not observed for a predetermined standby time Tq or more. The state in which the transmission clock line is fixed corresponds to a state in which the output of the transmission clock output terminal P11is fixed at a certain level such as a high level or a low level. Determining that the transmission clock line is fixed corresponds to detecting a fixing of the transmission clock line. The standby time Tq, which is a parameter for determining that the transmission clock line is fixed, may be set to, for example, four times a rising edge interval Tp in a case where the PHY3is operating normally. A specific value of the standby time Tq may be appropriately designed. The standby time Tq may be set to a value longer than the rising edge interval Tp of the pulse signal. The case where the rising edge of the transmission clock signal is not observed for the predetermined standby time Tq or more means that the transmission clock signal remains at a constant level (for example, the high level or the low level) for the predetermined standby time Tq or more. 
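The detection rule above — judge the clock line fixed when no rising edge has been observed for the standby time Tq, e.g. Tq = 4 × Tp — can be sketched as a simple timestamp check. The function below is an illustrative model (names and the time unit are assumptions); in the patent this check is performed in hardware by the clock monitor42.

```python
def clock_stuck(edge_times, now, tp, multiple=4):
    """Return True when the clock line should be judged fixed.

    edge_times: timestamps of observed rising edges (e.g. in nanoseconds)
    now:        current time in the same unit
    tp:         nominal rising-edge interval Tp (40 ns for a 25 MHz clock)
    multiple:   Tq is set to multiple * Tp (the patent suggests 4 as an example)

    The line is judged fixed when the time since the last rising edge
    reaches or exceeds the standby time Tq = multiple * Tp.
    """
    tq = multiple * tp
    if not edge_times:
        return now >= tq          # never saw an edge since monitoring began
    return (now - edge_times[-1]) >= tq
```

With a 25 MHz clock (Tp = 40 ns), a healthy line produces an edge roughly every 40 ns, so the stuck condition only triggers after a 160 ns gap under the default multiple.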
When the transmission clock monitor421detects that the transmission clock line is fixed, the transmission clock monitor421outputs a fault notification data to the microcomputer5. The fault notification data is data indicating that the transmission clock line of the PHY3to be monitored is fixed. It is preferable that the fault notification data includes information indicating which of the plurality of PHYs3included in the relay device2has the fixing of the clock line. In the present embodiment, as an example, a unique identification number (hereinafter, PHY number) is set in advance for each PHY3provided in the relay device2, and the transmission clock monitor421is configured to notify of the PHY number of the PHY3whose clock line is fixed. The PHY3whose clock line is fixed refers to the PHY3in which the output level of the clock output terminal is fixed to a certain level. The return processor51of the microcomputer5outputs a reset signal to the PHY3whose clock line is fixed, based on the fault notification data input from the transmission clock monitor421. In this way, the microcomputer5outputs the reset signal to the PHY3detected not to be operating normally by the clock monitor42, thereby restarting the PHY3. Accordingly, the PHY3whose clock line is fixed is returned to the normal state. The output process of the reset signal is an example of a process performed by the return processor51in order to normalize the operation of the PHY3. The content of the process performed by the return processor51to normalize the operation of the PHY3is not limited to the above example. The details of the operation mode of the return processor51will be described later. The reception clock monitor422is configured to sequentially monitor a voltage applied to the reception clock line Ln2to detect the fixing of the reception clock line. The reception clock monitor422is configured to receive the output signal of the reception clock output terminal P13. 
For example, the reception clock monitor422is electrically connected to the reception clock line Ln2. Sequentially monitoring the voltage applied to the reception clock signal line Ln2corresponds to sequentially monitoring the signal input to the reception clock input terminal P23of the MAC41and the signal output from the reception clock output terminal P13of the PHY3. The method by which the reception clock monitor422detects the fixing of the reception clock line can be the same as the method by which the transmission clock monitor421detects the fixing of the transmission clock line. That is, the reception clock monitor422determines that the reception clock signal is stopped when the rising edge is not observed for the predetermined standby time Tq or more. When the reception clock monitor422determines that the reception clock line is fixed, the reception clock monitor422outputs a fault notification data to the microcomputer5. The content of the fault notification data is similar to the fault notification data output by the transmission clock monitor421. The transmission clock monitor421and the reception clock monitor422may be configured to determine that the PHY3is operating abnormally also when the interval between the rising edges of the clock signal is too short or too long. Hereinafter, when the transmission clock monitor421and the reception clock monitor422are not distinguished from each other, they are simply referred to as the clock monitor42. Here, the clock fixing releasing process performed by the return processor51in cooperation with the clock monitor42and the like will be described using the flowchart shown inFIG.5. The clock fixing releasing process is a process for returning the PHY3whose clock signal is fixed to the normal state. The return processor51executes the clock fixing releasing process based on the input of the fault notification data from the clock monitor42. 
The clock fixing releasing process of the present embodiment includes S101to S109as an example. The conditions under which the return processor51executes the clock fixing releasing process can be changed as appropriate. For example, the return processor51may be configured to execute the clock fixing releasing process when the stop of the clock signal is detected a plurality of times in the same PHY3within a certain time. First, in S101, the return processor51outputs the reset signal to the PHY3for which the fixing of the clock line is detected (hereinafter, the target PHY). The target PHY may be specified by, for example, the PHY number included in the fault notification data. When the process in S101is completed and the predetermined PHY starting time elapses, S102is executed. The PHY starting time is an estimated value of a time required for restarting the PHY3. The specific value of the PHY starting time may be appropriately designed. In the present embodiment, as an example, it is assumed that the reset signal is directly input from the microcomputer5to the target PHY, but the present embodiment is not limited to this. The microcomputer5may be configured to instruct the controller4to reset the target PHY, and the controller4may be configured to reset the target PHY based on the instruction from the microcomputer5. In S102, the clock monitor42corresponding to the target PHY determines whether the target PHY is normally outputting the clock signal. The clock monitor42corresponding to the target PHY is the clock monitor42configured to monitor the clock signal output from the target PHY among the plurality of clock monitors42. In other words, the clock monitor42connected to the clock output terminal of the target PHY corresponds to the clock monitor42corresponding to the target PHY. The determination of whether the target PHY is normally outputting the clock signal may be performed in the same manner as the determination of whether the clock line is fixed. 
When it can be confirmed that the target PHY is normally outputting the clock signal, S102is affirmatively determined and this flow ends. On the other hand, when the target PHY is not normally outputting the clock signal, that is, when the clock line of the target PHY is still fixed, S102is negatively determined and S103is executed. In S103, the return processor51cooperates with the power supply circuit6to temporarily interrupt the power supply to the target PHY, thereby restarting the target PHY in terms of hardware (that is, hardware rebooting). Specifically, the return processor51outputs a control signal to the power supply circuit6to interrupt the power supply to the target PHY. Then, after the lapse of the predetermined time, the return processor51outputs a control signal for supplying power to the target PHY to the power supply circuit6. When the process in S103is completed, S104is executed. In S104, in a manner similar to S102, the clock monitor42corresponding to the target PHY determines whether the target PHY is normally outputting the clock signal. When it can be confirmed that the target PHY is normally outputting the clock signal, S104is affirmatively determined and this flow ends. On the other hand, when the target PHY is not normally outputting the clock signal, S104is negatively determined and S105is executed. In S105, the return processor51restarts the controller4and proceeds to S106. In S106, in a manner similar to S102, the clock monitor42corresponding to the target PHY determines whether the target PHY is normally outputting the clock signal. When it can be confirmed that the target PHY is normally outputting the clock signal, S106is affirmatively determined and this flow ends. On the other hand, when the target PHY is not normally outputting the clock signal, S106is negatively determined and S107is executed. In S107, the return processor51restarts the relay device2including the microcomputer5and proceeds to S108. 
Hereinafter, for convenience, restarting the relay device 2 will also be referred to as a device restart. In S108, in a manner similar to S102, the clock monitor 42 corresponding to the target PHY determines whether the target PHY is normally outputting the clock signal. When it can be confirmed that the target PHY is normally outputting the clock signal, S108 is affirmatively determined and this flow ends. On the other hand, when the target PHY is not normally outputting the clock signal, S108 is negatively determined and S109 is executed. In S109, the return processor 51 executes the PHY fixing notification process. The PHY fixing notification process is a process of notifying the user or an external device of a fault in the PHY 3 (specifically, the fixing of the clock line). The PHY fixing notification process corresponds to a process of notifying that a fault has occurred in the clock signal for operating the MAC 41. For example, as the PHY fixing notification process, the return processor 51 cooperates with a wireless communication device (not shown) to notify a center, which manages the vehicle, that a fault has occurred in the PHY 3 of the relay device 2. Alternatively, as the PHY fixing notification process, the return processor 51 notifies via a display, an indicator, a speaker, or the like that a fault has occurred in the communication network constructed in the vehicle. When the processing in S109 is completed, this flow ends. In the above-described configuration, for example, when the clock line of the PHY 3 connected to the communication cable 9 is fixed due to the influence of noise superimposed on the communication cable 9, the clock monitor 42 detects the occurrence of the fixing, and the return processor 51 restarts the target PHY. According to such a configuration, the relay device 2 can automatically return the target PHY to the normal state.
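The escalating sequence of S101 to S109 described above can be sketched as follows. This is a minimal illustration in Python, not part of the disclosed embodiment; the callback standing in for the clock monitor 42 re-checking the clock line after each step (S102, S104, S106, S108) is a hypothetical stand-in.

```python
from enum import Enum, auto

class Recovery(Enum):
    RESET_SIGNAL = auto()        # S101: output reset signal to the target PHY
    POWER_CYCLE = auto()         # S103: interrupt and then restore power
    CONTROLLER_RESTART = auto()  # S105: restart the controller
    DEVICE_RESTART = auto()      # S107: restart the entire relay device
    NOTIFY_FAULT = auto()        # S109: PHY fixing notification process

def release_clock_fixing(clock_ok_after):
    """Try each recovery step in order and stop at the first step after
    which the target PHY outputs a normal clock signal.

    clock_ok_after: callable(step) -> bool, standing in for the clock
    monitor's re-check after each step.  Returns the list of actions
    actually executed."""
    steps = [Recovery.RESET_SIGNAL, Recovery.POWER_CYCLE,
             Recovery.CONTROLLER_RESTART, Recovery.DEVICE_RESTART]
    executed = []
    for step in steps:
        executed.append(step)
        if clock_ok_after(step):
            return executed  # clock fixing released; flow ends early
    executed.append(Recovery.NOTIFY_FAULT)  # S109: all attempts failed
    return executed
```

Note how the sketch mirrors the text: a success at any check ends the flow, so the heavier device restart is only reached when the lighter per-PHY steps fail.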
Further, even when the clock line of the PHY 3 is fixed, it is not necessary for a user such as a driver to restart the relay device 2 by a manual operation. In addition, in the above embodiment, as the clock fixing releasing process, an attempt is first made to normalize the target PHY by individually restarting only the target PHY (S101, S103). According to such a configuration, even while the target PHY is being restarted, the communication between the devices via the other PHYs 3 for which the fixing of the clock line is not detected continues. Therefore, the influence on the in-vehicle communication system 100 can be reduced as compared with the case where the device restart is executed. In addition, as methods for individually restarting only the target PHY, two methods are tried: restart by a reset signal and restart by interrupting and restoring the power. According to the configuration in which the target PHY is restarted by a plurality of approaches in this way, the possibility that the target PHY returns to the normal state can be increased. Incidentally, when the device restart is executed, the functions provided by the relay device 2 (for example, the relay function for communication frames) are temporarily stopped. As a matter of course, when the relay function of the relay device 2 is stopped, the plurality of ECUs 1 connected to the relay device 2 cannot communicate with the other ECUs 1; that is, the device restart affects the communication between the ECUs 1. Therefore, it is desirable to avoid restarting the relay device 2 as much as possible while the vehicle is running. In view of such a circumstance, in the above configuration, before executing the device restart in S107, an attempt is first made to normalize the target PHY by restarting only the target PHY (S101, S103). Then, when the fixing of the clock line is released by the individual restart of the target PHY, the device restart is not executed.
In this way, according to the configuration in which the device restart is used as a fallback means after attempting to normalize the target PHY by restarting only the target PHY, the frequency of restarting the entire device in order to release the fixing of the clock line can be suppressed. Although the present embodiment describes an example in which the return processor 51 executes S101 to S109 as the clock fixing releasing process, the present disclosure is not limited to the above example. The clock fixing releasing process may be only S101 or only S103. The clock fixing releasing process may also be only S107. The content of the clock fixing releasing process can be changed as appropriate. While the embodiment of the present disclosure has been described above, the present disclosure is not limited to the embodiment described above; various modifications to be described below are included in the technical scope of the present disclosure, and the disclosure may further be implemented with various changes within a scope not departing from its spirit. For example, the various modifications to be described below can be implemented in combination as appropriate within a scope that does not cause technical inconsistency. Note that members having the same functions as those described in the above embodiment are denoted by the same reference numerals, and a description of the same members will be omitted. When only a part of the configuration is referred to, the configuration of the embodiment described above can be applied to the other parts. First Modification In the above-described embodiment, the configuration in which the clock monitor 42 is individually provided for each of the plurality of MACs 41 is disclosed, but the present disclosure is not limited to this configuration. One clock monitor 42 may be configured to monitor the clock signals input to the plurality of MACs 41. In other words, as shown in FIG. 6, the plurality of clock monitors 42 may be integrated into one module.
Second Modification The return processor 51 may also be configured to identify the number of PHYs 3 whose clock line is fixed (hereinafter, the number of fixed PHYs) based on the notification from the clock monitor 42, and to change the content of the action to be executed as the clock fixing releasing process accordingly. For example, the return processor 51 determines whether the number of fixed PHYs is less than a predetermined device reset threshold value, as shown in FIG. 7 (S201). Then, when the number of fixed PHYs is less than the device reset threshold value, the PHYs 3 in which the clock fixing occurs are individually restarted (S202). As a means for individually restarting the PHYs in S202, a reset signal input, power supply control, or the like can be adopted. The process of S202 corresponds to an individual restart process. On the other hand, when the number of fixed PHYs is equal to or greater than the device reset threshold value (NO in S201), the relay device 2 is restarted (S203). The device reset threshold value may be set to a value of 2 or more, for example, 2, 3, 4, or 5. Suppose, for example, that the device reset threshold is set to 2. According to such a setting mode, when the number of fixed PHYs is one, the PHY 3 alone is restarted, while when the number of fixed PHYs is plural, the entire relay device 2 is restarted (S203). As a cause of clock fixing in the PHY 3, for example, noise superimposed on the communication cable 9 connected to the PHY 3 can be considered. If clock fixing occurs due to noise superimposed on a communication cable 9, it is expected that the number of PHYs 3 in which clock fixing is observed will be one, because the communication cables 9 are routed differently and the manner in which noise is superimposed also differs for each communication cable 9.
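The S201 to S203 decision above reduces to a single threshold comparison, which can be sketched as follows. This is an illustrative Python sketch, not the disclosed implementation; the threshold value of 2 is merely the example given in the text.

```python
DEVICE_RESET_THRESHOLD = 2  # example value from the second modification

def choose_action(num_fixed_phys, threshold=DEVICE_RESET_THRESHOLD):
    """S201: compare the number of fixed PHYs against the device reset
    threshold.  Below the threshold, the fault is attributed to noise on
    a communication cable and each affected PHY is restarted
    individually (S202); at or above it, a fault in the device itself is
    suspected and the whole relay device is restarted (S203)."""
    if num_fixed_phys < threshold:
        return "individual_restart"  # S202
    return "device_restart"          # S203
```

With the example threshold of 2, a single fixed PHY is restarted alone, while simultaneous fixing of two or more PHYs triggers a restart of the entire relay device.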
Therefore, when clock fixing occurs in a plurality of PHYs 3 at the same time, conversely, there is a high possibility that the clock fixing is not caused by noise superimposed on a communication cable 9 but by a fault in the device itself. The present modification is created from the above-described point of view, and the return processor 51 of the present modification executes the action according to the cause of the clock fixing. Therefore, it is possible to return the relay device 2 to the normal state more quickly. From another point of view, the device reset threshold value corresponds to a parameter for distinguishing whether the clock fixing is caused by noise superimposed on a communication cable or by a fault in the device itself. The above control mode corresponds to a mode in which, when the number of fixed PHYs is less than the device reset threshold value, the cause of the clock fixing is regarded as noise superimposed on the communication cable and the clock fixing releasing process is executed. Third Modification As described above, when the relay device 2 is restarted, the ECUs 1 connected to the relay device 2 cannot communicate with the other ECUs 1 until the start of the relay device 2 is completed. Therefore, if the relay device 2 is connected to an ECU 1 that controls the traveling of the vehicle, such as an ECU 1 that provides an autonomous driving function, it is not preferable to execute the device restart while the vehicle is running. It is preferable that the return processor 51 is configured to execute the device restart only in a situation where the relay function of the relay device 2 may be stopped, such as when the vehicle is stopped. The present modification is created based on the above-described technical idea, and the return processor 51 of the present modification operates according to the flowchart shown in FIG. 8 when it becomes necessary to restart the entire device.
The case where it becomes necessary to restart the entire device is, for example, the case where S106 in FIG. 5 is negatively determined and the process proceeds to S107, or the case where S201 in FIG. 7 is negatively determined and the process proceeds to S203. The return processor 51 of the present modification determines whether the current state of the vehicle satisfies a predetermined restart permission condition based on the vehicle information provided from the various ECUs 1 mounted on the vehicle in which the relay device 2 is used (S301). The return processor 51 that executes S301 corresponds to a vehicle state determiner. The vehicle information here refers to, for example, a vehicle speed, a shift position, an on/off state of a parking brake, a current position of the vehicle, a depressed state of a brake pedal, and the like. The restart permission condition is a condition for the return processor 51 to execute the device restart, and is set in advance. For example, the restart permission condition is that the vehicle speed is 0 km/h and the shift position is set to the parking position. According to such a setting, the return processor 51 executes the device restart while the vehicle is stopped with the shift position set to the parking position. When the state of the vehicle satisfies the restart permission condition (YES in S301), the return processor 51 notifies the ECUs 1 and other relay devices 2 connected to the relay device 2 of the execution of the device restart (S302). Then, the return processor 51 executes the device restart (S303). On the other hand, if the current state of the vehicle does not satisfy the restart permission condition (NO in S301), the execution of the device restart is suspended and a restart preparation process is executed (S304). For example, the return processor 51 proposes to the driver to drive the vehicle to an escape area, or instructs the autonomous driving ECU to stop in an escape area.
The restart preparation process is a process for shifting the state of the vehicle to a state that satisfies the predetermined restart permission condition. The specific content of the restart preparation process may be appropriately designed according to the content of the restart permission condition. According to the above configuration, the relay device 2 does not suddenly restart while the vehicle is running. Since the restart is executed only after the predetermined restart permission condition is satisfied, it is possible to suppress the influence on the communication between the ECUs 1 and on the running control of the vehicle. The restart permission condition may include that the current position of the vehicle is in a predetermined escape area. The escape area is a place where the vehicle can stop without obstructing other traffic. The escape area includes, for example, an emergency parking zone, which is a space provided on a shoulder of a road so that a broken-down vehicle, an emergency vehicle, a road management vehicle, and the like can stop, and a passing space, which is a space for vehicles to pass each other. Whether the vehicle is in an escape area may be determined by using a positioning result from a global navigation satellite system (GNSS) receiver and map data including information on the escape areas. Fourth Modification The return processor 51 may also be configured not to execute the clock fixing releasing process when the microcomputer 5 or the controller 4 is executing a specific process in which the clock signals of the PHYs 3 can legitimately stop. Examples of such a specific process include the start-up of the microcomputer 5, the reprogramming of the controller 4, an intentional stop of the PHYs 3, and the start-up of the PHYs 3.
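The gating logic of the third and fourth modifications can be sketched together as follows. This is a hedged Python sketch: the dictionary field names (`speed_kmh`, `shift`) and the `busy_processes` list are illustrative stand-ins, not names from the specification, and the condition shown is only the example permission condition given above (speed 0 km/h, shift in park).

```python
def may_execute_device_restart(vehicle, busy_processes):
    """Decide whether a device restart may proceed.

    vehicle: dict of vehicle information examined in S301 (field names
    are hypothetical).
    busy_processes: list of ongoing activities during which the clock
    signal may legitimately stop (microcomputer start-up, controller
    reprogramming, etc.); per the fourth modification, no recovery
    action should run while any of these is in progress."""
    if busy_processes:
        return False  # clock stop is expected, not a fault
    # Example restart permission condition: stopped, shift in park.
    return vehicle.get("speed_kmh") == 0 and vehicle.get("shift") == "P"
```

When this check fails on the vehicle-state side, the text above prescribes the restart preparation process (S304) rather than an immediate restart.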
According to the above-described configuration, when the clock signal is stopped due to a factor other than a fault in the PHYs 3, the clock fixing releasing process is not executed. That is, it is possible to suppress the unnecessary execution of the clock fixing releasing process. Fifth Modification The arrangement of each function described above is an example and can be changed as appropriate. For example, the return processor 51 may be included in the controller 4 or may be included in the clock monitor 42. The return processor 51 may be realized by the cooperation between the controller 4 and the microcomputer 5. Further, the clock monitor 42 may be provided in the microcomputer 5. The MACs 41 may be realized as chips independent of the controller 4. Sixth Modification The present disclosure is applied to the relay devices 2 in the above-described embodiment, but the present disclosure may also be applied to other vehicular communication devices such as the ECUs 1. The vehicular communication devices here refer to vehicular devices configured to enable communication conforming to the Ethernet standard. For example, the various ECUs 1, the relay devices 2, peripheral monitoring sensors such as an object recognition device, and the like can correspond to vehicular communication devices. The restart permission condition for an ECU 1 that provides an autonomous driving function may be set, for example, to the vehicle being stopped or the driving authority having been transferred to the driver. Further, it is preferable that the restart permission condition for an ECU 1 that controls the running of the vehicle includes that the vehicle is stopped. According to such a setting, the restart of the ECU 1 is suspended until the vehicle is stopped, and the possibility of affecting the running control can be reduced. It is also preferable that each of the ECUs 1 is configured so that the restart is not executed during execution of a software update.
That is, it is preferable that the restart permission condition includes that a software update is not in progress. If a restart becomes necessary during a software update, it is preferable that, as the restart preparation process, data related to the update is saved so that the update process can be re-executed from the beginning or resumed from the middle after the restart. Seventh Modification In the above-described embodiment, the configuration in which the PHYs 3 and the MACs 41 communicate with each other by MII has been disclosed, but the present disclosure is not limited to this configuration. The PHYs 3 and the MACs 41 may also be configured to communicate with each other by reduced MII (RMII), reduced gigabit MII (RGMII), or the like. Means and/or functions provided by the relay device 2 may be provided by software recorded in a tangible memory device and a computer that can execute the software, by software only, by hardware only, or by some combination of them. Some or all of the functions of the relay device 2 may be realized as hardware. A configuration in which a certain function is realized as hardware includes a configuration in which the function is realized by use of one or more ICs or the like. For example, when some or all of the functions of the relay device 2 are provided by an electronic circuit, that is, by hardware, they may be provided by a digital circuit including multiple logic circuits, or by analog circuits. The same applies to the means and/or functions provided by the ECU 1.
11943299

DETAILED DESCRIPTION OF THE DRAWINGS

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.
A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, it may not be included or may be combined with other features. Referring now to FIG. 1, a system 100 for remotely monitoring servers of brewed beverages includes, in the embodiment shown, a brewing device 102, a plurality of stands 104 remote from the brewing device 102, and a plurality of servers 106, each with a body defining a chamber containing a brewed beverage, on a platform of the corresponding stands 104. Although the example shown in FIG. 1 includes a single brewing device 102, multiple brewing devices could be included in the system 100 depending on the circumstances. The term “brewing device” is broadly intended to mean any device that could be used to produce a brewed beverage, such as coffee, tea, tisane, herbal teas, or other beverages. FIG. 1 shows four stands 104 with two servers 106 on each stand for purposes of example, but more or fewer stands 104 and/or servers 106 could be provided depending on the circumstances. In the embodiment shown, the plurality of stands 104 are in wireless communication with the brewing device 102, such as using Bluetooth™ low energy communications, to provide status information regarding the respective servers 106 connected to the stands 104.
For example, the brewing device 102 may include a Bluetooth™ radio 110 that communicates with a Bluetooth™ radio 112 of the stands 104. Although the wireless communications between the brewing device 102 and the plurality of stands 104 may be Bluetooth™ in some embodiments, numerous other wireless communication protocols could be used depending on the circumstances, including but not limited to Zigbee™, Z-Wave™, and/or Ultra-wide Band (UWB). By way of example only, the stands 104 could provide approximately real-time updates to the brewing device 102 regarding the stands 104 and servers 106, including but not limited to freshness time, empty status, recipe name, serial number, signal strength, batch size, model (e.g., 1.5 gallon vs. 1.0 gallon), and/or location. In some embodiments, the stands 104 may receive updates on server status from the respective servers 106 connected with the stands 104 using wired or wireless communications, such as power line communications (PLC) with servers 106 plugged into the stands 104. Although the brewing device 102 is shown to be in wireless communication with the stands 104 in FIG. 1, the servers 106 could include wireless communication circuitry to communicate directly with the brewing device 102 (instead of communicating with the stands 104, which in turn communicate with the brewing device 102) in some embodiments. For example, the servers 106 could include wireless communication circuitry to communicate with the brewing device 102 using one or more of Bluetooth™, Zigbee™, Z-Wave™, Ultra-wide Band (UWB), and/or other communication protocols. In some such embodiments, the servers 106 could provide approximately real-time updates directly to the brewing device 102. In some embodiments in which the servers 106 include wireless communication circuitry to communicate directly with the brewing device 102, the wireless communication circuitry in the stands 104 could be optional.
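The status items a stand 104 might relay for a server 106 can be collected in a simple record, sketched below in Python. The field names are illustrative, chosen only to mirror the items listed above; the specification does not define a wire format.

```python
from dataclasses import dataclass, asdict

@dataclass
class ServerStatus:
    """One status update a stand 104 could forward to the brewing
    device 102.  All field names are hypothetical."""
    serial_number: str
    recipe_name: str
    freshness_minutes_left: int
    empty: bool
    batch_size_gallons: float
    model: str      # e.g. "1.5 gallon" vs. "1.0 gallon"
    location: str

def to_update(status: ServerStatus) -> dict:
    """Flatten the record into the key/value form a stand might send
    over its wireless link to the brewing device."""
    return asdict(status)
```

A dataclass keeps the update self-describing, so adding a field later (e.g. hold temperature) would not disturb existing consumers that read by key.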
Additionally, in some embodiments, the brewing device 102 may include a port into which the servers 106 could plug for electrical power and data communications. As shown, the system 100 includes a plurality of mobile computing devices 114, such as a cell phone, tablet, laptop computer, desktop computer, etc., that are wirelessly connected with the brewing device 102, such as using a Wi-Fi radio 116 or other wireless communications. For example, the brewing device 102 could establish a Wi-Fi hotspot for connection by the plurality of mobile computing devices 114 (without needing an Internet connection in some embodiments). In some cases, the brewing device 102 could host a web interface accessible by a browser 118 on the mobile computing devices 114 that can be used to, among other things, view approximately real-time updates on the status of the servers 106 (see FIGS. 10-13). Although FIG. 1 shows four mobile devices 114 in wireless communication with the brewing device 102, more or fewer mobile devices 114 could be connected with the brewing device 102. FIG. 2 illustrates an embodiment of the brewing device 102. In this example, the brewing device 102 includes a controller 200, memory 202, an input/output (I/O) subsystem 204, a brewing subsystem 206, and a wireless communication subsystem 208. For example, the controller 200 could be any type of processor capable of performing the functions described herein. The controller 200 may be embodied as a single or multi-core processor(s), microcontroller, or other processor or processing/controlling circuit. The memory 202 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 202 may store various data and software used during operation of the brewing device 102.
The memory 202 is communicatively coupled to the controller 200 via the I/O subsystem 204, which may be embodied as circuitry and/or components to facilitate input/output operations with the controller 200, the memory 202, and other components of the brewing device 102. The brewing subsystem 206 may be any type of circuitry or components to facilitate brewing a brewed beverage. During operation, the controller 200 may be communicatively coupled to the brewing subsystem 206 to control brewing of brewed beverages. The wireless communication subsystem 208 may be embodied as any communication circuit, device, or collection thereof capable of enabling wireless communications between the brewing device 102 and other devices, such as the stands 104 and mobile computing devices 114. The wireless communication subsystem 208 may be configured to use any one or more communication technologies and associated protocols (e.g., Wi-Fi®, Bluetooth™, WiMAX, 3G, 4G LTE, etc.) to effect such communication. In some embodiments, the memory 202, I/O subsystem 204, and/or wireless communication subsystem 208 may form a portion of a SoC and be incorporated, along with the controller 200 and other components of the brewing device 102, on a single integrated circuit chip. FIG. 3 illustrates an embodiment of the stand 104. In this example, the stand 104 includes a controller 300, memory 302, an input/output (I/O) subsystem 304, a wireless communication subsystem 306, and a power line communication subsystem 308. For example, the controller 300 could be any type of processor capable of performing the functions described herein. The controller 300 may be embodied as a single or multi-core processor(s), microcontroller, or other processor or processing/controlling circuit. The memory 302 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 302 may store various data and software used during operation of the stand 104.
The memory 302 is communicatively coupled to the controller 300 via the I/O subsystem 304, which may be embodied as circuitry and/or components to facilitate input/output operations with the controller 300, the memory 302, and other components of the stand 104. The wireless communication subsystem 306 may be embodied as any communication circuit, device, or collection thereof capable of enabling wireless communications between the stand 104 and the brewing device 102. The wireless communication subsystem 306 may be configured to use any one or more communication technologies and associated protocols (e.g., Wi-Fi®, Bluetooth™, WiMAX, 3G, 4G LTE, etc.) to effect such communication. As discussed above, the wireless communication subsystem 306 could be optional in embodiments where the servers 106 include wireless communication circuits to report approximately real-time updates directly to the brewing device 102. The power line communication subsystem 308 may be embodied as circuitry for communicating with server(s) 106 plugged into the stand 104 via a power line connection. For example, the stand 104 may include one or more power ports into which the servers 106 may connect, which both provide power to the servers 106 and establish data communications with the servers 106 over the power line. The server 106 may communicate status information, such as freshness, empty status, location, etc., to the stand 104 via the power line using the power line communication subsystem 308. In some embodiments, the memory 302, I/O subsystem 304, wireless communication subsystem 306, and/or power line communication subsystem 308 may form a portion of a SoC and be incorporated, along with the controller 300 and other components of the stand 104, on a single integrated circuit chip. FIG. 4 illustrates an embodiment of the server 106. In this example, the server 106 includes a controller 400, memory 402, an input/output (I/O) subsystem 404, a power line communication subsystem 406, and a wireless communication subsystem 408.
In some embodiments, the server 106 has no power source of its own but plugs into an electric docking point on the brewing device 102 and/or the stand 104. Once docked, the server 106 monitors the temperature of the brewed beverage and controls heating to maintain the optimum temperature of the brewed beverage. For example, the brewing device 102 could be set up in a separate location, and once brewing is complete, the server 106 could be undocked and taken to the desired service location. A remote stand 104 can be placed in service areas that are not ideal for brewer setup. Once connected to the remote stand 104, the server 106 has the capability of communicating and exchanging data, such as freshness time, empty status, hold temperature, recipe name, serial number, signal strength, batch size, model (e.g., 1.5 gallon vs. 1.0 gallon), etc., over the power line while docked to the stand 104. In some embodiments, such as shown in FIG. 14, the server 106 is configured to wirelessly communicate status information directly with the brewing device 102, such as using Bluetooth™ communications. For example, the server 106 may include internal batteries to power the wireless communication subsystem 408, and could be configured to transmit, periodically or approximately in real time, wireless updates on, among other things, freshness time, empty status, hold temperature, recipe name, serial number, signal strength, batch size, model (e.g., 1.5 gallon vs. 1.0 gallon), location, etc. to the brewing device 102, which could then provide such information to the mobile devices 114. The controller 400 could be any type of processor capable of performing the functions described herein. The controller 400 may be embodied as a single or multi-core processor(s), microcontroller, or other processor or processing/controlling circuit. The memory 402 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein.
In operation, the memory 402 may store various data and software used during operation of the server 106. The memory 402 is communicatively coupled to the controller 400 via the I/O subsystem 404, which may be embodied as circuitry and/or components to facilitate input/output operations with the controller 400, the memory 402, and other components of the server 106. The power line communication subsystem 406 may be embodied as circuitry for communicating with the stand 104 when the server 106 is plugged into the stand 104 via a power line connection. For example, the server 106 may include a power cord that plugs into a power port on the stand 104, which both provides power to the server 106 and establishes data communications with the stand 104 over the power line. The server 106 may communicate status information, such as freshness, empty status, location, etc., to the stand 104 via the power line using the power line communication subsystem 406. In some embodiments, the memory 402, I/O subsystem 404, power line communication subsystem 406, and/or wireless communication subsystem 408 may form a portion of a SoC and be incorporated, along with the controller 400 and other components of the server 106, on a single integrated circuit chip. Referring now to FIG. 5, in an illustrative embodiment, the brewing device 102 establishes an environment 500 during operation to, among other things, remotely monitor the status of the servers 106. The illustrative environment 500 includes a network connection manager 502, a brewing operation manager 504, a monitoring manager 506, and a user interface manager 508. As shown, the various components of the environment 500 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 500 may be embodied as circuitry or a collection of electrical devices (e.g., network connection circuitry, brewing operation circuitry, monitoring circuitry, and user interface circuitry).
It should be appreciated that, in such embodiments, one or more of the network connection manager502, the brewing operation manager504, the monitoring manager506, and the user interface manager508may form a portion of the controller200, the I/O subsystem204, and/or other components of the brewing device102. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. The network connection manager502is configured to establish a connection with other devices. For example, the network connection manager502is configured to establish a connection with one or more of the stands104. In some embodiments, for example, the network connection manager502could use the Bluetooth™ low energy protocol to establish communications with multiple stands104within range of the brewing device102. In some cases, the network connection manager502could be configured to use the master/slave framework of the Bluetooth™ protocol to establish communications between the brewing device102and the stands104. For example, the brewing device102could establish an interface from which a user can search for remote stands104. For example, the interface could be displayed on a screen of the brewing device102or on the mobile computing devices114connected to the brewing device102. Once the search is complete, the user can select the stand(s) to associate in the network to form the connection. Once the connection is established, the network connection manager502could manage communications between the brewing device102and the stands104to remotely monitor the servers106. In some embodiments, the network connection manager502may be configured to establish communications with one or more mobile computing devices114. 
For example, the network connection manager502could establish a hotspot to which the mobile computing devices114can connect and establish communications with the brewing device102. This can allow, for example, the mobile computing devices114to view status information regarding servers106remote from the brewing device102using the user interface manager508. The brewing operation manager504is configured to control brewing of a brewed beverage. For example, the brewing operation manager504may be configured to receive input of a recipe for a brewed beverage and control brewing components of the brewing device102to facilitate the brewing process for the brewed beverage. The monitoring manager506is configured to monitor servers106corresponding to stands104connected to the brewing device102. For example, the monitoring manager506could be configured to track status information about the servers106received from the stands104, such as freshness time, empty status, recipe name, server/stand serial number, signal strength of stand, batch size, model (1.5 gallon vs. 1.0 gallon), stand location, etc. By way of example, the monitoring manager could store the status information about the servers106in memory202. The user interface manager508is configured to provide an interface to mobile computing devices114connected to the brewing device102. For example, the user interface manager508could be configured to allow the mobile computing devices114to adjust settings for the brewing device102, check on status of servers106, and/or other functions.FIGS.10-13show an example user interface that could be provided by the user interface manager508. In some cases, the user interface could be embodied as a webpage stored on the brewing device102or a database with information regarding server status to which an app on the mobile device interfaces when connected to the brewing device102. 
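By way of example only, the monitoring manager's bookkeeping described above could be sketched as a class that records the latest status per server and flags servers needing attention. The class, method, and field names here are illustrative assumptions, not part of the specification:

```python
class MonitoringManager:
    """Tracks the most recent status update for each server,
    keyed by serial number (names are illustrative)."""

    def __init__(self):
        self._servers = {}

    def handle_update(self, serial: str, status: dict) -> None:
        # Overwrite with the most recent report received from a stand.
        self._servers[serial] = status

    def status_of(self, serial: str) -> dict:
        return self._servers.get(serial, {})

    def servers_needing_attention(self) -> list:
        # An empty server or an expired freshness timer warrants an alert.
        return sorted(s for s, st in self._servers.items()
                      if st.get("empty") or st.get("freshness_minutes", 1) <= 0)
```

The attention list could then drive a warning indicator such as element1030 on the dashboard.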
Referring now toFIG.6, in an illustrative embodiment, the stand104establishes an environment600during operation to, among other things, establish communications with the brewing device102and servers106, and provide status information on servers106. The illustrative environment600includes a network connection manager602and a reporting manager604. As shown, the various components of the environment600may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment600may be embodied as circuitry or a collection of electrical devices (e.g., network connection circuitry and reporting circuitry). It should be appreciated that, in such embodiments, one or more of the network connection manager602and the reporting manager604may form a portion of the controller300, the I/O subsystem304, and/or other components of the stand104. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. For example, as discussed above, the server106may provide wireless reporting on status information directly to the brewer device102. In such embodiments, all or part of the network connection manager602and/or reporting manager604may form part of the environment established by the server106. The network connection manager602is configured to establish a wireless connection with the brewing device102. For example, the network connection manager602could be configured to use the Bluetooth™ low energy protocol to establish communications with a brewing device102within range of the stand104. The network connection manager602could also be configured to establish one-way or two-way communications with one or more servers106connected to a power port of the stand104via power line communications. 
The stand104is able to obtain status information (and/or other information) from the server(s)106connected via the power port to the stand104using power line communications. The reporting manager604is configured to send status updates regarding server(s)106connected to the power port of the stand104(or the reporting manager604embodied on the server106may send updates to the brewer device102directly). For example, the reporting manager604could be configured to periodically send status updates regarding server(s)106plugged into the stand's104power port to the brewing device102. By way of example only, the stand104could send an update every few seconds, every minute, or at other intervals. Alternatively, or in addition to periodic updates, the reporting manager604could be configured to respond to queries from the brewing device102for status updates. Referring now toFIG.7, in an illustrative embodiment, the server106establishes an environment700during operation to, among other things, establish communications with the stand104to which the server106is connected (and, in some embodiments, communications with the brewer device102in embodiments with wireless reporting directly to the brewer device102from servers106). The illustrative environment700includes a temperature controller702and a status manager704. As shown, the various components of the environment700may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment700may be embodied as circuitry or a collection of electrical devices (e.g., temperature control circuitry and reporting circuitry). It should be appreciated that, in such embodiments, one or more of the temperature controller702and status manager704may form a portion of the controller400, the I/O subsystem404, and/or other components of the server106. 
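The reporting manager's two paths described above, periodic pushes and on-demand query responses, could be sketched as follows. The class shape, the send callback, and the default interval are illustrative assumptions only:

```python
class ReportingManager:
    """Pushes a status update at a fixed interval and also answers
    on-demand queries (names and interval are assumptions)."""

    def __init__(self, send, interval_s: float = 60.0):
        self._send = send          # transport callback (e.g., wireless link)
        self._interval = interval_s
        self._last_sent = None

    def tick(self, now: float, status: dict) -> None:
        # Periodic path: send only when the interval has elapsed.
        if self._last_sent is None or now - self._last_sent >= self._interval:
            self._send(status)
            self._last_sent = now

    def on_query(self, status: dict) -> None:
        # Query path: always respond immediately to the brewer.
        self._send(status)
```

Driving `tick` from a timer approximates the "every minute" cadence, while `on_query` covers the brewer-initiated requests.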
Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. The temperature controller702is configured to control the temperature of the brewed beverage within the server106, similar to the manner by which a thermostat operates. The status manager704is configured to provide status information regarding the server106to the stand104into which the server106is plugged via power line communications. There is a variety of information that the status manager704could send to the stand104, including but not limited to freshness time, empty status, recipe name, serial number, signal strength and/or location. Referring now toFIG.8, in use, the brewing device102may execute a method800for connecting with one or more stands104. It should be appreciated that, in some embodiments, the operations of the method800may be performed by one or more components of the environment500of the brewing device102as shown inFIG.5. The method800begins in block802(which could be performed upon power-up of the brewing device102, upon selecting a connection button on the brewing device102, or upon another initiating action), in which the brewing device102determines whether a previous network configuration is saved in memory202. If a prior network configuration is found in memory202, the method800advances to block804in which the brewing device102attempts to connect with one or more stands104based on the saved network configuration. Next, a determination is made whether a connection was successfully established (block806). If the brewing device102is unable to connect successfully, the method800proceeds to block808in which the user can attempt to manually configure the network connection using the user interface, such as shown inFIGS.10-13. 
If the brewing device102is able to successfully connect to one or more stands104, the method800advances to block810in which a determination is made whether any network information should be saved to memory202. If so, the method proceeds to block812in which network information is saved in memory202. If there is no need to save network information, the method800proceeds to block814in which the method ends until another connection is desired. Referring again to block802, if there is not a previous network configuration already stored in memory202, the method800advances to block816in which a determination is made whether a brew cycle has been completed. If not, the method800waits for the brew cycle to complete. Once a brew cycle has completed, the method800advances to block818in which data regarding the brew (and possibly other information) is transferred to memory402in the server106. For example, the brewing device102could transfer recipe information, an identification of the brewing device that dispensed the brewed beverage into the server106, a time for completion of the brew, etc., to the server106. The method800then proceeds to block820in which the brewing device102waits for a pair request from one or more stands104within range. When a pair request is received from a stand104, the method800progresses to block822in which a determination is made whether the connection was successfully established. If so, the network configuration information is saved in memory202. If the pairing is unsuccessful, the method800advances to block808in which the user can attempt to manually configure the connection using a mobile computing device114. Referring now toFIG.9, in use, the stand104may execute a method900for connecting with a brewing device102. It should be appreciated that, in some embodiments, the operations of the method900may be performed by one or more components of the environment600of the stand104as shown inFIG.6. 
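By way of illustration, the saved-configuration branch of method800 (blocks 802 through 814) could be sketched as a small function; the function name, dictionary keys, and return labels are assumptions, not part of the specification:

```python
def reconnect_stands(memory: dict, try_connect) -> str:
    """Sketch of method800's reconnect path: reuse a saved network
    configuration if one exists (names are illustrative)."""
    cfg = memory.get("network_config")
    if cfg is None:
        # Block 816: no saved configuration, so instead wait for a
        # brew cycle to complete and a pairing request from a stand.
        return "await_pairing"
    if try_connect(cfg):                 # blocks 804-806
        memory["network_config"] = cfg   # blocks 810-812: persist
        return "connected"
    return "manual_setup"                # block 808: manual fallback
```

The `try_connect` callback stands in for the actual wireless connection attempt, so the decision logic can be exercised independently of any radio hardware.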
In some embodiments in which servers106report status updates directly to the brewer device102, the servers106may execute all or a part of the operations in the method900for connecting the servers106to the brewer device102. In the embodiment shown, the method900begins in block902(which could be performed upon power-up of the stand104, upon selecting a connection button on the stand104, or upon another initiating action), in which the stand104determines whether a previous network configuration is saved in memory302. For example, the previous network configuration could include the Bluetooth™ name of a brewer to which the stand104has connected. In embodiments in which the servers106directly report status updates to the brewer device102, the block902may be performed by the server106upon plugging in the server106into the power port on the stand104or otherwise upon power-up of the server106. If a prior network configuration is found in memory302, the method900advances to block904in which the stand104attempts to connect with a brewing device102based on the saved network configuration. Next, a determination is made whether a connection was successfully established (block906). If the stand104is unable to connect successfully with a brewing device102, the method900proceeds to block908in which the user can attempt to manually configure the network connection using the user interface, such as shown inFIGS.10-13. If the stand104is able to successfully connect to a brewing device102, the method900advances to block910in which a determination is made whether any network information should be saved to memory302. If so, the method proceeds to block912in which network information is saved in memory302. If there is no need to save network information, the method900proceeds to block914in which the method ends until another connection is desired. 
Referring again to block902, if there is not a previous network configuration already stored in memory302, the method900advances to block916in which a determination is made whether a server106has been placed on the stand104. If not, the method900waits for a server106to be placed on the stand104. Once a server106has been placed on the stand104, and plugged in to a power port, the method900advances to block918in which data regarding the server, such as recipe info, brewer ID, brew time (and possibly other information) is retrieved by the stand104from the server106. The method900then proceeds to block920in which the stand104determines whether there is already an active connection to a brewing device102. If the stand is already connected to a brewing device102, the method900proceeds to block914. If no active connection is already ongoing, the method900continues to block922in which the stand104attempts to pair with a brewing device102based on information retrieved from the server106, such as a brewer identification for Bluetooth™ pairing stored in memory402of the server106, which has been retrieved by the stand104. In embodiments in which the servers106include the wireless communication subsystem408, and directly report status updates to the brewer device102, the server106could establish a connection using the brewer ID data transferred from the brewer device102and stored in memory. Next, the method900progresses to block924in which a determination is made whether the connection was successfully established. If so, the network configuration information is saved in memory302of the stand104. If the pairing is unsuccessful, the method900advances to block908in which the user can attempt to manually configure the connection using a mobile computing device114. FIGS.10-13illustrate an example interface that could be accessed by one or more mobile devices to view and/or adjust certain parameters of the brewing device102, stand104, and/or server106. 
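The stand-side pairing branch of method900 (blocks 916 through 924), in which the stand pairs with a brewer using the brewer ID carried by the docked server, could be sketched as follows. Function names, dictionary keys, and return labels are illustrative assumptions:

```python
def pair_stand(memory: dict, server_data: dict,
               connected: bool, try_pair) -> str:
    """Sketch of method900's pairing path: pair with the brewer
    identified in the server's data (names are illustrative)."""
    if connected:
        # Block 920: a connection already exists, nothing to do.
        return "already_connected"
    brewer_id = server_data.get("brewer_id")  # block 918: from server
    if brewer_id is not None and try_pair(brewer_id):   # blocks 922-924
        memory["network_config"] = brewer_id  # save on success
        return "paired"
    return "manual_setup"                     # block 908: fallback
```

As with the brewer-side sketch, `try_pair` abstracts the actual Bluetooth™ pairing attempt so the branching can be verified in isolation.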
In some embodiments, at least a portion of the interface could be displayed on a screen of the brewing device102, stands104, and/or server106. In the example shown, there is a tab for selecting a dashboard1000, a tab for selecting system settings1002, a tab for selecting Wi-Fi settings1004, and a tab for selecting browser network settings1006. For example, the first time the brewing device102is powered, the brewing device102could establish a Wi-Fi hotspot in which the default network name could have a unique name, such as a predetermined prefix word or phrase with a portion of the brewing device's102serial number or other identifier. The brewing device102could be configured to establish a default password for the network, such as a predetermined word or phrase. Once the mobile computing devices114connect to the Wi-Fi hotspot, such as using a browser118, the interface could be provided by having the mobile computing devices114enter a default server address into the browser118, such as a predetermined IP address (e.g., 192.168.1.1). Once the mobile computing devices114access the interface, such as the example shown inFIGS.10-13, with the browser118, the user can update default parameters, such as the network name, password, etc. FIG.10shows an example dashboard that could be provided by the interface to view the status of the brewing device102, stand104, and/or server106upon selecting the dashboard tab1000. Typically, the dashboard view would show each stand104that has previously connected (or is currently actively connected) with the brewing device102and servers106associated with each of those stands104. In the example shown, the interface includes a brewing device interface element1008, a plurality of stand interface elements1010, and a plurality of server interface elements1012. 
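The default hotspot naming scheme described above, a predetermined prefix combined with a portion of the serial number, could be sketched as a one-line helper. The prefix, separator, and tail length below are assumptions for illustration:

```python
def default_hotspot_name(serial: str, prefix: str = "Brewer",
                         tail_len: int = 4) -> str:
    """First-power-up hotspot SSID: predetermined prefix plus the
    tail of the serial number (prefix/length are assumptions)."""
    return f"{prefix}-{serial[-tail_len:]}"
```

This yields a network name that is unique per unit yet recognizable, e.g. `default_hotspot_name("SN00123456")` produces `"Brewer-3456"`.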
As shown, the brewing device interface element1008includes a brewer name1014and a brewer serial number1016; other information could be included in the brewing device interface element1008depending on the circumstances. In the embodiment shown, the plurality of stand interface elements1010include a stand name1018, a stand serial number1020, and a signal strength indicator1022(e.g., Bluetooth™ RSSI), but other information could be included in the stand interface elements1010depending on the circumstances. In some cases, the stand name1018could be the location, such as kitchen, front lobby, or conference room, in which the stand is placed. In the example shown, the stand interface elements1010are spatially arranged with respect to server interface elements1012to indicate which servers106are connected to power ports of stands104. As shown, the servers connected with stands have server interface elements1012aligned above the corresponding stand interface elements1010along a vertical axis. In some embodiments, the size of the interface elements could vary depending on the corresponding device. For example, as shown, the stands104with dual ports for connecting two servers106are shown with stand interface elements1010that are twice as wide as stands104with a single port for connecting a single server106. Various colors, flashing indicators or other elements could be used based on status of the brewing device102, stands104, and/or servers106. For example, a server interface element corresponding to a server106that is empty could have text flashing “Empty” in red; by way of another example, a freshness time that has reached zero could be shown with “00:00” flashing in red. In the example shown, the plurality of server interface elements1012include a server name1024, a status indicator1026, and a server serial number1028; depending on the circumstances, other information could be provided in the server interface element1012. 
In the example shown, the server name1024is the recipe for the brewed beverage contained within the server106(e.g., decaf, French roast, dark roast, etc.), which could be included in data transferred from the brewing device102as described herein. The status indicator1026could indicate a variety of status information about the server106; in the example shown, the status indicator1026could indicate that the server106is brewing, empty status (i.e., if the server no longer contains brewed beverage), freshness status (e.g., elapsed time since brewing), or other information. In the example shown, the dashboard includes a warning indicator1030that denotes attention is needed regarding one of the stands104and/or servers106, such as a server106being empty or a brewed beverage no longer being fresh. Additionally, in the example shown, the dashboard includes a connection status indicator1032with a list of brewing devices, stands, and servers connected. FIG.11illustrates the example user interface upon selecting the system settings tab1002. In the example shown, the interface includes a brewer parameter editing element1100and a stand parameter editing element1102. The brewer parameter editing element1100and the stand parameter editing element1102are configured to allow a user to edit one or more parameters associated with a brewing device102and stands104, respectively. As shown, the brewer parameter editing element1100includes a list of brewing devices1104associated with the interface and a name editing element1106that allows the user to edit names of brewing device(s) as shown in the dashboard, and the stand parameter editing element1102includes a list of stands1108associated with the interface and a name editing element1110that allows the user to edit names of stand(s) as shown in the dashboard. In some cases, other parameters, such as a hidden parameter1112that hides the stand from the dashboard, can be included. 
In the embodiment shown, the interface includes an ordering element, which as shown, includes an ordered list1114of brewing devices, stands, and/or servers. In this example, the user can select an item from the ordered list1114and then select up1116or down1118to adjust the relative positions of the items for the dashboard shown inFIG.10. A user can use this interface to configure the order in which brewing devices102and stands104will be displayed on the dashboard. Once set, the brewing device102will save the configuration and will retain the information across power cycles. As shown, the interface includes an administrative portion1120in which a password portion1122can lock adjustments to the parameters, such as requiring a password to access the system settings, Wi-Fi settings, and network settings tabs1002,1004,1006. As shown, another adjustment portion1124is for changing what is shown on the dashboard. FIG.12illustrates an interface that may be shown upon selecting the Wi-Fi settings tab1004. This interface facilitates customization and securing of the Wi-Fi network established by the brewing device102. In the embodiment shown, the interface includes an access point option1200and a local area network option1202. Upon selecting the access point option1200, the user is presented with options to edit/configure the access point name (SSID)1204, access point password1206, access point IP address1208, and access point IP mDNS host name1210(e.g., http://www.SmartBrewer.com). If the user selects the local area network option1202, the user is presented with options to search and join a local network, such as using a local area network search element1212, password element1214, and a join element1216. FIG.13illustrates an interface that may be shown upon selecting the network settings tab1006. 
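The up1116/down1118 reordering of the ordered list1114 described above amounts to swapping an item with its neighbor, with out-of-range moves ignored. A minimal sketch (function name and conventions are assumptions):

```python
def move_item(items: list, index: int, direction: int) -> list:
    """Reorder the dashboard list: direction is -1 for up (1116)
    or +1 for down (1118); out-of-range moves are no-ops."""
    j = index + direction
    if 0 <= j < len(items):
        # Swap the selected item with its neighbor in place.
        items[index], items[j] = items[j], items[index]
    return items
```

Persisting the resulting list order would correspond to the configuration the brewing device retains across power cycles.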
This interface facilitates configuring the brewer network, and shows the current status of stands104that are joined in the network; from this interface, the user will be able to search for new stands104and/or disconnect from current stands104. In this example, there is a stand delete interface1300configured to allow a user to remove a stand104from the network. In the example shown, the interface includes a connected stand list1302with a list of each stand104that has connected to the brewing device102. Upon selecting a stand104from the stand list1302, the user can select the disconnect element1304to disconnect that stand from the stand list1302and remove that stand from memory302, which means the brewing device102will no longer be connected to that stand104when that stand104is in range. As shown, there is a similar disconnected stand list1306, which identifies remote stands that the brewing device102can no longer find. A user can select a stand from the list1306and then remove that stand104by selecting the remove element1308, which means the brewing device102will no longer attempt to search for and connect to that stand. If a stand104does not get removed, it will be displayed on the dashboard, but will have an indication of “Disconnected” where the RSSI gets displayed. In the embodiment shown, there is a stand manual connect portion1310from which a user can search for stands104within range of the brewing device102using the search element1312, which populates a list of available stands1314within a communication distance of the brewing device102. In this example, the user can select a stand from the list of available stands1314and then add that stand to the network with the add element1316. Once connected, the dashboard will show the status of the stand104and server(s)106connected to the stand104. As shown, there is an interface element1318for enabling the brewing device102to join another brewer network. 
If joining another brewer network is enabled, the brewing device102will change its Bluetooth™ configuration similar to a stand104and drop all other Bluetooth™ connections. The host brewer (to which the brewing device102connects), will then need to search for the brewing device102in order to add it to the brewer network. This option would typically only be used in the scenario when multiple brewers are present and all of the information needs to be displayed in one wireless Wi-Fi network; although not required in a multiple brewer setup, it could be used when multiple wireless Wi-Fi and Bluetooth™ networks could cause confusion. FIG.14illustrates an embodiment in which one or more servers1400include a wireless beacon1401, such as a Bluetooth™ beacon, to wirelessly transmit a server identifier that identifies a respective server from the other servers1400, along with one or more status indicators of that respective server, such as freshness time, empty status, hold temperature, recipe name, serial number, signal strength, batch size, model (e.g., 1.5 gallon vs. 1.0 gallon), power level, and/or other characteristics of the server and/or the beverage carried by the server, directly to a brewing device1402without needing to rely on a server stand for wireless communications. For example, the wireless beacon1401may be configured to periodically transmit status indicators with the server identifier, such as multiple times per second depending on the circumstances. The wireless beacon1401may be embedded within the body of the server1400, internal to the body of the server1400, or attached to the body of the server1400, such as via adhesive or fastener(s). In some embodiments, the servers1400may be battery powered; depending on the circumstances, the servers1400could be powered through a connection with another power source, such as a server station, a wall outlet, etc. 
In the embodiment shown inFIG.14, there are four servers1400, but depending on the circumstances, there could be more or fewer servers1400. There is shown a single brewing device1402for purposes of example, but more than one brewing device1402could be provided depending on the circumstances. In some circumstances, the brewing device1402may include a display to provide at least a portion of the status indicators of the servers1400. For example, in some embodiments, the display may be a touch interface through which a user could interact, such as the user interface shown inFIGS.10-13. In some cases, the brewing device1402may make the servers' status indicators received from the wireless beacon1401available to local devices1404for substantial real-time remote monitoring, such as through a WiFi connection with the brewing device1402as described herein. For example, the local device(s)1404could be a phone, tablet, or other computing device that connects to the brewing device1402via WiFi or other wireless local network connection. For example, the local device(s)1404may be programmed to display digital sight gauges of real-time status information regarding the servers1400received from the brewing device1402. In some cases, the local device(s)1404may display a user interface similar to that shown with respect toFIGS.10-13. For example, the local device(s)1404may connect with the brewing device1402with a browser or other app to receive the data from the brewing device1402. There is shown a single local device1404for purposes of example, but more than one local device1404could be provided depending on the circumstances. 
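Because a beacon advertisement such as the wireless beacon1401's must carry the server identifier and a few status indicators in very few bytes, a compact fixed-layout encoding is natural. The field layout below is purely an assumption for illustration, not the specification's format:

```python
import struct

# Assumed little-endian layout: server ID (uint16), empty flag (bool),
# hold temperature in degrees F (float32), freshness minutes (uint16).
_BEACON_FMT = "<H?fH"

def pack_beacon(server_id: int, empty: bool,
                hold_temp_f: float, freshness_min: int) -> bytes:
    """Pack one advertisement payload (9 bytes under this layout)."""
    return struct.pack(_BEACON_FMT, server_id, empty, hold_temp_f, freshness_min)

def unpack_beacon(raw: bytes) -> tuple:
    """Recover (server_id, empty, hold_temp_f, freshness_min)."""
    return struct.unpack(_BEACON_FMT, raw)
```

A 9-byte payload fits comfortably inside a Bluetooth™ low energy advertisement, which is consistent with broadcasting it multiple times per second.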
In some embodiments, the brewing device1402may be connected to an access point or another network component, such as BUNNlink™ by Bunn-O-Matic Corporation of Springfield, Illinois, to make the servers' status indicators available for substantial real-time remote monitoring to any remote devices1406connected to the cloud1406, such as the Internet, or other data connection. For example, in some cases, any remote device1406may perform substantial real-time remote monitoring of the server(s)1400from any location. There is shown a single remote device1406for purposes of example, but more than one remote device1406could be provided depending on the circumstances. EXAMPLES Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below. Example 1 is a stand for a brewed beverage server. The stand includes a platform for holding a brewed beverage server. There is at least one power port configured to supply power to a brewed beverage server connected to the at least one power port. The stand has a power line communication subsystem configured to establish data communications with the brewed beverage server connected to the at least one power port to receive one or more status updates regarding the brewed beverage server. The stand includes a wireless communication subsystem configured to establish wireless communications with a remote brewing device. Also, there is a controller programmed to wirelessly send one or more of the status updates regarding the brewed beverage server received via the power line communication subsystem to the remote brewing device with the wireless communication subsystem. Example 2 includes the subject matter of Example 1, and wherein: the status updates include one or more of an empty status of the brewed beverage server or a freshness status of the brewed beverage server. 
Example 3 includes the subject matter of Examples 1-2, and wherein: the controller is configured to attempt pairing with the remote brewing device based on data received from the brewed beverage server over the power line communication subsystem. Example 4 includes the subject matter of Examples 1-3, and wherein: the data received from the brewed beverage server over the power line communication subsystem includes a network identification of the remote brewing device. Example 5 includes the subject matter of Examples 1-4, and wherein: the wireless communication subsystem is configured to establish a wireless pairing using the network identification of the remote brewing device received over the power line communication subsystem. Example 6 includes the subject matter of Examples 1-5, and wherein: the controller is configured to store the network identification of the remote brewing device in response to a successful pairing. Example 7 is a brewing device with a network connection manager, a brewing operation manager, a monitoring manager, and a user interface manager. The network connection manager is configured to establish a wireless network connection with one or more mobile computing devices and one or more remote stands. The brewing operation manager is configured to facilitate brewing of a brewed beverage. The monitoring manager is configured to receive one or more status updates from the one or more remote stands and/or brewers regarding one or more brewed beverage servers. The user interface manager is configured to establish a user interface from which the one or more mobile computing devices can view the one or more status updates regarding the one or more brewed beverage servers. Example 8 includes the subject matter of Example 7, and wherein: the user interface is configured to provide one or more of an empty status of the brewed beverage server or a freshness status of the one or more brewed beverage servers. 
Example 9 includes the subject matter of Examples 7-8, and wherein: the user interface is configured to include an alert in response to the empty status. Example 10 includes the subject matter of Examples 7-9, and wherein: the user interface is configured to include an alert in response to the freshness status. Example 11 includes the subject matter of Examples 7-10, and wherein: the user interface includes an interface element to search for one or more stands and/or brewers within a communication range of the brewing device. Example 12 includes the subject matter of Examples 7-11, and wherein: the user interface is configured to generate a graphical user interface with the one or more servers arranged with regard to the one or more stands based on network connections. Example 13 includes the subject matter of Examples 7-12, and wherein: the network connection manager is configured to establish a wireless hotspot for connection by one or more mobile computing devices within range of the brewing device. Example 14 includes the subject matter of Examples 7-13, and wherein: the user interface is configured to provide a location of the one or more stands. Example 15 is a server for a brewed beverage. The server includes a container defining a chamber dimensioned to receive a brewed beverage. There is a power cord configured to plug into a power port of a stand to supply electrical power to the server. The server has a controller, a power line communication subsystem and/or a wireless communication subsystem. The controller is configured to control a temperature of the brewed beverage in the chamber. The power line communication subsystem is configured to establish data communications with the stand over the power cord. The wireless communication subsystem is configured to wirelessly transmit one or more status updates regarding the brewed beverage to a remote brewing device.
Example 16 includes the subject matter of Example 15, and wherein: the controller is configured to send a network identifier of a remote brewing device to the stand over the power line communication subsystem. Example 17 includes the subject matter of Examples 15-16, and wherein: the controller is configured to periodically send status updates to the stand over the power line communication subsystem. Example 18 includes the subject matter of Examples 15-17, and wherein: the status updates include an indication of whether the chamber is empty. Example 19 includes the subject matter of Examples 15-18, and wherein: the status updates include an indication of how long ago the brewed beverage in the chamber was brewed.
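The relay role of the stand described in Example 1 can be sketched in a few lines of Python. This is a hypothetical illustration only — the class, field, and callback names are assumptions, not anything specified in the examples: status updates arrive over the power line communication subsystem and are forwarded over the wireless link to the remote brewing device.

```python
from dataclasses import dataclass

@dataclass
class ServerStatus:
    server_id: str
    empty: bool              # empty status of the brewed beverage server
    brewed_minutes_ago: int  # basis for a freshness status

class StandController:
    def __init__(self, wireless_send, freshness_limit_minutes=60):
        self.wireless_send = wireless_send  # wireless communication subsystem
        self.freshness_limit = freshness_limit_minutes
        self.relayed = []

    def on_plc_status(self, status):
        """Called by the power line communication subsystem on each update."""
        update = {
            "server": status.server_id,
            "empty": status.empty,
            "fresh": status.brewed_minutes_ago <= self.freshness_limit,
        }
        self.relayed.append(update)
        self.wireless_send(update)  # forward to the remote brewing device

sent = []
stand = StandController(wireless_send=sent.append)
stand.on_plc_status(ServerStatus("server-A", empty=False, brewed_minutes_ago=90))
print(sent[0])  # not empty, but no longer fresh under a 60-minute limit
```

The freshness threshold here is an invented default; an actual stand would derive its freshness status from whatever policy the brewing device configures.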
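Examples 3 through 6 describe a pairing sequence that can likewise be sketched. Again the API shape is an assumption: data arriving over the power line link may carry the remote brewing device's network identification, which drives a wireless pairing attempt, and the identification is stored only on success.

```python
class PairingController:
    def __init__(self, try_pair):
        self.try_pair = try_pair       # wireless pairing attempt (Example 3)
        self.stored_network_id = None  # persisted only on success (Example 6)

    def on_plc_data(self, data):
        # Data from the server over power line communication may include the
        # network identification of the remote brewing device (Example 4).
        network_id = data.get("remote_brewer_network_id")
        if network_id and self.try_pair(network_id):  # wireless pairing (Example 5)
            self.stored_network_id = network_id

ctrl = PairingController(try_pair=lambda nid: nid == "brewer-42")
ctrl.on_plc_data({"remote_brewer_network_id": "brewer-42"})
print(ctrl.stored_network_id)  # pairing succeeded, so the ID is stored
```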
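On the server side, the periodic updates of Examples 17 through 19 reduce to a small status message. The message layout below is illustrative only, not a format defined by the examples:

```python
class BeverageServer:
    def __init__(self, brewed_at_s, volume_ml):
        self.brewed_at_s = brewed_at_s  # when the chamber was last filled
        self.volume_ml = volume_ml

    def status_update(self, now_s):
        """Periodic update sent to the stand over power line communication."""
        return {
            "empty": self.volume_ml <= 0,                                 # Example 18
            "minutes_since_brew": int((now_s - self.brewed_at_s) // 60),  # Example 19
        }

server = BeverageServer(brewed_at_s=0, volume_ml=0)
print(server.status_update(now_s=3600))  # {'empty': True, 'minutes_since_brew': 60}
```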
11943300
DETAILED DESCRIPTION As noted in the background, a wide variety of different types of sensors can be employed for monitoring environments in conjunction with a number of different applications. Large numbers of homogeneous and heterogeneous sensors may be deployed in sensor networks to canvass a relatively large area. Such sensors may be communicatively connected with one another and with storage and processing devices that store the data collected by the sensors and perform analytical processing on this data. Existing sensor networks, however, require large amounts of data to be sent from the sensors to more sophisticated devices for processing purposes. This is problematic for a number of reasons. For instance, the networking infrastructure may have to be sufficiently robust to handle the large amounts of data that are being transmitted. Video data in particular is quite voluminous, especially in its raw form. Furthermore, the constant transmission of such data throughout the network can pose a security risk. A nefarious individual may surreptitiously be able to gain access to the network. Even if such a person is not able to access the individual sensors and processing and storage devices on the network, he or she may be able to insert listening devices on the network that effectively see all the data being transmitted thereon. For large-scale sensor networks that employ public infrastructure like the Internet, this is particularly troublesome, since completely locking down the networks can never be guaranteed. Techniques disclosed herein overcome these and other existing drawbacks to conventional sensor networks. In particular, software-defined sensing networks are provided. Such a network or system includes both light-leaf nodes (LLNs) and heavy-leaf nodes (HLNs). Each LLN has some type of sensing capability as well as processing capability.
Each HLN may or may not have sensing capability, and has processing capability greater than that of the LLNs. The LLNs and the HLNs perform processing based on sensing events captured by the LLNs in correspondence with their processing capabilities. The net effect is to minimize raw data transmission among the LLNs and the HLNs within a software-defined sensing topology. For example, an LLN may perform processing responsive to a sensing event it has captured to generate distilled data from the raw data of the sensing event. The LLN may send the distilled data to an HLN for the HLN to perform decisional or reasoning processing on the distilled data to generate an action from the sensing event. Other or the same LLN(s) may then be directed by the HLN to perform sensing in correspondence with the action. As such, the raw data of the initial sensing event detected may never be sent over the network, conserving network bandwidth and minimizing security risks. The software-defined sensing that is provided can be predictive, adaptive, and collaborative. Adaptive sensing attempts to infer what has occurred or what is occurring in conjunction with one or more sensing events to draw a conclusion, on which basis a decision can be made. Predictive sensing can then make an informed guess of what will likely occur in the future based on such decisions or the sensing events themselves. Collaborative sensing can be employed so that the LLNs work together to improve the capture of a sensing event, such as when one of the LLNs first detects the event. FIG.1shows an example system100. The system includes multiple LLNs102, multiple HLNs104, and one or more back-end nodes (BENs)106. In general, there is a much larger number of LLNs102than HLNs104, and more HLNs104than BENs106. Three LLNs102, two HLNs104, and one BEN106are depicted inFIG.1just for illustrative convenience and clarity. The nodes102,104, and106are communicatively connected to one another in a mesh topology.
Each LLN102is communicatively connected to every other LLN102and to each HLN104, but in at least some implementations is not connected to any BEN106. Each HLN104is connected to every other HLN104and to each BEN106. Where there is more than one BEN106, each BEN106is also connected to every other BEN106. This mesh topology provides for a certain amount of redundancy and helps in minimizing transmission of data over the network by which nodes102,104, and106communicate. Each LLN102has sensing capability108and processing capability110. The sensing capability108of an LLN102permits the LLN102to capture sensing events in its proximity. For example, if the sensing capability108is image sensing, then a sensing event may be an image resulting from motion detected within the purview of the LLN102. Each HLN104has processing capability112as well. The processing capability112of each HLN104is more powerful than the processing capability of each LLN102. Each BEN106also has processing capability114. The BENs106act as communication gateways by which the system100is interacted with externally. As such, the HLNs104are externally accessible through the BENs106, and the LLNs102are externally accessible through the BENs106and the HLNs104—the latter particularly where the LLNs102are not communicatively connected directly to the BENs106. In some implementations, then, the HLNs104and the LLNs102are not directly accessible externally, which adds security to the system100as well. FIG.2shows example operation200of the system100. The LLNs102detect or capture a sensing event202, in a collaborative or individual manner. An example of collaborative sensing, which is a type of software-defined sensing, is a number of LLNs102that have image-capturing capability. A given LLN102may have its camera aimed at a particular area and detects a potential event of interest.
This LLN102may inform other LLNs102with potentially better views of the area, or with better image-capturing capability, but whose cameras are not currently aimed at the area, to change their fields of view so that they capture this event as well. Or, the LLN102may direct other LLNs102in adjacent areas to begin image capture at a higher frame rate or better resolution, just in case the subject of the potential event of interest moves into their area. The capture of the sensing event202results in the generation of raw data204by the LLNs102of the event202. For example, in the case of a video camera, the raw data204may be raw video data of the entire sensing event202. The LLNs102perform low- or mid-level processing on this raw data204to generate distilled data206therefrom. The distilled data206may be a particular frame of the raw data204, may be a summarization of the raw data204, such as an anonymization thereof, and/or may be smaller in amount than the raw data204. The LLNs102transmit the distilled data206, but not the raw data204(at least at first) to one or more of the HLNs104. The HLNs104that receive the distilled data206perform mid- or high-level processing on the distilled data206of the sensing event202, potentially in conjunction with that for prior sensing events202. In the example operation200, for instance, such processing can first be inference processing that results in a decision208being made. The decision208can be likened to a conclusion, indicating a more abstracted form of the sensing event202. For example, the distilled data206may be the detection of a face within an image, whereas the decision208may be the identity of the person having this face. Such processing is thus higher level processing on the distilled data206than the lower level processing performed on the raw data204to generate the distilled data206.
That is, in general, inferential processing is processing performed on the distilled data206of the sensing event202to draw a conclusion about what has likely occurred in the sensing event202. Also shown in the example operation200is predictive processing, from at least the decision208, to generate an action210. The progression from inference processing through predictive processing can be considered, in sum, a decisional or reasoning process to generate the action210from the distilled data206corresponding to the sensing event202. The action210is an action to be performed by one or more of the LLNs102, such that it is transmitted to these LLNs102, which can then prepare themselves to detect the next sensing event that is likely to occur, in correspondence with the action210. As an example, the HLNs104may make the decision208that the face in the image of the distilled data206belongs to a person of interest that should be tracked. Therefore, the HLNs104determine the likely path that the person of interest is taking, and correspondingly generate an action210that is sent to the LLNs102located along this probable path. The LLNs102may normally be reactive in their sensing event capture, such as via motion detection capability. However, the action210may specify that the LLNs102are to immediately start recording video at a high-frame rate, so that, for instance, no video of other people that may be accompanying the person of interest and preceding the person along the path is lost. The example operation200ofFIG.2provides a specific view of the type of software-defined sensing that the system100can provide. More generally, the LLNs102may themselves perform some types of mid-level processing to make less abstract decisions on which basis to generate actions to command other LLNs102, for instance.
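The inference-then-prediction pipeline just described can be sketched as two small functions. The identity table and path model below are invented placeholders — real HLN processing would be far more involved — but the flow from distilled data, to decision, to an action directed at the LLNs along a probable path matches the person-of-interest example above.

```python
# Hypothetical stand-ins for databases an HLN might consult.
KNOWN_FACES = {"face-7731": "person-of-interest"}
LIKELY_PATH = {"person-of-interest": ["LLN-4", "LLN-5"]}

def infer(distilled):
    """Inference processing: turn distilled data into a decision (208)."""
    return KNOWN_FACES.get(distilled["face_descriptor"], "unknown")

def predict(decision):
    """Predictive processing: turn the decision into an action (210)
    directed at the LLNs along the person's probable path."""
    return {
        "targets": LIKELY_PATH.get(decision, []),
        "command": "record_high_frame_rate",
    }

action = predict(infer({"face_descriptor": "face-7731"}))
print(action["targets"])  # ['LLN-4', 'LLN-5']
```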
The LLNs102may decide to which HLNs104to send their distilled data based on the current utilizations of the HLNs104, or based on the type of mid- or high-level processing that is likely to be performed by the HLNs104, where different HLNs104may be responsible for different types of such processing. Further, there can be immediate feedback between the HLNs104that receive the distilled data206and the LLNs102that transmitted the data206. For example, the HLNs104may make an initial decision, and based thereon determine that additional or different distilled data206—or even some of the raw data204itself—is needed for an ultimate decision. As such, the HLNs104may request the LLNs102in question to provide this additional data. Ultimately, as has been described above, the software-defined sensing that is provided by the system100is adaptive, predictive, and collaborative, in at least one of two ways. First, the software-defined sensing can minimize data communication over the network underlying the communicative interconnection among the LLNs102and the HLNs104. Second, the software-defined sensing can provide for more quickly making decisions, due to the LLNs102performing processing in addition to the HLNs104. The LLNs102may in some highly secure environments never transmit their raw data204to any HLN104. More generally, processing within the system100is performed as close as possible to where the raw data204of the sensing events202is captured. The LLNs102perform as much processing as they can in some implementations, before sending their initial results in the form of the distilled data206to the HLNs104for more complex processing. The example operation200of the example system100described above can be implemented in a variety of different ways depending on the desired application in question. Two specific non-limiting examples are now presented. The first example is with respect to multiple-tier security access.
An LLN102may capture an image of a person utilizing an identification card to attempt to secure entry through a door. The LLN102is communicatively connected to the card scanner, and sends the card number of the identification card to an HLN104that hosts a database of such cards. In turn, the HLN104retrieves an image of the face of the person that owns the card number, and returns an identifying facial descriptor of this face back to the LLN102. The LLN102may then generate a corresponding facial descriptor of the face of the image of the person who used the card to attempt to secure entry, and compare it to the identifying facial descriptor provided by the HLN104. If the descriptors match, the LLN102may grant access through the door, either by itself, or by communicating with another LLN102to instruct this latter LLN102to grant access. If the descriptors do not match, however, then the LLN102does not grant access, and instead sends a request to one or more HLNs104to retrieve the identification of other, proximate LLNs102. The LLN102then instructs each such proximate LLN102to begin capturing video, so that better images of the attempted intruder can be captured. The second example is with respect to a probabilistic approach to detect, estimate, and infer a person's previous and future location, time thereat, intent thereat, and so on. People are identified across the network of cameras provided by the LLNs102, which may be deployed at person-level height. Person detection is performed on each LLN102. Feature descriptors of the detected people are generated by the LLNs102, and sent to one or more HLNs104that host a database thereof. The LLNs102can perform low-level image processing and clustering, and the HLNs104can probabilistically perform person identification based on the distilled data they receive from the LLNs102, to track specific persons as they move throughout the area in which the LLNs102are located.
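The first example above — multiple-tier security access — follows a simple decision flow that can be sketched as below. Descriptor comparison is reduced to string equality purely for illustration, and the database contents and node names are assumptions; a real system would compare high-dimensional facial descriptors with a similarity threshold.

```python
CARD_DB = {"card-100": "descriptor-abc"}   # face descriptors, hosted on an HLN
PROXIMATE_LLNS = ["LLN-2", "LLN-3"]        # nearby cameras, looked up via HLNs

def hln_lookup(card_number):
    """HLN side: return the identifying descriptor for the card owner."""
    return CARD_DB.get(card_number)

def lln_access_attempt(card_number, captured_descriptor):
    """LLN side: compare the captured descriptor against the HLN's answer."""
    expected = hln_lookup(card_number)
    if expected is not None and expected == captured_descriptor:
        return {"grant": True, "alert_llns": []}
    # No match: deny access and instruct proximate LLNs to begin capturing
    # video so that better images of the attempted intruder are obtained.
    return {"grant": False, "alert_llns": PROXIMATE_LLNS}

print(lln_access_attempt("card-100", "descriptor-abc"))  # access granted
print(lln_access_attempt("card-100", "descriptor-zzz"))  # possible intruder
```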
FIG.3shows an example implementation of an LLN102as including communication hardware302, one or more sensors304(althoughFIG.3shows just one sensor304), and a processor306. The LLN102can include other hardware as well, such as computer-readable media. Such media may include volatile and non-volatile storage hardware to store data generated by the sensor304and the computer-executable instructions executable by the processor306to perform processing. The communication hardware302is hardware that permits the LLN102to communicate with other LLNs102and with the HLNs104. The hardware may provide the LLN102with wireless or wired networking capability, for instance. The sensor304is hardware that provides the sensing capability of the LLN102. As a non-exhaustive list, the sensor304may be one or more of a video-capturing sensor, an image-capturing sensor, an audio-recording sensor, a temperature sensor, and a humidity sensor. As other examples, the sensor304may additionally or alternatively be one or more of a Bluetooth sensor, a Wi-Fi sensor, a passive sensor, a radio frequency identification (RFID) sensor, and an infrared sensor. The sensor304thus captures sensing events in the proximity of the physical location of the LLN102, where these sensing events include raw data as has been described. The processor306may be a general-purpose processor, or a special-purpose processor such as an application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA) that is preprogrammed. The processor306performs less powerful processing than that performed by the HLNs104in that the processor306has less processing capacity or capability than that of the HLNs104. As such, the processor306generates distilled data from the raw data of the sensing events, and can send this distilled data to one or more of the HLNs104for performing more computationally intensive and abstract processing thereon.
For instance, the processing performed by the HLNs104may require more processing power than the processing capability of the LLNs102can achieve in a desired amount of time. Abstract processing, such as advanced image processing, is an example of such processing. Responsive to an instruction received from another LLN102or from an HLN104, the processor306may predictively and adaptively modify how the sensor304is to detect future sensing events. For example, the processor306may cause a video sensor to begin capturing video (raw) data prior to an anticipated event occurring. The processor306may cause such a video sensor to have its field of view positioned in an area of interest, record at a certain frame rate, and so on. This is one type of predictive sensing, which can thus be defined as causing sensing to be performed in anticipation of a likely event occurring. Responsive to additional information received from an HLN104generated by the HLN104from the distilled data that the processor306or a different LLN102provided, the processor306may perform further processing as well. For example, the processor306may receive an image feature descriptor from the HLN104and compare it to a corresponding descriptor that the processor306generated. Based thereon, the processor306may then generate an instruction to send to one or more other LLNs102to predictively and adaptively modify how the other LLNs102are to perform sensing, such as in the multiple-tier security access example described above. FIG.4shows an example method400that is performed by an HLN104. An HLN104can be implemented as an appropriately programmed general-purpose computing device commonly available from a wide variety of different manufacturers, such as a server computing device, a desktop computing device, a laptop computing device, and so on.
As such, the HLN104typically includes a general-purpose processor that executes computer-executable instructions from a computer-readable data storage medium, like a memory, a hard disk drive, and so on. Execution of these instructions results in performance of the method400. The HLN104receives distilled data from an LLN102that the LLN102generated from the raw data of a sensing event (402). The HLN104performs processing on this distilled data, to generate an action and/or further information (404). As to the former, the action corresponds to further sensing that one or more LLNs102are to perform collaboratively or predictively. As such, predictive and/or inferential sensing is achieved. In this case, then, the HLN104directs the LLN(s)102in question to perform sensing in correspondence with the action (406). As to the latter, the further information is yielded by higher level processing performed by the HLN104that is then used by the LLN102that generated the distilled data that was received. The information is thus returned to this LLN102(408). The LLN102can then itself perform further lower level processing on the raw data to yield a conclusion regarding the sensing event without having to send any of the raw data to the HLN104. Finally, the HLN104can act in concert with one or more other HLNs104, to leverage the computational power available across the HLNs104. In this respect, the HLN104can communicate with one or more other HLNs104to perform abstracted decisional or reasoning processing in a distributed manner (410). Computational load balancing may be leveraged across the HLNs104, for instance, where the HLNs104can perform the same type of processing. Additionally or alternatively, different HLNs104may be specialized for performing different types of processing, such that overall abstracted decisional or reasoning processing is accomplished by more than one HLN104together.
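The steps of method 400 can be summarized in a short sketch, with the processing, direction, and reply hooks passed in as callables. All names here are illustrative assumptions; the numbered comments map back to the steps described above.

```python
def method_400(distilled, process, direct_llns, reply_to_lln):
    """Receive distilled data (402), process it (404), then direct LLNs
    per the resulting action (406) and/or return further information to
    the originating LLN (408)."""
    result = process(distilled)           # step 404
    if result.get("action") is not None:
        direct_llns(result["action"])     # step 406
    if result.get("info") is not None:
        reply_to_lln(result["info"])      # step 408
    return result

directed, replied = [], []
method_400(
    {"event": "face-seen"},
    process=lambda d: {"action": {"cmd": "track"}, "info": {"match": 0.93}},
    direct_llns=directed.append,
    reply_to_lln=replied.append,
)
print(directed, replied)
```

Step 410 — distributing the decisional processing across several HLNs — is omitted here; it would amount to the `process` callable delegating to peer HLNs.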
11943301
DETAILED DESCRIPTION An integrated security system is described that integrates broadband and mobile access and control with conventional security systems and premise devices to provide a tri-mode security network (broadband, cellular/GSM, POTS access) that enables users to remotely stay connected to their premises. The integrated security system, while delivering remote premise monitoring and control functionality to conventional monitored premise protection, complements existing premise protection equipment. The integrated security system integrates into the premise network and couples wirelessly with the conventional security panel, enabling broadband access to premise security systems. Automation devices (cameras, lamp modules, thermostats, etc.) can be added, enabling users to remotely see live video and/or pictures and control home devices via their personal web portal or webpage, mobile phone, and/or other remote client device. Users can also receive notifications via email or text message when happenings occur, or do not occur, in their home. Although the detailed description herein contains many specifics for the purposes of illustration, one of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the embodiments described herein. Thus, the following illustrative embodiments are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention. As described herein, computer networks suitable for use with the embodiments described herein include local area networks (LAN), wide area networks (WAN), Internet, or other connection services and network variations such as the world wide web, the public internet, a private internet, a private computer network, a public network, a mobile network, a cellular network, a value-added network, and the like.
Computing devices coupled or connected to the network may be any microprocessor controlled device that permits access to the network, including terminal devices, such as personal computers, workstations, servers, mini computers, main-frame computers, laptop computers, mobile computers, palm top computers, hand held computers, mobile phones, TV set-top boxes, or combinations thereof. The computer network may include one or more LANs, WANs, Internets, and computers. The computers may serve as servers, clients, or a combination thereof. The integrated security system can be a component of a single system, multiple systems, and/or geographically separate systems. The integrated security system can also be a subcomponent or subsystem of a single system, multiple systems, and/or geographically separate systems. The integrated security system can be coupled to one or more other components (not shown) of a host system or a system coupled to the host system. One or more components of the integrated security system and/or a corresponding system or application to which the integrated security system is coupled or connected includes and/or runs under and/or in association with a processing system. The processing system includes any collection of processor-based devices or computing devices operating together, or components of processing systems or devices, as is known in the art. For example, the processing system can include one or more of a portable computer, portable communication device operating in a communication network, and/or a network server. The portable computer can be any of a number and/or combination of devices selected from among personal computers, personal digital assistants, portable computing devices, and portable communication devices, but is not so limited. The processing system can include components within a larger computer system. The processing system of an embodiment includes at least one processor and at least one memory device or subsystem.
The processing system can also include or be coupled to at least one database. The term “processor” as generally used herein refers to any logic processing unit, such as one or more central processing units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASIC), etc. The processor and memory can be monolithically integrated onto a single chip, distributed among a number of chips or components, and/or provided by some combination of algorithms. The methods described herein can be implemented in one or more of software algorithm(s), programs, firmware, hardware, components, circuitry, in any combination. The components of any system that includes the integrated security system can be located together or in separate locations. Communication paths couple the components and include any medium for communicating or transferring files among the components. The communication paths include wireless connections, wired connections, and hybrid wireless/wired connections. The communication paths also include couplings or connections to networks including local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), proprietary networks, interoffice or backend networks, and the Internet. Furthermore, the communication paths include removable fixed mediums like floppy disks, hard disk drives, and CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB) connections, RS-232 connections, telephone lines, buses, and electronic mail messages. Aspects of the integrated security system and corresponding systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). 
Some other possibilities for implementing aspects of the integrated security system and corresponding systems and methods include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the integrated security system and corresponding systems and methods may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc. It should be noted that any system, method, and/or other components disclosed herein may be described using computer aided design tools and expressed (or represented), as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) 
over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described components may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list. The above description of embodiments of the integrated security system and corresponding systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the integrated security system and corresponding systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. 
The teachings of the integrated security system and corresponding systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the integrated security system and corresponding systems and methods in light of the above detailed description. In accordance with the embodiments described herein, a wireless system (e.g., radio frequency (RF)) is provided that enables a security provider or consumer to extend the capabilities of an existing RF-capable security system or a non-RF-capable security system that has been upgraded to support RF capabilities. The system includes an RF-capable Gateway device (physically located within RF range of the RF-capable security system) and associated software operating on the Gateway device. The system also includes a web server, application server, and remote database providing a persistent store for information related to the system. The security systems of an embodiment, referred to herein as the iControl security system or integrated security system, extend the value of traditional home security by adding broadband access and the advantages of remote home monitoring and home control through the formation of a security network including components of the integrated security system integrated with a conventional premise security system and a premise local area network (LAN). With the integrated security system, conventional home security sensors, cameras, touchscreen keypads, lighting controls, and/or Internet Protocol (IP) devices in the home (or business) become connected devices that are accessible anywhere in the world from a web browser, mobile phone or through content-enabled touchscreens. 
The integrated security system experience allows security operators to both extend the value proposition of their monitored security systems and reach new consumers that include broadband users interested in staying connected to their family, home and property when they are away from home. The integrated security system of an embodiment includes security servers (also referred to herein as iConnect servers or security network servers) and an iHub gateway (also referred to herein as the gateway, the iHub, or the iHub client) that couples or integrates into a home network (e.g., LAN) and communicates directly with the home security panel, in both wired and wireless installations. The security system of an embodiment automatically discovers the security system components (e.g., sensors, etc.) belonging to the security system and connected to a control panel of the security system and provides consumers with full two-way access via web and mobile portals. The gateway supports various wireless protocols and can interconnect with a wide range of control panels offered by security system providers. Service providers and users can then extend the system's capabilities with additional IP cameras, lighting modules or security devices such as interactive touchscreen keypads. The integrated security system adds an enhanced value to these security systems by enabling consumers to stay connected through email and SMS alerts, photo push, event-based video capture and rule-based monitoring and notifications. This solution extends the reach of home security to households with broadband access. The integrated security system builds upon the foundation afforded by traditional security systems by layering broadband and mobile access, IP cameras, interactive touchscreens, and an open approach to home automation on top of traditional security system configurations.
The integrated security system is easily installed and managed by the security operator, and simplifies the traditional security installation process, as described below. The integrated security system provides an open systems solution to the home security market. As such, the foundation of the integrated security system customer premises equipment (CPE) approach has been to abstract devices and allow applications to manipulate and manage multiple devices from any vendor. The integrated security system DeviceConnect technology that enables this capability supports protocols, devices, and panels from GE Security and Honeywell, as well as consumer devices using Z-Wave, IP cameras (e.g., Ethernet, WiFi, and HomePlug), and IP touchscreens. DeviceConnect is a device abstraction layer that enables any device or protocol layer to interoperate with integrated security system components. This architecture enables the addition of new devices supporting any of these interfaces, as well as the addition of entirely new protocols. The benefit of DeviceConnect is that it provides supplier flexibility. The same consistent touchscreen, web, and mobile user experience operates unchanged on whatever security equipment is selected by a security system provider, with the system provider's choice of IP cameras, backend data center and central station software. The integrated security system provides a complete system that integrates or layers on top of a conventional host security system available from a security system provider. The security system provider therefore can select different components or configurations to offer (e.g., CDMA, GPRS, no cellular, etc.) as well as have iControl modify the integrated security system configuration for the system provider's specific needs (e.g., change the functionality of the web or mobile portal, add a GE or Honeywell-compatible TouchScreen, etc.).
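The device-abstraction idea described above can be pictured with a brief sketch. This is not the patented DeviceConnect implementation; the interface, class, and device names are invented for illustration, and Java is used because the disclosure elsewhere describes Java-based components.

```java
// Hypothetical sketch of a DeviceConnect-style abstraction layer: every
// vendor device is exposed through one common interface, so applications
// can manage devices from any supplier uniformly. All names are illustrative.
import java.util.HashMap;
import java.util.Map;

interface AbstractDevice {
    String deviceType();             // e.g., "sensor", "camera", "panel"
    String state();                  // normalized state string
}

class ZWaveLamp implements AbstractDevice {
    private final boolean on;
    ZWaveLamp(boolean on) { this.on = on; }
    public String deviceType() { return "lamp"; }
    public String state() { return on ? "on" : "off"; }
}

class DoorSensor implements AbstractDevice {
    private final boolean open;
    DoorSensor(boolean open) { this.open = open; }
    public String deviceType() { return "sensor"; }
    public String state() { return open ? "open" : "closed"; }
}

// Registry through which applications address devices regardless of vendor
// or underlying protocol; adding a protocol means adding implementations,
// not changing the applications.
class DeviceRegistry {
    private final Map<String, AbstractDevice> devices = new HashMap<>();
    void add(String id, AbstractDevice d) { devices.put(id, d); }
    String stateOf(String id) { return devices.get(id).state(); }
}
```

Applications written against `AbstractDevice` remain unchanged when a new vendor's equipment is plugged in, which is the supplier flexibility the passage describes.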
The integrated security system integrates with the security system provider infrastructure for central station reporting directly via Broadband and GPRS alarm transmissions. Traditional dial-up reporting is supported via the standard panel connectivity. Additionally, the integrated security system provides interfaces for advanced functionality to the CMS, including enhanced alarm events, system installation optimizations, system test verification, video verification, 2-way voice over IP and GSM. The integrated security system is an IP-centric system that includes broadband connectivity so that the gateway augments the existing security system with broadband and GPRS connectivity. If broadband is down or unavailable, GPRS may be used, for example. The integrated security system supports GPRS connectivity using an optional wireless package that includes a GPRS modem in the gateway. The integrated security system treats the GPRS connection as a higher-cost though flexible option for data transfers. In an embodiment, the GPRS connection is only used to route alarm events (e.g., for cost reasons); however, the gateway can be configured (e.g., through the iConnect server interface) to act as a primary channel and pass any or all events over GPRS. Consequently, the integrated security system does not interfere with the current plain old telephone service (POTS) security panel interface. Alarm events can still be routed through POTS; however, the gateway also allows such events to be routed through a broadband or GPRS connection as well. The integrated security system provides a web application interface to the CSR tool suite as well as XML web services interfaces for programmatic integration with the security system provider's existing call center products. The integrated security system includes, for example, APIs that allow the security system provider to integrate components of the integrated security system into a custom call center interface.
The APIs include XML web service APIs for integration of existing security system provider call center applications with the integrated security system service. All functionality available in the CSR Web application is provided with these API sets. The Java and XML-based APIs of the integrated security system support provisioning, billing, system administration, CSR, central station, portal user interfaces, and content management functions, to name a few. The integrated security system can provide a customized interface to the security system provider's billing system, or alternatively can provide security system developers with APIs and support in the integration effort. The integrated security system provides or includes business component interfaces for provisioning, administration, and customer care, to name a few. Standard templates and examples are provided with a defined customer professional services engagement to help integrate OSS/BSS systems of a Service Provider with the integrated security system. The integrated security system components support and allow for the integration of customer account creation and deletion with a security system. The iConnect APIs provide access to the provisioning and account management system in iConnect and provide full support for account creation, provisioning, and deletion. Depending on the requirements of the security system provider, the iConnect APIs can be used to completely customize any aspect of the integrated security system backend operational system. The integrated security system includes a gateway that supports the following standards-based interfaces, to name a few: Ethernet IP communications via Ethernet ports on the gateway, with standard XML/TCP/IP protocols and ports employed over secured SSL sessions; USB 2.0 via ports on the gateway; 802.11 b/g/n IP communications; GSM/GPRS RF WAN communications; CDMA 1×RTT RF WAN communications (optional, can also support EVDO and 3G technologies).
The gateway supports the following proprietary interfaces, to name a few: interfaces including Dialog RF network (319.5 MHz) and RS485 Superbus 2000 wired interface; RF mesh network (908 MHz); and interfaces including RF network (345 MHz) and RS485/RS232bus wired interfaces. Regarding security for the IP communications (e.g., authentication, authorization, encryption, anti-spoofing, etc.), the integrated security system uses SSL to encrypt all IP traffic, using server and client certificates for authentication, as well as authentication in the data sent over the SSL-encrypted channel. For encryption, the integrated security system issues public/private key pairs at the time/place of manufacture, and certificates are not stored in any online storage in an embodiment. The integrated security system does not need any special rules at the customer premise and/or at the security system provider central station because the integrated security system makes outgoing connections using TCP over the standard HTTP and HTTPS ports. Provided outbound TCP connections are allowed, no special firewall requirements are necessary. FIG.1is a block diagram of the integrated security system100, under an embodiment. The integrated security system100of an embodiment includes the gateway102and the security servers104coupled to the conventional home security system110. At a customer's home or business, the gateway102connects and manages the diverse variety of home security and self-monitoring devices. The gateway102communicates with the iConnect Servers104located in the service provider's data center106(or hosted in an integrated security system data center), with the communication taking place via a communication network108or other network (e.g., cellular network, internet, etc.). These servers104manage the system integrations necessary to deliver the integrated system service described herein.
The combination of the gateway102and the iConnect servers104enables a wide variety of remote client devices120(e.g., PCs, mobile phones and PDAs), allowing users to remotely stay in touch with their home, business and family. In addition, the technology allows home security and self-monitoring information, as well as relevant third-party content such as traffic and weather, to be presented in intuitive ways within the home, such as on advanced touchscreen keypads. The integrated security system service (also referred to as iControl service) can be managed by a service provider via browser-based Maintenance and Service Management applications that are provided with the iConnect Servers. Or, if desired, the service can be more tightly integrated with existing OSS/BSS and service delivery systems via the iConnect web services-based XML APIs. The integrated security system service can also coordinate the sending of alarms to the home security Central Monitoring Station (CMS)199. Alarms are passed to the CMS199using standard protocols such as Contact ID or SIA and can be generated from the home security panel location as well as by iConnect server104conditions (such as lack of communications with the integrated security system). In addition, the link between the security servers104and CMS199provides tighter integration between home security and self-monitoring devices and the gateway102. Such integration enables advanced security capabilities such as the ability for CMS personnel to view photos taken at the time a burglary alarm was triggered. For maximum security, the gateway102and iConnect servers104support the use of a mobile network (both GPRS and CDMA options are available) as a backup to the primary broadband connection.
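The broadband-primary, mobile-backup behavior described above (and the earlier note that GPRS normally carries only alarm events unless configured as the primary channel) suggests a simple channel-selection policy. The following Java sketch is illustrative only; the names and structure are assumptions, not the disclosed implementation.

```java
// Hedged sketch of a channel-selection policy: broadband is the default
// low-cost path, and the higher-cost GPRS link carries only alarm events
// unless the operator has configured it as the primary channel.
class ChannelPolicy {
    enum Channel { BROADBAND, GPRS, NONE }

    private final boolean gprsIsPrimary; // operator-configurable, e.g. via the iConnect server interface

    ChannelPolicy(boolean gprsIsPrimary) { this.gprsIsPrimary = gprsIsPrimary; }

    /** Picks a transport for an event given current link availability. */
    Channel route(boolean isAlarmEvent, boolean broadbandUp, boolean gprsUp) {
        if (gprsIsPrimary && gprsUp) return Channel.GPRS; // pass any or all events over GPRS
        if (broadbandUp) return Channel.BROADBAND;        // default, low-cost primary path
        if (isAlarmEvent && gprsUp) return Channel.GPRS;  // backup: alarms only, to limit cost
        return Channel.NONE;                              // POTS panel reporting is unaffected either way
    }
}
```

A non-alarm event with broadband down would simply wait or be dropped under this policy, which matches the cost-sensitive treatment of GPRS described above.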
The integrated security system service is delivered by hosted servers running software components that communicate with a variety of client types while interacting with other systems.FIG.2is a block diagram of components of the integrated security system100, under an embodiment. Following is a more detailed description of the components. The iConnect servers104support a diverse collection of clients120ranging from mobile devices, to PCs, to in-home security devices, to a service provider's internal systems. Most clients120are used by end-users, but there are also a number of clients120that are used to operate the service. Clients120used by end-users of the integrated security system100include, but are not limited to, the following:Clients based on gateway client applications202(e.g., a processor-based device running the gateway technology that manages home security and automation devices).A web browser204accessing a Web Portal application, performing end-user configuration and customization of the integrated security system service as well as monitoring of in-home device status, viewing photos and video, etc. Device and user management can also be performed by this portal application.A mobile device206(e.g., PDA, mobile phone, etc.) accessing the integrated security system Mobile Portal. This type of client206is used by end-users to view system status and perform operations on devices (e.g., turning on a lamp, arming a security panel, etc.) rather than for system configuration tasks such as adding a new device or user.PC or browser-based “widget” containers208that present integrated security system service content, as well as other third-party content, in simple, targeted ways (e.g. a widget that resides on a PC desktop and shows live video from a single in-home camera). 
“Widget” as used herein means applications or programs in the system.Touchscreen home security keypads208and advanced in-home devices that present a variety of content widgets via an intuitive touchscreen user interface.Notification recipients210(e.g., cell phones that receive SMS-based notifications when certain events occur (or don't occur), email clients that receive an email message with similar information, etc.).Custom-built clients (not shown) that access the iConnect web services XML API to interact with users' home security and self-monitoring information in new and unique ways. Such clients could include new types of mobile devices, or complex applications where integrated security system content is integrated into a broader set of application features. In addition to the end-user clients, the iConnect servers104support PC browser-based Service Management clients that manage the ongoing operation of the overall service. These clients run applications that handle tasks such as provisioning, service monitoring, customer support and reporting. There are numerous types of server components of the iConnect servers104of an embodiment including, but not limited to, the following: Business Components which manage information about all of the home security and self-monitoring devices; End-User Application Components which display that information for users and access the Business Components via published XML APIs; and Service Management Application Components which enable operators to administer the service (these components also access the Business Components via the XML APIs, and also via published SNMP MIBs). The server components provide access to, and management of, the objects associated with an integrated security system installation. The top-level object is the “network.” It is a location where a gateway102is located, and is also commonly referred to as a site or premises; the premises can include any type of structure (e.g., home, office, warehouse, etc.) 
at which a gateway102is located. Users can only access the networks to which they have been granted permission. Within a network, every object monitored by the gateway102is called a device. Devices include the sensors, cameras, home security panels and automation devices, as well as the controller or processor-based device running the gateway applications. Various types of interactions are possible between the objects in a system. Automations define actions that occur as a result of a change in state of a device. For example, take a picture with the front entry camera when the front door sensor changes to “open”. Notifications are messages sent to users to indicate that something has occurred, such as the front door going to “open” state, or has not occurred (referred to as an iWatch notification). Schedules define changes in device states that are to take place at predefined days and times. For example, set the security panel to “Armed” mode every weeknight at 11:00 pm. The iConnect Business Components are responsible for orchestrating all of the low-level service management activities for the integrated security system service. They define all of the users and devices associated with a network (site), analyze how the devices interact, and trigger associated actions (such as sending notifications to users). All changes in device states are monitored and logged. The Business Components also manage all interactions with external systems as required, including sending alarms and other related self-monitoring data to the home security Central Monitoring System (CMS)199. The Business Components are implemented as portable Java J2EE Servlets, but are not so limited. The following iConnect Business Components manage the main elements of the integrated security system service, but the embodiment is not so limited:A Registry Manager220defines and manages users and networks. This component is responsible for the creation, modification and termination of users and networks. 
It is also where a user's access to networks is defined.A Network Manager222defines and manages security and self-monitoring devices that are deployed on a network (site). This component handles the creation, modification, deletion and configuration of the devices, as well as the creation of automations, schedules and notification rules associated with those devices.A Data Manager224manages access to current and logged state data for an existing network and its devices. This component specifically does not provide any access to network management capabilities, such as adding new devices to a network, which are handled exclusively by the Network Manager222.To achieve optimal performance for all types of queries, data for current device states is stored separately from historical state data (a.k.a. “logs”) in the database. A Log Data Manager226performs ongoing transfers of current device state data to the historical data log tables. Additional iConnect Business Components handle direct communications with certain clients and other systems, for example:An iHub Manager228directly manages all communications with gateway clients, including receiving information about device state changes, changing the configuration of devices, and pushing new versions of the gateway client to the hardware it is running on.A Notification Manager230is responsible for sending all notifications to clients via SMS (mobile phone messages), email (via a relay server like an SMTP email server), etc.An Alarm and CMS Manager232sends critical server-generated alarm events to the home security Central Monitoring Station (CMS) and manages all other communications of integrated security system service data to and from the CMS.The Element Management System (EMS)234is an iControl Business Component that manages all activities associated with service installation, scaling and monitoring, and filters and packages service operations data for use by service management applications. 
The SNMP MIBs published by the EMS can also be incorporated into any third party monitoring system if desired. The iConnect Business Components store information about the objects that they manage in the iControl Service Database240and in the iControl Content Store242. The iControl Content Store is used to store media objects like video, photos and widget content, while the Service Database stores information about users, networks, and devices. Database interaction is performed via a JDBC interface. For security purposes, the Business Components manage all data storage and retrieval. The iControl Business Components provide web services-based APIs that application components use to access the Business Components' capabilities. Functions of application components include presenting integrated security system service data to end-users, performing administrative duties, and integrating with external systems and back-office applications. The primary published APIs for the iConnect Business Components include, but are not limited to, the following:A Registry Manager API252provides access to the Registry Manager Business Component's functionality, allowing management of networks and users.A Network Manager API254provides access to the Network Manager Business Component's functionality, allowing management of devices on a network.A Data Manager API256provides access to the Data Manager Business Component's functionality, such as setting and retrieving (current and historical) data about device states.A Provisioning API258provides a simple way to create new networks and configure initial default properties. Each API of an embodiment includes two modes of access: Java API or XML API. The XML APIs are published as web services so that they can be easily accessed by applications or servers over a network. The Java APIs are a programmer-friendly wrapper for the XML APIs. 
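The two API access modes described above can be pictured with a minimal sketch: the Java API acts as a thin, programmer-friendly wrapper that marshals a call into the XML form carried by the web services. The transport interface and element names here are assumptions; the actual iConnect message formats are not reproduced in this description.

```java
// Illustrative sketch of a Java-API wrapper over an XML web service call.
// The XML element names and the transport abstraction are hypothetical.
interface XmlTransport {
    String send(String xmlRequest);  // e.g., an HTTPS POST in a real deployment
}

class DataManagerClient {
    private final XmlTransport transport;
    DataManagerClient(XmlTransport transport) { this.transport = transport; }

    /** Java-API call that delegates to the underlying XML API. */
    String getDeviceState(String deviceId) {
        String request = "<getDeviceState id=\"" + deviceId + "\"/>";
        return transport.send(request);
    }
}
```

Applications call `getDeviceState` without ever composing XML themselves, which is the convenience the wrapper layer provides.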
Application components and integrations written in Java should generally use the Java APIs rather than the XML APIs directly. The iConnect Business Components also have an XML-based interface260for quickly adding support for new devices to the integrated security system. This interface260, referred to as DeviceConnect260, is a flexible, standards-based mechanism for defining the properties of new devices and how they can be managed. Although the format is flexible enough to allow the addition of any type of future device, pre-defined XML profiles are currently available for adding common types of devices such as sensors (SensorConnect), home security panels (PanelConnect) and IP cameras (CameraConnect). The iConnect End-User Application Components deliver the user interfaces that run on the different types of clients supported by the integrated security system service. The components are written in portable Java J2EE technology (e.g., as Java Servlets, as JavaServer Pages (JSPs), etc.) and they all interact with the iControl Business Components via the published APIs. The following End-User Application Components generate CSS-based HTML/JavaScript that is displayed on the target client. These applications can be dynamically branded with partner-specific logos and URL links (such as Customer Support, etc.). The End-User Application Components of an embodiment include, but are not limited to, the following:An iControl Activation Application270that delivers the first application that a user sees when they set up the integrated security system service. This wizard-based web browser application securely associates a new user with a purchased gateway and the other devices included with it as a kit (if any). It primarily uses functionality published by the Provisioning API.An iControl Web Portal Application272runs on PC browsers and delivers the web-based interface to the integrated security system service. This application allows users to manage their networks (e.g. 
add devices and create automations) as well as to view/change device states, and manage pictures and videos. Because of the wide scope of capabilities of this application, it uses three different Business Component APIs that include the Registry Manager API, Network Manager API, and Data Manager API, but the embodiment is not so limited.An iControl Mobile Portal274is a small-footprint web-based interface that runs on mobile phones and PDAs. This interface is optimized for remote viewing of device states and pictures/videos rather than network management. As such, its interaction with the Business Components is primarily via the Data Manager API.Custom portals and targeted client applications can be provided that leverage the same Business Component APIs used by the above applications.A Content Manager Application Component276delivers content to a variety of clients. It sends multimedia-rich user interface components to widget container clients (both PC and browser-based), as well as to advanced touchscreen keypad clients. In addition to providing content directly to end-user devices, the Content Manager276provides widget-based user interface components to satisfy requests from other Application Components such as the iControl Web272and Mobile274portals. A number of Application Components are responsible for overall management of the service. These pre-defined applications, referred to as Service Management Application Components, are configured to offer off-the-shelf solutions for production management of the integrated security system service including provisioning, overall service monitoring, customer support, and reporting, for example. The Service Management Application Components of an embodiment include, but are not limited to, the following:A Service Management Application280allows service administrators to perform activities associated with service installation, scaling and monitoring/alerting. 
This application interacts heavily with the Element Management System (EMS) Business Component to execute its functionality, and also retrieves its monitoring data from that component via protocols such as SNMP MIBs.A Kitting Application282is used by employees performing service provisioning tasks. This application allows home security and self-monitoring devices to be associated with gateways during the warehouse kitting process.A CSR Application and Report Generator284is used by personnel supporting the integrated security system service, such as CSRs resolving end-user issues and employees enquiring about overall service usage. The push of new gateway firmware to deployed gateways is also managed by this application. The iConnect servers104also support custom-built integrations with a service provider's existing OSS/BSS, CSR and service delivery systems290. Such systems can access the iConnect web services XML API to transfer data to and from the iConnect servers104. These types of integrations can complement or replace the PC browser-based Service Management applications, depending on service provider needs. As described above, the integrated security system of an embodiment includes a gateway, or iHub. The gateway of an embodiment includes a device that is deployed in the home or business and couples or connects the various third-party cameras, home security panels, sensors and devices to the iConnect server over a WAN connection as described in detail herein. The gateway couples to the home network and communicates directly with the home security panel in both wired and wireless sensor installations. The gateway is configured to be low-cost, reliable and thin so that it complements the integrated security system network-based architecture. The gateway supports various wireless protocols and can interconnect with a wide range of home security control panels.
Service providers and users can then extend the system's capabilities by adding IP cameras, lighting modules and additional security devices. The gateway is configurable to be integrated into many consumer appliances, including set-top boxes, routers and security panels. The small and efficient footprint of the gateway enables this portability and versatility, thereby simplifying and reducing the overall cost of the deployment. FIG.3is a block diagram of the gateway102including gateway software or applications, under an embodiment. The gateway software architecture is relatively thin and efficient, thereby simplifying its integration into other consumer appliances such as set-top boxes, routers, touch screens and security panels. The software architecture also provides a high degree of security against unauthorized access. This section describes the various key components of the gateway software architecture. The gateway application layer302is the main program that orchestrates the operations performed by the gateway. The Security Engine304provides robust protection against intentional and unintentional intrusion into the integrated security system network from the outside world (both from inside the premises as well as from the WAN). The Security Engine304of an embodiment comprises one or more sub-modules or components that perform functions including, but not limited to, the following:Encryption including 128-bit SSL encryption for gateway and iConnect server communication to protect user data privacy and provide secure communication.Bi-directional authentication between the gateway and iConnect server in order to prevent unauthorized spoofing and attacks. Data sent from the iConnect server to the gateway application (or vice versa) is digitally signed as an additional layer of security. 
Digital signing provides both authentication and validation that the data has not been altered in transit.Camera SSL encapsulation, because picture and video traffic offered by off-the-shelf networked IP cameras is not secure when traveling over the Internet. The gateway provides for 128-bit SSL encapsulation of the user picture and video data sent over the internet for complete user security and privacy.802.11 b/g/n with WPA-2 security to ensure that wireless camera communications always take place using the strongest available protection.A gateway-enabled device is assigned a unique activation key for activation with an iConnect server. This ensures that only valid gateway-enabled devices can be activated for use with the specific instance of iConnect server in use. Attempts to activate gateway-enabled devices by brute force are detected by the Security Engine. Partners deploying gateway-enabled devices have the knowledge that only a gateway with the correct serial number and activation key can be activated for use with an iConnect server. Stolen devices, devices attempting to masquerade as gateway-enabled devices, and malicious outsiders (or insiders, such as knowledgeable but nefarious customers) cannot affect other customers' gateway-enabled devices. As standards evolve, new encryption and authentication methods are proven useful, and older mechanisms are shown to be breakable, the security manager can be upgraded "over the air" to provide new and better security for communications between the iConnect server and the gateway application, and locally at the premises to remove any risk of eavesdropping on camera communications. A Remote Firmware Download module306allows for seamless and secure updates to the gateway firmware through the iControl Maintenance Application on the server104, providing a transparent, hassle-free mechanism for the service provider to deploy new features and bug fixes to the installed user base.
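The digital signing described above (authentication plus detection of data altered in transit) can be sketched briefly. The disclosure does not specify the signature scheme, so this illustration substitutes an HMAC-SHA256 tag computed over a shared key; the key, message, and class names are all hypothetical.

```java
// Hedged sketch of message signing/verification between gateway and server.
// HMAC-SHA256 stands in for the unspecified scheme used by the embodiment.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

class MessageSigner {
    static byte[] sign(byte[] key, String message) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e); // algorithm is guaranteed present on the JVM
        }
    }

    /** True when the tag matches, i.e., the data was not altered in transit. */
    static boolean verify(byte[] key, String message, byte[] tag) {
        return Arrays.equals(sign(key, message), tag); // use a constant-time compare in production
    }
}
```

A recipient that recomputes the tag over the received message detects any alteration, which is the validation property the passage attributes to digital signing.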
The firmware download mechanism is tolerant of connection loss, power interruption and user interventions (both intentional and unintentional). Such robustness reduces downtime and customer support issues. Gateway firmware can be remotely downloaded to one gateway at a time, to a group of gateways, or in batches. The Automations engine308manages the user-defined rules of interaction between the different devices (e.g., when the door opens, turn on the light). Though the automation rules are programmed and reside at the portal/server level, they are cached at the gateway level in order to provide short latency between device triggers and actions. DeviceConnect310includes definitions of all supported devices (e.g., cameras, security panels, sensors, etc.) using a standardized plug-in architecture. The DeviceConnect module310offers an interface that can be used to quickly add support for any new device as well as enabling interoperability between devices that use different technologies/protocols. For common device types, pre-defined sub-modules have been defined, making supporting new devices of these types even easier. SensorConnect312is provided for adding new sensors, CameraConnect316for adding IP cameras, and PanelConnect314for adding home security panels. The Schedules engine318is responsible for executing the user-defined schedules (e.g., take a picture every five minutes; every day at 8 am set temperature to 65 degrees Fahrenheit, etc.). Though the schedules are programmed and reside at the iConnect server level, they are sent to the scheduler within the gateway application. The Schedules Engine318then interfaces with SensorConnect312to ensure that scheduled events occur at precisely the desired time. The Device Management module320is in charge of all discovery, installation and configuration of both wired and wireless IP devices (e.g., cameras, etc.) coupled or connected to the system.
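The Schedules engine behavior described above (schedules defined at the server but cached and fired locally on the gateway) can be sketched minimally. The fields and the action encoding below are illustrative assumptions, not the disclosed data model.

```java
// Hedged sketch of a gateway-cached daily schedule entry: the gateway's
// scheduler checks each tick against the stored firing time and returns
// the action to execute when it is due. Names are hypothetical.
class Schedule {
    final int hour, minute;          // daily firing time, 24-hour clock
    final String action;             // e.g., "setTemperature:65"

    Schedule(int hour, int minute, String action) {
        this.hour = hour;
        this.minute = minute;
        this.action = action;
    }

    /** Returns the action to execute at this tick, or null if none is due. */
    String due(int nowHour, int nowMinute) {
        return (nowHour == hour && nowMinute == minute) ? action : null;
    }
}
```

Caching entries like this on the gateway is what lets scheduled events fire on time even though the schedules themselves are authored at the iConnect server.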
Networked IP devices, such as those used in the integrated security system, require user configuration of many IP and security parameters; to simplify the user experience and reduce the customer support burden, the device management module of an embodiment handles the details of this configuration. The device management module also manages the video routing module described below. The video routing engine322is responsible for delivering seamless video streams to the user with zero-configuration. Through a multi-step, staged approach the video routing engine uses a combination of UPnP port-forwarding, relay server routing and STUN/TURN peer-to-peer routing. FIG.4is a block diagram of components of the gateway102, under an embodiment. Depending on the specific set of functionality desired by the service provider deploying the integrated security system service, the gateway102can use any of a number of processors402, due to the small footprint of the gateway application firmware. In an embodiment, the gateway can include the Broadcom BCM5354 as the processor, for example. In addition, the gateway102includes memory (e.g., FLASH404, RAM406, etc.) and any number of input/output (I/O) ports408. Referring to the WAN portion410of the gateway102, the gateway102of an embodiment can communicate with the iConnect server using a number of communication types and/or protocols, for example Broadband412, GPRS414and/or Public Switched Telephone Network (PSTN)416to name a few. In general, broadband communication412is the primary means of connection between the gateway102and the iConnect server104, and the GPRS/CDMA414and/or PSTN416interfaces act as back-ups for fault tolerance in case the user's broadband connection fails for whatever reason, but the embodiment is not so limited. Referring to the LAN portion420of the gateway102, various protocols and physical transceivers can be used to communicate to off-the-shelf sensors and cameras.
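The multi-step, staged video routing described above (UPnP port-forwarding first, then relay server routing, then STUN/TURN peer-to-peer) can be sketched as an ordered fallback; the strategy callables here are stand-ins for the actual network probes a real implementation would perform at each stage.

```python
def route_video(strategies):
    """Return (name, route) from the first routing strategy that succeeds.

    `strategies` is an ordered list of (name, attempt) pairs, where each
    attempt() returns a route descriptor, or None when that stage fails.
    """
    for name, attempt in strategies:
        route = attempt()
        if route is not None:
            return name, route
    raise RuntimeError("no viable video route found")
```

With a router that rejects the UPnP port mapping, the engine would fall through to the relay stage automatically, preserving the zero-configuration behavior.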
The gateway102is protocol-agnostic and technology-agnostic and as such can easily support almost any device networking protocol. The gateway102can, for example, support GE and Honeywell security RF protocols422, Z-Wave424, serial (RS232 and RS485)426for direct connection to security panels as well as WiFi428(802.11 b/g) for communication to WiFi cameras. The integrated security system includes couplings or connections among a variety of IP devices or components, and the device management module is in charge of the discovery, installation and configuration of the IP devices coupled or connected to the system, as described above. The integrated security system of an embodiment uses a “sandbox” network to discover and manage all IP devices coupled or connected as components of the system. The IP devices of an embodiment include wired devices, wireless devices, cameras, interactive touchscreens, and security panels to name a few. These devices can be wired devices (via Ethernet cable) or WiFi devices, all of which are secured within the sandbox network, as described below. The “sandbox” network is described in detail below. FIG.5is a block diagram500of network or premise device integration with a premise network250, under an embodiment. In an embodiment, network devices255-257are coupled to the gateway102using a secure network coupling or connection such as SSL over an encrypted 802.11 link (utilizing for example WPA-2 security for the wireless encryption). The network coupling or connection between the gateway102and the network devices255-257is a private coupling or connection in that it is segregated from any other network couplings or connections. The gateway102is coupled to the premise router/firewall252via a coupling with a premise LAN250. The premise router/firewall252is coupled to a broadband modem251, and the broadband modem251is coupled to a WAN200or other network outside the premise.
The gateway102thus enables or forms a separate wireless network, or sub-network, that includes some number of devices and is coupled or connected to the LAN250of the host premises. The gateway sub-network can include, but is not limited to, any number of other devices like WiFi IP cameras, security panels (e.g., IP-enabled), and security touchscreens, to name a few. The gateway102manages or controls the sub-network separately from the LAN250and transfers data and information between components of the sub-network and the LAN250/WAN200, but is not so limited. Additionally, other network devices254can be coupled to the LAN250without being coupled to the gateway102. FIG.6is a block diagram600of network or premise device integration with a premise network250, under an alternative embodiment. The network or premise devices255-257are coupled to the gateway102. The network coupling or connection between the gateway102and the network devices255-257is a private coupling or connection in that it is segregated from any other network couplings or connections. The gateway102is coupled or connected between the premise router/firewall252and the broadband modem251. The broadband modem251is coupled to a WAN200or other network outside the premise, while the premise router/firewall252is coupled to a premise LAN250. As a result of its location between the broadband modem251and the premise router/firewall252, the gateway102can be configured or function as the premise router routing specified data between the outside network (e.g., WAN200) and the premise router/firewall252of the LAN250. As described above, the gateway102in this configuration enables or forms a separate wireless network, or sub-network, that includes the network or premise devices255-257and is coupled or connected between the LAN250of the host premises and the WAN200. 
The gateway sub-network can include, but is not limited to, any number of network or premise devices255-257like WiFi IP cameras, security panels (e.g., IP-enabled), and security touchscreens, to name a few. The gateway102manages or controls the sub-network separately from the LAN250and transfers data and information between components of the sub-network and the LAN250/WAN200, but is not so limited. Additionally, other network devices254can be coupled to the LAN250without being coupled to the gateway102. The examples described above with reference toFIGS.5and6are presented only as examples of IP device integration. The integrated security system is not limited to the type, number and/or combination of IP devices shown and described in these examples, and any type, number and/or combination of IP devices is contemplated within the scope of this disclosure as capable of being integrated with the premise network. The integrated security system of an embodiment includes a touchscreen (also referred to as the iControl touchscreen or integrated security system touchscreen), as described above, which provides core security keypad functionality, content management and presentation, and embedded systems design. The networked security touchscreen system of an embodiment enables a consumer or security provider to easily and automatically install, configure and manage the security system and touchscreen located at a customer premise. Using this system the customer may access and control the local security system, local IP devices such as cameras, local sensors and control devices (such as lighting controls or pipe freeze sensors), as well as the local security system panel and associated security sensors (such as door/window, motion, and smoke detectors). The customer premise may be a home, business, and/or other location equipped with a wired or wireless broadband IP connection. 
The system of an embodiment includes a touchscreen with a configurable software user interface and/or a gateway device (e.g., iHub) that couples or connects to a premise security panel through a wired or wireless connection, and a remote server that provides access to content and information from the premises devices to a user when they are remote from the home. The touchscreen supports broadband and/or WAN wireless connectivity. In this embodiment, the touchscreen incorporates an IP broadband connection (e.g., Wifi radio, Ethernet port, etc.), and/or a cellular radio (e.g., GPRS/GSM, CDMA, WiMax, etc.). The touchscreen described herein can be used as one or more of a security system interface panel and a network user interface (UI) that provides an interface to interact with a network (e.g., LAN, WAN, internet, etc.). The touchscreen of an embodiment provides an integrated touchscreen and security panel as an all-in-one device. Once integrated using the touchscreen, the touchscreen and a security panel of a premise security system become physically co-located in one device, and the functionality of both may even be co-resident on the same CPU and memory (though this is not required). The touchscreen of an embodiment also provides an integrated IP video and touchscreen UI. As such, the touchscreen supports one or more standard video CODECs/players (e.g., H.264, Flash Video, MOV, MPEG4, M-JPEG, etc.). The touchscreen UI then provides a mechanism (such as a camera or video widget) to play video. In an embodiment the video is streamed live from an IP video camera. In other embodiments the video comprises video clips or photos sent from an IP camera or from a remote location. The touchscreen of an embodiment provides a configurable user interface system that includes a configuration supporting use as a security touchscreen. 
In this embodiment, the touchscreen utilizes a modular user interface that allows components to be modified easily by a service provider, an installer, or even the end user. Examples of such a modular approach include using Flash widgets, HTML-based widgets, or other downloadable code modules such that the user interface of the touchscreen can be updated and modified while the application is running. In an embodiment, the touchscreen user interface modules can be downloaded over the internet. For example, a new security configuration widget can be downloaded from a standard web server, and the touchscreen then loads the new widget into memory and inserts it in place of the old security configuration widget. The touchscreen of an embodiment is configured to provide a self-install user interface. Embodiments of the networked security touchscreen system described herein include a touchscreen device with a user interface that includes a security toolbar providing one or more functions including arm, disarm, panic, medic, and alert. The touchscreen therefore includes at least one screen having a separate region of the screen dedicated to a security toolbar. The security toolbar of an embodiment is present in the dedicated region at all times that the screen is active. The touchscreen of an embodiment includes a home screen having a separate region of the screen allocated to managing home-based functions. The home-based functions of an embodiment include managing, viewing, and/or controlling IP video cameras. In this embodiment, regions of the home screen are allocated in the form of widget icons; these widget icons (e.g., for cameras, thermostats, lighting, etc.) provide functionality for managing home systems.
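The hot-swap behavior described above, where a downloaded widget replaces the old one while the application keeps running, might be sketched with a simple registry; WidgetRegistry and its method names are illustrative, not part of the described system.

```python
class WidgetRegistry:
    """In-memory registry of UI widgets, supporting in-place replacement."""

    def __init__(self):
        self._widgets = {}

    def install(self, name, version, render):
        # installing under an existing name replaces the old widget in place,
        # so the next render call picks up the downloaded version
        self._widgets[name] = {"version": version, "render": render}

    def render(self, name):
        return self._widgets[name]["render"]()

    def version(self, name):
        return self._widgets[name]["version"]
```

A downloaded security configuration widget would simply be installed under the same name, and subsequent renders use the new module without a restart.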
So, for example, a displayed camera icon, when selected, launches a Camera Widget, and the Camera widget in turn provides access to video from one or more cameras, as well as providing the user with relevant camera controls (take a picture, focus the camera, etc.). The touchscreen of an embodiment includes a home screen having a separate region of the screen allocated to managing, viewing, and/or controlling internet-based content or applications. For example, the Widget Manager UI presents a region of the home screen (up to and including the entire home screen) where internet widget icons such as weather, sports, etc. may be accessed. Each of these icons may be selected to launch its respective content service. The touchscreen of an embodiment is integrated into a premise network using the gateway, as described above. The gateway as described herein functions to enable a separate wireless network, or sub-network, that is coupled, connected, or integrated with another network (e.g., WAN, LAN of the host premises, etc.). The sub-network enabled by the gateway optimizes the installation process for IP devices, like the touchscreen, that couple or connect to the sub-network by segregating these IP devices from other such devices on the network. This segregation of the IP devices of the sub-network further enables separate security and privacy policies to be implemented for these IP devices so that, where the IP devices are dedicated to specific functions (e.g., security), the security and privacy policies can be tailored specifically for the specific functions. Furthermore, the gateway and the sub-network it forms enable the segregation of data traffic, resulting in faster and more efficient data flow between components of the host network, components of the sub-network, and between components of the sub-network and components of the network.
The touchscreen of an embodiment includes a core functional embedded system that includes an embedded operating system, required hardware drivers, and an open system interface to name a few. The core functional embedded system can be provided by or as a component of a conventional security system (e.g., security system available from GE Security). These core functional units are used with components of the integrated security system as described herein. Note that portions of the touchscreen description below may include reference to a host premise security system (e.g., GE security system), but these references are included only as an example and do not limit the touchscreen to integration with any particular security system. As an example, regarding the core functional embedded system, a reduced memory footprint version of embedded Linux forms the core operating system in an embodiment, and provides basic TCP/IP stack and memory management functions, along with a basic set of low-level graphics primitives. A set of device drivers is also provided or included that offer low-level hardware and network interfaces. In addition to the standard drivers, an interface to the RS 485 bus is included that couples or connects to the security system panel (e.g., GE Concord panel). The interface may, for example, implement the Superbus 2000 protocol, which can then be utilized by the more comprehensive transaction-level security functions implemented in PanelConnect technology (e.g. SetAlarmLevel (int level, int partition, char *accessCode)). Power control drivers are also provided. FIG.7is a block diagram of a touchscreen700of the integrated security system, under an embodiment. The touchscreen700generally includes an application/presentation layer702with a resident application704, and a core engine706. 
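The transaction-level PanelConnect call named above, SetAlarmLevel(int level, int partition, char *accessCode), might be wrapped over a low-level bus driver as in the following sketch. The frame layout shown is purely illustrative; it is not the actual Superbus 2000 wire format, and the bus-writer callback is an assumed integration point.

```python
class PanelConnect:
    """Transaction-level security functions layered on a panel bus driver."""

    def __init__(self, bus_write):
        self._bus_write = bus_write  # low-level driver callback supplied at setup

    def set_alarm_level(self, level: int, partition: int, access_code: str):
        """Build and send a SET_ALARM_LEVEL transaction (illustrative framing)."""
        if not 0 <= level <= 3:
            raise ValueError("unsupported alarm level")
        frame = {
            "cmd": "SET_ALARM_LEVEL",
            "level": level,
            "partition": partition,
            "code": access_code,
        }
        self._bus_write(frame)
        return frame
```

Keeping the bus driver injected this way mirrors the tiered design: the same transaction layer can sit on an RS-485 driver locally or behind a web services interface on the gateway.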
The touchscreen700also includes one or more of the following, but is not so limited: applications of premium services710, widgets712, a caching proxy714, network security716, network interface718, security object720, applications supporting devices722, PanelConnect API724, a gateway interface726, and one or more ports728. More specifically, the touchscreen, when configured as a home security device, includes but is not limited to the following application or software modules: RS 485 and/or RS-232 bus security protocols to conventional home security system panel (e.g., GE Concord panel); functional home security classes and interfaces (e.g. Panel ARM state, Sensor status, etc.); Application/Presentation layer or engine; Resident Application; Consumer Home Security Application; installer home security application; core engine; and System bootloader/Software Updater. The core Application engine and system bootloader can also be used to support other advanced content and applications. This provides a seamless interaction between the premise security application and other optional services such as weather widgets or IP cameras. An alternative configuration of the touchscreen includes a first Application engine for premise security and a second Application engine for all other applications. The integrated security system application engine supports content standards such as HTML, XML, Flash, etc. and enables a rich consumer experience for all ‘widgets’, whether security-based or not. The touchscreen thus provides service providers the ability to use web content creation and management tools to build and download any ‘widgets’ regardless of their functionality. 
As discussed above, although the Security Applications have specific low-level functional requirements in order to interface with the premise security system, these applications make use of the same fundamental application facilities as any other ‘widget’, application facilities that include graphical layout, interactivity, application handoff, screen management, and network interfaces, to name a few. Content management in the touchscreen provides the ability to leverage conventional web development tools, performance optimized for an embedded system, service provider control of accessible content, content reliability in a consumer device, and consistency between ‘widgets’ within a seamless widget operational environment. In an embodiment of the integrated security system, widgets are created by web developers and hosted on the integrated security system Content Manager (and stored in the Content Store database). In this embodiment, the server component caches the widgets and offers them to consumers through the web-based integrated security system provisioning system. The servers interact with the advanced touchscreen using HTTPS interfaces controlled by the core engine and dynamically download widgets and updates as needed to be cached on the touchscreen. In other embodiments, widgets can be accessed directly over a network such as the Internet without needing to go through the iControl Content Manager. Referring toFIG.7, the touchscreen system is built on a tiered architecture, with defined interfaces between the Application/Presentation Layer (the Application Engine) on the top, the Core Engine in the middle, and the security panel and gateway APIs at the lower level. The architecture is configured to provide maximum flexibility and ease of maintenance.
The application engine of the touchscreen provides the presentation and interactivity capabilities for all applications (widgets) that run on the touchscreen, including both core security function widgets and third party content widgets.FIG.8is an example screenshot800of a networked security touchscreen, under an embodiment. This example screenshot800includes three interfaces or user interface (UI) components802-806, but is not so limited. A first UI802of the touchscreen includes icons by which a user controls or accesses functions and/or components of the security system (e.g., “Main”, “Panic”, “Medic”, “Fire”, state of the premise alarm system (e.g., disarmed, armed, etc.), etc.); the first UI802, which is also referred to herein as a security interface, is always presented on the touchscreen. A second UI804of the touchscreen includes icons by which a user selects or interacts with services and other network content (e.g., clock, calendar, weather, stocks, news, sports, photos, maps, music, etc.) that is accessible via the touchscreen. The second UI804is also referred to herein as a network interface or content interface. A third UI806of the touchscreen includes icons by which a user selects or interacts with additional services or components (e.g., intercom control, security, cameras coupled to the system in particular regions (e.g., front door, baby, etc.)) available via the touchscreen. A component of the application engine is the Presentation Engine, which includes a set of libraries that implement the standards-based widget content (e.g., XML, HTML, JavaScript, Flash) layout and interactivity. This engine provides the widget with interfaces to dynamically load both graphics and application logic from third parties, and supports high-level data description languages as well as standard graphic formats.
The set of web content-based functionality available to a widget developer is extended by specific touchscreen functions implemented as local web services by the Core Engine. The resident application of the touchscreen is the master service that controls the interaction of all widgets in the system, and enforces the business and security rules required by the service provider. For example, the resident application determines the priority of widgets, thereby enabling a home security widget to override resource requests from a less critical widget (e.g. a weather widget). The resident application also monitors widget behavior, and responds to client or server requests for cache updates. The core engine of the touchscreen manages interaction with other components of the integrated security system, and provides an interface through which the resident application and authorized widgets can get information about the home security system, set alarms, install sensors, etc. At the lower level, the Core Engine's main interactions are through the PanelConnect API, which handles all communication with the security panel, and the gateway Interface, which handles communication with the gateway. In an embodiment, both the iHub Interface and PanelConnect API are resident and operating on the touchscreen. In another embodiment, the PanelConnect API runs on the gateway or other device that provides security system interaction and is accessed by the touchscreen through a web services interface. The Core Engine also handles application and service level persistent and cached memory functions, as well as the dynamic provisioning of content and widgets, including but not limited to: flash memory management, local widget and content caching, widget version management (download, cache flush new/old content versions), as well as the caching and synchronization of user preferences. 
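The priority arbitration described above, where the resident application lets a home security widget override resource requests from a less critical widget such as a weather widget, can be sketched as follows; the priority table and class names are illustrative assumptions.

```python
# lower number = more critical (assumed ordering, with security highest)
PRIORITY = {"security": 0, "camera": 1, "weather": 9}


class ResourceArbiter:
    """Grants a shared resource to the highest-priority requesting widget."""

    def __init__(self):
        self.holder = None

    def request(self, widget: str) -> bool:
        # grant when the resource is free or the requester outranks the holder
        if self.holder is None or PRIORITY[widget] < PRIORITY[self.holder]:
            self.holder = widget  # preempts any lower-priority holder
            return True
        return False
```

Under this scheme a security alarm widget always wins the screen or network resource from a weather widget, never the reverse.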
As a portion of these services, the Core engine incorporates the bootloader functionality that is responsible for maintaining a consistent software image on the touchscreen, and acts as the client agent for all software updates. The bootloader is configured to ensure full update redundancy so that unsuccessful downloads cannot corrupt the integrated security system. Video management is provided as a set of web services by the Core Engine. Video management includes the retrieval and playback of local video feeds as well as remote control and management of cameras (all through iControl CameraConnect technology). Both the high level application layer and the mid-level core engine of the touchscreen can make calls to the network. Any call to the network made by the application layer is automatically handed off to a local caching proxy, which determines whether the request should be handled locally. Many of the requests from the application layer are web services API requests; although such requests could be satisfied by the iControl servers, they are handled directly by the touchscreen and the gateway. Requests that get through the caching proxy are checked against a white list of acceptable sites, and, if they match, are sent off through the network interface to the gateway. Included in the Network Subsystem is a set of network services including HTTP, HTTPS, and server-level authentication functions to manage the secure client-server interface. Storage and management of certificates is incorporated as a part of the network services layer. Server components of the integrated security system support interactive content services on the touchscreen. These server components include, but are not limited to, the content manager, registry manager, network manager, and global registry, each of which is described herein. The Content Manager oversees aspects of handling widget data and raw content on the touchscreen.
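The caching proxy's decision path described above (satisfy the request locally when possible, otherwise forward only requests whose destination is on the white list) might look like the following sketch; the handler table, host names, and forwarding callback are illustrative assumptions.

```python
class CachingProxy:
    """Routes application-layer requests: local handlers first, then whitelist."""

    def __init__(self, whitelist, local_handlers, forward):
        self.whitelist = set(whitelist)
        self.local_handlers = local_handlers  # path -> handler callable
        self.forward = forward                # sends the request on to the gateway

    def handle(self, host, path):
        if path in self.local_handlers:
            return self.local_handlers[path]()   # satisfied on the device itself
        if host in self.whitelist:
            return self.forward(host, path)      # acceptable external site
        raise PermissionError(f"{host} is not on the white list")
```

A panel-state request never leaves the device, while a widget-content request reaches the gateway only for approved hosts.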
Once created and validated by the service provider, widgets are ‘ingested’ to the Content Manager, and then become available as downloadable services through the integrated security system Content Management APIs. The Content manager maintains versions and timestamp information, and connects to the raw data contained in the backend Content Store database. When a widget is updated (or new content becomes available) all clients registering interest in a widget are systematically updated as needed (a process that can be configured at an account, locale, or system-wide level). The Registry Manager handles user data and provisioning accounts, including information about widgets the user has decided to install, and the user preferences for these widgets. The Network Manager handles getting and setting state for all devices on the integrated security system network (e.g., sensors, panels, cameras, etc.). The Network manager synchronizes with the gateway, the advanced touchscreen, and the subscriber database. The Global Registry is a primary starting point server for all client services, and is a logical referral service that abstracts specific server locations/addresses from clients (touchscreen, gateway102, desktop widgets, etc.). This approach enables easy scaling/migration of server farms. The touchscreen of an embodiment operates wirelessly with a premise security system. The touchscreen of an embodiment incorporates an RF transceiver component that either communicates directly with the sensors and/or security panel over the panel's proprietary RF frequency, or the touchscreen communicates wirelessly to the gateway over 802.11, Ethernet, or other IP-based communications channel, as described in detail herein. In the latter case the gateway implements the PanelConnect interface and communicates directly to the security panel and/or sensors over wireless or wired networks as described in detail above.
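The Global Registry's referral role described above (clients ask for a logical service and receive a concrete server address, so farms can be migrated without touching clients) can be sketched minimally; the class and the example addresses are illustrative.

```python
class GlobalRegistry:
    """Logical referral service mapping service names to server addresses."""

    def __init__(self):
        self._services = {}

    def register(self, service, address):
        # re-registering a service repoints all future referrals, which is
        # how a server-farm migration stays invisible to clients
        self._services[service] = address

    def resolve(self, service):
        try:
            return self._services[service]
        except KeyError:
            raise LookupError(f"no referral available for {service}")
```

A client only ever stores the registry address plus logical service names, never the addresses of individual back-end servers.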
The touchscreen of an embodiment is configured to operate with multiple security systems through the use of an abstracted security system interface. In this embodiment, the PanelConnect API can be configured to support a plurality of proprietary security system interfaces, either simultaneously or individually as described herein. In one embodiment of this approach, the touchscreen incorporates multiple physical interfaces to security panels (e.g., GE Security RS-485, Honeywell RF, etc.) in addition to the PanelConnect API implemented to support multiple security interfaces. The change needed to support this in PanelConnect is a configuration parameter specifying the panel type connection that is being utilized. So, for example, the setARMState( ) function is called with an additional parameter (e.g., ArmState=setARMState(type=“ARM STAY|ARM AWAY|DISARM”, Parameters=“ExitDelay=30|Lights=OFF”, panelType=“GE Concord4 RS485”)). The ‘panelType’ parameter is used by the setARMState function (and in practice by all of the PanelConnect functions) to select an algorithm appropriate to the specific panel out of a plurality of algorithms. The touchscreen of an embodiment is self-installable. Consequently, the touchscreen provides a ‘wizard’ approach similar to that used in traditional computer installations (e.g., InstallShield). The wizard can be resident on the touchscreen, accessible through a web interface, or both. In one embodiment of a touchscreen self-installation process, the service provider can associate devices (sensors, touchscreens, security panels, lighting controls, etc.) remotely using a web-based administrator interface. The touchscreen of an embodiment includes a battery backup system for a security touchscreen. The touchscreen incorporates a standard Li-ion or other battery and charging circuitry to allow continued operation in the event of a power outage. In an embodiment the battery is physically located and connected within the touchscreen enclosure.
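The panelType dispatch that the setARMState( ) example above describes can be sketched as a lookup from panel type to a panel-specific algorithm. The two handler bodies are illustrative stand-ins; the real algorithms would speak the respective panel protocols.

```python
def _arm_ge_concord(arm_type, params):
    # stand-in for the GE Concord RS-485 arming sequence
    return f"GE/RS-485: {arm_type} ({params})"


def _arm_honeywell_rf(arm_type, params):
    # stand-in for the Honeywell RF arming sequence
    return f"Honeywell/RF: {arm_type} ({params})"


PANEL_ALGORITHMS = {
    "GE Concord4 RS485": _arm_ge_concord,
    "Honeywell RF": _arm_honeywell_rf,
}


def set_arm_state(arm_type, parameters, panel_type):
    """Select the algorithm appropriate to the specific panel, then run it."""
    try:
        algorithm = PANEL_ALGORITHMS[panel_type]
    except KeyError:
        raise ValueError(f"unsupported panel type: {panel_type}")
    return algorithm(arm_type, parameters)
```

Adding support for a new panel then reduces to registering one more entry in the dispatch table, leaving every caller of set_arm_state unchanged.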
In another embodiment the battery is located as a part of the power transformer, or in between the power transformer and the touchscreen. The example configurations of the integrated security system described above with reference toFIGS.5and6include a gateway that is a separate device, and the touchscreen couples to the gateway. However, in an alternative embodiment, the gateway device and its functionality can be incorporated into the touchscreen so that the device management module, which is now a component of or included in the touchscreen, is in charge of the discovery, installation and configuration of the IP devices coupled or connected to the system, as described above. The integrated security system with the integrated touchscreen/gateway uses the same “sandbox” network to discover and manage all IP devices coupled or connected as components of the system. The touchscreen of this alternative embodiment integrates the components of the gateway with the components of the touchscreen as described herein. More specifically, the touchscreen of this alternative embodiment includes software or applications described above with reference toFIG.3. In this alternative embodiment, the touchscreen includes the gateway application layer302as the main program that orchestrates the operations performed by the gateway. A Security Engine304of the touchscreen provides robust protection against intentional and unintentional intrusion into the integrated security system network from the outside world (both from inside the premises as well as from the WAN). 
The Security Engine304of an embodiment comprises one or more sub-modules or components that perform functions including, but not limited to, the following: encryption, including 128-bit SSL encryption for gateway and iConnect server communication, to protect user data privacy and provide secure communication; bi-directional authentication between the touchscreen and iConnect server in order to prevent unauthorized spoofing and attacks, where data sent from the iConnect server to the gateway application (or vice versa) is digitally signed as an additional layer of security, digital signing providing both authentication and validation that the data has not been altered in transit; camera SSL encapsulation, because picture and video traffic offered by off-the-shelf networked IP cameras is not secure when traveling over the Internet, with the touchscreen providing 128-bit SSL encapsulation of the user picture and video data sent over the internet for complete user security and privacy; and 802.11 b/g/n with WPA-2 security, to ensure that wireless camera communications always take place using the strongest available protection. A touchscreen-enabled device is assigned a unique activation key for activation with an iConnect server. This ensures that only valid gateway-enabled devices can be activated for use with the specific instance of the iConnect server in use. Attempts to activate gateway-enabled devices by brute force are detected by the Security Engine. Partners deploying touchscreen-enabled devices have the assurance that only a gateway with the correct serial number and activation key can be activated for use with an iConnect server. Stolen devices, devices attempting to masquerade as gateway-enabled devices, and malicious outsiders (or insiders such as knowledgeable but nefarious customers) cannot affect other customers' gateway-enabled devices.
As standards evolve, and as new encryption and authentication methods are proven useful and older mechanisms proven breakable, the security manager can be upgraded “over the air” to provide new and better security for communications between the iConnect server and the gateway application, and locally at the premises to remove any risk of eavesdropping on camera communications. A Remote Firmware Download module306of the touchscreen allows for seamless and secure updates to the gateway firmware through the iControl Maintenance Application on the server104, providing a transparent, hassle-free mechanism for the service provider to deploy new features and bug fixes to the installed user base. The firmware download mechanism is tolerant of connection loss, power interruption and user interventions (both intentional and unintentional). Such robustness reduces down time and customer support issues. Touchscreen firmware can be remotely downloaded either for one touchscreen at a time, for a group of touchscreens, or in batches. The Automations engine308of the touchscreen manages the user-defined rules of interaction between the different devices (e.g., when the door opens, turn on the light). Though the automation rules are programmed and reside at the portal/server level, they are cached at the gateway level in order to provide short latency between device triggers and actions. DeviceConnect310of the touchscreen includes definitions of all supported devices (e.g., cameras, security panels, sensors, etc.) using a standardized plug-in architecture. The DeviceConnect module310offers an interface that can be used to quickly add support for any new device as well as enabling interoperability between devices that use different technologies/protocols. For common device types, pre-defined sub-modules have been defined, making supporting new devices of these types even easier.
SensorConnect312is provided for adding new sensors, CameraConnect316for adding IP cameras, and PanelConnect314for adding home security panels. The Schedules engine318of the touchscreen is responsible for executing the user defined schedules (e.g., take a picture every five minutes; every day at 8 am set temperature to 65 degrees Fahrenheit, etc.). Though the schedules are programmed and reside at the iConnect server level they are sent to the scheduler within the gateway application of the touchscreen. The Schedules Engine318then interfaces with SensorConnect312to ensure that scheduled events occur at precisely the desired time. The Device Management module320of the touchscreen is in charge of all discovery, installation and configuration of both wired and wireless IP devices (e.g., cameras, etc.) coupled or connected to the system. Networked IP devices, such as those used in the integrated security system, require user configuration of many IP and security parameters, and the device management module of an embodiment handles the details of this configuration. The device management module also manages the video routing module described below. The video routing engine322of the touchscreen is responsible for delivering seamless video streams to the user with zero-configuration. Through a multi-step, staged approach the video routing engine uses a combination of UPnP port-forwarding, relay server routing and STUN/TURN peer-to-peer routing. The video routing engine is described in detail in the Related Applications. FIG.9is a block diagram900of network or premise device integration with a premise network250, under an embodiment. In an embodiment, network devices255,256,957are coupled to the touchscreen902using a secure network connection such as SSL over an encrypted 802.11 link (utilizing for example WPA-2 security for the wireless encryption), and the touchscreen902coupled to the premise router/firewall252via a coupling with a premise LAN250. 
The premise router/firewall252is coupled to a broadband modem251, and the broadband modem251is coupled to a WAN200or other network outside the premise. The touchscreen902thus enables or forms a separate wireless network, or sub-network, that includes some number of devices and is coupled or connected to the LAN250of the host premises. The touchscreen sub-network can include, but is not limited to, any number of other devices like WiFi IP cameras, security panels (e.g., IP-enabled), and IP devices, to name a few. The touchscreen902manages or controls the sub-network separately from the LAN250and transfers data and information between components of the sub-network and the LAN250/WAN200, but is not so limited. Additionally, other network devices254can be coupled to the LAN250without being coupled to the touchscreen902. FIG.10is a block diagram1000of network or premise device integration with a premise network250, under an alternative embodiment. The network or premise devices255,256,1057are coupled to the touchscreen1002, and the touchscreen1002is coupled or connected between the premise router/firewall252and the broadband modem251. The broadband modem251is coupled to a WAN200or other network outside the premise, while the premise router/firewall252is coupled to a premise LAN250. As a result of its location between the broadband modem251and the premise router/firewall252, the touchscreen1002can be configured or function as the premise router, routing specified data between the outside network (e.g., WAN200) and the premise router/firewall252of the LAN250. As described above, the touchscreen1002in this configuration enables or forms a separate wireless network, or sub-network, that includes the network or premise devices255,256,1057and is coupled or connected between the LAN250of the host premises and the WAN200.
The touchscreen sub-network can include, but is not limited to, any number of network or premise devices255,256,1057like WiFi IP cameras, security panels (e.g., IP-enabled), and security touchscreens, to name a few. The touchscreen1002manages or controls the sub-network separately from the LAN250and transfers data and information between components of the sub-network and the LAN250/WAN200, but is not so limited. Additionally, other network devices254can be coupled to the LAN250without being coupled to the touchscreen1002. The gateway of an embodiment, whether a stand-alone component or integrated with a touchscreen, enables couplings or connections and thus the flow or integration of information between various components of the host premises and various types and/or combinations of IP devices, where the components of the host premises include a network (e.g., LAN) and/or a security system or subsystem to name a few. Consequently, the gateway controls the association between and the flow of information or data between the components of the host premises. For example, the gateway of an embodiment forms a sub-network coupled to another network (e.g., WAN, LAN, etc.), with the sub-network including IP devices. The gateway further enables the association of the IP devices of the sub-network with appropriate systems on the premises (e.g., security system, etc.). Therefore, for example, the gateway can form a sub-network of IP devices configured for security functions, and associate the sub-network only with the premises security system, thereby segregating the IP devices dedicated to security from other IP devices that may be coupled to another network on the premises.
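The segregation described above, in which a sub-network of security IP devices is associated only with the premises security system, can be sketched as a simple association table kept by the gateway. Device, sub-network, and system names are hypothetical.

```python
class Gateway:
    """Tracks which sub-network each IP device belongs to and which
    premises system that sub-network is associated with."""

    def __init__(self):
        self.subnet_of = {}   # device -> sub-network name
        self.system_of = {}   # sub-network -> premises system

    def add_device(self, device, subnet):
        self.subnet_of[device] = subnet

    def associate(self, subnet, system):
        self.system_of[subnet] = system

    def may_flow(self, device, system):
        """Data flows only between a device and the system associated
        with that device's sub-network."""
        return self.system_of.get(self.subnet_of.get(device)) == system

gw = Gateway()
gw.add_device("ip_cam_1", "security_subnet")
gw.add_device("media_player", "lan")
gw.associate("security_subnet", "security_system")
assert gw.may_flow("ip_cam_1", "security_system")       # dedicated device
assert not gw.may_flow("media_player", "security_system")  # segregated
```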
FIG.11is a flow diagram for a method1100of forming a security network including integrated security system components, under an embodiment. Generally, the method comprises coupling1102a gateway comprising a connection management component to a local area network in a first location and a security server in a second location. The method comprises forming1104a security network by automatically establishing a wireless coupling between the gateway and a security system using the connection management component. The security system of an embodiment comprises security system components located at the first location. The method comprises integrating1106communications and functions of the security system components into the security network via the wireless coupling.
FIG.12is a flow diagram for a method1200of forming a security network including integrated security system components and network devices, under an embodiment. Generally, the method comprises coupling1202a gateway to a local area network located in a first location and a security server in a second location. The method comprises automatically establishing1204communications between the gateway and security system components at the first location, the security system including the security system components. The method comprises automatically establishing1206communications between the gateway and premise devices at the first location. The method comprises forming1208a security network by electronically integrating, via the gateway, communications and functions of the premise devices and the security system components. FIG.13is a flow diagram1300for integration or installation of an IP device into a private network environment, under an embodiment. The IP device includes any IP-capable device that, for example, includes the touchscreen of an embodiment. The variables of an embodiment set at time of installation include, but are not limited to, one or more of a private SSID/Password, a gateway identifier, a security panel identifier, a user account TS, and a Central Monitoring Station account identification. An embodiment of the IP device discovery and management begins with a user or installer activating1302the gateway and initiating1304the install mode of the system. This places the gateway in an install mode. Once in install mode, the gateway shifts to a default (Install) Wifi configuration. This setting will match the default setting for other integrated security system-enabled devices that have been pre-configured to work with the integrated security system. The gateway will then begin to provide1306DHCP addresses for these IP devices.
Once the devices have acquired a new DHCP address from the gateway, those devices are available for configuration into a new secured Wifi network setting. The user or installer of the system selects1308all devices that have been identified as available for inclusion into the integrated security system. The user may select these devices by their unique IDs via a web page, Touchscreen, or other client interface. The gateway provides1310data as appropriate to the devices. Once selected, the devices are configured1312with appropriate secured Wifi settings, including SSID and WPA/WPA-2 keys that are used once the gateway switches back to the secured sandbox configuration from the “Install” settings. Other settings are also configured as appropriate for that type of device. Once all devices have been configured, the user is notified and the user can exit install mode. At this point all devices will have been registered1314with the integrated security system servers. The installer switches1316the gateway to an operational mode, and the gateway instructs or directs1318all newly configured devices to switch to the “secured” Wifi sandbox settings. The gateway then switches1320to the “secured” Wifi settings. Once the devices identify that the gateway is active on the “secured” network, they request new DHCP addresses from the gateway which, in response, provides1322the new addresses. The devices with the new addresses are then operational1324on the secured network. In order to ensure the highest level of security on the secured network, the gateway can create or generate a dynamic network security configuration based on the unique ID and private key in the gateway, coupled with a randomizing factor that can be based on online time or other inputs. This guarantees the uniqueness of the gateway secured network configuration. 
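The dynamic network security configuration above, derived from the gateway's unique ID and private key plus a randomizing factor, might be sketched as a key-derivation step. The use of HMAC-SHA256 here is an assumption; the source specifies only the inputs, not the algorithm.

```python
import hashlib
import hmac

def derive_wifi_secret(unique_id: bytes, private_key: bytes, nonce: bytes) -> str:
    """Mix the gateway's identity material with a randomizing factor
    (e.g., based on online time) to yield a per-gateway secured-network
    passphrase. Truncated to a WPA-2-sized 32-character value."""
    digest = hmac.new(private_key, unique_id + nonce, hashlib.sha256).hexdigest()
    return digest[:32]

# Hypothetical gateway identity and randomizing factors.
a = derive_wifi_secret(b"GW-0001", b"factory-private-key", b"nonce-1")
b = derive_wifi_secret(b"GW-0001", b"factory-private-key", b"nonce-1")
c = derive_wifi_secret(b"GW-0001", b"factory-private-key", b"nonce-2")

assert a == b        # deterministic for the same inputs
assert a != c        # a fresh randomizing factor changes the configuration
assert len(a) == 32
```

Because the private key never leaves the gateway, no two gateways produce the same secured-network configuration even from identical nonces, which is the uniqueness property the text describes.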
To enable the highest level of performance, the gateway analyzes the RF spectrum of the 802.11x network and determines which frequency band/channel it should select to run. An alternative embodiment of the camera/IP device management process leverages the local ethernet connection of the sandbox network on the gateway. This alternative process is similar to the Wifi discovery embodiment described above, except the user connects the targeted device to the ethernet port of the sandbox network to begin the process. This alternative embodiment accommodates devices that have not been pre-configured with the default “Install” configuration for the integrated security system. This alternative embodiment of the IP device discovery and management begins with the user/installer placing the system into install mode. The user is instructed to attach an IP device to be installed to the sandbox Ethernet port of the gateway. The IP device requests a DHCP address from the gateway which, in response to the request, provides the address. The user is presented the device and is asked if he/she wants to install the device. If yes, the system configures the device with the secured Wifi settings and other device-specific settings (e.g., camera settings for video length, image quality etc.). The user is next instructed to disconnect the device from the ethernet port. The device is now available for use on the secured sandbox network. FIG.14is a block diagram showing communications among integrated IP devices of the private network environment, under an embodiment. The IP devices of this example include a security touchscreen1403, gateway1402(e.g., “iHub”), and security panel (e.g., “Security Panel 1”, “Security Panel 2”, “Security Panel n”), but the embodiment is not so limited. In alternative embodiments any number and/or combination of these three primary component types may be combined with other components including IP devices and/or security system components. 
For example, a single device that comprises an integrated gateway, touchscreen, and security panel is merely another embodiment of the integrated security system described herein. The description that follows includes an example configuration that includes a touchscreen hosting particular applications. However, the embodiment is not limited to the touchscreen hosting these applications, and the touchscreen should be thought of as representing any IP device. Referring toFIG.14, the touchscreen1403incorporates an application1410that is implemented as computer code resident on the touchscreen operating system, or as a web-based application running in a browser, or as another type of scripted application (e.g., Flash, Java, Visual Basic, etc.). The touchscreen core application1410represents this application, providing user interface and logic for the end user to manage their security system or to gain access to networked information or content (Widgets). The touchscreen core application1410in turn accesses a library or libraries of functions to control the local hardware (e.g. screen display, sound, LEDs, memory, etc.) as well as specialized librarie(s) to couple or connect to the security system. In an embodiment of this security system connection, the touchscreen1403communicates to the gateway1402, and has no direct communication with the security panel. In this embodiment, the touchscreen core application1410accesses the remote service APIs1412which provide security system functionality (e.g. ARM/DISARM panel, sensor state, get/set panel configuration parameters, initiate or get alarm events, etc.). In an embodiment, the remote service APIs1412implement one or more of the following functions, but the embodiment is not so limited: Armstate=setARMState(type=“ARM STAYS|ARM AWAY|DISARM”, Parameters=“ExitDelay=30|Lights=OFF”); sensorState=getSensors(type=“ALL|SensorName|SensorNameList”); result=setSensorState(SensorName, parameters=“Option1, Options2, . . . 
Option n”); interruptHandler=SensorEvent( ); and, interruptHandler=alarmEvent( ). Functions of the remote service APIs1412of an embodiment use a remote PanelConnect API1424which resides in memory on the gateway1402. The touchscreen1403communicates with the gateway1402through a suitable network interface such as an Ethernet or 802.11 RF connection, for example. The remote PanelConnect API1424provides the underlying Security System Interfaces1426used to communicate with and control one or more types of security panel via wired link1430and/or RF link 3. The PanelConnect API1424provides responses and input to the remote service APIs1412, and in turn translates function calls and data to and from the specific protocols and functions supported by a specific implementation of a Security Panel (e.g. a GE Security Simon XT or Honeywell Vista20P). In an embodiment, the PanelConnect API1424uses a 345 MHz RF transceiver or receiver hardware/firmware module to communicate wirelessly to the security panel and directly to a set of 345 MHz RF-enabled sensors and devices, but the embodiment is not so limited. The gateway of an alternative embodiment communicates over a wired physical coupling or connection to the security panel using the panel's specific wired hardware (bus) interface and the panel's bus-level protocol. In an alternative embodiment, the Touchscreen1403implements the same PanelConnect API1414locally on the Touchscreen1403, communicating directly with the Security Panel 2 and/or Sensors 2 over the proprietary RF link or over a wired link for that system. In this embodiment the Touchscreen1403, instead of the gateway1402, incorporates the 345 MHz RF transceiver to communicate directly with Security Panel 2 or Sensors 2 over the RF link 2. In the case of a wired link the Touchscreen1403incorporates the real-time hardware (e.g.
a PIC chip and RS232-variant serial link) to physically connect to and satisfy the specific bus-level timing requirements of the Security Panel 2. In yet another alternative embodiment, either the gateway1402or the Touchscreen1403implements the remote service APIs. This embodiment includes a Cricket device (“Cricket”) which comprises but is not limited to the following components: a processor (suitable for handling 802.11 protocols and processing, as well as the bus timing requirements of Security Panel 1); an 802.11 (WiFi) client IP interface chip; and, a serial bus interface chip that implements variants of RS232 or RS485, depending on the specific Security Panel. The Cricket also implements the full PanelConnect APIs such that it can perform the same functions as the case where the gateway implements the PanelConnect APIs. In this embodiment, the touchscreen core application1410calls functions in the remote service APIs1412(such as setArmState( )). These functions in turn couple or connect to the remote Cricket through a standard IP connection (“Cricket IP Link”) (e.g., Ethernet, Homeplug, the gateway's proprietary Wifi network, etc.). The Cricket in turn implements the PanelConnect API, which responds to the request from the touchscreen core application, and performs the appropriate function using the proprietary panel interface. This interface uses either the wireless or wired proprietary protocol for the specific security panel and/or sensors. FIG.15is a flow diagram of a method of integrating an external control and management application system with an existing security system, under an embodiment. Operations begin when the system is powered on1510, involving at a minimum the power-on of the gateway device, and optionally the power-on of the connection between the gateway device and the remote servers. The gateway device initiates1520a software and RF sequence to locate the extant security system. 
The gateway and installer initiate and complete1530a sequence to ‘learn’ the gateway into the security system as a valid and authorized control device. The gateway initiates1540another software and RF sequence of instructions to discover and learn the existence and capabilities of existing RF devices within the extant security system, and store this information in the system. These operations under the system of an embodiment are described in further detail below. Unlike conventional systems that extend an existing security system, the system of an embodiment operates utilizing the proprietary wireless protocols of the security system manufacturer. In one illustrative embodiment, the gateway is an embedded computer with an IP LAN and WAN connection and a plurality of RF transceivers and software protocol modules capable of communicating with a plurality of security systems each with a potentially different RF and software protocol interface. After the gateway has completed the discovery and learning1540of sensors and has been integrated1550as a virtual control device in the extant security system, the system becomes operational. Thus, the security system and associated sensors are presented1550as accessible devices to a potential plurality of user interface subsystems. The system of an embodiment integrates1560the functionality of the extant security system with other non-security devices including but not limited to IP cameras, touchscreens, lighting controls, door locking mechanisms, which may be controlled via RF, wired, or powerline-based networking mechanisms supported by the gateway or servers. The system of an embodiment provides a user interface subsystem1570enabling a user to monitor, manage, and control the system and associated sensors and security systems. 
In an embodiment of the system, a user interface subsystem is an HTML/XML/Javascript/Java/AJAX/Flash presentation of a monitoring and control application, enabling users to view the state of all sensors and controllers in the extant security system from a web browser or equivalent operating on a computer, PDA, mobile phone, or other consumer device. In another illustrative embodiment of the system described herein, a user interface subsystem is an HTML/XML/Javascript/Java/AJAX presentation of a monitoring and control application, enabling users to combine the monitoring and control of the extant security system and sensors with the monitoring and control of non-security devices including but not limited to IP cameras, touchscreens, lighting controls, door locking mechanisms. In another illustrative embodiment of the system described herein, a user interface subsystem is a mobile phone application enabling users to monitor and control the extant security system as well as other non-security devices. In another illustrative embodiment of the system described herein, a user interface subsystem is an application running on a keypad or touchscreen device enabling users to monitor and control the extant security system as well as other non-security devices. In another illustrative embodiment of the system described herein, a user interface subsystem is an application operating on a TV or set-top box connected to a TV enabling users to monitor and control the extant security system as well as other non-security devices. FIG.16is a block diagram of an integrated security system1600wirelessly interfacing to proprietary security systems, under an embodiment. A security system1610is coupled or connected to a Gateway1620, and from Gateway1620coupled or connected to a plurality of information and content sources across a network1630including one or more web servers1640, system databases1650, and applications servers1660. 
While in one embodiment network1630is the Internet, including the World Wide Web, those of skill in the art will appreciate that network1630may be any type of network, such as an intranet, an extranet, a virtual private network (VPN), a mobile network, or a non-TCP/IP based network. Moreover, other elements of the system of an embodiment may be conventional, well-known elements that need not be explained in detail herein. For example, security system1610could be any type of home or business security system, such devices including but not limited to a standalone RF home security system or a non-RF-capable wired home security system with an add-on RF interface module. In the integrated security system1600of this example, security system1610includes an RF-capable wireless security panel (WSP)1611that acts as the master controller for security system1610. Well-known examples of such a WSP include the GE Security Concord, Networx, and Simon panels, the Honeywell Vista and Lynx panels, and similar panels from DSC and Napco, to name a few. A wireless module1614includes the RF hardware and protocol software necessary to enable communication with and control of a plurality of wireless devices1613. WSP1611may also manage wired devices1614physically connected to WSP1611with an RS232 or RS485 or Ethernet connection or similar such wired interface. In an implementation consistent with the systems and methods described herein, Gateway1620provides the interface between security system1610and LAN and/or WAN for purposes of remote control, monitoring, and management. Gateway1620communicates with an external web server1640, database1650, and application server1660over network1630(which may comprise WAN, LAN, or a combination thereof). In this example system, application logic, remote user interface functionality, as well as user state and account are managed by the combination of these remote servers.
Gateway1620includes server connection manager1621, a software interface module responsible for all server communication over network1630. Event manager1622implements the main event loop for Gateway1620, processing events received from device manager1624(communicating with non-security system devices including but not limited to IP cameras, wireless thermostats, or remote door locks). Event manager1622further processes events and control messages from and to security system1610by utilizing WSP manager1623. WSP manager1623and device manager1624both rely upon wireless protocol manager1626which receives and stores the proprietary or standards-based protocols required to support security system1610as well as any other devices interfacing with gateway1620. WSP manager1623further utilizes the comprehensive protocols and interface algorithms for a plurality of security systems1610stored in the WSP DB client database associated with wireless protocol manager1626. These various components implement the software logic and protocols necessary to communicate with and manage devices and security systems1610. Wireless Transceiver hardware modules1625are then used to implement the physical RF communications link to such devices and security systems1610. An illustrative wireless transceiver1625is the GE Security Dialog circuit board, implementing a 319.5 MHz two-way RF transceiver module. In this example, RF Link1670represents the 319.5 MHz RF communication link, enabling gateway1620to monitor and control WSP1611and associated wireless and wired devices1613and1614, respectively. In one embodiment, server connection manager1621requests and receives a set of wireless protocols for a specific security system1610(an illustrative example being that of the GE Security Concord panel and sensors) and stores them in the WSP DB portion of the wireless protocol manager1626.
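The division of labor inside Gateway1620 can be sketched as an event loop that dispatches security-system traffic to the WSP manager and everything else to the device manager. Event names and the two-way split are hypothetical simplifications of the components named above.

```python
from collections import deque

class EventManager:
    """Main event loop: routes security-panel events to the WSP manager
    and non-security device events to the device manager."""

    def __init__(self):
        self.queue = deque()
        self.handled = []   # (handling module, event) pairs, for inspection

    def post(self, source, event):
        """Queue an event from any source (panel, camera, thermostat...)."""
        self.queue.append((source, event))

    def run_once(self):
        """Drain the queue, dispatching each event to the proper manager."""
        while self.queue:
            source, event = self.queue.popleft()
            if source == "wsp":
                self.handled.append(("wsp_manager", event))
            else:
                self.handled.append(("device_manager", event))

em = EventManager()
em.post("wsp", "alarm")          # from the security panel via the RF link
em.post("ip_camera", "motion")   # from a non-security IP device
em.run_once()
assert em.handled == [("wsp_manager", "alarm"), ("device_manager", "motion")]
```

In the real gateway both managers in turn consult the wireless protocol manager for the protocol definitions needed to talk to each device; that lookup is omitted here.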
WSP manager1623then utilizes such protocols from wireless protocol manager1626to initiate the sequence of processes detailed inFIG.15andFIG.16for learning gateway1620into security system1610as an authorized control device. Once learned in, as described with reference toFIG.16(and above), event manager1622processes all events and messages detected by the combination of WSP manager1623and the GE Security wireless transceiver module1625. In another embodiment, gateway1620incorporates a plurality of wireless transceivers1625and associated protocols managed by wireless protocol manager1626. In this embodiment events and control of multiple heterogeneous devices may be coordinated with WSP1611, wireless devices1613, and wired devices1614. For example a wireless sensor from one manufacturer may be utilized to control a device using a different protocol from a different manufacturer. In another embodiment, gateway1620incorporates a wired interface to security system1610, and incorporates a plurality of wireless transceivers1625and associated protocols managed by wireless protocol manager1626. In this embodiment events and control of multiple heterogeneous devices may be coordinated with WSP1611, wireless devices1613, and wired devices1614. Of course, while an illustrative embodiment of an architecture of the system of an embodiment is described in detail herein with respect toFIG.16, one of skill in the art will understand that modifications to this architecture may be made without departing from the scope of the description presented herein. For example, the functionality described herein may be allocated differently between client and server, or amongst different server or processor-based components. Likewise, the entire functionality of the gateway1620described herein could be integrated completely within an existing security system1610. 
In such an embodiment, the architecture could be directly integrated with a security system1610in a manner consistent with the currently described embodiments. FIG.17is a flow diagram for wirelessly ‘learning’ the Gateway into an existing security system and discovering extant sensors, under an embodiment. The learning process interfaces gateway1620with security system1610. Gateway1620powers up1710and initiates software sequences1720and1725to identify accessible WSPs1611and wireless devices1613, respectively (e.g., one or more WSPs and/or devices within range of gateway1620). Once identified, WSP1611is manually or automatically set into ‘learn mode’1730, and gateway1620utilizes available protocols to add1740itself as an authorized control device in security system1610. Upon successful completion of this task, WSP1611is manually or automatically removed from ‘learn mode’1750. Gateway1620utilizes the appropriate protocols to mimic1760the first identified device1614. In this operation gateway1620identifies itself using the unique or pseudo-unique identifier of the first found device1614, and sends an appropriate change of state message over RF Link1670. In the event that WSP1611responds to this change of state message, the device1614is then added1770to the system in database1650. Gateway1620associates1780any other information (such as zone name or token-based identifier) with this device1614in database1650, enabling gateway1620, user interface modules, or any application to retrieve this associated information. In the event that WSP1611does not respond to the change of state message, the device1614is not added1770to the system in database1650, and this device1614is identified as not being a part of security system1610with a flag, and is either ignored or added as an independent device, at the discretion of the system provisioning rules. Operations hereunder repeat1785operations1760,1770,1780for all devices1614if applicable.
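The mimic-and-verify loop of operations 1760-1780 above can be sketched as follows. The WSP response check is reduced to a callback, and the device identifiers and database are hypothetical stand-ins.

```python
def learn_devices(devices, wsp_responds, database):
    """For each candidate device, mimic its identifier with a change-of-state
    message; add it to the database only if the WSP acknowledges it,
    otherwise flag it as not part of this security system."""
    for device_id in devices:
        if wsp_responds(device_id):   # WSP answered our mimicked message
            database[device_id] = {"member": True}
        else:                         # not part of the extant system
            database[device_id] = {"member": False, "flag": "independent"}
    return database

# Hypothetical system: the WSP recognizes sensors 101 and 102, but not 999.
known = {101, 102}
db = learn_devices([101, 102, 999], lambda d: d in known, {})
assert db[101]["member"] and db[102]["member"]
assert db[999] == {"member": False, "flag": "independent"}
```

Whether a flagged device is ignored or added as an independent device is left to provisioning rules, as the text notes; the sketch simply records the flag.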
Once all devices1614have been tested in this way, the system begins operation1790. In another embodiment, gateway1620utilizes a wired connection to WSP1611, but also incorporates a wireless transceiver1625to communicate directly with devices1614. In this embodiment, operations under1720above are removed, and operations under1740above are modified so the system of this embodiment utilizes wireline protocols to add itself as an authorized control device in security system1610. A description of an example embodiment follows in which the Gateway (FIG.16, element1620) is the iHub available from iControl Networks, Palo Alto, CA, and described in detail herein. In this example the gateway is “automatically” installed with a security system. The automatic security system installation begins with the assignment of an authorization key to components of the security system (e.g., gateway, kit including the gateway, etc.). The assignment of an authorization key is done in lieu of creating a user account. An installer later places the gateway in a user's premises along with the premises security system. The installer uses a computer to navigate to a web portal (e.g., integrated security system web interface), logs in to the portal, and enters the authorization key of the installed gateway into the web portal for authentication. Once authenticated, the gateway automatically discovers devices at the premises (e.g., sensors, cameras, light controls, etc.) and adds the discovered devices to the system or “network”. The installer assigns names to the devices, and tests operation of the devices back to the server (e.g., did the door open, did the camera take a picture, etc.). The security device information is optionally pushed or otherwise propagated to a security panel and/or to the server network database. The installer finishes the installation, and instructs the end user on how to create an account, username, and password. 
At this time the user enters the authorization key which validates the account creation (uses a valid authorization key to associate the network with the user's account). New devices may subsequently be added to the security network in a variety of ways (e.g., user first enters a unique ID for each device/sensor and names it in the server, after which the gateway can automatically discover and configure the device). A description of another example embodiment follows in which the security system (FIG.16, element1610) is a Dialog system and the WSP (FIG.16, element1611) is a SimonXT available from General Electric Security, and the Gateway (FIG.16, element1620) is the iHub available from iControl Networks, Palo Alto, CA, and described in detail herein. Descriptions of the install process for the SimonXT and iHub are also provided below. GE Security's Dialog network is one of the most widely deployed and tested wireless security systems in the world. The physical RF network is based on a 319.5 MHz unlicensed spectrum, with a bandwidth supporting up to 19 Kbps communications. Typical use of this bandwidth—even in conjunction with the integrated security system—is far less than that. Devices on this network can support either one-way communication (either a transmitter or a receiver) or two-way communication (a transceiver). Certain GE Simon, Simon XT, and Concord security control panels incorporate a two-way transceiver as a standard component. The gateway also incorporates the same two-way transceiver card. The physical link layer of the network is managed by the transceiver module hardware and firmware, while the coded payload bitstreams are made available to the application layer for processing. Sensors in the Dialog network typically use a 60-bit protocol for communicating with the security panel transceiver, while security system keypads and the gateway use the encrypted 80-bit protocol. 
The Dialog network is configured for reliability, as well as low-power usage. Many devices are supervised, i.e. they are regularly monitored by the system ‘master’ (typically a GE security panel), while still maintaining excellent power usage characteristics. A typical door window sensor has a battery life in excess of 5-7 years. The gateway has two modes of operation in the Dialog network: a first mode of operation is when the gateway is configured or operates as a ‘slave’ to the GE security panel; a second mode of operation is when the gateway is configured or operates as a ‘master’ to the system in the event a security panel is not present. In both configurations, the gateway has the ability to ‘listen’ to network traffic, enabling the gateway to continually keep track of the status of all devices in the system. Similarly, in both situations the gateway can address and control devices that support setting adjustments (such as the GE wireless thermostat). In the configuration in which the gateway acts as a ‘slave’ to the security panel, the gateway is ‘learned into’ the system as a GE wireless keypad. In this mode of operation, the gateway emulates a security system keypad when managing the security panel, and can query the security panel for status and ‘listen’ to security panel events (such as alarm events). The gateway incorporates an RF Transceiver manufactured by GE Security, but is not so limited. This transceiver implements the Dialog protocols and handles all network message transmissions, receptions, and timing. As such, the physical, link, and protocol layers of the communications between the gateway and any GE device in the Dialog network are totally compliant with GE Security specifications. At the application level, the gateway emulates the behavior of a GE wireless keypad utilizing the GE Security 80-bit encrypted protocol, and only supported protocols and network traffic are generated by the gateway. 
Extensions to the Dialog RF protocol of an embodiment enable full control and configuration of the panel, and iControl can both automate installation and sensor enrollment as well as direct configuration downloads for the panel under these protocol extensions. As described above, the gateway participates in the GE Security network at the customer premises. Because the gateway has intelligence and a two-way transceiver, it can ‘hear’ all of the traffic on that network. The gateway makes use of the periodic sensor updates, state changes, and supervisory signals of the network to maintain a current state of the premises. This data is relayed to the integrated security system server (e.g.,FIG.2, element260) and stored in the event repository for use by other server components. This usage of the GE Security RF network is completely non-invasive; there is no new data traffic created to support this activity. The gateway can directly (or indirectly through the Simon XT panel) control two-way devices on the network. For example, the gateway can direct a GE Security Thermostat to change its setting to ‘Cool’ from ‘Off’, as well as request an update on the current temperature of the room. The gateway performs these functions using the existing GE Dialog protocols, with little to no impact on the network; a gateway device control or data request takes only a few dozen bytes of data in a network that can support 19 Kbps. By enrolling with the Simon XT as a wireless keypad, as described herein, the gateway receives data or information of all alarm events, as well as state changes relevant to the security panel. This information is transferred to the gateway as encrypted packets in the same way that the information is transferred to all other wireless keypads on the network. Because of its status as an authorized keypad, the gateway can also initiate the same panel commands that a keypad can initiate.
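The bandwidth claim above can be checked with simple arithmetic. Assuming an illustrative 48-byte request (the text says only "a few dozen bytes") and taking 19 Kbps as 19,000 bits per second:

```python
def airtime_seconds(payload_bytes, link_bps=19_000):
    """Time a payload occupies the link, ignoring framing, ACKs, and retries."""
    return payload_bytes * 8 / link_bps

# A ~48-byte control request occupies the 19 Kbps Dialog link for roughly
# 20 ms, a negligible fraction of the channel's capacity.
request_airtime = airtime_seconds(48)
```

Even frequent polling at this message size would consume well under one percent of the available channel.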
For example, the gateway can arm or disarm the panel using the standard Dialog protocol for this activity. Other than the monitoring of standard alarm events like other network keypads, the only incremental data traffic on the network as a result of the gateway is the infrequent remote arm/disarm events that the gateway initiates, or infrequent queries on the state of the panel. The gateway is enrolled into the Simon XT panel as a ‘slave’ device which, in an embodiment, is a wireless keypad. This provides the gateway with all functionality necessary for operating the Simon XT system remotely, as well as combining the actions and information of non-security devices such as lighting or door locks with GE Security devices. The only resource taken up by the gateway in this scenario is one wireless zone (sensor ID). The gateway of an embodiment supports three forms of sensor and panel enrollment/installation into the integrated security system, but is not limited to this number of enrollment/installation options. The enrollment/installation options of an embodiment include installer installation, kitting, and panel, each of which is described below. Under the installer option, the installer enters the sensor IDs at time of installation into the integrated security system web portal or iScreen. This technique is supported in all configurations and installations. Kits can be pre-provisioned using integrated security system provisioning applications when using the kitting option. At kitting time, multiple sensors are automatically associated with an account, and at install time there is no additional work required. In the case where a panel is installed with sensors already enrolled (i.e. using the GE Simon XT enrollment process), the gateway has the capability to automatically extract the sensor information from the system and incorporate it into the user account on the integrated security system server.
The gateway and integrated security system of an embodiment uses an auto-learn process for sensor and panel enrollment in an embodiment. The deployment approach of an embodiment can use additional interfaces that GE Security is adding to the Simon XT panel. With these interfaces, the gateway has the capability to remotely enroll sensors in the panel automatically. The interfaces include, but are not limited to, the following: EnrollDevice(ID, type, name, zone, group); SetDeviceParameters(ID, type, Name, zone, group), GetDeviceParameters(zone); and RemoveDevice(zone). The integrated security system incorporates these new interfaces into the system, providing the following install process. The install process can include integrated security system logistics to handle kitting and pre-provisioning. Pre-kitting and logistics can include a pre-provisioning kitting tool provided by integrated security system that enables a security system vendor or provider (“provider”) to offer pre-packaged initial ‘kits’. This is not required but is recommended for simplifying the install process. This example assumes a ‘Basic’ kit is preassembled and includes one (1) Simon XT, three (3) Door/window sensors, one (1) motion sensor, one (1) gateway, one (1) keyfob, two (2) cameras, and ethernet cables. The kit also includes a sticker page with all Zones (1-24) and Names (full name list). The provider uses the integrated security system kitting tool to assemble ‘Basic’ kit packages. The contents of different types of starter kits may be defined by the provider. At the distribution warehouse, a worker uses a bar code scanner to scan each sensor and the gateway as it is packed into the box. An ID label is created that is attached to the box. The scanning process automatically associates all the devices with one kit, and the new ID label is the unique identifier of the kit. These boxes are then sent to the provider for distribution to installer warehouses. Individual sensors, cameras, etc. 
are also sent to the provider installer warehouse. Each is labeled with its own barcode/ID. An installation and enrollment procedure of a security system including a gateway is described below as one example of the installation process. 1. Order and Physical Install Processa. Once an order is generated in the iControl system, an account is created and an install ticket is created and sent electronically to the provider for assignment to an installer.b. The assigned installer picks up his/her ticket(s) and fills his/her truck with Basic and/or Advanced starter kits. He/she also keeps a stock of individual sensors, cameras, iHubs, Simon XTs, etc. Optionally, the installer can also stock homeplug adapters for problematic installations.c. The installer arrives at the address on the ticket, and pulls out the Basic kit. The installer determines sensor locations from a tour of the premises and discussion with the homeowner. At this point assume the homeowner requests additional equipment including an extra camera, two (2) additional door/window sensors, one (1) glass break detector, and one (1) smoke detector.d. Installer mounts SimonXT in the kitchen or other location in the home as directed by the homeowner, and routes the phone line to Simon XT if available. GPRS and Phone numbers are pre-programmed in SimonXT to point to the provider Central Monitoring Station (CMS).e. Installer places gateway in the home in the vicinity of a router and cable modem. Installer installs an ethernet line from gateway to router and plugs gateway into an electrical outlet. 2. Associate and Enroll gateway into SimonXTa. Installer uses either his/her own laptop plugged into router, or homeowner's computer to go to the integrated security system web interface and log in with installer ID/pass.b. Installer enters ticket number into admin interface, and clicks ‘New Install’ button. Screen prompts installer for kit ID (on box's barcode label).c. Installer clicks ‘Add SimonXT’.
Instructions prompt installer to put Simon XT into install mode, and add gateway as a wireless keypad. It is noted that this step is for security only and can be automated in an embodiment.d. Installer enters the installer code into the Simon XT. Installer learns ‘gateway’ into the panel as a wireless keypad as a group 1 device.e. Installer goes back to Web portal, and clicks the ‘Finished Adding SimonXT’ button. 3. Enroll Sensors into SimonXT Via iControla. All devices in the Basic kit are already associated with the user's account.b. For additional devices, Installer clicks ‘Add Device’ and adds the additional camera to the user's account (by typing in the camera ID/Serial #).c. Installer clicks ‘Add Device’ and adds other sensors (two (2) door/window sensors, one (1) glass break sensor, and one (1) smoke sensor) to the account (e.g., by typing in IDs).d. As part of Add Device, Installer assigns zone, name, and group to the sensor. Installer puts appropriate Zone and Name sticker on the sensor temporarily.e. All sensor information for the account is pushed or otherwise propagated to the iConnect server, and is available to propagate to CMS automation software through the CMS application programming interface (API).f. Web interface displays ‘Installing Sensors in System . . . ’ and automatically adds all of the sensors to the Simon XT panel through the GE RF link.g. Web interface displays ‘Done Installing’→all sensors show green. 4. Place and Test Sensors in Homea. Installer physically mounts each sensor in its desired location, and removes the stickers.b. Installer physically mounts WiFi cameras in their location and plugs into AC power. Optional fishing of low voltage wire through wall to remove dangling wires. Camera transformer is still plugged into outlet but wire is now inside the wall.c. Installer goes to Web interface and is prompted for automatic camera install.
Each camera is provisioned as a private, encrypted Wifi device on the gateway secured sandbox network, and firewall NAT traversal is initiated. Upon completion the customer is prompted to test the security system.d. Installer selects the ‘Test System’ button on the web portal—the SimonXT is put into Test mode by the gateway over GE RF.e. Installer manually tests the operation of each sensor, receiving an audible confirmation from SimonXT.f. gateway sends test data directly to CMS over broadband link, as well as storing the test data in the user's account for subsequent report generation.g. Installer exits test mode from the Web portal. 5. Installer instructs customer on use of the Simon XT, and shows customer how to log into the iControl web and mobile portals. Customer creates a username/password at this time. 6. Installer instructs customer how to change Simon XT user code from the Web interface. Customer changes user code which is pushed to SimonXT automatically over GE RF. An installation and enrollment procedure of a security system including a gateway is described below as an alternative example of the installation process. This installation process is for use for enrolling sensors into the SimonXT and integrated security system and is compatible with all existing GE Simon panels. The integrated security system supports all pre-kitting functionality described in the installation process above. However, for the purpose of the following example, no kitting is used. 1. Order and Physical Install Processa. Once an order is generated in the iControl system, an account is created and an install ticket is created and sent electronically to the security system provider for assignment to an installer.b. The assigned installer picks up his/her ticket(s) and fills his/her truck with individual sensors, cameras, iHubs, Simon XTs, etc. Optionally, the installer can also stock homeplug adapters for problematic installations.c. 
The installer arrives at the address on the ticket, and analyzes the house and talks with the homeowner to determine sensor locations. At this point assume the homeowner requests three (3) cameras, five (5) door/window sensors, one (1) glass break detector, one (1) smoke detector, and one (1) keyfob.d. Installer mounts SimonXT in the kitchen or other location in the home. The installer routes a phone line to Simon XT if available. GPRS and Phone numbers are pre-programmed in SimonXT to point to the provider CMS.e. Installer places gateway in home in the vicinity of a router and cable modem, and installs an ethernet line from gateway to the router, and plugs gateway into an electrical outlet. 2. Associate and Enroll gateway into SimonXTa. Installer uses either his/her own laptop plugged into router, or homeowner's computer to go to the integrated security system web interface and log in with an installer ID/pass.b. Installer enters ticket number into admin interface, and clicks ‘New Install’ button. Screen prompts installer to add devices.c. Installer types in ID of gateway, and it is associated with the user's account.d. Installer clicks ‘Add Device’ and adds the cameras to the user's account (by typing in the camera ID/Serial #).e. Installer clicks ‘Add SimonXT’. Instructions prompt installer to put Simon XT into install mode, and add gateway as a wireless keypad.f. Installer goes to Simon XT and enters the installer code into the Simon XT. The installer learns ‘gateway’ into the panel as a wireless keypad as group 1 type sensor.g. Installer returns to Web portal, and clicks the ‘Finished Adding SimonXT’ button.h. Gateway now is alerted to all subsequent installs over the security system RF. 3. Enroll Sensors into SimonXT via iControla. Installer clicks ‘Add Simon XT Sensors’—Displays instructions for adding sensors to Simon XT.b. Installer goes to Simon XT and uses Simon XT install process to add each sensor, assigning zone, name, group.
These assignments are recorded for later use.c. The gateway automatically detects each sensor addition and adds the new sensor to the integrated security system.d. Installer exits install mode on the Simon XT, and returns to the Web portal.e. Installer clicks ‘Done Adding Devices’.f. Installer enters zone/sensor naming from recorded notes into integrated security system to associate sensors to friendly names.g. All sensor information for the account is pushed to the iConnect server, and is available to propagate to CMS automation software through the CMS API. 4. Place and Test Sensors in Homea. Installer physically mounts each sensor in its desired location.b. Installer physically mounts Wifi cameras in their location and plugs into AC power. Optional fishing of low voltage wire through wall to remove dangling wires. Camera transformer is still plugged into outlet but wire is now inside the wall.c. Installer puts SimonXT into Test mode from the keypad.d. Installer manually tests the operation of each sensor, receiving an audible confirmation from SimonXT.e. Installer exits test mode from the Simon XT keypad.f. Installer returns to web interface and is prompted to automatically set up cameras. After waiting for completion cameras are now provisioned and operational. 5. Installer instructs customer on use of the Simon XT, and shows customer how to log into the integrated security system web and mobile portals. Customer creates a username/password at this time. 6. Customer and Installer observe that all sensors/cameras are green. 7. Installer instructs customer how to change Simon XT user code from the keypad. Customer changes user code and stores in SimonXT. 8. The first time the customer uses the web portal to Arm/Disarm system the web interface prompts the customer for the user code, which is then stored securely on the server. In the event the user code is changed on the panel the web interface once again prompts the customer.
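The four panel interfaces listed earlier (EnrollDevice, SetDeviceParameters, GetDeviceParameters, RemoveDevice) can be sketched as operations on a zone-indexed table. This Python sketch is a hypothetical stand-in: the real interfaces operate against the Simon XT panel over the GE RF link, not an in-memory dictionary.

```python
class PanelEnrollmentInterface:
    """Sketch of the remote-enrollment interfaces named in the text:
    EnrollDevice, SetDeviceParameters, GetDeviceParameters, RemoveDevice.
    A zone-indexed dict stands in for the panel's sensor table."""

    def __init__(self):
        self._zones = {}  # zone -> device parameters

    def enroll_device(self, id, type, name, zone, group):
        # EnrollDevice(ID, type, name, zone, group): each zone holds one device
        if zone in self._zones:
            raise ValueError(f"zone {zone} already enrolled")
        self._zones[zone] = {"id": id, "type": type, "name": name, "group": group}

    def set_device_parameters(self, id, type, name, zone, group):
        # SetDeviceParameters(ID, type, Name, zone, group): overwrite in place
        self._zones[zone] = {"id": id, "type": type, "name": name, "group": group}

    def get_device_parameters(self, zone):
        # GetDeviceParameters(zone)
        return self._zones[zone]

    def remove_device(self, zone):
        # RemoveDevice(zone)
        self._zones.pop(zone, None)
```

With interfaces of this shape, the gateway can enroll sensors in the panel without the installer keying each one in at the keypad.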
The panel of an embodiment can be programmed remotely. The CMS pushes new programming to SimonXT over a telephone or GPRS link. Optionally, iControl and GE provide a broadband link or coupling to the gateway and then a link from the gateway to the Simon XT over GE RF. In addition to the configurations described above, the gateway of an embodiment supports takeover configurations in which it is introduced or added into a legacy security system. A description of example takeover configurations follows in which the security system (FIG.2, element210) is a Dialog system and the WSP (FIG.2, element211) is a GE Concord panel (e.g., equipped with POTS, GE RF, and Superbus 2000 RS485 interface; in the case of a Lynx takeover the Simon XT is used) available from General Electric Security. The gateway (FIG.2, element220) in the takeover configurations is an iHub (e.g., equipped with built-in 802.11 b/g router, Ethernet Hub, GSM/GPRS card, RS485 interface, and iControl Honeywell-compatible RF card) available from iControl Networks, Palo Alto, CA. While components of particular manufacturers are used in this example, the embodiments are not limited to these components or to components from these vendors. The security system can optionally include RF wireless sensors (e.g., GE wireless sensors utilizing the GE Dialog RF technology), IP cameras, a GE-iControl Touchscreen (the touchscreen is assumed to be an optional component in the configurations described herein, and is thus treated separately from the iHub; in systems in which the touchscreen is a component of the base security package, the integrated iScreen (available from iControl Networks, Palo Alto, CA) can be used to combine iHub technology with the touchscreen in a single unit), and Z-Wave devices to name a few. The takeover configurations described below assume takeover by a “new” system of an embodiment of a security system provided by another third party vendor, referred to herein as an “original” or “legacy” system.
Generally, the takeover begins with removal of the control panel and keypad of the legacy system. A GE Concord panel is installed to replace the control panel of the legacy system along with an iHub with GPRS Modem. The legacy system sensors are then connected or wired to the Concord panel, and a GE keypad or touchscreen is installed to replace the keypad of the legacy system. The iHub includes the iControl RF card, which is compatible with the legacy system. The iHub finds and manages the wireless sensors of the legacy system, and learns the sensors into the Concord by emulating the corresponding GE sensors. The iHub effectively acts as a relay for legacy wireless sensors. Once takeover is complete, the new security system provides a homogeneous system that removes the compromises inherent in taking over or replacing a legacy system. For example, the new system provides a modern touchscreen that may include additional functionality, new services, and supports integration of sensors from various manufacturers. Furthermore, lower support costs can be realized because call centers, installers, etc. are only required to support one architecture. Additionally, there is minimal install cost because only the panel is required to be replaced as a result of the configuration flexibility offered by the iHub. The system takeover configurations described below include but are not limited to a dedicated wireless configuration, a dedicated wireless configuration that includes a touchscreen, and a fished Ethernet configuration. Each of these configurations is described in detail below. FIG.18is a block diagram of a security system in which the legacy panel is replaced with a GE Concord panel wirelessly coupled to an iHub, under an embodiment. All existing wired and RF sensors remain in place. The iHub is located near the Concord panel, and communicates with the panel via the 802.11 link, but is not so limited. The iHub manages cameras through a built-in 802.11 router.
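The relay role of the iHub described above (capturing a legacy sensor's RF signal and emulating the equivalent GE sensor toward the Concord panel) can be sketched as follows. The event and message shapes and the `legacy_to_ge` mapping are hypothetical stand-ins for the RF-level translation.

```python
def relay_legacy_event(event, legacy_to_ge, panel_bus):
    """Translate a captured legacy RF sensor event into the equivalent
    'spoofed' GE sensor message and push it toward the Concord panel."""
    ge_id = legacy_to_ge[event["sensor_id"]]   # legacy ID -> learned GE ID
    message = {"sensor_id": ge_id, "state": event["state"]}
    panel_bus.append(message)                  # stands in for the panel link
    return message
```

Because the Concord only ever sees GE-format sensor IDs that were learned in during takeover, it treats the relayed legacy sensors exactly like native GE devices.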
The iHub listens to the existing RF HW sensors, and relays sensor information to the Concord panel (emulating the equivalent GE sensor). The wired sensors of the legacy system are connected to the wired zones on the control panel. FIG.19is a block diagram of a security system in which the legacy panel is replaced with a GE Concord panel wirelessly coupled to an iHub, and a GE-iControl Touchscreen, under an embodiment. All existing wired and RF sensors remain in place. The iHub is located near the Concord panel, and communicates with the panel via the 802.11 link, but is not so limited. The iHub manages cameras through a built-in 802.11 router. The iHub listens to the existing RF HW sensors, and relays sensor information to the Concord panel (emulating the equivalent GE sensor). The wired sensors of the legacy system are connected to the wired zones on the control panel. The GE-iControl Touchscreen can be used with either an 802.11 connection or an Ethernet connection with the iHub. Because the takeover involves a GE Concord panel (or Simon XT), the touchscreen is always an option. No extra wiring is required for the touchscreen as it can use the 4-wire set from the replaced keypad of the legacy system. This provides power, battery backup (through Concord), and data link (RS485 Superbus 2000) between Concord and touchscreen. The touchscreen receives its broadband connectivity through the dedicated 802.11 link to the iHub. FIG.20is a block diagram of a security system in which the legacy panel is replaced with a GE Concord panel connected to an iHub via an Ethernet coupling, under an embodiment. All existing wired and RF sensors remain in place. The iHub is located near the Concord panel, and wired to the panel using a 4-wire Superbus 2000 (RS485) interface, but is not so limited. The iHub manages cameras through a built-in 802.11 router.
The iHub listens to the existing RF HW sensors, and relays sensor information to the Concord panel (emulating the equivalent GE sensor). The wired sensors of the legacy system are connected to the wired zones on the control panel. The takeover installation process is similar to the installation process described above, except the control panel of the legacy system is replaced; therefore, only the differences with the installation described above are provided here. The takeover approach of an embodiment uses the existing RS485 control interfaces that GE Security and iControl support with the iHub, touchscreen, and Concord panel. With these interfaces, the iHub is capable of automatically enrolling sensors in the panel. The exception is the leverage of an iControl RF card compatible with legacy systems to ‘takeover’ existing RF sensors. A description of the takeover installation process follows. During the installation process, the iHub uses an RF Takeover Card to automatically extract all sensor IDs, zones, and names from the legacy panel. The installer removes connections at the legacy panel from hardwired wired sensors and labels each with the zone. The installer pulls the legacy panel and replaces it with the GE Concord panel. The installer also pulls the existing legacy keypad and replaces it with either a GE keypad or a GE-iControl touchscreen. The installer connects legacy hardwired sensors to appropriate wired zone (from labels) on the Concord. The installer connects the iHub to the local network and connects the iHub RS485 interface to the Concord panel. The iHub automatically ‘enrolls’ legacy RF sensors into the Concord panel as GE sensors (maps IDs), and pushes or otherwise propagates other information gathered from HW panel (zone, name, group). The installer performs a test of all sensors back to CMS. In operation, the iHub relays legacy sensor data to the Concord panel, emulating equivalent GE sensor behavior and protocols. 
The areas of the installation process particular to the legacy takeover include how the iHub extracts sensor info from the legacy panel and how the iHub automatically enrolls legacy RF sensors and populates Concord with wired zone information. Each of these areas is described below. In having the iHub extract sensor information from the legacy panel, the installer ‘enrolls’ iHub into the legacy panel as a wireless keypad (using the install code and house ID available from the panel). The iHub legacy RF Takeover Card is a compatible legacy RF transceiver. The installer uses the web portal to place iHub into ‘Takeover Mode’, and the web portal then automatically instructs the iHub to begin extraction. The iHub queries the panel over the RF link (to get all zone information for all sensors, wired and RF). The iHub then stores the legacy sensor information received during the queries on the iConnect server. The iHub also automatically enrolls legacy RF sensors and populates Concord with wired zone information. In so doing, the installer selects ‘Enroll legacy Sensors into Concord’ (next step in ‘Takeover’ process on web portal). The iHub automatically queries the iConnect server, and downloads legacy sensor information previously extracted. The downloaded information includes an ID mapping from legacy ID to ‘spoofed’ GE ID. This mapping is stored on the server as part of the sensor information (e.g., the iConnect server knows that the sensor is a legacy sensor acting in GE mode). The iHub instructs Concord to go into install mode, and sends appropriate Superbus 2000 commands for sensor learning to the panel. For each sensor, the ‘spoofed’ GE ID is loaded, and zone, name, and group are set based on information extracted from legacy panel. Upon completion, the iHub notifies the server, and the web portal is updated to reflect next phase of Takeover (e.g., ‘Test Sensors’). Sensors are tested in the same manner as described above.
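The extract-then-enroll bookkeeping described above can be sketched as follows. The record fields and the `make_spoofed_ge_id` callback are hypothetical; in the described system the mapping from legacy ID to 'spoofed' GE ID is kept on the iConnect server as part of the sensor information.

```python
def build_takeover_mapping(legacy_sensors, make_spoofed_ge_id):
    """Pair each extracted legacy sensor record with a 'spoofed' GE ID,
    preserving zone, name, and group for enrollment into the Concord."""
    mapping = {}
    for sensor in legacy_sensors:
        mapping[sensor["legacy_id"]] = {
            "ge_id": make_spoofed_ge_id(sensor["legacy_id"]),
            "zone": sensor["zone"],
            "name": sensor["name"],
            "group": sensor["group"],
            "mode": "legacy-acting-as-GE",  # server knows this is a relayed sensor
        }
    return mapping
```

During enrollment, each record's `ge_id` is what gets learned into the Concord, while the legacy ID remains the key the iHub uses when it later hears the sensor on the air.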
When a HW sensor is triggered, the signal is captured by the iHub legacy RF Takeover Card, translated to the equivalent GE RF sensor signal, and pushed to the panel as a sensor event on the SuperBus 2000 wires. In support of remote programming of the panel, CMS pushes new programming to Concord over a phone line, or to the iConnect CMS/Alarm Server API, which in turn pushes the programming to the iHub. The iHub uses the Concord Superbus 2000 RS485 link to push the programming to the Concord panel. FIG.21is a flow diagram for automatic takeover2100of a security system, under an embodiment. Automatic takeover includes establishing2102a wireless coupling between a takeover component running under a processor and a first controller of a security system installed at a first location. The security system includes some number of security system components coupled to the first controller. The automatic takeover includes automatically extracting2104security data of the security system from the first controller via the takeover component. The automatic takeover includes automatically transferring2106the security data to a second controller and controlling loading of the security data into the second controller. The second controller is coupled to the security system components and replaces the first controller. FIG.22is a flow diagram for automatic takeover2200of a security system, under an alternative embodiment. Automatic takeover includes automatically forming2202a security network at a first location by establishing a wireless coupling between a security system and a gateway. The gateway of an embodiment includes a takeover component. The security system of an embodiment includes security system components. The automatic takeover includes automatically extracting2204security data of the security system from a first controller of the security system. The automatic takeover includes automatically transferring2206the security data to a second controller. 
The second controller of an embodiment is coupled to the security system components and replaces the first controller. Components of the gateway of the integrated security system described herein control discovery, installation and configuration of both wired and wireless IP devices (e.g., cameras, etc.) coupled or connected to the system, as described herein with reference toFIGS.1-4, as well as management of video routing using a video routing module or engine. The video routing engine initiates communication paths for the transfer of video from a streaming source device to a requesting client device, and delivers seamless video streams to the user via the communication paths using one or more of UPnP port-forwarding, relay server routing and STUN/TURN peer-to-peer routing, each of which is described below. By way of reference, conventional video cameras have the ability to stream digital video in a variety of formats and over a variety of networks. Internet protocol (IP) video cameras, which include video cameras using an IP transport network (e.g., Ethernet, WiFi (IEEE 802.11 standards), etc.) are prevalent and increasingly being utilized in home monitoring and security system applications. With the proliferation of the internet, Ethernet and WiFi local area networks (LANs) and advanced wide area networks (WANs) that offer high bandwidth, low latency connections (broadband), as well as more advanced wireless WAN data networks (e.g. GPRS or CDMA 1×RTT), there increasingly exists the networking capability to extend traditional security systems to offer IP-based video. However, a fundamental reason for such IP video in a security system is to enable a user or security provider to monitor live or otherwise streamed video from outside the host premises (and the associated LAN). 
The conventional solution to this problem has involved a technique known as 'port forwarding', whereby a 'port' on the LAN's router/firewall is assigned to the specific LAN IP address for an IP camera, or a proxy to that camera. Once a port has been 'forwarded' in this manner, a computer external to the LAN can address the LAN's router directly, and request access to that port. This access request is then forwarded by the router directly to the IP address specified, the IP camera or proxy. In this way an external device can directly access an IP camera within the LAN and view or control the streamed video. The issues with this conventional approach include the following: port forwarding is highly technical and most users do not know how/why to do it; automatic port forwarding is difficult and problematic using emerging standards like UPnP; the camera IP address is often reset in response to a power outage/router reboot event; there are many different routers with different ways/capabilities for port forwarding. In short, although port forwarding can work, it is frequently less than adequate to support a broadly deployed security solution utilizing IP cameras. Another approach to accessing streaming video externally to a LAN utilizes peer-to-peer networking technology. So-called peer-to-peer networks, which include networks in which a device or client is connected directly to another device or client, typically over a Wide Area Network (WAN) and without a persistent server connection, are increasingly common. In addition to being used for the sharing of files between computers (e.g., Napster and Kazaa), peer-to-peer networks have also been more recently utilized to facilitate direct audio and media streaming in applications such as Skype. In these cases, the peer-to-peer communications have been utilized to enable telephony-style voice communications and video conferencing between two computers, each enabled with an IP-based microphone, speaker, and video camera.
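The port-forwarding arrangement described above can be sketched in miniature: a forwarder listens on an "external" port and relays a request/response exchange to a fixed internal address standing in for the IP camera. This is an illustrative one-shot relay on localhost, not the embodiment's router implementation; the addresses, helper names, and single request/response flow are assumptions for demonstration.

```python
# Minimal sketch of what a router's port forward does: traffic arriving on an
# externally visible port is relayed to a fixed internal address (the camera).
import socket
import threading

def fake_camera():
    """Stand-in for an IP camera on the LAN; answers one request with a prefix."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral port, as an illustration only
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"IMG:" + conn.recv(4096))
        conn.close(); srv.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def forward_port(target_port):
    """One-shot 'port forward': relay a single request/response to the camera."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def relay():
        client, _ = srv.accept()
        camera = socket.create_connection(("127.0.0.1", target_port))
        camera.sendall(client.recv(4096))   # external request relayed in
        client.sendall(camera.recv(4096))   # camera's response relayed back out
        client.close(); camera.close(); srv.close()
    threading.Thread(target=relay, daemon=True).start()
    return srv.getsockname()[1]

camera_port = fake_camera()
external_port = forward_port(camera_port)

# The "external" client only ever addresses the forwarded port, never the camera.
with socket.create_connection(("127.0.0.1", external_port)) as s:
    s.sendall(b"GET /snapshot")
    reply = s.recv(4096)
```

The fragility noted in the text is visible even here: the mapping from external port to camera address is static, so a camera whose LAN address changes after a router reboot silently breaks the forward.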
A fundamental reason for adopting such peer-to-peer technology is the ability to transparently 'punch through' LAN firewalls to enable external access to the streaming voice and video content, and to do so in a way that scales to tens of millions of users without creating an untenable server load. A limitation of the conventional peer-to-peer video transport lies in the personal computer (PC)-centric nature of the solution. Each of the conventional solutions uses a highly capable PC connected to the video camera, with the PC providing the advanced software functionality required to initiate and manage the peer-to-peer connection with the remote client. A typical security or remote home monitoring system requires multiple cameras, each with its own unique IP address, and only a limited amount of processing capability in each camera such that the conventional PC-centric approach cannot easily solve the need. Instead of a typical PC-centric architecture with three components (a "3-way IP Video System") that include a computer device with video camera, a mediating server, and a PC client with video display capability, the conventional security system adds a plurality of fourth components that are standalone IP video cameras (requiring a "4-way IP Video System"), another less-than-ideal solution. In accordance with the embodiments described herein, IP camera management systems and methods are provided that enable a consumer or security provider to easily and automatically configure and manage IP cameras located at a customer premise. Using this system, IP camera management may be extended to remote control and monitoring from outside the firewall and router of the customer premise. With reference to FIGS. 5 and 6, the system includes a gateway 253 having a video routing component so that the gateway 253 can manage and control, or assist in management and control, of video routing. The system also includes one or more cameras (e.g., WiFi IP camera 254, Ethernet IP camera 255, etc.)
that communicate over the LAN 250 using an IP format, as well as a connection management server 210 located outside the premise firewall 252 and connected to the gateway 253 by a Wide Area Network (WAN) 200. The system further includes one or more devices 220, 230, 240 located outside the premise and behind other firewalls 221, 231, 241 and connected to the WAN 200. The other devices 220, 230, 240 are configured to access video or audio content from the IP cameras within the premise, as described above. Alternatively, with reference to FIGS. 9 and 10, the system includes a touchscreen 902 or 1002 having a video routing component so that the touchscreen 902 or 1002 can manage and control, or assist in management and control, of video routing. The system also includes one or more cameras (e.g., WiFi IP camera 254, Ethernet IP camera 255, etc.) that communicate over the LAN 250 using an IP format, as well as a connection management server 210 located outside the premise firewall 252 and connected to the gateway 253 by a Wide Area Network (WAN) 200. The system further includes one or more devices 220, 230, 240 located outside the premise and behind other firewalls 221, 231, 241 and connected to the WAN 200. The other devices 220, 230, 240 are configured to access video or audio content from the IP cameras within the premise, as described above. FIG. 23 is a general flow diagram for IP video control, under an embodiment. The IP video control interfaces, manages, and provides WAN-based remote access to a plurality of IP cameras in conjunction with a home security or remote home monitoring system. The IP video control allows for monitoring and controlling of IP video cameras from a location remote to the customer premise, outside the customer premise firewall, and protected by another firewall. Operations begin when the system is powered on 2310, involving at a minimum the power-on of the gateway, as well as the power-on of at least one IP camera coupled or connected to the premise LAN.
The gateway searches 2311 for available IP cameras and associated IP addresses. The gateway selects 2312 from one or more possible approaches to create connections between the IP camera and a device external to the firewall. Once an appropriate connection path is selected, the gateway begins operation 2313, and awaits 2320 a request for a stream from one of the plurality of IP video cameras available on the LAN. When a stream request is present, the server retrieves 2321 the requestor's WAN IP address/port. When a server relay is present 2330, the IP camera is instructed 2331 to stream to the server, and the connection is managed 2332 through the server. In response to the stream terminating 2351, operations return to gateway operation 2313, and the gateway waits to receive another request 2320 for a stream from one of the plurality of IP video cameras available on the LAN. When a server relay is not present 2330, the requestor's WAN IP address/port is provided 2333 to the gateway or gateway relay. When a gateway relay is present 2340, the IP camera is instructed 2341 to stream to the gateway, and the gateway relays 2342 the connection to the requestor. In response to the stream terminating 2351, operations return to gateway operation 2313, and the gateway waits to receive another request 2320 for a stream from one of the plurality of IP video cameras available on the LAN. When a gateway relay is not present 2340, the IP camera is instructed 2343 to stream to an address, and a handoff 2344 is made resulting in direct communication between the camera and the requestor. In response to the stream terminating 2351, operations return to gateway operation 2313, and the gateway waits to receive another request 2320 for a stream from one of the plurality of IP video cameras available on the LAN. The integrated security system of an embodiment supports numerous video stream formats or types of video streams.
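The relay decision in the FIG. 23 flow (server relay, then gateway relay, then direct hand-off) can be reduced to a small selection function. The flag names and returned labels below are illustrative assumptions, not part of the embodiment:

```python
# Sketch of the connection-path decision in the IP video control flow:
# check for a server relay first, fall back to a gateway relay, and
# otherwise hand off a direct camera-to-requestor connection.

def choose_stream_path(server_relay_present, gateway_relay_present):
    if server_relay_present:
        # Camera is instructed to stream to the server; the server
        # manages the connection to the requestor.
        return "server-relay"
    if gateway_relay_present:
        # Camera is instructed to stream to the gateway; the gateway
        # relays the connection to the requestor.
        return "gateway-relay"
    # Camera is instructed to stream to the requestor's WAN address/port;
    # a handoff yields direct camera-to-requestor communication.
    return "direct"

path = choose_stream_path(False, True)
```

In each branch the stream's termination returns the gateway to its waiting state, which in code would simply be the caller looping back to await the next request.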
Supported video streams include, but are not limited to, Motion Picture Experts Group (MPEG)-4 (MPEG-4)/Real-Time Streaming Protocol (RTSP), MPEG-4 over Hypertext Transfer Protocol (HTTP), and Motion Joint Photographic Experts Group (JPEG) (MJPEG). The integrated security system of an embodiment supports the MPEG-4/RTSP video streaming method (supported by video servers and clients), which uses RTSP for the control channel and Real-time Transport Protocol (RTP) for the data channel. Here the RTSP channel is over Transmission Control Protocol (TCP) while the data channel uses User Datagram Protocol (UDP). This method is widely supported by both streaming sources (e.g., cameras) and stream clients (e.g., remote client devices, Apple Quicktime, VideoLAN, IPTV mobile phones, etc.). Encryption can be added to the two channels under MPEG-4/RTSP. For example, the RTSP control channel can be encrypted using SSL/TLS. The data channel can also be encrypted. If the camera or video stream source inside the home does not support encryption for either RTSP or RTP channels, the gateway located on the LAN can facilitate the encrypted RTSP method by maintaining separate TCP sessions with the video stream source device and with the encrypted RTSP client outside the LAN, and relaying all communication between the two sessions. In this situation, any communication between the gateway and the video stream source that is not encrypted could be encrypted by the gateway before being relayed to the RTSP client outside the LAN. In many cases the gateway is an access point for the encrypted and private Wifi network on which the video stream source device is located. This means that communication between the gateway and the video stream source device is encrypted at the network level, and communication between the gateway and the RTSP client is encrypted at the transport level. In this fashion the gateway can compensate for a device that does not support encrypted RTSP.
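The RTSP control channel named above is CRLF-delimited text over TCP, much like HTTP, which is what makes it straightforward for a gateway to relay between sessions. A hedged sketch of building a request and parsing a response status line follows; the camera URL and header values are illustrative only:

```python
# Sketch of the RTSP control channel used by MPEG-4/RTSP: requests and
# responses are plain CRLF-delimited text, so they can be generated,
# parsed, and relayed without touching the RTP data channel.

def build_rtsp_request(method, url, cseq, headers=None):
    """Serialize an RTSP request (e.g., DESCRIBE) as wire bytes."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def parse_rtsp_status(response_bytes):
    """Pull (version, status code, reason) from an RTSP response."""
    status_line = response_bytes.split(b"\r\n", 1)[0].decode()
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

req = build_rtsp_request("DESCRIBE", "rtsp://192.168.1.20/stream1", 1,
                         {"Accept": "application/sdp"})
version, code, reason = parse_rtsp_status(b"RTSP/1.0 200 OK\r\nCSeq: 1\r\n\r\n")
```

Because the control channel is ordinary TCP text, wrapping it in SSL/TLS (as the gateway does when compensating for an unencrypting camera) changes nothing about the request/response grammar itself.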
The integrated security system of an embodiment also supports reverse RTSP. Reverse RTSP includes taking a TCP-based protocol like RTSP, and reversing the roles of client and server (references to “server” include the iControl server, also referred to as the iConnect server) when it comes to TCP session establishment. For example, in standard RTSP the RTSP client is the one that establishes the TCP connection with the stream source server (the server listens on a port for incoming connections). In Reverse RTSP, the RTSP client listens on a port for incoming connections from the stream source server. Once the TCP connection is established, the RTSP client begins sending commands to the server over the TCP connection just as it would in standard RTSP. When using Reverse RTSP, the video stream source is generally on a LAN, protected by a firewall. Having a device on the LAN initiate the connection to the RTSP client outside the firewall enables easy network traversal. If the camera or video stream source inside the LAN does not support Reverse RTSP, then the gateway facilitates the Reverse RTSP method by initiating separate TCP sessions with the video stream source device and with the Reverse RTSP client outside the LAN, and then relays all communication between the two sessions. In this fashion the gateway compensates for a stream source device that does not support Reverse RTSP. As described in the encryption description above, the gateway can further compensate for missing functionalities on the device such as encryption. If the device does not support encryption for either RTSP or RTP channels, the gateway can communicate with the device using these un-encrypted streams, and then encrypt the streams before relaying them out of the LAN to the RTSP Reverse client. Servers of the integrated security system can compensate for RTSP clients that do not support Reverse RTSP. 
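The role reversal at the heart of Reverse RTSP is only about who dials the TCP connection; once the session is up, the client drives the protocol as usual. The sketch below demonstrates that inversion on localhost with a stand-in camera and client; the port handling, OPTIONS exchange, and thread structure are illustrative assumptions, not the embodiment's implementation:

```python
# Sketch of Reverse RTSP session establishment: the stream CLIENT listens,
# the camera (behind the firewall) CONNECTS OUT, and then the client sends
# RTSP commands over that connection exactly as in standard RTSP.
import socket
import threading

# The RTSP client listens on a port for incoming connections from the camera.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # ephemeral port for illustration
listener.listen(1)
port = listener.getsockname()[1]

responses = []
def rtsp_client():
    conn, _ = listener.accept()  # the camera dialed out to us
    # Despite accepting the connection, we still drive the protocol.
    conn.sendall(b"OPTIONS * RTSP/1.0\r\nCSeq: 1\r\n\r\n")
    responses.append(conn.recv(4096))
    conn.close(); listener.close()

t = threading.Thread(target=rtsp_client)
t.start()

# The camera initiates the outbound TCP connection, which is what lets the
# session traverse the LAN firewall without any inbound port being opened.
camera = socket.create_connection(("127.0.0.1", port))
request = camera.recv(4096)
camera.sendall(b"RTSP/1.0 200 OK\r\nCSeq: 1\r\nPublic: DESCRIBE, SETUP, PLAY\r\n\r\n")
t.join()
camera.close()
```

The same pattern is what the gateway reproduces on the camera's behalf when the camera itself does not support Reverse RTSP: the gateway makes the outbound connection and bridges it to an ordinary inbound session with the device.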
In this situation, the server accepts TCP connections from both the RTSP client and the Reverse RTSP video stream source (which could be a gateway acting on behalf of a stream source device that does not support Reverse RTSP). The server then relays the control and video streams from the Reverse RTSP video stream source to the RTSP client. The server can further compensate for the encryption capabilities of the RTSP client; if the RTSP client does not support encryption then the server can provide an unencrypted stream to the RTSP client even though an encrypted stream was received from the Reverse RTSP streaming video source. The integrated security system of an embodiment also supports Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators (NAT) (STUN)/Traversal Using Relay NAT (TURN) peer-to-peer routing. STUN and TURN are techniques for using a server to help establish a peer-to-peer UDP data stream (they do not apply to TCP streams). The bandwidth consumed by the data channel of a video stream is usually many thousands of times larger than that used by the control channel. Consequently, when a peer-to-peer connection for both the RTSP and RTP channels is not possible, there is still a great incentive to use STUN/TURN techniques in order to achieve a peer-to-peer connection for the RTP data channel. Here, a method referred to herein as RTSP with STUN/TURN is used by the integrated security system. The RTSP with STUN/TURN is a method in which the video streaming device is instructed over the control channel to stream its UDP data channel to a different network address than that of the other end of the control TCP connection (usually the UDP data is simply streamed to the IP address of the RTSP client). The result is that the RTSP or Reverse RTSP TCP channel can be relayed using the gateway and/or the server, while the RTP UDP data channel can flow directly from the video stream source device to the video stream client.
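STUN's contribution to this peer-to-peer UDP path is letting each endpoint learn its public address as seen from outside its NAT. The wire format (per RFC 5389) is simple enough to sketch directly; the response below is hand-crafted for illustration rather than received from a real STUN server, and the example address is from a documentation range:

```python
# Sketch of the STUN exchange underlying STUN/TURN NAT traversal: the
# client sends a Binding Request, and the server reports the client's
# public address XOR'ed with the protocol's magic cookie (RFC 5389).
import os
import struct

MAGIC_COOKIE = 0x2112A442

def build_binding_request(txn_id):
    # Header: type=0x0001 (Binding Request), length=0 (no attributes),
    # magic cookie, then a 12-byte transaction ID.
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

def parse_xor_mapped_address(attr_value):
    # Attribute value: reserved byte, family, X-Port, X-Address (IPv4).
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    port = xport ^ (MAGIC_COOKIE >> 16)
    xaddr = struct.unpack("!I", attr_value[4:8])[0] ^ MAGIC_COOKIE
    ip = ".".join(str((xaddr >> s) & 0xFF) for s in (24, 16, 8, 0))
    return ip, port

request = build_binding_request(os.urandom(12))

# Hand-crafted XOR-MAPPED-ADDRESS value for 203.0.113.7:40000, encoded as
# a server would encode it (for illustration; no network traffic here).
value = struct.pack("!BBH", 0, 0x01, 40000 ^ (MAGIC_COOKIE >> 16)) + \
        struct.pack("!I", ((203 << 24) | (113 << 8) | 7) ^ MAGIC_COOKIE)
public_ip, public_port = parse_xor_mapped_address(value)
```

Once both the camera and the client know their public UDP addresses, the control channel can instruct the camera to send RTP to the client's public address, which is exactly the redirection the RTSP with STUN/TURN method describes.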
If a video stream source device does not support RTSP with STUN/TURN, the gateway can compensate for the device by relaying the RTSP control channel via the server to the RTSP client, and receiving the RTP data channel and then forwarding it directly to the RTSP with STUN/TURN enabled client. Encryption can also be added here by the gateway. The integrated security system of an embodiment supports MPEG-4 over HTTP. MPEG-4 over HTTP is similar to MPEG-4 over RTSP except that both the RTSP control channel and the RTP data channel are passed over an HTTP TCP session. Here a single TCP session can be used, splitting it into multiple channels using common HTTP techniques like chunked transfer encoding. The MPEG-4 over HTTP is generally supported by many video stream clients and server devices, and encryption can easily be added to it using SSL/TLS. Because it uses TCP for both channels, STUN/TURN techniques may not apply in the event that a direct peer-to-peer TCP session between client and server cannot be established. As described above, encryption can be provided using SSL/TLS taking the form of HTTPS. And as with MPEG-4 over RTSP, a gateway can compensate for a stream source device that does not support encryption by relaying the TCP streams and encrypting the TCP stream between the gateway and the stream client. In many cases the gateway is an access point for the encrypted and private Wifi network on which the video stream source device is located. This means that communication between the gateway and the video stream source device is encrypted at the network level, and communication between the gateway and the video stream client is encrypted at the transport level. In this fashion the gateway can compensate for a device that does not support HTTPS. As with Reverse RTSP, the integrated security system of an embodiment supports Reverse HTTP. 
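The chunked transfer encoding mentioned above is what allows the single HTTP TCP session to carry a sequence of self-delimiting pieces: each chunk is a hex length line, the payload, and a CRLF, terminated by a zero-length chunk. A minimal encoder/decoder sketch (the frame payloads are illustrative placeholders):

```python
# Sketch of HTTP chunked transfer encoding, the framing technique named
# above for multiplexing stream data over a single HTTP TCP session.

def chunk_encode(parts):
    """Frame each payload as <hex length>CRLF<payload>CRLF, then terminate."""
    out = b""
    for part in parts:
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    return out + b"0\r\n\r\n"

def chunk_decode(data):
    """Recover the payload sequence from a chunked byte stream."""
    parts, pos = [], 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        if size == 0:
            return parts
        parts.append(data[eol + 2:eol + 2 + size])
        pos = eol + 2 + size + 2  # skip payload and its trailing CRLF

frames = [b"<mpeg4 frame 1>", b"<mpeg4 frame 2>"]
wire = chunk_encode(frames)
```

Because each chunk carries its own length, the receiver needs no out-of-band signaling to find frame boundaries, which is what makes this framing usable over a relayed or TLS-wrapped TCP session alike.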
Reverse HTTP includes taking a TCP-based protocol like HTTP, and reversing the roles of client and server when it comes to TCP session establishment. For example, in conventional HTTP the HTTP client is the one that establishes the TCP connection with the server (the server listens on a port for incoming connections). In Reverse HTTP, the HTTP client listens on a port for incoming connections from the server. Once the TCP connection is established, the HTTP client begins sending commands to the server over the TCP connection just as it would in standard HTTP. When using Reverse HTTP, the video stream source is generally on a LAN, protected by a firewall. Having a device on the LAN initiate the connection to the HTTP client outside the firewall enables easy network traversal. If the camera or video stream source inside the LAN does not support Reverse HTTP, then the gateway can facilitate the Reverse HTTP method by initiating separate TCP sessions with the video stream source device and with the Reverse HTTP client outside the LAN, and then relay all communication between the two sessions. In this fashion the gateway can compensate for a stream source device that does not support Reverse HTTP. As described in the encryption description above, the gateway can further compensate for missing functionalities on the device such as encryption. If the device does not support encrypted HTTP (e.g., HTTPS), then the gateway can communicate with the device using HTTP, and then encrypt the TCP stream(s) before relaying out of the LAN to the Reverse HTTP client. The servers of an embodiment can compensate for HTTP clients that do not support Reverse HTTP. In this situation, the server accepts TCP connections from both the HTTP client and the Reverse HTTP video stream source (which could be a gateway acting on behalf of a stream source device that does not support Reverse HTTP). The server then relays the TCP streams from the Reverse HTTP video stream source to the HTTP client. 
The server can further compensate for the encryption capabilities of the HTTP client; if the HTTP client does not support encryption then the server can provide an unencrypted stream to the HTTP client even though an encrypted stream was received from the Reverse HTTP streaming video source. The integrated security system of an embodiment supports MJPEG as described above. MJPEG is a streaming technique in which a series of JPG images are sent as the result of an HTTP request. Because MJPEG streams are transmitted over HTTP, HTTPS can be employed for encryption and most MJPEG clients support the resulting encrypted stream. And as with MPEG-4 over HTTP, a gateway can compensate for a stream source device that does not support encryption by relaying the TCP streams and encrypting the TCP stream between the gateway and the stream client. In many cases the gateway is an access point for the encrypted and private Wifi network on which the video stream source device is located. This means that communication between the gateway and the video stream source device is encrypted at the network level, and communication between the gateway and the video stream client is encrypted at the transport level. In this fashion the gateway can compensate for a device that does not support HTTPS. The integrated system of an embodiment supports Reverse HTTP for MJPEG streams as well. As described above, Reverse HTTP includes taking a TCP-based protocol like HTTP and reversing the roles of client and server when it comes to TCP session establishment. For example, in standard HTTP the HTTP client is the one that establishes the TCP connection with the server (the server listens on a port for incoming connections). In Reverse HTTP, the HTTP client listens on a port for incoming connections from the server. Once the TCP connection is established, the HTTP client begins sending commands to the server over the TCP connection just as it would in standard HTTP.
When using Reverse HTTP, the video stream source is generally on a LAN, protected by a firewall. Having a device on the LAN initiate the connection to the HTTP client outside the firewall enables network traversal. If the camera or video stream source inside the LAN does not support Reverse HTTP, then the gateway can facilitate the Reverse HTTP method by initiating separate TCP sessions with the video stream source device and with the Reverse HTTP client outside the LAN, and then relay all communication between the two sessions. In this fashion the gateway can compensate for a stream source device that does not support Reverse HTTP. As described in the encryption description above, the gateway can further compensate for missing functionalities on the device such as encryption. If the device does not support encrypted HTTP (e.g., HTTPS), then the gateway can communicate with the device using HTTP, and then encrypt the TCP stream(s) before relaying out of the LAN to the Reverse HTTP client. The servers can compensate for HTTP clients that do not support Reverse HTTP. In this situation, the server accepts TCP connections from both the HTTP client and the Reverse HTTP video stream source (which could be a gateway acting on behalf of a stream source device that does not support Reverse HTTP). The server then relays the TCP streams from the Reverse HTTP video stream source to the HTTP client. The server can further compensate for the encryption capabilities of the HTTP client; if the HTTP client does not support encryption then the server can provide an unencrypted stream to the HTTP client even though an encrypted stream was received from the Reverse HTTP streaming video source. The integrated security system of an embodiment considers numerous parameters in determining or selecting one of the streaming formats described above for use in transferring video streams. 
The parameters considered in selecting a streaming format include, but are not limited to, security requirements, client capabilities, device capabilities, and network/system capabilities. The security requirements for a video stream are considered in determining an applicable streaming format in an embodiment. Security requirements fall into two categories, authentication and privacy, each of which is described below. Authentication as a security requirement means that stream clients must present credentials in order to obtain a stream. Furthermore, this presentation of credentials should be done in a way that is secure from network snooping and replays. An example of secure authentication is Basic Authentication over HTTPS. Here a username and password are presented over an encrypted HTTPS channel so snooping and replays are prevented. Basic Authentication alone, however, is generally not sufficient for secure authentication. Because not all streaming clients support SSL/TLS, authentication methods that do not require it are desirable. Such methods include Digest Authentication and one-time requests. A one-time request is a request that can only be made by a client one time, and the server prevents a reuse of the same request. One-time requests are used to control access to a stream source device by stream clients that do not support SSL/TLS. An example here is providing video access to a mobile phone. Typical mobile phone MPEG-4 viewers do not support encryption. In this case, one of the MPEG-4 over RTSP methods described above can be employed to get the video stream relayed to a server. The server can then provide the mobile phone with a one-time request Universal Resource Locator (URL) for the relayed video stream source (via a Wireless Application Protocol (WAP) page). Once the stream ends, the mobile phone would need to obtain another one-time request URL from the server (via WAP, for example) in order to view the stream again.
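The one-time request URL scheme described above amounts to a single-use token store on the server: issuing a token grants exactly one stream, and any replay of the same token is refused. A minimal sketch follows; the base URL, token format, and class name are illustrative assumptions, and a real server would also expire unredeemed tokens:

```python
# Sketch of one-time request URLs: the server issues a single-use token for
# the relayed stream and refuses any subsequent reuse of the same request.
import secrets

class OneTimeStreamURLs:
    def __init__(self, base_url):
        self.base_url = base_url
        self._pending = set()  # tokens that have been issued but not yet used

    def issue(self):
        """Hand out a fresh single-use URL (e.g., via a WAP page)."""
        token = secrets.token_urlsafe(16)
        self._pending.add(token)
        return f"{self.base_url}?token={token}"

    def redeem(self, token):
        """Grant the stream exactly once; a replayed token is refused."""
        if token in self._pending:
            self._pending.remove(token)
            return True
        return False

urls = OneTimeStreamURLs("https://relay.example.com/stream")
url = urls.issue()
token = url.split("token=")[1]
first_use = urls.redeem(token)
replay = urls.redeem(token)
```

This is why the scheme resists snooping even without SSL/TLS on the client side: a captured URL is worthless once the legitimate request has consumed it, though it protects only the request, not the stream contents.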
Privacy as a security requirement means that the contents of the video stream must be encrypted. This is a requirement that may be impossible to satisfy on clients that do not support video stream encryption, for example many mobile phones. If a client supports encryption for some video stream format(s), then the “best” of those formats should be selected. Here “best” is determined by the stream type priority algorithm. The client capabilities are considered in determining an applicable streaming format in an embodiment. In considering client capabilities, the selection depends upon the supported video stream formats that include encryption, and the supported video stream formats that do not support encryption. The device capabilities are considered in determining an applicable streaming format in an embodiment. In considering device capabilities, the selection depends upon the supported video stream formats that include encryption, the supported video stream formats that do not support encryption, and whether the device is on an encrypted private Wifi network managed by the gateway (in which case encryption at the network level is not required). The network/system capabilities are considered in determining an applicable streaming format in an embodiment. In considering network/system capabilities, the selection depends upon characteristics of the network or system across which the stream must travel. The characteristics considered include, for example, the following: whether there is a gateway and/or server on the network to facilitate some of the fancier video streaming types or security requirements; whether the client is on the same LAN as the gateway, meaning that network firewall traversal is not needed. Streaming methods with the highest priority are peer-to-peer because they scale best with server resources. 
Universal Plug and Play (UPnP) can be used by the gateway to open ports on the video stream device's LAN router and direct traffic through those ports to the video stream device. This allows a video stream client to talk directly with the video stream device or talk directly with the gateway which can in turn facilitate communication with the video stream device. Another factor in determining the best video stream format to use is the success of STUN and TURN methods for establishing direct peer-to-peer UDP communication between the stream source device and the stream client. Again, the gateway and the server can help with the setup of this communication. Client bandwidth availability and processing power are other factors in determining the best streaming methods. For example, due to its bandwidth overhead an encrypted MJPEG stream should not be considered for most mobile phone data networks. Device bandwidth availability can also be considered in choosing the best video stream format. For example, consideration can be given to whether the upstream bandwidth capabilities of the typical residential DSL support two or more simultaneous MJPEG streams. Components of the integrated security system of an embodiment, while considering various parameters in selecting a video streaming format to transfer video streams from streaming source devices and requesting client devices, prioritize streaming formats according to these parameters. The parameters considered in selecting a streaming format include, as described above, security requirements, client capabilities, device capabilities, and network/system capabilities. Components of the integrated security system of an embodiment select a video streaming format according to the following priority, but alternative embodiments can use other priorities. The selected format is UPnP or peer-to-peer MPEG-4 over RTSP with encryption when both requesting client device and streaming source device support this format. 
The selected format is UPnP or peer-to-peer MPEG-4 over RTSP with authentication when the requesting client device does not support encryption or UPnP or peer-to-peer MPEG-4 over RTSP with encryption. The selected format is UPnP (peer-to-peer) MPEG-4 over HTTPS when both requesting client device and streaming source device support this format. The selected format is UPnP (peer-to-peer) MPEG-4 over HTTP when the requesting client device does not support encryption or UPnP (peer-to-peer) MPEG-4 over HTTPS. The selected format is UPnP (peer-to-peer) MPEG-4 over RTSP facilitated by gateway or touchscreen (including or incorporating gateway components) (to provide encryption), when the requesting client device supports encrypted RTSP and the streaming source device supports MPEG-4 over RTSP. The selected format is UPnP (peer-to-peer) MPEG-4 over HTTPS facilitated by gateway or touchscreen (including or incorporating gateway components) (to provide encryption) when the requesting client device supports MPEG-4 over HTTPS and the streaming source device supports MPEG-4 over HTTP. The selected format is UPnP (peer-to-peer) MJPEG over HTTPS when the networks and devices can handle the bandwidth and both requesting client device and streaming source device support MJPEG over HTTPS. The selected format is Reverse RTSP with STUN/TURN facilitated by the server when the streaming source device initiates SSL/TLS TCP to server, the streaming source device supports Reverse RTSP over SSL/TLS with STUN/TURN, and the requesting client device supports RTSP with STUN/TURN. The selected format is Reverse RTSP with STUN/TURN facilitated by server and gateway or touchscreen (including or incorporating gateway components) when the gateway initiates SSL/TLS TCP to the server and to the streaming source device, the streaming source device supports RTSP, and the requesting client device supports RTSP with STUN/TURN. 
The selected format is Reverse MPEG over RTSP/HTTP facilitated by the server when the streaming source device initiates SSL/TLS TCP to the server, the streaming source device supports Reverse RTSP or HTTP over SSL/TLS, and the requesting client device supports MPEG over RTSP/HTTP. The selected format is Reverse MPEG over RTSP/HTTP facilitated by server and gateway or touchscreen (including or incorporating gateway components) when the gateway initiates SSL/TLS TCP to the server and to the streaming source device, the streaming source device supports MPEG over RTSP or HTTP, and the requesting client device supports MPEG over RTSP/HTTP. The selected format is UPnP (peer-to-peer) MJPEG over HTTP when the networks and devices can handle the bandwidth and when the requesting client device does not support encryption and does not support MPEG-4. The selected format is Reverse MJPEG over HTTPS facilitated by the server when the streaming source device initiates SSL/TLS TCP to the server, the streaming source device supports Reverse MJPEG over SSL/TLS, and the requesting client device supports MJPEG. The selected format is Reverse MJPEG over HTTPS facilitated by server and gateway or touchscreen (including or incorporating gateway components) when the gateway initiates SSL/TLS TCP to the server and to the streaming source device, the streaming source device supports MJPEG, and the requesting client device supports MJPEG. FIG. 24 is a block diagram showing camera tunneling, under an embodiment. Additional detailed description of camera tunnel implementation details follows. An embodiment uses XMPP for communication with a remote video camera as a lightweight (bandwidth) method for maintaining real-time communication with the remote camera. More specifically, the remote camera is located on another NAT (e.g., NAT traversal). An embodiment comprises a method for including a remotely located camera in a home automation system.
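The format-selection priority enumerated above reduces to an ordered capability check: walk the list from highest to lowest priority and take the first format both endpoints support. The sketch below compresses that cascade; the short format labels and capability-set representation are illustrative assumptions, and the real selection also weighs bandwidth and network/system characteristics rather than capabilities alone:

```python
# Sketch of the stream-format priority selection described above, reduced
# to a first-match walk over an ordered list of candidate formats.

PRIORITY = [
    "p2p-mpeg4-rtsp-encrypted",
    "p2p-mpeg4-rtsp-authenticated",
    "p2p-mpeg4-https",
    "p2p-mpeg4-http",
    "gateway-mpeg4-rtsp-encrypted",
    "gateway-mpeg4-https",
    "p2p-mjpeg-https",
    "reverse-rtsp-stun-turn",
    "reverse-mpeg-rtsp-http",
    "p2p-mjpeg-http",
    "reverse-mjpeg-https",
]

def select_stream_format(client_caps, device_caps):
    """Return the highest-priority format both endpoints support, else None."""
    for fmt in PRIORITY:
        if fmt in client_caps and fmt in device_caps:
            return fmt
    return None

chosen = select_stream_format(
    {"p2p-mpeg4-https", "p2p-mjpeg-http"},
    {"p2p-mpeg4-https", "reverse-mjpeg-https"},
)
```

Keeping the ordering in a single list mirrors the stated design intent: peer-to-peer formats sit first because they scale best with server resources, with gateway-assisted and server-relayed formats as successive fallbacks.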
For example, using XMPP via a cloud XMPP server to couple or connect the camera to the home automation system. This can be used with in-car cameras, cell phone cameras, and re-locatable cameras (e.g., dropped in the office, the hotel room, the neighbor's house, etc.). Components of an embodiment are distributed so that any one can be offline while the system continues to function (e.g., the panel can be down while the camera is still up; motion detection from the camera, video clip upload, etc. continue to work). Embodiments extend the PSIA in one or more of the following areas: wifi roaming configuration; video relay commands; wifi connectivity test; media tunnel for live video streaming in the context of a security system; motion notification mechanism and configuration (motion heartbeat) (e.g., helps with scalable server); XMPP for lightweight communication (helps with scalable server, reduced bandwidth, for maintaining persistent connection with a gateway); ping request sent over XMPP as health check mechanism; shared secret authentication bootstrapping process; asynchronous error status delivery by the camera for commands invoked by the gateway if the camera is responsible for delivering errors to the gateway in an asynchronous fashion (e.g., gateway requests a firmware update or a video clip upload). Embodiments extend the home automation system to devices located on separate networks, and make them usable as general-purpose communication devices. These cameras can be placed in the office, vacation home, or neighbor's house, and software can be put onto a cell phone, into a car, a navigation system, etc. Embodiments use a global device registry for enabling a device/camera to locate the server and home to which it is assigned. Embodiments include methods for bootstrapping and re-bootstrapping of authentication credentials. The methods include activation key entry by the installer into the cloud web interface.
Activation key generation is based upon the MAC address and a shared secret between the manufacturer and the service provider. Embodiments of the system allow activation of a camera with a valid activation key that is not already provisioned in the global registry server. Embodiments include a web-based interface for use in activating, configuring, remote firmware update, and re-configuring of a camera. Embodiments detect or locate local wifi access points and provide these as options during camera configuring and re-configuring. Embodiments generate and provide recommendations around choosing the best wifi access point based upon characteristics of the network (e.g., signal strength, error rates, interference, etc.). Embodiments include methods for testing and diagnosing issues with wifi and network access. Embodiments include cameras able to perform this wifi test using only one physical network interface, an approach that enables the camera to dynamically change this physical interface from wired to wifi. Embodiments are able to change the network settings (wifi, etc.) remotely using the same process. Cameras of an embodiment can be configured with multiple network preferences in priority order so that the camera can move between different locations and automatically find the best network to join (e.g., the camera can have multiple ssid+bssid+password sets configured and prioritized). Regarding firmware download, embodiments include a mechanism to monitor the status of the firmware update, provide feedback to the end user, and improve overall quality of the system. Embodiments use RTSP over SSL to a cloud media relay server to allow live video NAT traversal to a remote client (e.g., PC, cell phone, etc.) in a secure manner where the camera provides media session authentication credentials to the server. The camera initiates the SSL connection to the cloud and then acts as an RTSP server over this connection.
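The access-point recommendation described above weighs network characteristics such as signal strength, error rates, and interference. A minimal scoring sketch, with assumed field names and weights:

```python
def score_access_point(ap: dict) -> float:
    """Higher is better: strong signal, low error rate, low interference.
    Weights and field names are assumptions for illustration."""
    return ((ap["signal_strength_dbm"] + 100.0)   # -40 dBm scores better than -80 dBm
            - ap["error_rate"] * 50.0
            - ap["interference"] * 20.0)

def recommend(access_points: list) -> dict:
    """Return the best candidate to offer first during camera configuration."""
    return max(access_points, key=score_access_point)
```

The same ranking could order a camera's prioritized ssid+bssid+password sets when it roams between locations.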
Embodiments include methods for using NAT traversal when connecting to the cloud for remote management and live video access, which allows the integrated security components to avoid port forwarding on the local router(s) and, as a result, maintain a more secure local network and a more secure camera, since no ports are required to be open. Embodiments enable camera sensors (e.g., motion, audio, heat, etc.) to serve as triggers to other actions in the automation system. The capture of video clips or snapshots from the camera is one such action, but the embodiments are not so limited. A camera of an embodiment can be used by multiple systems. A detailed description of flows follows relating to the camera tunnel of an embodiment. A detailed description of camera startup and installation follows as it pertains to the camera tunnel of an embodiment.
Activation Key
a. camera to follow same algorithm as ihub where activation key is generated from serial based upon a one-way hash on serial and a per-vendor shared secret.
b. use com.icontrol.util.ops.activation.ActivationKeyUtil class to validate serialNo<->activationKey.
Registry Request [partner]/registry/[device type]/[serial]
a. new column in existing registry table for id type; nullable, but the application treats null as "gateway".
b. rest endpoints allow adding with the new optional argument.
c. current serial and siteId uniqueness enforcement by application depends upon device type (for any device type, there should be uniqueness on serial; for gateway device type, there should be uniqueness on siteId; for other device types, there need not be uniqueness on siteId).
d. if no activation yet (e.g., no entry), then send dummy response (random but repeatable reply; may include predictable "dummy" so that steps below can infer).
e. add/update registry server endpoints for adding/updating entries.
If Camera Has No Password
Camera retrieves "Pending Key" via POST to /<CredentialGatewayURL>/GatewayService/<siteID>/PendingDeviceKey.
a. pending key request (to get password) with serial and activation key.
b. server checks for dummy reply; if dummy, then responds with retry backoff response.
c. server invokes pass-through API on gateway to get new pending key.
d. if device is found, then gateway performs validation of serial+activation key, returns error if mismatch.
e. if activation key checks out, then gateway checks pending key status.
f. if device currently has a pending key status, then a new pending password is generated.
g. gateway maintains this authorization information in a new set of variables on the camera device.
h. device-authorization/session-key comprises the current connected password.
i. device-authorization/pending-expiry comprises a UTC timestamp representing the time the current pending password period ends; any value less than the current time or blank means the device is not in a pending password state.
j. device-authorization/pending-session-key comprises the last password returned to the camera in a pending request; this is optional (device may choose to maintain this value in memory).
k. session-key and pending-session-key variables tagged with "encryption" in the device def, which causes rest and admin to hide their value from the client.
ConnectInfo Request
a. returns xmpp host and port to connect to (comes from config as it does for gateway connect info).
b. returns connectInfo with additional <xmpp> parameter.
Start Portal Add Camera Wizard
a. user enters camera serial, activation key.
b. addDevice rest endpoint on gateway called.
c. gateway verifies activation key is correct.
d. gateway calls addDevice method on gapp server to add LWG_SerComm_iCamera_1000 with given serial to site.
e. server detects the camera type and populates registry.
f. gateway puts device into pending password state (e.g., updates device-auth/pending-expiry point).
g. rest endpoints on gateway device for managing device pending password state.
h.
start pending password state: POST future UTC value to device-auth/pending-expiry; device-auth/pending-expiry set to 30 minutes from time device was added.
i. stop pending password state: POST −1 to device-auth/pending-expiry.
j. check pending password state: GET device-auth/pending-expiry.
k. message returned with "Location" header pointing to relative URI.
l. user told to power on camera (or reboot if already powered on).
m. once camera connects, gateway updates device-auth/pending-expiry to −1 and device-auth/session-key with password and device/connection-status to connected.
n. portal polls for device/connection-status to change to connected; if it does not connect after X seconds, bring up error page (camera has not connected; continue waiting or start over).
o. user asked if wifi should be configured for this camera.
p. entry fields for wifi ssid and password.
q. portal can pre-populate ssid and password fields with picklist of any from other cameras on the site.
r. get XML of available SSIDs.
s. non-wifi option is allowed.
t. portal submits options to configure camera (use null values to specify non-wifi); upon success, message is returned with "Location" header pointing to relative URI.
u. checks configuration progress, extracting "status" and "subState" fields.
v. puts device state into "configuring"; upon error, puts device state into "configuration failure".
w. performs firmware upgrade if needed, placing device state into "upgrading"; upon error, puts device state into "upgrade failure".
x. upon configuration success, puts device state to "ok" and applies appropriate configuration for camera (e.g., resolutions, users, etc.).
y. if non-blank wifi parameters, automatically perform "wifi test" method to test wifi without disconnecting Ethernet.
z. portal wizard polls device status until it changes to "ok", "upgrade failure", or "configuration failure" in the "status" field, along with an error code reason, if any, in the "subState" field; upon error, show details to user and provide options (start over, configure again, reboot, factory reset, etc.).
aa. notify user they can move camera to desired location.
Camera Reboots
a. gets siteId and server URL from registry.
b. makes pending key request to server specifying correct siteId, serial and activation key; gets back pending password.
c. makes connectInfo request to get xmpp server.
d. connects over xmpp with pending password.
If Camera Reboots Again
a. gets siteId and server URL from registry.
b. already has password (may or may not be pending) so no need to perform pending key request.
c. makes connectInfo request to get xmpp server.
d. connects over xmpp with password.
xmpp Connect with Password
a. xmpp user is of the form [serial]@[server]/[siteId].
b. session server performs authentication by making passthrough API request to gateway for given SiteId.
c. Session xmpp server authenticates new session using DeviceKey received in GET request against received xmpp client credential.
d. If authentication fails or GET receives non-response, server returns to camera XMPP connect retry backoff with long backoff.
e. gateway device performs password management.
f. compares password with current key and pending key (if not expired); if it matches pending, then update device-auth/session-key to be the pending value, and clear out the device-auth/pending-expiry.
g. gateway device updates the device/connection-status point to reflect that camera is connected.
h. gateway device tracks the xmpp session server this camera is connected to via new point device/proxy-host and updates this info if changed.
i. if deviceConnected returns message, then session server posts connected event containing xmpp user to queue monitored by all session servers.
j.
session servers monitor these events and disconnect/clean up sessions they have for same user.
k. may use new API endpoint on session server for broadcast messages.
xmpp Connect with Bad Password
a. Upon receiving a new connection request, session server performs authentication by making passthrough API request to gateway for given SiteId.
b. Session xmpp server authenticates new session using DeviceKey received in above GET request against received xmpp client credential.
c. If authentication fails or GET receives non-response from virtual gateway.
d. Session server rejects incoming connection (is there a backoff/retry XMPP response that can be sent here).
e. Session server logs event.
f. Gateway logs event.
xmpp Disconnect
a. session server posts disconnected event to gateway (with session server name).
b. gateway updates the device/connected variable/point to reflect that camera is disconnected.
c. gateway updates the device/connection-status variable/point to reflect that camera is disconnected.
d. gateway clears the device/proxy-host point that contains the session host to which this camera is connected.
LWGW Shutdown
a. During LWGW shutdown, gateway can broadcast messages to all XMPP servers to ensure all active XMPP sessions are gracefully shut down.
b. gateways use REST client to call URI, which will broadcast to all XMPP servers.
To Configure Camera During Installation
a. applies all appropriate configuration for camera (e.g., resolutions, users, etc.).
b. returns message for configuration applied, wifi test passed, all settings taken; returns other response code with error code description upon any failure.
To Reconfigure Wifi SSID and Key
a. returns message for wifi credentials set.
b. returns other response code with error code description upon any failure.
API Pass-Through Handling for Gateway Fail-Over Case
a. When performing passthrough for LWGW, the API endpoint handles the LWGW failover case (e.g., when gateway is not currently running on any session server).
b. passthrough functions in the following way: current session server IP is maintained on the gateway object; server looks up gateway object to get session IP and then sends passthrough request to that session server; if that request returns a gateway not found message, a server error message, or a network level error (e.g., cannot route to host, etc.), and the gateway is a LWGW, then the server should look up the primary/secondary LW Gateway group for this site; server should then send resume message to primary, followed by rest request; if that fails, then server sends resume message to secondary followed by rest request.
c. alternatively, passthrough functions in the following way: rather than look up session server IP on gateway object, passthrough requests should be posted to a passthrough queue that is monitored by all session servers; the session server with the Gateway on it should consume the message (and pass it to the appropriate gateway); the server should monitor for expiry of these messages, and if the gateway is a LWGW then server should look up the primary/secondary LW Gateway group for this site; server should then send resume message to primary, followed by rest request; if that fails, then server sends resume message to secondary followed by rest request.
A detailed description follows for additional flows relating to the camera tunnel of an embodiment.
Motion Detection
a. camera sends openhome motion event to session server via xmpp.
b. session server posts motion event to gateway via passthrough API.
c. gateway updates the camera motion variable/point to reflect the event.
Capture Snapshot
a. gateway posts openhome snapshot command to session server with camera connected.
b. gateway sends command including xmpp user id to xmpp command Queue monitored by all session servers.
c.
session server with given xmpp user id consumes command and sends command to camera (command contains upload URL on gw webapp).
d. gateway starts internal timer to check if a response is received from camera (e.g., 5 sec wait window).
e. if broadcast RabbitMQ not ready, then gateway will use device/proxy-host value to know which session server to post command to.
f. session server sends command to camera (comprises upload URL on gw webapp).
g. Example XML body: <MediaUpload><id>1321896772660</id><snapShotImageType>JPEG</snapShotImageType><gateway_url>[gatewaysyncUrl]/gw/GatewayService/SPutJpg/s/[siteId]/[deviceIndex]/[varValue]/[varIndex]/[who]/[ts]/[HMM]/[passCheck]/</gateway_url><failure_url>[gatewaysyncUrl]/gw/GatewayService/SPutJpgError/s/[siteId]/[deviceIndex]/[varValue]/[varIndex]/[who]/[ts]/[HMM]/[passCheck]/</failure_url></MediaUpload>
h. session server receives response to sendRequestEvent from camera and posts response to gateway.
i. camera uploads to upload URL on gw webapp.
j. passCheck can be verified on server (based upon gateway secret); alternatively, the OpenHome spec calls for Digest Auth here.
k. endpoint responds with message digest password if the URI is expected, otherwise returns non-response.
l. gw webapp stores snapshot, logs history event.
m. event is posted to gateway for deltas.
Capture Clip
a. gateway posts openhome video clip capture command to session server with camera connected.
b. gateway sends command including xmpp user id to xmpp command Queue monitored by all session servers.
c. session server with given xmpp user id consumes command and sends command to camera (command comprises upload URL on gw webapp).
d. gateway starts internal timer to check if a response is received from camera (e.g., 5 sec wait window).
e. session server sends command to camera (comprises upload URL on gw webapp).
f. Example URI from session server to camera: /openhome/streaming/channels/1/video/upload
g. Example XML body: <MediaUpload><id>1321898092270</id><videoClipFormatType>MP4</videoClipFormatType><gateway_url>[gatewaysyncUrl]/gw/GatewayService/SPutMpeg/s/[siteId]/[deviceIndex]/[varValue]/[varIndex]/[who]/[ts]/[HMM]/[passCheck]/</gateway_url><failure_url>[gatewaysyncUrl]/gw/GatewayService/SPutMpegFailed/s/[siteId]/[deviceIndex]/[varValue]/[varIndex]/[who]/[ts]/[HMM]/[passCheck]/</failure_url></MediaUpload>
h. session server receives response to sendRequestEvent from camera and posts response to gateway.
i. camera uploads to upload URL on gw webapp.
j. passCheck can be verified on server (based upon gateway secret).
k. alternatively, spec calls for Digest Auth here.
l. endpoint responds with message digest password if the URI is expected, otherwise returns non-response.
m. gw webapp stores video clip, logs history event.
n. event is posted to gateway for deltas.
Live Video (Relay)
a. Upon user login to portal, portal creates a media relay tunnel by calling relayAPImanager create.
b. RelayAPImanager creates relays and sends ip-config-relay variable (which instructs gateway to create media tunnel) to gateway.
c. Upon receiving media tunnel create ip-config-relay command, gateway posts openhome media channel create command to session server with camera connected.
d. session server sends create media tunnel command to camera (comprises camera relay URL on relay server).
e. Example URI from session server to camera: /openhome/streaming/mediatunnel/create
f. Example XML body: <CreateMediaTunnel><sessionID>1</sessionID><gatewayURL>TBD</gatewayURL><failureURL>TBD</failureURL></CreateMediaTunnel>
g. GatewayURL is created from relay server, port, and sessionId info included within ip-config-relay variable.
h. camera creates a TLS tunnel to relay server via POST to <gatewayURL>.
i. When user initiates live video, portal determines user is remote and retrieves URL of Relay server from relayAPImanager.
j.
Upon receiving a user portal connection on the relay server (along with valid rtsp request), relay sends streaming command to camera, for example: rtsp://openhome/streaming/channels/1/rtsp.
k. Upon user portal logout, portal calls relayAPImanager to terminate media tunnel.
l. RelayAPImanager sends ip-config-relay variable to terminate media tunnel.
m. Gateway sends destroy media tunnel command to camera via XMPP.
Camera Firmware Update
a. Gateway checks camera firmware version; if below minimum version, gateway sends command to camera (via session server) to upgrade firmware (command: /openhome/system/updatefirmware).
b. Gateway checks firmware update status by polling: /openhome/system/updatefirmware/status.
c. Gateway informs portal of upgrade status.
d. Camera auto-reboots after firmware update and reconnects to session server.
Camera First-Contact Configuration
a. After a camera is added successfully and is connected to the session server for the first time, gateway performs first contact configuration as follows.
b. Check firmware version.
c. Configure settings by: download config file using /openhome/system/configurationData/configFile; or configure each category individually (configure video input channel settings: /openhome/system/video/inputs/channels; configure audio input channel settings (if any): /openhome/system/audio/inputs/channels; configure video streaming channel settings: /openhome/streaming/channels; configure motion detection settings, example: PUT /openhome/custom/motiondetection/pir/0; configure event trigger settings, example: PUT /openhome/custom/event).
d. Reboot camera (/openhome/system/factoryreset) if camera responds with reboot required.
The integrated system of an embodiment includes a social access platform that is configured with analytics processes that allow users to interact with physical buildings and devices using their social identity.
The social access platform is a component of one or more of the gateway and servers and/or distributed between the gateway and servers, but is not so limited. The integrated system including the social access platform captures video content and other media content as described herein and manages the video content experience of a user through use of a cloud-based access control and hosted video platform. The social access platform of an embodiment includes one or more APIs configured to authenticate and identify users and provide permissions to visitors as described herein, but is not so limited. Using social identities, applications of the integrated system including the social access platform connect virtual communities with physical premises. As such, the social access platform provides contextual security and access control that manages access and media content using social and business networks to interact with the premises. The integrated system provides a logical platform for video surveillance in particular. This platform is configured to manage and control video or image data collected by security and/or network devices in the premises in order to categorize, select and provide the collected data according to parameters or characteristics of the collected data, and to generate control signals or data in response to the collected data. Content management examples are described in detail. As described in detail herein, the integrated system locates, sorts and/or stores media content (e.g., video, image, etc.) according to premises security data and/or device data collected during a period. The security data includes data of motion events, door events, thermostat events, and other sensor or device event data, but is not so limited. Furthermore, as one example, the integrated system locates, sorts and/or stores media content (e.g., video, image, etc.) according to an object previously tagged by the account holder. 
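Locating media content by tagged object, as described above, can be pictured as a filter over stored clip metadata; the clip schema and tag names below are hypothetical:

```python
from datetime import datetime

# Hypothetical clip metadata store; tags are objects the account holder has tagged.
clips = [
    {"start": datetime(2014, 5, 1, 17, 58), "tags": {"John", "Amy"}},
    {"start": datetime(2014, 5, 1, 18, 10), "tags": {"Zeus"}},
    {"start": datetime(2014, 5, 2, 9, 0), "tags": {"John"}},
]

def footage_with(clip_store, *names):
    """Locate clips whose tagged objects include every requested name."""
    wanted = set(names)
    return [c for c in clip_store if wanted <= c["tags"]]
```

A request such as "I want to see footage of my children John and Amy" then reduces to `footage_with(clips, "John", "Amy")`.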
The system is configured so that the account holder can "tag" objects (e.g., person(s), pet(s), etc.) and subsequently locate, sort, and/or store the content using the tagged objects. For example, the account holder who has tagged objects (e.g., people, pets, etc.) requests video data, footage or images and/or other data that includes or represents those objects. Examples of the objects include, but are not limited to, children (e.g., I want to see any footage of my son John, I want to see footage of my children John and Amy, etc.) and pets (e.g., I want to see any footage of my cat Zeus, I want to see footage of my dogs Thelma and Louise, etc.). In another example, the integrated system locates, sorts and/or stores media content (e.g., video, image, etc.) using parameters or characteristics representing an event. The parameters or characteristics include, for example, a particular number of people present (e.g., I want to see video that includes more than five (5) people, etc.), one or more known persons, and one or more unknown persons (e.g., I want to see any video containing anyone not tagged for identification, etc.). In yet another example, the integrated system locates, sorts and/or stores media content (e.g., video, image, etc.) using parameters or characteristics of a "calendar event" (e.g., I want to see footage corresponding to a calendar event that I marked for "record"). The parameters or characteristics of the calendar event include but are not limited to a date of the event, data of the event (e.g., participant name(s), participant(s) address(es), event address, event start time, event end time, duration of event, event notes, etc.), and keywords associated with the event. The integrated system uses data of the parameters or characteristics described herein (e.g., premises security data, tagged object data, recognition data, event data, calendar data, etc.)
to identify and rate high interest video content over some period of time (e.g., 12-hour period, day, two days, etc.). High interest video of an embodiment can be rated by an account holder according to parameters, for example based on or related to event duration of one or more events of a type, quantity of an event type over the period, and one or more combinations of events, but is not so limited. The system then generates or creates a video digest based on the rating. The integrated system of an embodiment automatically generates and implements rules for recording content. The auto-rules of an embodiment can be based on or "learned" using any data available to the integrated system. The available data, for example, can reside in a device hosted in the integrated system and/or hosted on a remote network or premises. Once generated, the auto-rules can be activated or implemented to effect control of parameters based on the learned events. In an example, the integrated system develops and implements auto-rules based on behavior detected by sensor devices and/or network devices at the premises (e.g., if someone appears in a room when others are not in the room, then activate a camera, then have "learning prompts", etc.). As such, the integrated system "learns" based on occurrences represented by collected or available data (e.g., video content showing a light turned on at 5 PM every night (suggest a rule for controlling lights), the front door opening at approximately 6 PM every evening (suggest a rule for controlling alarm system state), etc.). In addition to the auto-rules, rules can be generated corresponding to sensor events (e.g., rule to activate camera when water sensor is triggered, rule to activate camera recording when motion is detected in kitchen, rule to activate camera when front door opens, etc.). The integrated system of an embodiment identifies a beginning and/or an end of a media content or data segment (e.g., video, image, etc.)
and uploads the identified segment to one or more applications configured to post the identified segment at social media sites or platforms. The integrated system uses data of voice profile tags and/or audio profile tags (e.g., cry, yell, laugh, etc.) to sort the media content (e.g., video, image, etc.). As described above, computer networks suitable for use with the embodiments described herein include local area networks (LAN), wide area networks (WAN), Internet, or other connection services and network variations such as the world wide web, the public internet, a private internet, a private computer network, a public network, a mobile network, a cellular network, a value-added network, and the like. Computing devices coupled or connected to the network may be any microprocessor controlled device that permits access to the network, including terminal devices, such as personal computers, workstations, servers, mini computers, main-frame computers, laptop computers, mobile computers, palm top computers, hand held computers, mobile phones, TV set-top boxes, or combinations thereof. The computer network may include one or more LANs, WANs, Internets, and computers. The computers may serve as servers, clients, or a combination thereof. The integrated system can be a component of a single system, multiple systems, and/or geographically separate systems. The integrated system can also be a subcomponent or subsystem of a single system, multiple systems, and/or geographically separate systems. The integrated system can be coupled to one or more other components (not shown) of a host system or a system coupled to the host system. One or more components of the integrated system and/or a corresponding system or application to which the integrated system is coupled or connected includes and/or runs under and/or in association with a processing system.
The processing system includes any collection of processor-based devices or computing devices operating together, or components of processing systems or devices, as is known in the art. For example, the processing system can include one or more of a portable computer, portable communication device operating in a communication network, and/or a network server. The portable computer can be any of a number and/or combination of devices selected from among personal computers, personal digital assistants, portable computing devices, and portable communication devices, but is not so limited. The processing system can include components within a larger computer system. The processing system of an embodiment includes at least one processor and at least one memory device or subsystem. The processing system can also include or be coupled to at least one database. The term “processor” as generally used herein refers to any logic processing unit, such as one or more central processing units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASIC), etc. The processor and memory can be monolithically integrated onto a single chip, distributed among a number of chips or components, and/or provided by some combination of algorithms. The methods described herein can be implemented in one or more of software algorithm(s), programs, firmware, hardware, components, circuitry, in any combination. The components of any system that includes the integrated system can be located together or in separate locations. Communication paths couple the components and include any medium for communicating or transferring files among the components. The communication paths include wireless connections, wired connections, and hybrid wireless/wired connections. 
The communication paths also include couplings or connections to networks including local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), proprietary networks, interoffice or backend networks, and the Internet. Furthermore, the communication paths include removable fixed mediums like floppy disks, hard disk drives, and CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB) connections, RS-232 connections, telephone lines, buses, and electronic mail messages. Embodiments include a system comprising a gateway that includes a processor coupled to sensors and/or network devices installed at a premises. The system includes a remote server coupled to the gateway and located remote to the premises. The gateway and/or the remote server includes data of the sensors and/or network devices. The system includes an application running on at least one of the gateway and the remote server. The application controls events corresponding to the data and/or the premises in response to content of the data. The system includes a client interface coupled to the gateway and/or the remote server. The client interface presents the data to client devices. Embodiments described herein include a system comprising: a gateway that includes a processor coupled to at least one of sensors and network devices installed at a premises; a remote server coupled to the gateway and located remote to the premises, wherein at least one of the gateway and the remote server includes data of the at least one of sensors and network devices; an application running on at least one of the gateway and the remote server, wherein the application controls events corresponding to at least one of the data and the premises in response to content of the data; and a client interface coupled to at least one of the gateway and the remote server, wherein the client interface presents the data to client devices.
The application sorts the data according to at least one characteristic of the data during a period. The application stores the data according to at least one characteristic of the data during a period. The application delivers the data according to at least one characteristic of the data during a period. The application at least one of locates, sorts, stores, and delivers the data according to at least one characteristic of the data during a period. The at least one characteristic comprises a tagged object in the content. The tagged object comprises at least one of at least one person and at least one pet. The at least one person comprises at least one of a known person and an unknown person. The at least one characteristic comprises a plurality of objects in the content. The at least one characteristic comprises a calendar event. The at least one characteristic comprises at least one of a date, data representing the calendar event, participant names, participant addresses, event address, event start time, event end time, duration of event, event notes, and keywords associated with the event. The application identifies and rates high interest data over a time period according to at least one characteristic of the data during a period. The application rates the high interest data based on event duration of one or more events of a type, quantity of an event type over the period, and one or more combinations of events. The application generates a data digest according to the rating. The application automatically generates and implements rules for future data recording in response to the data. The rules are based on behavior detected by at least one of sensors and network devices. The application learns based on events represented by the data. 
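The learning behavior recapped above, in which the application suggests rules from recurring events, can be pictured as frequency counting over observed event data; the event tuple shape and recurrence threshold below are assumptions for illustration, not the system's actual learning method:

```python
from collections import Counter

def suggest_rules(events, min_occurrences=5):
    """events: (device, action, hour_of_day) observations collected over a period.
    Suggest a rule for any device/action that recurs at the same hour often enough."""
    counts = Counter(events)
    return [f"At {hour}:00, {action} {device}"
            for (device, action, hour), n in counts.items()
            if n >= min_occurrences]
```

For instance, a light observed turning on at 5 PM every night for a week would cross the threshold and yield a suggested lighting rule, which the user could then confirm via a learning prompt.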
Aspects of the integrated system and corresponding systems and methods described herein may be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits (ASICs). Some other possibilities for implementing aspects of the integrated system and corresponding systems and methods include: microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM)), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the integrated system and corresponding systems and methods may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies like complementary metal-oxide semiconductor (CMOS), bipolar technologies like emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc. It should be noted that any system, method, and/or other components disclosed herein may be described using computer aided design tools and expressed (or represented), as data and/or instructions embodied in various computer-readable media, in terms of their behavioral, register transfer, logic component, transistor, layout geometries, and/or other characteristics. 
Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media) and carrier waves that may be used to transfer such formatted data and/or instructions through wireless, optical, or wired signaling media or any combination thereof. Examples of transfers of such formatted data and/or instructions by carrier waves include, but are not limited to, transfers (uploads, downloads, e-mail, etc.) over the Internet and/or other computer networks via one or more data transfer protocols (e.g., HTTP, FTP, SMTP, etc.). When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described components may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list. 
The above description of embodiments of the integrated system and corresponding systems and methods is not intended to be exhaustive or to limit the systems and methods to the precise forms disclosed. While specific embodiments of, and examples for, the integrated system and corresponding systems and methods are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the systems and methods, as those skilled in the relevant art will recognize. The teachings of the integrated system and corresponding systems and methods provided herein can be applied to other systems and methods, not only for the systems and methods described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the integrated system and corresponding systems and methods in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the integrated system and corresponding systems and methods to the specific embodiments disclosed in the specification and the claims, but should be construed to include all systems that operate under the claims. Accordingly, the integrated system and corresponding systems and methods is not limited by the disclosure, but instead the scope is to be determined entirely by the claims. While certain aspects of the integrated system and corresponding systems and methods are presented below in certain claim forms, the inventors contemplate the various aspects of the integrated system and corresponding systems and methods in any number of claim forms. Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the integrated system and corresponding systems and methods.
DETAILED DESCRIPTION Embodiments can include devices, articles of manufacture, and processes as may be employed in support of enhanced intermediate server services taught herein. Embodiments can regard enhanced intermediate transaction server(s). For example, additional services can be provided by one or more Intermediary Transaction Processing Server(s) (ITPS) whereby a client computing system can obtain additional and/or expedited services from a third-party computing system, via one or more ITPS, without necessarily needing an update or other modification to the client computing system before obtaining the additional and/or expedited services from the third-party computing system. Advantages of such embodiments may include reducing or eliminating the need to update or modify a client computing system or other computing system, to communicate to or with other computing systems while still being able to receive the additional and/or expedited services associated with the third-party computing system even though the client computing system or other computing system is incompatible with the third-party computing system or otherwise not adapted to directly request and/or receive the additional and/or expedited services from the third-party computing system. Client computing systems may include individual computing systems or groups of servers or other individual or grouped computing systems configured to receive and track customer orders and to receive payments for services or goods provided by the client, or its proxy, to a client customer. Client computing systems may be configured with communication protocols and/or syntax modules suitable to interact with other servers or computing devices. 
These interactions can include sending and receiving signals and instructions indicative of certain customer orders and certain payments including, for example, particulars such as order number, service requested, goods requested, quoted price, travel itinerary, personal name record, baggage status, specialty conditions, meal status, payment status, group member names, and certain payment methods. Embodiments can provide for additional services, such as batch file parsing, breakdown, and/or quantification, being provided to a client computing device without requiring that the client computing device be programmed or otherwise compatible to parse a batch file and extract individual data elements for certain transactions or other groupings carried by the batch file. Embodiments may provide intermediary translation services, proxy services, financing services, and combinations thereof to client computing systems. These translation, proxy, financing, or other services can enable the client computing systems to receive additional services or be apprised of additional services from other entities via the client's existing client computing platforms with little to no additional configuration directed to communicating to and/or from the additional other third-party computing systems or other computing systems for the enhanced and/or expedited services. Client computing system resources, such as memory, processing speed, and overall system burden, may be decreased, maintained, or only slightly increased even though additional services, which may not otherwise be available to the client computing system under its present configuration, are being provided or being apprised of through the use of one or more ITPS in embodiments. Embodiments may comprise a third-party computing system that may run a Billing and Settlement Plan (BSP) (e.g., a transaction platform) that manages the sales, reporting, and remittance procedures between travel agencies and airlines. 
An ITPS provider may provide a Billing Settlement Plan (BSP) batch file handling solution that comprises a server or other computing system with an ITPS provider's existing connectivity to a plurality of third-party settlement systems, e.g., third-party Billing Settlement Plan (BSP) markets or Cargo Account Settlement Systems (CASS), to deliver the data and files necessary to pay airlines, travel agents or other clients for services such as passenger tickets or passenger services, paid with a BSP cash solution in advance (approximately two days or less) of existing timelines (e.g., 7-30 days). Thus, an ITPS may intervene with existing transaction files of a client computing system (single transaction or batch), harvest or otherwise obtain data from these files (single transaction or batch), and provide such data to another third-party computing system, for example a gateway to an SSP or a travel agent computing system, such that the client computing system or a travel agent computing system or other entity computing system may receive SSP services or other benefits without necessarily being specifically programmed or otherwise configured to specifically request these services from the SSP or other computing system. For example, embodiments may involve a traveler computing system, a travel agent computing system, a third-party billing and settlement plan computing system, an ITPS, a settlement service provider (SSP) computing system, and an airline computing system. The travel agent computing system may obtain, sell, and/or set up numerous travel itineraries during a day or other period and sell tickets for these itineraries and schedule other attributes of the itineraries, e.g., meals, bags, seat locations, etc. The travel agent may collect funds for these itineraries and periodically generate batch files at the end of the day or the conclusion of some other period. 
This batch file can show the total payments collected and/or have more specific details with regards to each booked itinerary. The batch file can be received by the airline computing system as well as by an ITPS or other computing system. The ITPS can parse the batch file and fill in missing data elements by inferring from other available data elements in the batch file or otherwise available to the ITPS. The ITPS can also parse individual itineraries for reporting to the airline computing system such that the airline computing system can receive and account for these individual itineraries without having to parse the previously received batch file. The ITPS may send particulars from the batch file to a gateway of a settlement service provider (SSP) computing system. The gateway may be another entity computing system providing related services on behalf of the ITPS and/or the SSP. The SSP may then provide funding to an airline account or travel agency account, or to another entity's benefit, on a T+2 (time plus two days) or other expedited basis without the airline computing system or travel agent computer system or other entity computing system making a request from the SSP for the funding. In embodiments, the SSP may provide funding particulars to the airline computing system and then be subsequently reimbursed after providing the funding to a financial institution acting on behalf of an airline. The SSP computing system and/or a related account may be reimbursed by third-party billing and settlement plan computing systems. The SSP may facilitate payment in different amounts to accounts of the airline or travel agency or other service providers, for example, paying 90% at T+2 and the remaining 10% at T+30. 
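The split funding schedule described above (e.g., 90% at T+2 and the remainder at T+30) reduces to simple date and amount arithmetic. The sketch below assumes a two-tranche plan with configurable percentages and day offsets; it is an illustration, not the specification's required behavior.

```python
# Sketch of the two-tranche settlement schedule described above:
# a large first tranche on an expedited T+2 basis, the remainder at T+30.
# Tranche percentage and day offsets are configurable assumptions.

from datetime import date, timedelta

def settlement_schedule(total_cents, sale_date, first_pct=0.90,
                        first_offset_days=2, second_offset_days=30):
    """Return [(payment_date, amount_cents), ...] covering the full total."""
    first = round(total_cents * first_pct)
    second = total_cents - first   # remainder, so the tranches always sum to total
    return [
        (sale_date + timedelta(days=first_offset_days), first),
        (sale_date + timedelta(days=second_offset_days), second),
    ]
```

Computing the second tranche as a remainder (rather than a second percentage) avoids rounding drift between the tranches and the collected total.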
Accordingly, in some embodiments, a third-party may provide a billing and settlement plan computing system that manages portions or all of the sales, reporting, and remittance procedures between customers and clients, for example, between travel agencies or freight forwarders and airlines. An ITPS in embodiments may provide intermediary services for providing a client computing system with additional services not offered by a computing platform to which the client is configured to communicate, but, instead, offered by another party without necessitating a full or any reprogramming of the client computing system in order to receive the additional services offered by a third-party computing system. For example, the ITPS provider may provide expedited payment options or more detailed payment processing, which can result in enhanced payment services not previously offered by a third-party platform to a client or a plurality of clients. In embodiments, an ITPS may offer other services to a client computing device as well. These may include simplified capture/settlement transaction services during cash or credit card payment processing flow. In embodiments, an ITPS may be configured to communicate to and from various third-party server computing systems in order to provide the unique or enhanced payment options or other services to the client computing system or a plurality of client computing systems or a travel agent computing system or a plurality of travel agent computing systems or other entity computing systems. In embodiments, a third-party computing system may itself be configured to communicate with a fourth- or fifth-party computing system to supplement the services provided by the third-party computing system and being received by the client without the client being necessarily configured to specifically interact with the third-, fourth-, or fifth-party computing systems for the specific enhanced services. 
In embodiments, an ITPS may be configured to query for, receive, and/or send certain batch files from a client computing system or another computing system. Also, an ITPS may be configured to parse or otherwise use these batch files for purposes of providing services to a client computing system that the client computing system is not necessarily presently configured to obtain. In embodiments, an ITPS may develop a secure connection to deliver necessary data and files through a gateway to a ‘liquidity platform’. The liquidity platform may consist of a separate server and a Special Purpose Vehicle (SPV), e.g., a fourth-party computing system designated to secure liquid assets that can be used to advance settlement funds to airlines or other clients. An ITPS can also be used for other use cases, such as billing and settlement between freight forwarders and airlines rather than between travelers and travel agents and airlines. Freight forwarders, for example, may be managed by an ITPS provider's Cargo Account Settlement Systems (CASS), as well as credit card settlement services, and may employ the techniques and computing systems taught herein. As such, various processes, modules, and systems may be employed by an ITPS. Embodiments may include an intermediary transaction processing server (ITPS) comprising a processor and a memory storing instructions. 
These stored instructions, when executed by the processor, may cause the processor to perform processes such as receiving, via a network, a copy of a third-party data batch hand-off-tape (HOT) file, the batch HOT file containing travel transaction data for a plurality of travel transactions, the travel transaction data comprising travel itinerary information and received cash payment information; identifying received cash payment information from the received copy of the batch HOT file; and sending identified received cash payment information from the received copy of the batch HOT file to a settlement service provider (SSP) gateway. The gateway may provide the received cash payment information to an SSP computing system, and the SSP computing system may use the received cash payment information to provide cash funding to an airline computing system without the airline computing system having been configured to request cash funding from the SSP computing system for travel transactions from the batch HOT file. In some embodiments, the processor of an ITPS may identify travel itinerary data and determine if any data are missing in a batch HOT file and, if data are determined to be missing, infer missing data for a specific travel itinerary using information parsed from the received batch HOT file (or previously received batch HOT files) and/or identify a single travel itinerary from the plurality of travel transactions. In some embodiments, a batch HOT file may be received from an airline computing system. In some embodiments, the processor of an ITPS may parse a batch HOT file to identify individual cash transactions from the batch HOT file and/or perform a series of data integrity checks to verify that certain travel itinerary data are included in the batch HOT file. 
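The ITPS steps above — receive a batch file, pick out the cash-paid transactions, and forward them to an SSP gateway — can be sketched as follows. The dict-based record layout, the "CA" form-of-payment code, and the `gateway.submit` call are assumed simplifications for illustration; real IATA HOT files are fixed-format records.

```python
# Illustrative sketch of the ITPS flow above: identify cash-paid
# transactions in a received batch file and hand them to an SSP gateway.
# The record layout and gateway API are assumptions, not the real formats.

def identify_cash_transactions(batch_records):
    """Select records whose form of payment indicates cash ('CA')."""
    return [r for r in batch_records if r.get("form_of_payment") == "CA"]

def send_to_ssp_gateway(cash_records, gateway):
    """Forward each identified cash record to the gateway; return count sent."""
    for record in cash_records:
        gateway.submit(record)   # hypothetical gateway API for this sketch
    return len(cash_records)

class RecordingGateway:
    """Stand-in for the SSP gateway: records what was submitted."""
    def __init__(self):
        self.received = []

    def submit(self, record):
        self.received.append(record)
```

The airline computing system never appears in this flow: the ITPS selects and forwards the cash records on its behalf, which is what lets the SSP fund the airline without the airline requesting it.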
In some embodiments, the processor of an ITPS may individually send identified individual cash transactions from the batch HOT file to the SSP computing system for use by the SSP computing system and/or make available for display, at a different location, information regarding identified individual cash transactions from the parsed batch HOT file. In some embodiments, a copy of the batch HOT file may be received on a daily basis or some other periodic or nonperiodic timetable. Embodiments may include an intermediary transaction processing server (ITPS) comprising a processor and a memory storing instructions. These stored instructions, when executed by the processor, may cause the processor to perform processes such as receiving, via a network, a copy of a third-party data batch hand-off-tape (HOT) file, the batch HOT file containing a plurality of passenger name records (PNRs), each passenger name record (PNR) comprising travel itinerary information and travel payment information; identifying cash transaction data from the travel payment information from the received copy of the batch HOT file; and sending identified cash transaction data from the received copy of the batch HOT file to a settlement service provider (SSP) gateway, the gateway providing the cash transaction data elements to an SSP computing system, the SSP computing system using the received identified cash transaction data elements to provide cash funding to an airline account without an airline computing system having been configured to request cash funding from the SSP computing system for travel itineraries from the batch HOT file. In some embodiments, the processor of an ITPS may identify itinerary data and determine if any data are missing in the batch HOT file and, if data are determined to be missing, infer missing data using information parsed from the received batch HOT file. 
Also, a processor of an ITPS may identify a single travel itinerary from a passenger name record (PNR) of the plurality of passenger name records (PNRs) and may receive the batch HOT file containing the PNR from the airline computing system. This received batch HOT file may also be parsed to identify individual cash transactions from the batch HOT file. Still further, a processor of an ITPS may be configured to identify a single travel itinerary from the plurality of passenger name records and perform a data integrity check of a single travel itinerary to verify that meal data, bag data, or other ancillary data are included in the PNR. Similarly, the processor may be configured to process a data integrity check of a PNR and, if missing data are identified, fill in the missing data in the PNR using data inferred from a batch HOT file. In embodiments, a processor of an ITPS may make available for display, at a location apart from the location of the ITPS, information regarding identified individual cash transactions from the parsed batch HOT file. Embodiments may comprise processes for handling a computer batch file. 
These processes can comprise receiving a third-party data batch file, the batch file comprising a plurality of passenger name records (PNRs), the PNRs of the batch file each comprising name, passage date and ticket number; searching for one or more missing data elements from a first passenger name record in the batch file of PNRs; if a missing data element is found in the first passenger name record, entering inferred data into the missing data element, the data inferred from the HOT file or a previous HOT file or from another source; parsing a second passenger name record from the batch file of PNRs; sending the parsed second passenger name record to a gateway for a settlement service provider computing system; and the settlement service provider computing system providing funding for the batch file to the airline server system, wherein the airline server system was not configured to request funding associated with the batch file from the settlement service provider computing system prior to receipt of the funding. Embodiments may exist wherein a settlement service provider computing system provides information linking funding received by an airline server system with individual PNRs from the batch file and/or wherein a received batch file is sent by the airline server system and/or the settlement service provider computing system provides funding for the batch file to the airline or travel agency or freight forwarder server system in a timeframe mimicking a settlement cycle of a credit card transaction. Embodiments may exist where a received batch file is a hand off tape (HOT) file previously received by an airline server system and subsequently sent by the airline server system to an ITPS after its receipt. Embodiments may enable a client computing system to obtain beneficial transactions with existing customer types via an ITPS and the use of third-party transaction servers. 
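The missing-element step above — scan a PNR for absent fields and fill them with values inferred from the rest of the batch — can be sketched as below. The field list and the "most common value elsewhere in the batch" inference rule are illustrative assumptions; the description leaves the inference method open.

```python
# Sketch of the missing-data check described above: fields absent from a
# PNR are filled with the most common value seen elsewhere in the batch.
# The required-field list and inference rule are illustrative assumptions.

from collections import Counter

REQUIRED_FIELDS = ("name", "passage_date", "ticket_number", "currency")

def find_missing(pnr):
    """List the required fields absent or empty in a PNR."""
    return [f for f in REQUIRED_FIELDS if not pnr.get(f)]

def infer_missing(pnr, batch):
    """Fill each missing field with its most common value in the batch."""
    filled = dict(pnr)
    for field in find_missing(pnr):
        values = [r[field] for r in batch if r.get(field)]
        if values:
            filled[field] = Counter(values).most_common(1)[0][0]
    return filled
```

Fields unique to a passenger (a name or ticket number) would not be sensibly inferable this way; the rule suits batch-wide values such as currency or passage date.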
In embodiments, an ITPS server may communicate back and forth with a third-party server or other computing device such that the third-party server or other computing device can communicate with and receive services from a fourth and fifth party computing system in order to provide the services of the fourth and fifth parties to the client without the client computing device being necessarily configured to communicate directly with the third, fourth or fifth computing devices for purposes of obtaining the services offered by the fourth and/or fifth parties. In embodiments, an ITPS server may query, send, receive, or parse files on behalf of a client and provide selected data to a third-party such that the third-party computing system may trigger handling of the requisite services from the fourth or fifth party for the client. When clients are airline computing devices, ITPS embodiments may comprise compliance modules in a computing system in order to receive, store, parse, and otherwise process batch HOT files from an airline computing system for subsequent processing through a gateway and by settlement service providers and subsequent funding service providers. In embodiments, an ITPS may be configured to onboard and manage client computing systems intended to use a third-party service server or other computing device. An ITPS may be configured such that client IATA CSI (Credit Transaction Invoicing File) and/or IATA HOT (Hand off Tape) files may be handled by an ITPS. A third-party service server or other computing device may be sent some or all relevant transaction information from the CSI or HOT or other batch files in order to create settlement files, to pull funds from fourth or fifth parties (e.g., card issuers or cash financers) and fund the client (e.g., airline, travel agency, freight forwarder) for those associated transactions. 
In embodiments, credit card transactions may employ IATA CSI files that may contain some or all of the credit card transaction information and their associated authorization information. IATA HOT files may employ both cash and credit card transaction information and the processing by an ITPS may vary depending on which file a client may provide. Embodiments may also be paired such that only cash or credit transaction information is handled by an ITPS on behalf of a client computing system. In embodiments, CSI/HOT files may contain information on *ALREADY* executed transactions that the client (e.g., airlines (or their travel agents)) has taken as payment for tickets, upgrades, refunds, other services or goods, etc. but transactions that have *NOT* yet been settled by the card schemes provided by the fourth-party servers. As such, an ITPS may be configured to parse the CSI/HOT files of unsettled transactions, process, and store the information of these files and then “replay” them into a third-party gateway so that the third-party can create settlement files that go directly to the fourth-party card schemes or the fifth-party cash schemes to enable settlement of these transactions. In so doing, the client computing systems and the ITPS need not handle all details of a settlement process with a fourth-party service provider and/or a fifth party service provider. Still further, embodiments may also comprise solely having CSI files sent for settlement to card schemes while HOT files may be used for cash transactions. Also, in some embodiments, the parsed transaction data may solely be sent to the gateway by an ITPS or other entity computer system so that a settlement service provider computing system can be apprised of what to prefund an airline, travel agency, or freight forwarder. In embodiments, a CSI file may contain rows listed on a per ticket basis and not on a transaction itself basis. 
As such, an ITPS may employ a Form of Payment Identifier field and Approval Code to group tickets in an original transaction. One ticket in a transaction may be deemed a so-called primary ticket, and that primary ticket may be identified based on records that follow it (e.g., authentication records, transaction information, flight details, etc.). Such records may also be prescribed as information on all of the tickets. An ITPS may be configured to pass those transaction files to a third-party computer server by employing ‘dummy’ payment call groups, which can gather all the tickets in one transaction. Comparatively, in some embodiments, an ITPS may configure transaction files such that settlement and refund calls are per ticket and whereby ticket amounts should preferably match the resulting transaction amount. An ITPS may be configured to review and/or supplement one or more of the following IATA standardized file configurations when processing batch files or conducting other processes of embodiments: Sales; TKTT (Electronic Ticketing Sale, i.e. regular tickets); EMDA (ancillaries associated with the existing passenger tickets, e.g., luggage, upgrades, meals); EMDS (standalone ancillaries, not associated with the existing passenger tickets, e.g., luggage, upgrades, meals); TASF (Travel Agency Service Fees, which may come associated with a ticket, e.g., where the use case may be a travel agency charging a customer additionally on a cost of a ticket and the airline would be responsible for receiving the funds and refunding the agency on that amount); RFNC (refunds); RFND (refunds); Referenced Refunds (refunds on the transactions that were previously settled with the third-party. Here, an ITPS may query or search for the original ticket number and provide a match of search results to the original settle transaction); and Non-Referenced Refund (refunds on the transactions previously settled with a different acquirer). 
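The per-ticket grouping described above can be sketched as follows: rows sharing a Form of Payment Identifier and an Approval Code are gathered into one original transaction, the first row in each group is treated as the primary ticket, and the ticket amounts are summed so they can be checked against the resulting transaction amount. The field names are illustrative assumptions about the CSI row layout.

```python
# Sketch of the grouping described above: CSI rows are listed per ticket,
# so tickets sharing a Form of Payment Identifier and Approval Code are
# gathered back into one original transaction. Field names are assumed.

def group_into_transactions(rows):
    """Map (fop_id, approval_code) -> list of ticket rows, preserving order."""
    groups = {}
    for row in rows:
        key = (row["fop_id"], row["approval_code"])
        groups.setdefault(key, []).append(row)
    return groups

def primary_ticket(group):
    """The first ticket in a group is treated as the primary ticket."""
    return group[0]

def transaction_amount(group):
    """Ticket amounts should sum to the original transaction amount."""
    return sum(row["amount_cents"] for row in group)
```

A 'dummy' payment call group, as mentioned above, would then carry each such group to the third-party server as a single transaction.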
As to a Non-Referenced Refund, in embodiments, an ITPS may share credit card details in order to have the mentioned amount refunded directly to the card. In this instance, a third-party server or other computing device may create a separate merchantSiteId field or other field ID and install a new API call which may be used only for these transactions. The merchantSiteId field or other field ID and new API call may be employed in order to maintain accuracy and/or security of transactions while also keeping the regular transactions as they are. ITPSs may be configured with various APIs providing various discrete functionality. An exemplary API of an ITPS may regard client onboarding, which may allow a client server to communicate with an ITPS via a Secure File Transfer Protocol (SFTP) and share the client's CSI files. Other APIs of the ITPS may comprise creating a dedicated folder for the client itself, which can separate the files between the clients, and allowing an ITPS to store any necessary third-party credentials. The third-party credentials may be employed as dedicated sets of client credentials for a third-party API. An ITPS may also have APIs that support clients with multiple Billing Settlement Plans (BSP), e.g., a BSP for each country in which a client airline computing system has sales; here, one or more APIs may support multiple sending locations for a client airline computing system and/or multiple third-party merchant identification numbers. Other ITPS APIs may support tracking, connecting and validating files received by an ITPS as well as adding, editing or deleting client configurations within and/or for client computing systems. These editing and deleting APIs may function via an SFTP communication link with a client airline computing system and/or another computing system via a network connection. In embodiments, an ITPS may be hosted as a cloud-based server network. 
Such a server network may: use SFTP server credentials to receive batch BSP/CSI files from clients/airlines; use S3 buckets to store the received files; use DynamoDB functionality to store Client/Airline information, all transactions and communication with one or more third-party computing system (e.g., use a request/response table); use Lambda functionality, which may be triggered at a dedicated cutoff time, in order to parse the received files and store all transaction data into DynamoDB; and/or use Lambda functionality that may be triggered at a previously agreed time with third-party computing systems, to replay data as dummy transactions (e.g., Authorization, Sale/Refund) to a third-party computing system payment gateway. FIG.1shows an exemplary process flow using an Intermediary Transaction Processing Server (ITPS) as may be employed in some embodiments. At step1a customer10, such as a traveler or other person seeking services via an agent, may communicate with the travel agent20via telephone or via computer interface or through some other communication method. The customer10may provide purchase, change, refund, or other instructions21to the travel agent20to purchase, change, or refund services offered by an airline60. The purchase services may include purchasing tickets for travel, purchasing freight services, purchasing frequent flyer miles, purchasing baggage transport, purchasing seat assignments, purchasing boarding status, purchasing flight cabin status, purchasing in-flight meals, as well as other services offered by the airline60. The changes may include changes to any of the above purchases as well as other services offered by the airline60. Likewise, refund services may include refunds to any of the above purchases as well as other services offered by the airline60. The travel agent20may create an itinerary for the requested services, assign a personal name record for the itinerary, and accept payment from the customer10for the requested services. 
The payment may be collected via various methods including cash, credit, debit, PayPal, etc. The travel agent 20 may also verify ticket availability and ticket particulars via a Global Distribution System (GDS) computing system. This GDS may in turn verify specific ticket information and make ticket purchases from an airline 60 computing system. As particulars become known, the airline 60 computing system may populate fields in the itinerary. These fields can include the number of passengers, the number of bags, seat assignments, meal selections, seat preferences, payment status, payment amount, passenger names, ticket numbers, flight leg details, frequent flyer particulars, and other information associated with the itinerary. The GDS computing system may, in real time, issue the tickets to the travel agent 20 as the itinerary is being prepared and/or populated. On a periodic basis (daily, twice daily, etc.), the GDS computing system may send batch files to a Billing and Settlement Plan (BSP) computing system 30. These batch files may contain all of the transactions for the period associated with a specific airline. The transactions may comprise the itineraries created by one or more travel agents for the period of the batch file. The BSP 30, with an associated Data Processing Center (DPC), may cull and gather received batch files for an airline and create a second batch file for receipt by the airline computing system 60 and the ITPS 40. The ITPS may receive this second batch file and extract payment information as well as infer missing information to be filled in. The inferred information may be derived from various sources including a batch file, whether presently or previously received. The ITPS 40 may send instructions, the second batch file, a revised second batch file, and/or a third batch file (see 4B) to a gateway of a Settlement Service Provider (SSP) 50.
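The inference step mentioned above — filling missing itinerary information from presently or previously received batch files — can be sketched as a PNR-keyed merge. The field names and the PNR-to-fields mapping are illustrative assumptions; the text leaves the concrete field set open.

```python
def infer_missing_fields(itinerary, prior_batches):
    """Fill empty or missing itinerary fields from other batch files.

    `prior_batches` is a list of mappings of PNR -> field dict drawn
    from current or earlier batch files.  Existing (non-empty) data
    in the itinerary is never overwritten; only gaps are filled.
    """
    filled = dict(itinerary)
    for batch in prior_batches:
        prior = batch.get(filled.get("pnr"))
        if not prior:
            continue
        for field, value in prior.items():
            if filled.get(field) in (None, ""):
                filled[field] = value
    return filled
```

Because only empty fields are filled, a correction in a later batch never silently overrides data the agent already supplied.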
The SSP may then receive and process this received batch file and provide payment to the airline computing system 60 on a Time+2 (i.e., ticket sale day plus two days) schedule 51 for a large percentage (e.g., 60%, 70%, 85%, 90%, etc.) of the total agent collections identified in the agent batch file. The SSP may also process the received batch file to provide payment to the airline computing system 60 for the remainder of the total agent collections identified in the agent batch file on a Time+30 (i.e., ticket sale day plus thirty days) schedule 52. On or around this second payment and processing by the SSP, the BSP 30 may send full payment of the agent collections in the agent batch file to the SSP computing system 50, see FIG. 1 at 31. The ITPS's receipt and processing of the BSP batch file as taught herein can enable the airline computing system 60 to receive payments as taught herein without necessarily being programmed to make requests or provide instructions to the SSP ahead of receiving payment or transactional information from the SSP. In so doing, an airline computing system need not necessarily be updated or programmed to make calls, provide instructions, provide queries, or perform other actions; instead, the ITPS can perform needed queries, processes, and/or interactions with other computing systems to facilitate SSP transactions with the airline computing systems. Similarly, an airline computing system 60 need not be burdened with protocols, tasks, calculations, or processes to make calls, provide instructions, provide queries, or perform other actions in order to facilitate SSP transactions with the airline computing systems as described herein. Thus, airline computing system maintenance, storage demands, storage quantity, and/or system performance may each or all be enhanced or otherwise improved in some embodiments.
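The two-stage payout described above splits the agent collections between a Time+2 prefund and a Time+30 remainder. A short helper makes the arithmetic concrete; the 85% default rate is one of the illustrative percentages given in the text, not a fixed value.

```python
def settlement_schedule(total_collections, prefund_rate=0.85):
    """Split agent collections into a T+2 prefund and a T+30 remainder.

    `prefund_rate` is illustrative -- the text gives 60%, 70%, 85%,
    and 90% as example prefund percentages.
    """
    t_plus_2 = round(total_collections * prefund_rate, 2)
    t_plus_30 = round(total_collections - t_plus_2, 2)
    return {"T+2": t_plus_2, "T+30": t_plus_30}
```

For example, on $1,000.00 of agent collections at an 85% prefund rate, the SSP would pay $850.00 on the T+2 schedule and $150.00 on the T+30 schedule.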
In some embodiments, the ITPS 40 may send instructions, a file, the second batch file, a revised second batch file, and/or a third batch file (see 4A) to the travel agent 20 to provide status of batch file handling or other processing by the settlement service provider 50, its gateway, and/or the airline computing system. In some instances, status may be provided by the ITPS 40 within a set time period, such as two days, seven days, ten days, thirty days, sixty days, etc., whereby the travel agent computing system need not provide instructions to fund previously booked flights or other itinerary costs that will not or cannot be satisfied by the airline. For example, the ITPS 40 can provide one or more files, instructions, or indicators to the travel agent computing system to temporarily reduce funding obligations or risk exposure for previously booked flights or other travel itinerary items. FIG. 2 shows computer systems (10, 20, 22, 60, 30, 50, 40, 270, and 280), a communication network 290, communication paths (295, 296), and an exemplary batch file 200 as may be employed in some embodiments. The batch file 200 may be exemplary of information contained in the batch files described above in FIG. 1. The batch file 200 is shown with a plurality of itineraries 1-N, where each itinerary may be identified with a Personal Name Record (PNR 1-N) and each itinerary may contain various data fields in which data elements may be entered. These data fields may include passenger name, flight information, payment status, baggage status, meal status, etc. These data fields may contain data elements such as the actual flight number, the actual passenger name, the actual meal ordered, and other entered data, as well as placeholders to be finished later, and/or may be empty or otherwise incomplete.
As noted above, in embodiments, an ITPS 40 may seek to complete missing or incomplete data elements in data fields by inferring missing data elements from other areas of the itinerary with the missing or incomplete data elements, as well as from other areas of the batch file, another batch file, or another data element source or repository. By the ITPS completing missing or incomplete data elements, subsequent recipients of the batch file or information from the batch file need not research the missing data and may handle an itinerary or other portion of the batch file as if the added or corrected data had been present from the travel agent computing system 20. This correction, addition, and/or change can serve to promote more efficient handling of incomplete or error-filled itineraries after the agent computing system 20 has started the ticketing process. For example, an airline computing system 60 or a BSP/DPC computing system 30 may process the received itinerary using fewer steps or on a more efficient processing track when compared to processing the same received itinerary had it not been previously corrected, added to, or changed by an ITPS as described above. As can be seen in FIG. 2, nine computing systems are shown. These computing systems may be single computing devices, such as a PC or server, as well as groups of PCs or servers. Communications between the various computing systems may be direct between each other as well as via a wide area network 290. Security protocols, such as SFTP, may be employed for these communications. In some instances, as is shown at 296, SFTP protocols or other secure protocols may not be employed. The various computing systems shown in FIG. 2 may employ the processes taught herein, including those described specifically with FIGS. 1 and 3-4.
Labelled in FIG. 2 are wide area network 290, travel agency computing system 20, Global Distribution System computing system 22, Airline computing system 60, Billing Settlement Plan/Data Processing Center computer system 30, Settlement Service Provider computing system 50, Intermediary Transaction Processing Server computing system 40, Card Scheme computing system 270, and Card Issuer computing system 280. FIG. 3 shows communication flow that may be employed in some embodiments. Travel agency computing system 20, Global Distribution System computing system 22, Airline computing system 60, Billing Settlement Plan computing system 30a, Data Processing Center computer system 30b, Settlement Service Provider computing system 50, and Intermediary Transaction Processing Server computing system 40 are labelled in FIG. 3. Various communications and processes are identified within and between these various identified computer systems of FIG. 3. These various communications and processes may comprise a chronological exchange and may include the chronology identified in FIGS. 3 and 4 as well as other chronological orders. At the onset, a customer/traveler may communicate with a travel agency via various communication methods, e.g., telephone, computer, etc., and request a ticket booking. After receiving one or more communications, a travel agent computing system 20 may create a PNR (Passenger Name Record) in the GDS (Global Distribution System) computer system 22 terminal and designate the form of payment as “cash.” This designation may signify that the travel agent computing system will accept and collect various forms of payment from the traveler for the tickets being sold. These payments may be made in cash, credit card, debit card, PayPal, etc., but should preferably be collected by the travel agent and should preferably be paid to the airline, minus any handling fees, etc., by the travel agent computing system 20.
As between points 2 and 3, the GDS computing system 22 can verify availability of ticket inventory with an airline inventory system, which may be a backend process offered by an airline computing system 60. The travel agent may enter the ticketing command into the GDS computing system 22 terminal to issue the ticket(s) in the PNR as shown at point 4. Because this is a transaction where payment is collected by the travel agent computing system 20, there may be no authorization request for the ticket to issue, and the ticket may issue immediately and be sent to the customer as shown at point 5, which serves to conclude the transaction. Periodically, batch file processing may be carried out. This can be at the end of a business day, the end of a twenty-four-hour period, the end of a twelve-hour period, or some other equally spaced or non-equally spaced periodicity. At the end of the day or other period, there may be a series of legacy processes that generate several files that may be exchanged between GDSs 22 and BSP 30a/DPC 30b and ITPS 40 and other computing systems. These may comprise one or more of the following. At point 6, the GDS 22 may first create and send a daily (or other periodicity) batch RET file (Agent Reporting Data File) to the Data Processing Center (DPC) 30b. This batch RET file may contain all sales and refunds processed for that day on the airline's ticket stock. This batch RET file may contain both cash and credit card transactions. Concurrently, or substantially concurrently, at point 7, the Data Processing Center (DPC) 30b may create a daily batch HOT (Hand Off Tape) file and send a copy to the Airline computing system 60.
The Airline computing system 60 may feed the batch HOT file into its Revenue Accounting System, and based on these received batch HOT files, the Airline computing system can be informed as to what to expect to receive for its cash transactions (via the BSP Cash settlement process of the BSP 30a) and for its credit card transactions (from its Card Schemes computing system 270 and Card Issuer computing system 280, as shown in FIGS. 2 and 4). At point 8, an airline computing system 60 may send a copy of its HOT file to an ITPS 40. As part of point 9, an ITPS 40 may parse out individual cash transactions from the batch file, which contains both travel agent customer payment collections and direct airline collections, and perform a series of data integrity checks to ensure all preferred and/or mandatory data elements are included in the batch HOT file. Once the transactions are parsed out, they may be sent by an ITPS, one by one, to a third-party gateway to be recorded by the third party. These recordations may be used such that the third party or any of the Settlement Service Provider computing systems can determine what to prefund the airline computing system for the travel agent customer payment collections reflected in the batch HOT file. In embodiments, an ITPS 40 can display the transactions in a proprietary dashboard and provide reporting and analytics capabilities to the airline computing system 60 for transactions that otherwise would have been indeterminate to the airline computing system from the received batch HOT file. At point 10, either a third party or any Settlement Service Provider computing system 50 may prefund the airline computing system for “cash” transactions in a T+2 timeframe, mimicking the settlement cycle of a credit card transaction. At point 11, the DPC computing system 30b may act as a clearing house operator and sweep funds from the travel agency in accordance with the settlement cycle instructions for that BSP market and/or the associated batch HOT file.
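The point-9 step — parsing agent-collected "cash" transactions out of the batch HOT file and running data-integrity checks before sending them to the gateway — can be sketched as follows. The required-field list and the record shape are assumptions; the text names only "preferred and/or mandatory data elements" without enumerating them.

```python
# Illustrative required-field set; the actual mandatory DISH/HOT
# elements are not enumerated in the text.
REQUIRED_FIELDS = ("ticket_number", "pnr", "amount", "currency")

def extract_cash_transactions(hot_file_records):
    """Pull agent-collected 'cash' transactions out of a batch HOT file
    and run simple data-integrity checks before gateway submission.

    Returns (valid, rejected): valid records can be sent one by one
    to the third-party gateway; rejected ones are missing elements.
    """
    valid, rejected = [], []
    for rec in hot_file_records:
        if rec.get("form_of_payment") != "CASH":
            continue  # direct airline / card collections handled elsewhere
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        (rejected if missing else valid).append(rec)
    return valid, rejected
```

Splitting valid from rejected records up front means only clean transactions reach the gateway that drives the T+2 prefunding decision.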
At point 12, the DPC 30b may provide funding to or on behalf of the Settlement Service Provider for the funds that were pre-funded at point 10, thereby completing the funding flow. In embodiments, the DPC 30b may fund a third-party beneficiary (FBO, For Benefit Of) account set up for the Settlement Service Provider computing system 50 by the Airline computing system 60. FIG. 4 shows communication flow that may be employed in some embodiments. This may include indirect credit card transactions from purchase to settlement. Travel agency computing system 20, Global Distribution System computing system 22, Airline computing system 60, Billing Settlement Plan computing system 30a, Data Processing Center computer system 30b, Settlement Service Provider computing system 50, Intermediary Transaction Processing Server computing system 40, Card Schemes computing system 270, and Card Issuer computing system 280 are labelled in FIG. 4. Various communications and processes are identified within and between these various identified computer systems of FIG. 4. These various communications and processes may comprise a chronological exchange and may include the chronology identified in FIGS. 3 and 4 as well as other chronological orders. At the onset, a customer/traveler may communicate with a travel agency via various communication methods, e.g., telephone, computer, etc., and request a ticket booking. The travel agent computing system 20 may create a PNR (Passenger Name Record) in the GDS (Global Distribution System) computing system 22 terminal and enter the customer's credit card number in the PNR as the form of payment. The GDS terminal may verify availability of ticket inventory with an airline inventory system of the airline computing system 60 as a backend process. Once verified, the travel agent may enter a ticketing command into the GDS terminal to issue the ticket(s) in the assigned PNR.
This may initiate an authorization request for the credit card in the FOP (Form of Payment) that goes directly from the GDS to the Card Schemes computing system 270. In these instances, the credit card processing uses an indirect framework whereby the authorization request does not go to the acquirer; rather, the authorization request is sent directly to the Card Schemes computing system 270 (Visa/MC, etc.), which then passes it on to the Card Issuer computing system 280 to ensure sufficient funds for the transaction. The Card Issuer computing system 280 returns an authorization approval or denial back to the Card Schemes computing system 270, and the Card Schemes computing system 270 cascades this response back to the GDS computing system 22 in real time. If the authorization is an approval, the GDS computing system 22 successfully tickets the PNR and the Travel Agent computing system 20 provides the ticket to the customer 10. Periodically, batch file processing may be carried out. This can be at the end of a business day, the end of a twenty-four-hour period, the end of a twelve-hour period, or some other equally spaced or non-equally spaced periodicity. At the end of the day or other period, there may be a series of legacy processes that generate several files that may be exchanged between GDSs 22 and BSP 30a/DPC 30b, ITPS 40, Card Schemes computing system 270, Card Issuer computing systems 280, and other computing systems. These may comprise one or more of the following. At point 10, the GDS computing system 22 first creates and sends the daily batch RET file (Agent Reporting Data File) to the Data Processing Center (DPC) computing system 30b; the batch RET file contains all sales and refunds processed for that day on the airline's ticket stock. This batch RET file contains both cash and credit card transactions. Concurrently or substantially concurrently, at point 11, the Data Processing Center (DPC) computing system 30b creates the daily batch HOT (Hand Off Tape) file and sends a copy to the airline computing system 60.
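The indirect authorization path described above (GDS to Card Schemes to Card Issuer, with the decision cascaded back in real time) can be sketched as a tiny simulation. The balance lookup stands in for the issuer's sufficient-funds check; all names and the route representation are illustrative.

```python
def authorize_indirect(card_number, amount, issuer_balances):
    """Simulate the indirect authorization framework: the request
    bypasses the acquirer and flows GDS -> Card Schemes -> Card
    Issuer, with approval/denial cascaded back along the same route.

    `issuer_balances` stands in for the issuer's view of available
    funds; the route list is purely illustrative.
    """
    approved = issuer_balances.get(card_number, 0.0) >= amount
    return {"approved": approved,
            "route": ["GDS", "Card Schemes", "Card Issuer"]}
```

Note the acquirer never appears on the route, which is the defining feature of the indirect framework the text describes.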
The airline computing system 60 feeds the batch HOT file into its Revenue Accounting System, and based on these files, the airline computing system 60 can determine what to expect to receive for its cash transactions (via the BSP Cash settlement process) and for its credit card transactions (from its Acquirer). At point 12, the Data Processing Center computing system 30b also generates the CSI file (Credit Transaction Invoicing Data file). This batch file may be either sent to the acquirer directly, if the acquirer is connected to the BSP computing system 30b and is able to ingest the DISH file format, or sent to Accelya Card Clear, which reformats the file into a clearing file and sends it on to the airline acquirer. For the BSP computing system 30a, instead of sending the CSI file to Accelya Card Clear or an alternate acquirer, an ITPS 40 may be configured to receive these daily batch CSI files on behalf of the airline in a DSS-compliant environment (separate from traditional ITPS processing), parse the CSI/CCB files, and send the individual transactions (both sales and refunds) to a third-party gateway. In certain embodiments, all that is required from the airline computing system to select the ITPS/third party as its new acquirer may be to amend a Local Credit Card Billing (LCCB) form and submit it to a third party requesting the change of acquirer/processor. As part of point 12, the ITPS 40 may also perform a series of data integrity checks, both to ensure successful capture and settlement and to ensure the lowest possible interchange fee for the airline computing system. To promote a low or lowest possible interchange fee for EMDA (Associated EMD) transactions, an ITPS 40 may “scrape” itinerary-level data from the associated ticket number and populate those data elements for the EMDA transaction, thereby ensuring that the transaction will qualify for the lower interchange fee (transactions that do not contain Level 3, itinerary-level data are downgraded to a more expensive/higher interchange fee).
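The Level 3 "scraping" step above — copying itinerary-level data from the associated ticket onto an EMDA transaction so it avoids the interchange downgrade — can be sketched as follows. The field list is a hypothetical stand-in; the text does not enumerate which data elements constitute Level 3 itinerary data.

```python
# Illustrative Level 3 field set; the actual itinerary-level elements
# required for the lower interchange tier are not listed in the text.
LEVEL3_FIELDS = ("origin", "destination", "carrier", "travel_date")

def qualify_emda(emda_txn, tickets):
    """Copy itinerary-level (Level 3) data from the associated ticket
    onto an EMDA transaction so it qualifies for the lower interchange
    fee instead of being downgraded.
    """
    ticket = tickets.get(emda_txn["associated_ticket"])
    if ticket:
        for field in LEVEL3_FIELDS:
            emda_txn.setdefault(field, ticket.get(field))
    emda_txn["level3_complete"] = all(emda_txn.get(f) for f in LEVEL3_FIELDS)
    return emda_txn
```

The `level3_complete` flag mirrors the text's distinction: transactions lacking Level 3 data are the ones routed to the more expensive interchange tier.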
In embodiments, an ITPS 40 may be configured to display these transactions in a sophisticated dashboard, providing reporting and analytics to the airline for transactions that otherwise would have been opaque to it. At point 14, a third-party gateway may then be able to submit these transactions for capture and settlement to the Card Schemes computing system 270. A third-party server may be configured to show/display these transactions (including airline addendum data, searchable by ticket number) in a dashboard. The third-party server may be configured to net settle the funds to the airline's account for credit card transactions. FIG. 5 illustrates a basic block diagram of computer hardware, a wide area network, and various other party computing systems as may be employed in some embodiments. An ITPS 40 or other computing system that includes a CPU 511 and a main memory 512 connected to a bus 516 is shown. The CPU 511 is preferably based on a 32-bit or 64-bit architecture. A display, such as a liquid crystal display (LCD), may be connected to the bus 516 via an I/O adapter 515. A storage unit 519, such as a hard disk or solid-state drive, and a drive 518, such as a CD, DVD, or BD drive, may be connected to the bus 516 via a SATA or IDE controller. The operational software may include an operating system, applications, modules 570, and plug-ins. The ITPS 40 may be connected to a network 290 that is also connected to numerous computing systems 30, 60, 22, 10 as well as other computing systems taught herein. The ITPS may be configured to function as disclosed herein and may communicate with other devices and systems through the network 290, which may be a Wide Area Network, such as the Internet. Instructions to configure the ITPS may be stored in the storage unit as well as in main memory. These instructions may configure the CPU to perform the functions and provide the services of an ITPS or other computing systems as identified herein.
The language of the application including the figures is used to describe embodiments and should not be considered to limit the invention to the specific combinations provided. Accordingly, the teachings of the application go beyond the specific figures and applicable text provided herein. Numerous other configurations are possible, including combinations of the embodiments provided herein, with more or fewer features and features further mixed among or between embodiments. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for embodiments with various modifications as are suited to the particular use contemplated. Likewise, numerous embodiments are possible beyond those specifically described above as well as provided below. The embodiments described herein are illustrative and should not be considered to be limiting.
For example, fewer or more features of a device or system, and fewer, more, or different actions or processes may accompany those already specifically described herein. Also, processes described herein may be undertaken in various orders unless a specific order is explicitly called for in any applicable claim or description. Likewise, features of the devices and systems described herein may be combined in various ways, not employed, and shared amongst themselves or in other devices and systems.
11943303

DETAILED DESCRIPTION Various embodiments provide systems, methods, devices, and instructions for a registry for augmented reality objects, which can provide augmented reality objects to a client device to support various software or hardware applications (e.g., mixed reality software applications). For instance, some embodiments provide for an augmented reality (AR) object registry that facilitates or enables registration of one or more AR objects in association with one or more locations across a planet (e.g., on a world scale). For example, an AR object registry can enable associations between one or more AR objects and one or more locations or physical objects on planet Earth. In doing so, such an AR object registry can permit a user to use their client device to explore AR objects anywhere on Earth. Additionally, an AR object registry described herein can also support or enable, for example, implementation of a spatial-based (e.g., augmented reality-based) world wide web. The architecture of some embodiments described herein permits scaling to service AR object registration in connection with locations across Earth, and permits scaling to provide or support interactive sessions that enable multiple users (e.g., large numbers of users) across the world to interact together with registered AR objects. For some embodiments, an AR registry of an embodiment can associate (e.g., unite) topology map data (e.g., of Earth) with AR object data such that real-world information is brought into a virtual model, which enables the scalability of the AR registry. Additionally, some embodiments implement one or more rankers or ranker mechanisms (e.g., ranker algorithms) to determine (e.g., by filtering or sorting) which AR objects are provided to a client device (e.g., in response to a request/query for AR objects from the client device).
In this way, such embodiments can affect which AR objects are displayed or surfaced by the client device at and around the client device's current set of coordinates on a map (e.g., a geospatial map). Unlike conventional technologies (e.g., traditional geospatial databases), the AR registry of various embodiments can better support user interaction with registered AR objects. Additionally, unlike conventional technologies, the AR registry of various embodiments does not need to rely on strict (e.g., tight) geofencing to provide AR objects to client devices. As used herein, an AR object can comprise a virtual object that can be presented in a client device-generated view of a real-world environment (e.g., a view presented on a display of a mobile client device), where the virtual object can interact with or enhance a real-world physical object of the real-world environment presented in the view. For instance, an AR object can be combined with a live (e.g., real-time or near real-time) camera feed such that when the AR object is presented, it appears situated in the live three-dimensional environment (e.g., the AR object appears to occupy a consistent three-dimensional volume and to change dynamically in aspect responsive to movement of the camera in a manner similar to that which would have been the case were the AR object a real-world physical object). A registered AR object can comprise an AR object registered by an embodiment described herein, thereby associating the AR object with a set of coordinates via an AR object registry. The level of interaction (e.g., user interaction) available for an AR object registered by an embodiment can vary. For example, an AR object can be static and have no level of interaction with a user or the real-world environment. A registered AR object (e.g., a virtual ball) can have one or more available interactions (e.g., spin, bounce, toss, etc.)
where any changes to the state of the AR object (by way of those available interactions) are localized (e.g., confined or limited) to the user at the client device (e.g., state changes to the AR object are not propagated to another user at another client device) and any changes to the state of the AR object do not alter the current initial state of the AR object as stored in the AR object registry. A registered AR object (e.g., virtual graffiti) can have one or more available interactions (e.g., drawing, generating, or applying the virtual graffiti) where any changes to the state of the AR object (by way of those available interactions) are propagated to another user at another client device (e.g., presented in a view displayed by the other client device) without interaction by the other user (i.e., no interactive session is needed). Additionally, a registered AR object can permit two or more users to interact (e.g., in real time) with the registered AR object (e.g., spin, bounce, or toss the virtual ball) at the same time during an interactive session. For example, a first user can toss a virtual ball between one or more other users within the same interactive session, where data is transmitted between the users' client devices through the interactive session. Depending on the registered AR object, at the end of the interactive session, the final state of the registered AR object (as changed by users' interactions during the interactive session) may or may not be saved to the AR object registry, thereby updating the initial state of the AR object for subsequent single-user interactions or multi-user interactive sessions. For some embodiments, a registration of an AR object or a ranker can be ephemeral (e.g., accessible for only a duration of time after first being accessed).
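The three propagation behaviors described above — state changes kept local to the acting client, changes broadcast to other clients with no session, and changes shared only among session peers — can be sketched with a small dispatcher. The mode names ("local", "broadcast", "session") and the object layout are illustrative assumptions, not terms from the text.

```python
def apply_interaction(ar_object, interaction, session_peers=()):
    """Apply an interaction to an AR object and decide propagation.

    Modes mirror the text: 'local' changes stay on the acting client
    (e.g., spinning a virtual ball), 'broadcast' changes reach other
    clients with no interactive session (e.g., virtual graffiti), and
    'session' changes flow only to peers in the interactive session.
    Mode names and the state layout are illustrative.
    """
    mode = ar_object["interactions"].get(interaction)
    if mode is None:
        raise ValueError(f"interaction {interaction!r} not available")
    ar_object["state"]["last_interaction"] = interaction
    recipients = {"local": [],
                  "broadcast": ["all"],
                  "session": list(session_peers)}[mode]
    return recipients
```

A missing mode raises, matching the rule-based availability check the text describes: an interaction not listed for the object simply is not offered.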
The ephemeral nature of a registration can create a need for a user to re-register an AR object or a ranker periodically (e.g., every 24 hours), which can deter or prevent registration abuses (e.g., spamming). For some embodiments, user interactions with respect to a given AR object can be defined by a set of rules (e.g., interaction rules) associated with the AR object. For instance, a rule for an AR object can determine an availability of an interaction with respect to the AR object (e.g., whether the virtual ball can be tossed or bounced), or can define an interaction constraint with respect to the AR object (e.g., interactions with respect to the virtual ball are limited to the client, or the virtual ball can only be tossed so far). The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the appended drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. FIG. 1 is a block diagram showing an example system 100, for exchanging data (e.g., relating to AR objects) over a network 106, that can include an augmented reality (AR) object system, according to some embodiments.
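The periodic re-registration window described above (e.g., every 24 hours) amounts to a time-to-live check on each registration. A minimal sketch, assuming registrations carry an epoch-seconds timestamp and a 24-hour TTL, both of which are illustrative choices:

```python
def registration_active(registered_at, now, ttl_hours=24):
    """Return True while an ephemeral AR-object (or ranker)
    registration is still within its time-to-live window.

    Epoch-seconds timestamps and the 24-hour default TTL are
    assumptions; the text gives 24 hours only as an example period.
    """
    return (now - registered_at) < ttl_hours * 3600
```

A registry can drop (or require re-registration of) any entry for which this returns False, which is what deters the spam-style registration abuse the text mentions.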
The system 100 includes multiple client devices 102, each of which hosts a number of applications including a client application 104. Each client application 104 is communicatively coupled to other instances of the client application 104 and a server system 108 via a network 106 (e.g., the Internet). Accordingly, each client application 104 can communicate and exchange data with another client application 104 and with the server system 108 via the network 106. The data exchanged between client applications 104, and between a client application 104 and the server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., AR object, text, audio, video, or other multimedia data). For some embodiments, a particular client application 104 provides its respective client device 102 with one or more augmented reality/mixed reality features. A particular client application 104 can represent, for example, an augmented reality (AR) client software application, or a messaging software application that includes augmented reality/mixed reality features. A particular client application 104 can obtain one or more AR objects (e.g., from an augmented reality (AR) object system 116, hereafter the AR object system 116) to generate a mixed reality environment (e.g., based on the real-world environment of the client device 102) that includes the one or more AR objects. For instance, a particular client application 104 can enable a client device 102, such as a smartphone, to capture image frames of a real-world environment (e.g., using a smartphone camera) and generate a view (e.g., on the smartphone display) that presents the real-world environment with (e.g., enhanced by) one or more AR objects that are associated with that real-world environment.
In particular, a particular client application104can obtain AR objects from an AR registry (e.g., implemented by the AR object system116) by, for example, requesting or querying for one or more AR objects from the AR object system116using information associated with the client device102, such as information regarding the user of the client device102, the current set of coordinates (e.g., GPS coordinates) of the client device102, or a specified radius around the client device102. When obtaining the one or more AR objects (e.g., from the AR object system116), a particular client application104can receive data for those AR objects. The data for those AR objects can include, for example: model data for one or more three-dimensional models (e.g., 3D graphical content) for rendering and displaying the obtained AR objects on a client device102; rule data describing one or more rules that determine user interactions with the obtained AR objects through a particular client application104; or state data describing initial states of the obtained AR objects (e.g., an initial state in which an obtained AR object will be presented by a particular client application104on a client device102). With respect to usage of an obtained AR object, a particular client application104can display the obtained AR object on the display of a client device102by determining a positioning of the AR object on the display relative to the real-world environment. A particular client application104can do so by executing a process that generates (or constructs) a virtual camera by combining data from a client device102's various sensors, such as an image sensor, inertial measurement unit (IMU), and GPS sensor, and then using the virtual camera to position the obtained AR object on the display of the client device102. A particular client application104can, for example, use a simultaneous localization and mapping (SLAM) or visual-inertial odometry (VIO) system or method to generate the virtual camera.
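The registry exchange just described, a query built from client-device information and a response carrying model, rule, and state data per AR object, can be sketched as follows. The request shape, field names, and helper functions are illustrative assumptions, not the disclosed API.

```python
# Hypothetical sketch of the client side of an AR-registry query: assemble a
# query from user identity, GPS coordinates, and a radius, then split each
# returned record into the three data groups named in the text above.
def build_ar_query(user_id, lat, lon, radius_m):
    """Assemble a registry query from information associated with the client device."""
    return {
        "user_id": user_id,
        "coordinates": {"lat": lat, "lon": lon},
        "radius_m": radius_m,
    }

def unpack_ar_object(record):
    """Split a returned AR-object record into model data, rule data, and state data."""
    return record["model_data"], record["rule_data"], record["state_data"]

query = build_ar_query(user_id=42, lat=40.7484, lon=-73.9857, radius_m=100)
print(sorted(query))  # ['coordinates', 'radius_m', 'user_id']
```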
When a particular client application104displays the AR object, the 3D model of the AR object can be rendered and displayed as an overlay over the real-world environment being presented by a client device102. For some embodiments, a particular client application104enables a user to register one or more AR objects with an AR registry (e.g., implemented by the AR object system116) in association with a set of coordinates on a map (e.g., geospatial map). The server system108provides server-side functionality via the network106to a particular client application104. While certain functions of the system100are described herein as being performed by either a client application104or by the server system108, it will be appreciated that the location of certain functionality either within the client application104or the server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the server system108, but to later migrate this technology and functionality to the client application104where a client device102has a sufficient processing capacity. The server system108supports various services and operations that are provided to the client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the client application104. This data may include message content, AR object-related information (e.g., model data, orientation, interaction rules or logic, state information, interactive session information, etc.), client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information as examples. Data exchanges within the system100can be invoked and controlled through functions available via user interfaces (UIs) of the client application104.
Turning now specifically to the server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with operations performed by the application server112. Dealing specifically with the Application Program Interface (API) server110, this server receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the API server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the client application104in order to invoke functionality of the application server112. The API server110exposes various functions supported by the application server112, including for example: account registration; login functionality; the sending of AR object-related information (e.g., model data, orientation, interaction rules or logic, state information, interactive session information, etc.) via the application server112, from the AR object system116to a particular client application104; the sending of AR object-related information (e.g., query or request information, user input information, state information, model data for a new AR object, etc.)
via the application server112, from a particular client application104to the AR object system116; the sending of messages, via the application server112, from a particular client application104to another client application104; the sending of media files (e.g., digital images or video) from a client application104to the messaging server application114, and for possible access by another client application104; the setting of a collection of media content items (e.g., story), the retrieval of a list of friends of a user of a client device102; the retrieval of such collections; the retrieval of messages and content, the adding and deletion of friends to a social graph; the location of friends within a social graph; and opening an application event (e.g., relating to the client application104). The application server112hosts a number of applications, systems, and subsystems, including a messaging server application114, an AR object system116, and a social network system122. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of media content items (e.g., textual and multimedia content items) included in messages received from multiple instances of the client application104. As will be described herein, media content items from multiple sources may be aggregated into collections of media content items (e.g., stories or galleries), which may be automatically annotated by various embodiments described herein. For example, the collections of media content items can be annotated by associating the collections with captions, geographic locations, categories, events, highlight media content items, and the like. The collections of media content items can be made available for access, by the messaging server application114, to the client application104. 
Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. For a given collection of media content, one or more annotations of the given collection may represent features of the given collection, and those features may include one or more graphical elements (e.g., emojis or emoticons) that various embodiments described herein may use when automatically associating one or more graphical elements with the given collection. Access to the given collection of media content items may include access to one or more annotations of the given collection and to one or more graphical elements associated with the given collection by various embodiments described herein. As shown, the application server112also includes the AR object system116, which implements one or more aspects of various embodiments described herein, such as an AR registry and ranker-based AR querying. More regarding the AR object system116is described herein with respect toFIG.2. The social network system122supports various social networking functions and services, and makes these functions and services available to the messaging server application114and the AR object system116. To this end, the social network system122maintains and accesses an entity graph within the database120. Examples of functions and services supported by the social network system122include the identification of other users of the system100with which a particular user has relationships or is “following”, and also the identification of other entities and interests of a particular user. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with operations performed by the messaging server application114or the AR object system116.
FIG.2is a block diagram illustrating an example logical architecture for the AR object system116, according to some embodiments. Specifically, the AR object system116is shown to comprise data layers210and augmented reality (AR) object services230, which support various features and functionalities of the AR object system116. As shown, the data layers210comprise a three-dimensional (3D) topological data layer212, a logical topological data layer214, a user data layer216, and an augmented reality (AR) object model data layer218. As also shown, the AR object services230comprise an augmented reality (AR) object interactive session service232, an augmented reality (AR) object query service234, and an augmented reality (AR) object registry service236. For various embodiments, the components and arrangement of components of the AR object system116may vary from what is illustrated inFIG.2. Any components of the AR object system116can be implemented using one or more processors (e.g., by configuring such one or more computer processors to perform functions described for that component) and hence can include one or more of the processors. Furthermore, according to various embodiments, any of the components illustrated inFIG.2can be implemented together or separately within a single machine, database, or device or may be distributed across multiple machines, databases, or devices. For example, the data layers210can be implemented by one or more databases (e.g., databases120), and the AR object services230can be implemented by one or more servers (e.g., the application server112). The 3D topological data layer212comprises data that describes an internal representation of a real-world environment. The data can include, without limitation, 3D modeling information of the real-world environment and information that associates the 3D modeling information with one or more coordinates (e.g., on a topological map).
A query to the 3D topological data layer212can comprise one or more coordinates on a map (e.g., topological map) and a radius value around a point corresponding to the one or more coordinates. The query results provided by the 3D topological data layer212can comprise one or more 3D model objects that fall within the radius that is centered at the one or more coordinates. Data for the 3D topological data layer212can be sourced from one or more data sources, including third-party vendors. Additionally, data of the 3D topological data layer212can be divided into two or more types, such as lower resolution data (hereafter, referred to as world data) and higher resolution data (hereafter, referred to as deep world data). World data can represent default ground truth data for the AR object system116(which can provide a quick foundation for AR object model placement). In comparison to deep world data, world data can have lower accuracy (e.g., approximately 3 m of accuracy), and generally lacks indoor data for real world structures (e.g., buildings, etc.). Deep world data can represent the 3D topological data having the highest accuracy within the AR object system116(e.g., centimeter level of accuracy), and can include indoor data for real world structures. The logical topological data layer214comprises data that relates to logic (e.g., business or operational logic) that can be applied to data provided by the 3D topological data layer212. At least a portion of the data provided by the logical topological data layer214can be stored in a geospatial vector type format. Two types of data provided by the logical topological data layer214can include zone data and geolocation data. According to some embodiments, zone data (of the logical topological data layer214) marks or identifies one or more areas in a real-world environment, and can further associate one or more attribute values to those one or more areas.
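The query shape described above, coordinates plus a radius, returning the 3D model objects inside that radius, can be sketched as follows. For simplicity this sketch assumes a flat local coordinate frame and illustrative record fields rather than the layer's actual geospatial representation.

```python
# Minimal sketch of a radius query against the 3D topological data layer:
# return the model objects whose anchor points fall within the given radius
# of the query point. Names and the 2D distance test are simplifications.
import math

def query_topology(models, x, y, radius):
    """Return models whose (x, y) anchor lies within `radius` of the query point."""
    return [m for m in models
            if math.hypot(m["x"] - x, m["y"] - y) <= radius]

models = [
    {"id": "bench", "x": 1.0, "y": 2.0},
    {"id": "statue", "x": 50.0, "y": 50.0},
]
print([m["id"] for m in query_topology(models, 0.0, 0.0, 5.0)])  # ['bench']
```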
For instance, zone data can mark/identify areas of a real-world environment according to one or more of the following: state lines, county lines, city limits, parcel traits, or zoning areas. These marked/identified areas may also be referred to or regarded as zones. Within zone data, an area of a real-world environment can be defined by a non-scalar polygon data type. For some embodiments, zone data facilitates geo-partitioning of a large real-world environment (e.g., the Earth), which can support assignment and management of interactive sessions and session-related computing resources (e.g., session servers) by the AR object interactive session service232as described herein. For some embodiments, zone data marks or identifies one or more permissions for a given area (e.g., as one or more attribute values of the given area). The permissions (embodied as permission data) for a given area can enable the AR object system116(e.g., the AR object registry service236thereof) to determine, for example, whether a given user can register (e.g., place) an AR object (e.g., a new AR object or an existing AR object) of their choosing at a location corresponding to a set of coordinates on a map (e.g., topological map). In this way, zone data of some embodiments can associate a particular real-world space with one or more permissions that can deter an abuse of the AR object system116. For instance, permission data (provided by zone data) can prevent a first user representing a first business (e.g., pizza shop #1) from registering an AR object (e.g., an AR object representing a coupon for the first business) at a location corresponding with a second business that is a competitor of the first business (e.g., pizza shop #2). On the other hand, the same permission data can permit a second user that is confirmed to be the owner of the second business to register an AR object of their choosing at the location corresponding with the second business.
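The permission check described above can be sketched as follows. This is a hedged simplification: each zone is reduced to an axis-aligned bounding box standing in for a polygon, and the ownership/public fields are illustrative assumptions rather than the disclosed permission data.

```python
# Hypothetical sketch of a zone-permission check: before registering an AR
# object at a point, find the zone containing that point and test whether the
# requesting user is permitted to register there.
def point_in_bbox(point, bbox):
    """Treat each zone as an axis-aligned bounding box (a simplification of a polygon)."""
    (x, y), (min_x, min_y, max_x, max_y) = point, bbox
    return min_x <= x <= max_x and min_y <= y <= max_y

def can_register(zones, user_id, point):
    """Allow registration only if no enclosing zone restricts it to another owner."""
    for zone in zones:
        if point_in_bbox(point, zone["bbox"]):
            return zone["owner_user_id"] == user_id or zone["public"]
    return True  # unzoned space: no restriction recorded

zones = [{"bbox": (0, 0, 10, 10), "owner_user_id": 7, "public": False}]
print(can_register(zones, 7, (5, 5)))    # the confirmed owner may register
print(can_register(zones, 8, (5, 5)))    # a competitor is blocked
print(can_register(zones, 8, (20, 20)))  # outside any marked zone
```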
In a similar manner, one or more permissions provided by zone data can control registration of AR objects with respect to locations corresponding to private residences. Additionally, one or more permissions provided by zone data for a given area can enable the AR object system116(e.g., the AR object registry service236thereof) to determine whether a given user can register (e.g., associate) a ranker with respect to a location corresponding to a set of coordinates on a map (e.g., topological map). As described herein, a ranker can determine which AR objects are provided (e.g., surfaced) to a client device in response to a request or query from the client device for AR objects. The following Table 1 can represent an example structure of a database table used to store zone data of the logical topological data layer214.

TABLE 1
Column Name    Data Type                  Brief Description
zone_id        Long number (serving as    Unique ID identifying a marked area of a
               a primary key)             real-world environment.
user_id        Long number (serving as    ID for user (corresponding to a user_id in
               a foreign key)             table of User Data Layer) associated as an
                                          owner of this marked area.
zone_geometry  GeoJSON polygon            Describes the marked area as a real-world
                                          region (e.g., a property parcel).
permission_id  Enumeration                Describes one or more permissions
                                          associated with the marked area.

According to some embodiments, geolocation data (of the logical topological data layer214) comprises data for storing registration of an AR object in association with one or more coordinates corresponding to a location on a map (e.g., topological map), storing registration of a ranker in association with one or more coordinates corresponding to a location on a map, or some combination of both. In particular, for some embodiments, the geolocation data can associate data from the 3D topological data layer212(e.g., such as geospatial data) with model data from the AR object model data layer218.
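The ephemeral registrations described earlier (registrations that lapse and must be renewed, e.g., every 24 hours) can be sketched against this geolocation data as follows. The row shape and filter are illustrative assumptions; only registrations whose expiry has not passed are surfaced.

```python
# Hypothetical sketch of ephemeral geolocation registrations: each registered
# AR object carries an expiry timestamp, and only unexpired registrations are
# returned when surfacing AR objects for a client.
from datetime import datetime, timedelta

def active_registrations(rows, now):
    """Filter geolocation rows whose registration has not yet expired."""
    return [r for r in rows if r["expiry_time"] > now]

now = datetime(2024, 1, 2, 12, 0)
rows = [
    {"position_id": 1, "expiry_time": now + timedelta(hours=3)},    # still live
    {"position_id": 2, "expiry_time": now - timedelta(minutes=1)},  # lapsed
]
print([r["position_id"] for r in active_registrations(rows, now)])  # [1]
```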
In this way, the geolocation data can facilitate registration (e.g., placement) of an AR object in association with a set of coordinates corresponding to a location on a map. For some embodiments, the geolocation data associates a center (e.g., centroid) of an AR object with the set of coordinates. The center of an AR object can correspond to a center of the AR object's 3D bounding box. When the AR object is ultimately displayed by a client device, the AR object's displayed position and orientation can be determined relative to the center of the AR object. Additionally, for various embodiments, the geolocation data facilitates registration of a ranker in association with a set of coordinates corresponding to a location on a map by associating data from the 3D topological data layer212(e.g., such as geospatial data) with an identifier associated with a ranker. Depending on the embodiment, the geolocation data can be implemented as a join table of a database. The following Table 2 can represent an example structure of a database table used to store geolocation data of the logical topological data layer214.
TABLE 2
Column Name       Data Type                  Brief Description
position_id       Long number (serving as    Unique ID identifying the association of a
                  a primary key)             position of the AR object with a set of
                                             coordinates corresponding to a location on
                                             a map (described by data from 3D
                                             Topological Data Layer).
model_id          Long number (serving as    ID for model data of AR object
                  a foreign key)             (corresponding to a model_id in table of
                                             Model Data Layer).
ranker_id         Long number (serving as    ID for a ranker associated with this
                  a foreign key)             position.
location_data     (latitude, longitude,      One or more coordinates that determine
                  altitude)                  where the AR object's centroid will be
                                             positioned.
orientation_data  (yaw, pitch, roll)         Used to determine the rotation of the AR
                                             object relative to its centroid.
expiry_time       Timestamp                  Time at which this registered association
                                             (of position of the AR object with a set
                                             of coordinates corresponding to a location
                                             on a map) will expire, which can be used
                                             to implement an ephemeral AR object.

The user data layer216comprises data associated with a user of the AR object system116. The data provided by the user data layer216can include, without limitation, data about which AR objects a given user owns or controls, data about the last state of a given AR object with respect to a given user, or data regarding one or more sessions associated with a given user. The following Table 3 can represent an example structure of a database table used to store user data of the user data layer216.

TABLE 3
Column Name  Data Type                  Brief Description
user_id      Long number (serving as    Unique ID identifying a user of the AR object
             a primary key)             system described herein.
user_name    Text                       Username for the user associated with the
                                        user_id.
timestamp    Timestamp                  Time at which this user record was created or
                                        updated.

The following Table 4 can represent an example structure of a database table used to store data of the user data layer216for looking up user ownership/control of an AR object.
TABLE 4
Column Name  Data Type                  Brief Description
user_id      Long number (serving as    ID for user (corresponding to a user_id in
             a foreign key)             table of User Data Layer) associated as an
                                        owner of an AR object model identified by
                                        model_id.
model_id     Long number (serving as    ID for model data of AR object (corresponding
             a foreign key)             to a model_id in table of Model Data Layer).

The AR object model data layer218comprises data for one or more AR objects that can potentially be registered with the AR object system116. Data stored by the AR object model data layer218can include, without limitation, model data for generating (e.g., rendering) a 3D model that visually represents a given AR object, data describing a (e.g., precalculated) 3D bounding box for a given AR object, and rule data describing one or more rules for interacting with a given AR object. As described herein, a center of a 3D bounding box associated with a given AR object can determine how the given AR object is positioned and oriented when displayed by a client device with respect to a real-world environment (e.g., how the given AR object is embedded in real-world space presented by the client device). Additionally, as described herein, one or more rules associated with a given AR object can determine a level of user interaction available with respect to the given AR object. For instance, one or more rules of a given AR object can determine whether the AR object is static, has interactions limited to the client device, or allows multiuser interaction through an interactive session. Depending on the embodiment, the AR object model data layer218can be implemented as a data structure that implements a key-value store. The following Table 5 can represent an example structure of a database table used to store data of the AR object model data layer218.
TABLE 5
Column Name  Data Type                  Brief Description
model_id     Long number (serving as    Unique ID associated with model data of AR
             a primary key)             object.
model_data   Binary bytes               Data blob used by a client device to render
                                        the 3D model of the AR object.
user_id      Long number (foreign key   ID for user (corresponding to a user_id in
             into user_table)           table of User Data Layer) associated as an
                                        owner of this AR object model.

The AR object interactive session service232facilitates or manages the operation of an interactive session (hereafter, session) that enables multiuser interaction with respect to one or more registered AR objects (e.g., a group of AR objects). As described herein, through a session, interaction data can be communicated between client devices of users that are jointly interacting with one or more AR objects. For some embodiments, the AR object interactive session service232assigns a user to a session when the user requests interaction with one or more given AR objects, where the assigned session is to handle the user's interactions with respect to the one or more given AR objects. Additionally, for some embodiments, the AR object interactive session service232assigns a user to a session when the user requests for a plurality of users to interact together (i.e., requests multiuser interaction) with respect to one or more given AR objects. Depending on the embodiment, the AR object interactive session service232can assign users to a given session (e.g., fill the given session with users) using different approaches, such as preferential assignment to users that are friends, or assignment on a first-come, first-served basis.
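The session-assignment behavior just described can be sketched as follows: join an existing session that services the requested AR object if it has capacity, otherwise create a new one. The capacity value, matching logic, and record fields are illustrative assumptions; the disclosure notes that per-session user count limits can vary.

```python
# Hypothetical sketch of first-come, first-served session assignment by the
# AR object interactive session service.
def assign_session(sessions, user_id, ar_object_id, limit=4):
    """Join an open session for the AR object, or spin up a new one."""
    for s in sessions:
        if s["ar_object_id"] == ar_object_id and len(s["users"]) < limit:
            s["users"].append(user_id)
            return s["session_id"]
    new = {"session_id": len(sessions) + 1,
           "ar_object_id": ar_object_id, "users": [user_id]}
    sessions.append(new)
    return new["session_id"]

sessions = []
print(assign_session(sessions, "alice", "ball"))  # 1 (new session created)
print(assign_session(sessions, "bob", "ball"))    # 1 (joins the existing session)
print(assign_session(sessions, "carol", "kite"))  # 2 (different AR object)
```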
In response to a request from a client device of a user to participate in a session to interact (e.g., facilitate multiuser interaction) with a set of AR objects, the AR object interactive session service232can assign the user to an existing session (e.g., one already operating on a session server) that can service the request, or generate and assign the user to a new session (e.g., spin up a new session on a session server) to service the request. More regarding session assignment and operation of sessions using mapping servers (e.g., world servers) and session servers is described herein with respect toFIGS.3and4. For some embodiments, a user is limited to participation in one session at a time. A user participating in a given session can idle out of the given session (e.g., based on lack of activity or interaction within the session after a period of time). Further, a given session can be assigned a user participant count limit to ensure that the given session operates as expected for participating users. The user participant count limit can vary between different sessions. For instance, the user participant count limit can be based on the geographic area/partition being serviced by the different sessions (e.g., area around a landmark, such as the Washington Monument, can likely involve more user AR object interactions and thus have a lower count limit than an area covering a small city park). For some embodiments, the client device of each user participating in a given session shares data regarding that user's participation in the given session with the client devices of all other users participating in the given session. The data can include, without limitation, the user's inputs (e.g., swipes, head tilts, etc.) to the given session and changes to the state of an AR object involved in the given session caused by interactions of the user. 
For various embodiments, the sharing of data between the client devices is facilitated through operations of the given session. The state of an AR object involved in a session can be referred to as the AR object's session state. The current session state of an AR object of a session can serve as a “ground truth” for users interacting with the AR object through the session. With respect to a given session that is interacting with one or more given AR objects, the client device of a user participating in (e.g., assigned to and involved in) the given session can receive the start state of each of those given AR objects at the start of the user's participation in the session, which the client device uses to initialize the session state of each of those given AR objects at the client device. The user can then participate in the session by, for example, interacting with one or more of the given AR objects, or observing another user of the session interacting with one or more of the given AR objects. As the user participates in the given session, the user's client device can locally work with and maintain (e.g., store and update) a local copy of a session state for each of the given AR objects at the client device. For example, the client device can update the locally maintained session state of a first AR object of the given session based on the user's interactions with the first AR object. Concurrently, the client device can update the locally maintained session state of the first AR object of the given session based on session state update data received by the client device regarding interactions with the first AR object by another user participating in the given session (e.g., session state update data being broadcast, by the other user's client device, through the given session to all user client devices).
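The client-side bookkeeping just described can be sketched as follows: a local copy of each AR object's session state is initialized from the received start states and then updated both by the local user's interactions and by broadcasts from peers. The class and state representation are illustrative assumptions.

```python
# Minimal sketch of a client's local session-state store: initialized from
# start states received on joining, then updated by local interactions and
# by session-state update broadcasts from other participants alike.
class LocalSessionState:
    def __init__(self, start_states):
        # Initialize local copies from the start states received at join time.
        self.states = dict(start_states)

    def apply_update(self, ar_object_id, update):
        """Merge an update, whether locally generated or broadcast by a peer."""
        self.states[ar_object_id].update(update)

local = LocalSessionState({"ball": {"x": 0, "held_by": None}})
local.apply_update("ball", {"held_by": "alice"})       # local user's interaction
local.apply_update("ball", {"x": 3, "held_by": None})  # broadcast from a peer
print(local.states["ball"])  # {'x': 3, 'held_by': None}
```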
Depending on the interaction level of a given AR object (e.g., as defined by a rule associated with the given AR object), at the termination of the given session, the final session state of the given AR object can be stored (e.g., persistently stored) to the AR object system116(e.g., stored to the user data layer216with respect to the users of the given session or stored for all users via the AR object model data layer218). For instance, a rule of the given AR object can define that the given AR object can be interacted with through a session and any changes to the session state of the given AR object will eventually be saved to the AR object system116. Once this final session state is stored to the AR object system116, the stored session state can be used as the initial/start state for the given AR object the next time one or more users start interacting with the given AR object again (e.g., within a new session). By maintaining local copies of session states at client devices and saving the final state of a given AR object (where applicable) at the end of a session, various embodiments can enable scalability of, promote stability of, and avoid or reduce overwrite thrash by the AR object interactive session service232. The AR object interactive session service232can support multiple simultaneous sessions involving interaction of the same AR object. The sessions supported by the AR object interactive session service232can operate independently. Accordingly, with respect to a given AR object of a given session, access to the state of the given AR object within the given session (the given AR object's session state) can be maintained such that the session state cannot be accessed outside of the given session (e.g., by a user not participating in the given session).
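The persistence behavior just described can be sketched as follows: at session termination, if the AR object's rule permits persistent changes, the final session state is written back and becomes the start state of the next session. The in-memory store standing in for the user/model data layers is an illustrative assumption.

```python
# Hypothetical sketch of saving a final session state (when the object's rule
# allows it) and reusing it as the start state of the next session.
def end_session(store, ar_object_id, final_state, persist_allowed):
    """Persist the final session state when the object's rule permits it."""
    if persist_allowed:
        store[ar_object_id] = dict(final_state)

def start_session(store, ar_object_id, default_state):
    """Initialize a new session from the stored state, if any."""
    return dict(store.get(ar_object_id, default_state))

store = {}
end_session(store, "ball", {"x": 7}, persist_allowed=True)
print(start_session(store, "ball", {"x": 0}))  # {'x': 7}
```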
This means that two independent simultaneous sessions can involve users interacting with the same particular AR object, but each of those independent simultaneous sessions maintains its own session state for that particular AR object. The independence of sessions can enable some embodiments to manage (e.g., generate, assign, and operate) sessions using a mapping server and multiple session servers (e.g., independent session servers responsible for servicing sessions based on geopartitioning of the real-world) as described herein, which can provide scalability and stability for users. For instance, an independent session approach means that an embodiment can provide one or more users (within a single session) with the experience of seeing an AR object a user of the single session created with respect to a real-world object (e.g., placing a virtual hat on a real-world statue) for a satisfactory amount of time (rather than only a small amount of time, which would result if all users interacting with the given AR object were assigned to the same session or if a session state of the given AR object were shared across multiple sessions). Where two or more simultaneous sessions involve one or more same AR objects, a merge rule functionality can be used to merge the final session states of those same AR objects if they are to be stored after termination of the simultaneous sessions. More regarding operations of the AR object interactive session service232are described herein with respect toFIGS.3and4. The AR object query service234processes requests or queries, from client devices, for one or more AR objects from the AR registry implemented by the AR object system116(e.g., implemented via geolocation data of the logical topological data layer214). Based on a received request/query, the AR object query service234can determine one or more AR objects (from the AR registry) to be sent back to a client device for use.
In this way, the AR object query service234operates as an AR object surface service, given that one or more AR objects provided by the AR object query service234to a client device (e.g., based on a client request or query) causes (or likely causes) the client device to present or surface those one or more AR objects on the client device. The request/query to the AR object query service234can be generated by a client application (e.g.,104) on a client device, where the client application can use one or more AR objects provided by the AR object query service234to present with respect to a view of a real-world environment (e.g., to provide a mixed reality user experience). As described herein, the request/query can include information associated with a client device, such as information regarding the user of the client device, the current set of coordinates (e.g., GPS coordinates) of the client device, or a specified radius around the client device. For some embodiments, the AR object query service234uses one or more rankers in determining which one or more AR objects are to be sent back to a client device in response to a request/query. Using a ranker, the AR object query service234can prioritize, filter, or sort AR objects to determine a final set of AR objects sent to the client device. For example, the AR object query service234can determine (e.g., identify) an intermediate/initial set of AR objects from the AR registry based on the client request/query, and then use a ranker to filter and sort the intermediate/initial set of AR objects to determine a final set of AR objects to be sent to the client device. Alternatively, the ranker can receive the client request/query and generate a ranker-based query that includes one or more parameters for prioritizing, filtering, or sorting the AR object results provided in response to the ranker-based query.
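The ranker role just described, filtering and sorting an intermediate set of AR objects down to a final set under a result limit (e.g., 25), can be sketched as follows. The priority attribute is an illustrative assumption standing in for whatever attributes a given ranker consults.

```python
# Hedged sketch of a ranker: drop ineligible AR objects, sort the rest by a
# priority attribute, and truncate to the query result limit.
def rank(ar_objects, limit=25):
    """Filter out zero-priority objects, then return the top `limit` by priority."""
    eligible = [o for o in ar_objects if o["priority"] > 0]
    eligible.sort(key=lambda o: o["priority"], reverse=True)
    return eligible[:limit]

candidates = [
    {"model_id": "a", "priority": 5},
    {"model_id": "b", "priority": 0},
    {"model_id": "c", "priority": 9},
]
print([o["model_id"] for o in rank(candidates, limit=2)])  # ['c', 'a']
```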
The filter, sort, or both can be performed, for example, on attributes associated with geolocation data from the logical topological data layer214. A ranker can be implemented such that it can be horizontally scalable. Depending on the embodiment, the AR object query service234can have one or more rankers available for use. The AR object query service234can select and use one or more rankers (from a plurality of available rankers) based on a number of factors including, for example, information provided in the client request/query (e.g., client device geographical location or specific radius) or a user selection or preference. As described herein, the AR object registry service236can facilitate registration of new or existing rankers with the AR object system116, thereby enabling availability of those new/existing rankers for use by the AR object query service234. An example ranker can include a query result limit (e.g., limit to 25 AR objects). Another example ranker can include an algorithm that selects AR objects based on a pseudo-random fairness and then sorts the selected AR objects. For some embodiments, the AR object query service234can use a ranker that accesses bidding data (e.g., provided by a bidding system) to determine priorities of a set of AR objects, which can enable the AR object query service234to filter the set of AR objects based on the determined priorities. For example, with respect to a set of AR objects falling within a specific radius centered at a location corresponding to a location of a client device, a ranker can access (e.g., real-time) bidding data for one or more AR objects (in the set of AR objects), which can determine the priority of those one or more AR objects. Bidding data can be accessed for each AR object by performing a monetization lookup on each AR object. The bidding data can be provided by a bidding system, which can be separate or part of the AR object system116. 
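A non-limiting sketch of how the AR object query service234might apply a chain of rankers to an intermediate set of AR objects follows; the two example rankers (a priority sort and a result limit of 25) and the `priority` field are illustrative assumptions:

```python
from typing import Any, Callable, Dict, List

ARObject = Dict[str, Any]
Ranker = Callable[[List[ARObject]], List[ARObject]]

def apply_rankers(intermediate: List[ARObject], rankers: List[Ranker]) -> List[ARObject]:
    """Apply each ranker in order (filtering and/or sorting) to reduce
    the intermediate set of AR objects to the final set sent to a client."""
    objects = list(intermediate)
    for ranker in rankers:
        objects = ranker(objects)
    return objects

# Example rankers (assumptions for illustration):
def sort_by_priority(objects: List[ARObject]) -> List[ARObject]:
    # Highest-priority AR objects first; missing priority treated as zero.
    return sorted(objects, key=lambda o: o.get("priority", 0), reverse=True)

def limit_to_25(objects: List[ARObject]) -> List[ARObject]:
    # Mirrors the example query result limit described above.
    return objects[:25]
```

Because each ranker is a pure function over a list, a ranker registered via the AR object registry service236could be slotted into this chain without changes to the query service itself.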
Where separate, the bidding system can have at least read access to data of the AR object system116, such as the geolocation data of the logical topological data layer214, which can facilitate bidding on registration/placement of AR objects. Some embodiments provide for or utilize a bidding system that permits one or more users (e.g., representing third-party organizations) to bid on prioritizing (surfacing of) an AR object of their choosing over other AR objects. The one or more users can bid, for example, that an AR object of their choosing be prioritized (e.g., boosted in priority) in association with a set of coordinates, areas relative to (e.g., around) a set of coordinates, a marked area (e.g., described by zone data from the logical topological data layer214), with respect to certain users or types of users, and the like. For example, with respect to a registered AR object (an AR object registered in association with a set of coordinates corresponding to a location on a map), a user can bid on boosting the priority of the registered AR object (e.g., over other AR objects registered/placed with respect to locations at or around the same set of coordinates). By boosting the priority of the registered AR object via a winning bid, a user can effectively boost the display/presentation/surfacing rank of the registered AR object on a client device. For instance, based on a ranker associated with bidding data, the request/query result provided to a client device can include a predetermined number of the highest-bid AR objects. The bid can comprise a value (e.g., monetary value or virtual credit) being offered by the bid, and can further comprise a priority value being requested by the bid (e.g., amount of priority boost or actual priority value). On the bidding system, a bid can comprise a user associating a monetary value/virtual credit with a geolocation data record (e.g., position_id corresponding to the record) of the logical topological data layer214. 
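The priority boosting via bids described above can be sketched, in a non-limiting way, as follows; the bid tuple shape and the additive boost semantics are assumptions for illustration:

```python
from typing import Dict, Iterable, Tuple

def apply_bids(
    base_priorities: Dict[str, float],
    bids: Iterable[Tuple[str, float]],
) -> Dict[str, float]:
    """Boost the priority of registered AR objects according to winning
    bids. Each bid associates a geolocation record id (e.g., the
    position_id of a registration) with an offered boost value; boosts
    are added to the base priority of that registration."""
    priorities = dict(base_priorities)
    for position_id, boost in bids:
        priorities[position_id] = priorities.get(position_id, 0.0) + boost
    return priorities
```

A bidding-data ranker could then sort a set of AR objects by the resulting priorities, so the highest-bid registrations surface first.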
Use of a ranker can enable the AR object system116to decentralize the ability to query the AR registry for AR objects. Additionally, use of a ranker can improve user experience by improving which AR objects are presented/surfaced to a user at a client device. For example, through use of a ranker, the AR object query service234can enable a user to see different types of AR objects based on, for example, the time of year or geographic location. For instance, what AR objects a user wants to be able to see in Las Vegas is not necessarily what one wants to be able to see at a family Thanksgiving dinner. More regarding operations of the AR object query service234are described herein with respect toFIG.5. Through the AR object registry service236, a user can manage (e.g., add, remove, or modify) registration of an AR object in association with one or more coordinates corresponding to a location on a map (e.g., topological map), can manage registration of a ranker in association with one or more coordinates corresponding to a location on a map, or both. For example, a user can use the AR object registry service236to generate a new registration of an AR object with respect to one or more coordinates corresponding to a location on a map. The new registration can be for an AR object newly added to the AR object model data layer218or already stored on the AR object model data layer218. As described herein, registering an AR object in association with a set of coordinates can effectively place the AR object at a location corresponding to the set of coordinates (e.g., place the AR object relative to a real-world map to achieve mixed reality). Depending on the embodiment, a registration of an AR object or a ranker can be ephemeral. For some embodiments, the AR object registry service236uses permission data to determine whether a given user can register a given AR object, a given ranker, or both with respect to one or more coordinates on a map. 
For example, as described herein, zone data from the logical topological data layer214can provide permission data in association with one or more areas of a real-world environment (e.g., marked areas described by the zone data). Additionally, for some embodiments, the AR object registry service236implements one or more rate limitations with respect to registration requests (e.g., requests for adding, removing, or updating registrations). For example, a rate limitation can define that a given user is limited to five registrations per day through the AR object registry service236. In another example, a rate limitation can define that a given user is limited to a predetermined number of registrations per day, and the given user has to pay to register more than the predetermined number within a day. With a rate limitation, some embodiments can avoid spamming of the AR object registry service236with registration requests. Depending on the embodiment, the AR object registry service236can permit or facilitate registration of an AR object, a ranker, or both by the public domain (e.g., public registration). For instance, a user (e.g., from the public) can construct a new AR object or a new ranker and register this new item via the AR object registry service236. For some embodiments, the AR object registry service236stores a registration of an AR object (with respect to a set of coordinates corresponding to a location on a map) in the geolocation data of the logical topological data layer214as described herein (e.g., using model_id of TABLE 2). Similarly, for some embodiments, the AR object registry service236stores a registration of a ranker (e.g., with respect to a set of coordinates corresponding to a location on a map) as geolocation data of the logical topological data layer214as described herein (e.g., using ranker_id of TABLE 2). 
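The rate limitation on registration requests can be sketched as a sliding-window limiter; the five-per-day default mirrors the example above, while the class name and the in-memory storage are assumptions for illustration:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RegistrationRateLimiter:
    """Allow at most `limit` registration requests per user within a
    rolling `window` (in seconds; one day by default)."""

    def __init__(self, limit: int = 5, window: float = 86_400.0):
        self.limit = limit
        self.window = window
        self._events = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        events = self._events[user_id]
        while events and now - events[0] >= self.window:
            events.popleft()  # drop requests that fell out of the window
        if len(events) < self.limit:
            events.append(now)
            return True
        return False  # over the limit; an embodiment might require payment here
```

The `False` branch is where the pay-to-register-more behavior described above could be hooked in.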
Some embodiments can facilitate registration of an AR object or a ranker in connection with an attribute of a client device (e.g., identity of a particular client device, a client device type, version of operating system, etc.) or an attribute of a client application (e.g., identity or version of a particular client application or a particular client application type, such as a web browser, social networking, or messaging software application). FIG.3is a block diagram illustrating an example of the AR object interactive session service232, according to some embodiments. As shown inFIG.3, the AR object interactive session service232comprises one or more mapping servers302(e.g., world server), and one or more session servers304. Depending on the embodiment, a particular mapping server302can determine and assign a session (operating on a particular session server304) to a client device, and a particular session server304can operate one or more sessions that support user interactions (e.g., multiuser interactions) with one or more AR objects. According to some embodiments, a client device of a user sends a request to use a session to interact (e.g., facilitate multiuser interaction) with a set of AR objects. The one or more mapping servers302can receive the request, determine a particular one of the session servers304(hereafter, the determined session server304) to service the request, assign the user or the client device to a new or an existing session operating on the determined session server304that can service the request, and re-route or otherwise re-direct the client device to the determined session server304. Depending on the embodiment, the mapping server302can determine which of the session servers304is to service a given request based on, for example, the set of coordinates of the client device, identity of the user, current load of the session servers304, or association of the session servers304with marked areas (e.g., geopartitioning) of the real-world environment. 
For instance, the mapping server302can determine which of the session servers304is to service a given request such that multiple simultaneous users interacting with the same set of AR objects are partitioned in a way that does not overload any one of the session servers304, while maintaining preferential user groupings in sessions (e.g., placing users that are friends together in the same session). A given session server304can operate a plurality of simultaneous sessions (e.g., based on its particular load or capacity). As described herein, a given session maintains its own session state for each AR object involved in the given session, and those session states are not accessible outside of the given session. A given session server304can operate a virtual, canonical copy of a session. Once multiple client devices of users participating in a given session (operating on a given session server304) have established a data connection with the given session, each client device can communicate data, such as user inputs (e.g., swipes, head tilts, etc.) or session state updates to AR objects, to the given session, and the given session can share the data with the other client devices connected to the given session. A client device can share data with the given session using, for example, a low latency, User Datagram Protocol (UDP)-based connection. Upon receiving user inputs from a client device, the given session can validate the user inputs (e.g., to deter or avoid bots or cheaters) and can share the validated user inputs with all other client devices (e.g., using the same low latency, UDP-based connection) so the client devices can update their local copies of session states of AR objects based on the validated user inputs accordingly. The given session can also update some session information based on the validated user inputs. 
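In a non-limiting sketch, the mapping server302's geopartition-based selection of a session server304might look like the following; the grid-cell partitioning scheme, cell size, and the `load` field are illustrative assumptions standing in for whatever partitioning an embodiment actually uses:

```python
from typing import Any, Dict, List, Tuple

def cell_for(lat: float, lon: float, cell_deg: float = 1.0) -> Tuple[int, int]:
    """Quantize coordinates into a coarse grid cell (the geopartition)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def assign_session_server(
    servers_by_cell: Dict[Tuple[int, int], List[Dict[str, Any]]],
    lat: float,
    lon: float,
) -> Dict[str, Any]:
    """Among the session servers responsible for the client's grid cell,
    pick the least-loaded one, so that simultaneous users in one area are
    partitioned without overloading any single session server."""
    candidates = servers_by_cell[cell_for(lat, lon)]
    return min(candidates, key=lambda server: server["load"])
```

Preferential groupings (e.g., placing friends in the same session) could be layered on by biasing the `min` key toward servers already hosting a friend's session.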
FIG.4is a flow diagram illustrating an example of session handling by an AR object interactive session service (e.g.,232), according to some embodiments. At the start, a client application operating on a client device404can cause the client device404to request/query for one or more AR objects from an AR object query service402(e.g., request/query based on a set of coordinates corresponding to the current location of the client device404and a radius value). At operation420, the client device404can download data for the one or more AR objects that result from the request/query, which can include model data and rule data for the one or more AR objects. Subsequently, the user can interact with the one or more AR objects in accordance with one or more rules described by the rule data. Eventually, the user may request a session to facilitate multiuser interaction with respect to at least one of the one or more AR objects. Accordingly, at operation422, the client device404can initialize a connection with a mapping server406(e.g., world server), which can enable the client device404to send its request for a session. In response to the request, at operation424, the mapping server406can check an AR object interactive session cache410to determine whether there are any existing sessions associated with the at least one AR object (to which the client device404can be assigned), or whether a new session needs to be created for the request. InFIG.4, the AR object interactive session cache410can cache information regarding sessions currently being operated by one or more session servers. Accordingly, a session server such as session server408can periodically update information stored on AR object interactive session cache410(as represented by operation428). 
After the mapping server406identifies and assigns the client device404to a new or existing session, at operation426, the mapping server406can redirect the client device404to the session server operating the assigned session (represented by the session server408). Once redirected to the session server408and a data connection with the assigned session is established, at operation430, the client device can send its user's inputs to the assigned session (to be shared by the assigned session with other client devices connected to the assigned session), and the client device can receive user inputs from client devices of other users participating in the assigned session. Based on the received user inputs, the client device404can update its local copy of session states for AR objects involved in the assigned session. FIG.5is a flow diagram illustrating an example of using one or more rankers for providing a client device with one or more AR objects, according to some embodiments. At the start, a client application operating on a client device502can cause the client device502to request/query for one or more AR objects from an AR object query service504(e.g., request/query based on a set of coordinates corresponding to the current location of the client device502and a radius value). Operation530can represent the client device502sending the request/query to the AR object query service504. The request/query can result from a user of the client device502using the client device502(e.g., smartphone) to scan their surrounding real-world environment for AR objects. The AR object query service504can determine one or more rankers associated with the received request/query (e.g., based on a set of coordinates provided by the request/query). One of the determined rankers can be one that accesses bidding data from a bidding system506at operation532and prioritizes one or more AR objects over other AR objects. 
As described herein, the bidding system506can enable a user to bid on prioritizing (e.g., boosting the priority of) a registered AR object. At operation534, the AR object query service504can query geolocation data508to determine an intermediate set of AR objects associated with coordinates within a radius of the client device502's current geographic location, and then apply the one or more determined rankers to the intermediate set of AR objects (e.g., filter or sort the intermediate set of AR objects) to reach a final set of AR objects. At operation536, the AR object query service504can obtain (e.g., fetch) data for the final set of AR objects, which can include, for example, data from AR object model data510and rule data associated with the final set of AR objects. At operation538, the data for the final set of AR objects is provided to and downloaded by the client device502(as represented by512). At operation540, the client device502can determine positioning of a virtual camera with respect to a display of the client device502(as represented by514) and, at operation542, the client device502can display rendered models of one or more of the AR objects from the final set based on the positioned virtual camera (as represented by516). Subsequently, the user of the client device502can interact with the AR objects displayed on the client device502. FIG.6is a block diagram illustrating an example implementation of the AR object system116, according to some embodiments. The AR object system116is shown as including an augmented reality (AR) object query module602, an augmented reality (AR) object interactive session module604, an augmented reality (AR) object registry module606, an augmented reality (AR) object bidding module608, a three-dimensional (3D) topological data module610, a logical topological data module612, a user data module614, and an augmented reality (AR) object model data module616. 
The various modules of the AR object system116are configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of these modules may be implemented using one or more processors600(e.g., by configuring such one or more processors600to perform functions described for that module) and hence may include one or more of the processors600. Any one or more of the modules described may be implemented using hardware alone (e.g., one or more of the computer processors of a machine, such as machine1500) or a combination of hardware and software. For example, any described module of the AR object system116may physically include an arrangement of one or more of the processors600(e.g., a subset of or among the one or more processors of the machine, such as the machine1500) configured to perform the operations described herein for that module. As another example, any module of the AR object system116may include software, hardware, or both, that configure an arrangement of one or more processors600(e.g., among the one or more processors of the machine, such as the machine1500) to perform the operations described herein for that module. Accordingly, different modules of the AR object system116may include and configure different arrangements of such processors600or a single arrangement of such processors600at different points in time. Moreover, any two or more modules of the AR object system116may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices. The AR object query module602is configured to facilitate or implement aspects, features, or functionalities of the AR object query service234described herein with respect toFIG.2. 
The AR object interactive session module604is configured to facilitate or implement aspects, features, or functionalities of the AR object interactive session service232described herein with respect toFIG.2. The AR object registry module606is configured to facilitate or implement aspects, features, or functionalities of the AR object registry service236described herein with respect toFIG.2. For some embodiments, the AR object registry module606also supports registration of a ranker as described herein. The AR object bidding module608is configured to facilitate or implement aspects, features, or functionalities of a bidding system described herein with respect to the AR object query service234ofFIG.2. The 3D topological data module610is configured to facilitate or implement aspects, features, or functionalities with respect to the 3D topological data layer212described herein with respect toFIG.2. The logical topological data module612is configured to facilitate or implement aspects, features, or functionalities with respect to the logical topological data layer214described herein with respect toFIG.2. The user data module614is configured to facilitate or implement aspects, features, or functionalities of the user data layer216described herein with respect toFIG.2. The AR object model data module616is configured to facilitate or implement aspects, features, or functionalities of the AR object model data layer218described herein with respect toFIG.2. For some embodiments, a set of world servers and a set of session servers are used to implement or operate the AR object interactive session module604. Additionally, for some embodiments, the AR object query module602is implemented or operates on a set of query servers that are separate from the set of world servers and the set of session servers used to operate the AR object interactive session module604. More regarding modules602-616is described below with respect to operations of the methods depicted byFIGS.7-13. 
FIGS.7through13are flowcharts illustrating methods relating to an AR object registry, according to some embodiments. Various methods described herein with respect toFIGS.7through13may be embodied in machine-readable instructions for execution by one or more computer processors such that the operations of the methods may be performed in part or in whole by the server system108or, more specifically, the AR object system116. Accordingly, various methods are described herein by way of example with reference to the AR object system116. At least some of the operations of the method800may be deployed on various other hardware configurations, and the methods described herein are not intended to be limited to being operated by the server system108. Though the steps of the methods described herein may be depicted and described in a certain order, the order in which the operations are performed may vary between embodiments. For example, an operation may be performed before, after, or concurrently with another operation. Additionally, the components described with respect to the methods are merely examples of components that may be used with the methods, and other components may also be utilized, in some embodiments. Referring now toFIG.7, a method700is illustrated for providing AR objects to a client device and handling a session for interacting with a provided AR object. At operation702, the AR object query module602receives a query from a client device for one or more augmented reality objects, where the query can comprise a current set of coordinates that corresponds to a position of the client device on a map, and can further comprise a radius relative to (e.g., centered by a location corresponding to) the current set of coordinates. 
In response to the query received at operation702, at operation704, the AR object query module602determines (e.g., identifies) a set of augmented reality objects based on the query and, at operation706, sends a query result to the client device, where the query result comprises result data for the set of augmented reality objects determined by operation704. The determination of the set of augmented reality objects based on the query can comprise the AR object query module602executing a search based on the received query. The set of augmented reality objects can be determined by operation704from a plurality of augmented reality objects registered on an augmented reality object registry (e.g., as registered via the AR object registry module606). As described herein, based on the result data provided to the client device by the query result, the client device can display (or surface) one or more of the augmented reality objects from the set of augmented reality objects. Depending on the embodiment, the result data can comprise a current stored state of the at least one augmented reality object (state stored on the AR object system116), where the current stored state once provided to the client device can determine an initial state of the at least one augmented reality object for the user on the client device. The result data can comprise model data for each augmented reality object in the set of augmented reality objects. The result data can comprise location (e.g., position) data that describes, for each augmented reality object in the set of augmented reality objects, a given set of coordinates on the map at which the augmented reality object is to be displayed by a given client device when the given client device generates an augmented reality view relative to the given set of coordinates. 
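The determination at operation704of a set of augmented reality objects within the query radius can be sketched, non-limitingly, as follows; the linear scan over registry records (with assumed `lat`/`lon` fields) stands in for whatever spatial index an embodiment actually uses:

```python
import math
from typing import Any, Dict, List

def within_radius(
    registry: List[Dict[str, Any]],
    lat: float,
    lon: float,
    radius_m: float,
) -> List[Dict[str, Any]]:
    """Return registered AR objects whose coordinates fall within
    radius_m meters of the client position, using a haversine
    great-circle distance."""

    def haversine_m(lat1, lon1, lat2, lon2):
        earth_radius_m = 6_371_000.0  # mean Earth radius
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * earth_radius_m * math.asin(math.sqrt(a))

    return [obj for obj in registry
            if haversine_m(lat, lon, obj["lat"], obj["lon"]) <= radius_m]
```

The result of this scan would correspond to the intermediate/initial set described above, to which rankers can then be applied before the query result is sent to the client device.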
The result data can comprise orientation data that describes, for each augmented reality object in the set of augmented reality objects, a given orientation at which a given client device is to display the augmented reality object when the client device generates an augmented reality view that includes the augmented reality object. Additionally, the result data can comprise rule data that describes a set of interaction rules associated with the set of augmented reality objects, where the set of interaction rules can determine interactions available to the user (on the first client device) with respect to the set of augmented reality objects. The augmented reality registry of the AR object system116can associate a given augmented reality object with one or more interaction rules. At operation708, the AR object interactive session module604receives, from the client device, a request for a user at the client device to interact with at least one augmented reality object in the set of augmented reality objects (determined at operation704and for which the query result was sent to the client device at operation706). In response to the request received at operation708, at operation710, the AR object interactive session module604determines (e.g., identifies) a given session server to service the request received at operation708and, at operation712, assigns the client device to a given session operating on the given session server. The given session can be a new session created by the given session server in response to the request, or an existing session that involves the same set of AR objects associated with the request. For some embodiments, the AR object interactive session module604can check a session cache to determine whether a relevant, existing session already exists for the request. The given session server determined at operation710can be associated with a geographic partition of the map that contains the position of the client device on the map. 
As described herein, the given session can facilitate interaction with the at least one augmented reality object by the user of the client device. Additionally, as described herein, the given session can maintain a session state for the at least one augmented reality object with respect to one or more users associated with (e.g., participating in) the given session, where the session state can be updated based on interaction of at least one of the users with the at least one augmented reality object. The given session server can be determined from a plurality of session servers (e.g.,304), and a mapping server (e.g.,302) can perform the determination of the given session server. For some embodiments, the plurality of session servers operates on a first set of computer devices that is separate from a second set of computer devices operating the mapping server. For some embodiments, assigning the first client device to the given session operating on the given session server comprises redirecting the client device from the mapping server to the given session server. Once the given session is assigned to the user, the user data can be updated via the user data module614. Subsequent to the assignment, a network connection can be established between the client device and the (assigned) given session on the given session server. Referring now toFIG.8, a method800is illustrated for providing AR objects to a client device and handling a session for a plurality of users to interact with a provided AR object. For some embodiments, operations802through806are respectively similar to operation702through706of the method700described above with respect toFIG.7, and performed with respect to a first client device (associated with a first user). 
At operation808, the AR object interactive session module604receives, from the first client device, a request for a plurality of users to interact together (e.g., multiuser interactive session) with at least one augmented reality object in the set of augmented reality objects (determined at operation804and for which the query result was sent to the first client device at operation806). As described herein, a multiuser interactive session can facilitate interaction by a plurality of users with the at least one augmented reality object. In response to the request received at operation808, at operation810, the AR object interactive session module604determines (e.g., identifies) a given session server to service the request received at operation808and, at operation812, assigns the first client device to a given session operating on the given session server. As described herein, the given session server determined at operation810can be associated with a geographic partition of the map that contains the position of the first client device on the map. Additionally, at operation814, the AR object interactive session module604assigns a second client device associated with a second user to the same given session operating on the same given session server (determined at operation810), where the first user of the first client device and the second user of the second client device are part of the plurality of users for which the session request was received at operation808. Additionally, other users of the plurality of users can be assigned to the same given session on the same given session server in a similar manner. Referring now toFIG.9, a method900is illustrated for providing AR objects to a client device and handling a session for interacting with a provided AR object. For some embodiments, operations902through912are similar to operation702through712of the method700described above with respect toFIG.7. 
At operation914, at termination of the given session, the AR object interactive session module604stores (or causes the storage of) a final version of a session state of the at least one augmented reality object. As described herein, the final version of a session state of a given augmented reality object can be determined (e.g., adjusted) by interactions of users participating in the given session. Referring now toFIG.10, a method1000is illustrated for registering an AR object to an AR object registry. At operation1002, the AR object registry module606receives, from a client device associated with a user, a request to register a given augmented reality object on an augmented reality object registry in association with a given set of coordinates on the map. In response to the request received at operation1002, at operation1004, the AR object registry module606determines, based on permission data, whether the user has permission to register the given augmented reality object in association with the given set of coordinates on the map. For some embodiments, the permission data describes an association between at least one set of coordinates on the map and a set of permissions. The permission data can be associated with a marked area of the map that contains the given set of coordinates. Accordingly, for some embodiments, the permission data can be provided by zone data accessible through the logical topological data module612. Additionally, in response to the request received at operation1002, operation1006is performed. At operation1006, based on the determining whether the user has permission, the AR object registry module606registers the given augmented reality object on the augmented reality object registry in association with the given set of coordinates on the map. When doing so, the AR object registry module606can designate the user as the owner or controller of the registration. 
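The permission check at operation1004can be sketched as follows; the zone record layout (a bounding box plus an allow-list of user ids) is an assumption standing in for the zone data of the logical topological data layer214:

```python
from typing import Any, Dict, List, Tuple

Coords = Tuple[float, float]  # (lat, lon)

def can_register(user_id: str, coords: Coords, zones: List[Dict[str, Any]]) -> bool:
    """Determine, from zone-based permission data, whether a user may
    register an AR object at the given coordinates. An empty allow-list
    is read here as 'open to the public'."""
    lat, lon = coords
    for zone in zones:
        (min_lat, min_lon), (max_lat, max_lon) = zone["bbox"]
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            allowed = zone["allowed_users"]
            return not allowed or user_id in allowed
    return True  # unzoned coordinates default to permitted in this sketch
```

The default for unzoned coordinates is a policy choice; an embodiment could equally default to deny and require an explicit public zone.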
Referring now toFIG.11, a method1100is illustrated for providing AR objects to a client device based on one or more rankers. At operation1102, the AR object query module602receives a query from a client device for one or more augmented reality objects, where the query can comprise a current set of coordinates that corresponds to a position of the client device on a map, and can further comprise a radius relative to (e.g., centered by a location corresponding to) the current set of coordinates. In response to the query received at operation1102, the AR object query module602: at operation1104, determines (e.g., identifies) an intermediate set of augmented reality objects based on the query; at operation1106, determines a set of rankers for the query, where at least one ranker in the set of rankers is configured to filter or sort a set of augmented reality objects; and at operation1108, generates a final set of augmented reality objects by applying the set of rankers (e.g., filtering or sorting according to the rankers) to the intermediate set of augmented reality objects. An example ranker can include one that applies at least one of a filter or a sort order to a set of augmented reality objects. Another example ranker can include one that filters a set of augmented reality objects based on a set of priorities for the set of augmented reality objects. The priorities can be provided (or determined), for example, by geolocation data (e.g., provided via the logical topological data module612) or by bidding data (e.g., provided via the AR object bidding module608) that is associated with one or more of the augmented reality objects. As described herein, a bidding system (e.g., implemented by the AR object bidding module608) can enable a user to place a bid on an AR object registration to adjust (e.g., boost) the priority of that AR object registration. 
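The ranker pipeline of operations1104through1108can be sketched as a chain of callables, each of which filters or sorts the running result set. The ranker interface and the sample data are assumptions for illustration, not the disclosure's API.

```python
# A minimal sketch of operations 1104-1108: each ranker is a callable
# that filters or sorts a list of AR objects; rankers are applied in
# order to the intermediate result set to produce the final set.

def apply_rankers(objects, rankers):
    for ranker in rankers:
        objects = ranker(objects)
    return objects

# Example rankers: drop low-priority objects, then sort nearest first.
by_priority = lambda objs: [o for o in objs if o["priority"] >= 5]
by_distance = lambda objs: sorted(objs, key=lambda o: o["distance"])

intermediate = [{"id": 1, "priority": 7, "distance": 5.0},
                {"id": 2, "priority": 3, "distance": 1.0},
                {"id": 3, "priority": 9, "distance": 2.0}]
final = apply_rankers(intermediate, [by_priority, by_distance])
assert [o["id"] for o in final] == [3, 1]
```

Composing rankers as plain functions keeps filtering and sorting independent, so a query can mix, say, a priority filter sourced from bidding data with a geolocation-based sort.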
Through the AR object bidding module608, a ranker can: request, from a bidding system, priority information for a set of augmented reality objects, and receive, from the bidding system, priority data that describes priorities for at least one of the set of augmented reality objects. The determination (e.g., identification) of at least one of the rankers can be based on an association of the ranker to the user of the client device (e.g., user selected use of the ranker or registered by the user). The determination of at least one of the rankers can be based on the current set of coordinates corresponding to the location of the client device. In doing so, a ranker can be applied to a radius around the client device. The determination of at least one of the rankers can be based on an attribute of a client device, such as the identity of the client device or a device type of the client device. The determination of at least one of the rankers can be based on at least one of a set (e.g., range) of dates or a set of times. In doing so, a ranker can be applied based on different portions of the year (e.g., according to seasons of the year). As an alternative to operations1104through1108, for some embodiments, in response to the query received at operation1102, the AR object query module602: determines a set of rankers for the client query (e.g., where at least one ranker in the set of rankers comprises a filter parameter for filtering a set of augmented reality objects or a sort order parameter for sorting a set of augmented reality objects); generates (e.g., constructs) a query (a ranker-based query) based on the client query and the set of rankers; and then determines (e.g., identifies) a final set of augmented reality objects based on the ranker-based query. 
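The ranker-selection criteria above (user association, device attributes, location radius, dates and seasons) can be sketched as matching against optional criteria on each registered ranker. The registry layout, field names, and season mapping below are illustrative assumptions.

```python
# Hedged sketch of ranker selection: each registered ranker carries
# optional matching criteria (owning user, device type, a radius around
# map coordinates, a season), and only rankers whose criteria all match
# the query context are chosen for application.
import math
from datetime import date

def season_of(d):
    # Northern-hemisphere meteorological seasons (an assumption).
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(d.month, "autumn")

def select_rankers(registered, user, device_type, coords, today):
    chosen = []
    for entry in registered:
        if "user" in entry and entry["user"] != user:
            continue  # ranker registered for a different user
        if "device_type" in entry and entry["device_type"] != device_type:
            continue  # ranker registered for a different device type
        if "season" in entry and entry["season"] != season_of(today):
            continue  # seasonal ranker out of season
        if "center" in entry:  # ranker scoped to a radius around a location
            cx, cy = entry["center"]
            if math.hypot(coords[0] - cx, coords[1] - cy) > entry["radius"]:
                continue
        chosen.append(entry["ranker"])
    return chosen

registered = [{"ranker": "holiday-theme", "season": "winter"},
              {"ranker": "nearby-boost", "center": (0, 0), "radius": 50},
              {"ranker": "personal-filter", "user": "alice"}]
chosen = select_rankers(registered, "alice", "phone", (10, 10), date(2020, 7, 1))
assert chosen == ["nearby-boost", "personal-filter"]
```

Here the winter-only ranker is skipped for a July query, while the location-scoped and user-scoped rankers both match.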
At operation1110, the AR object query module602sends a query result to the client device, where the query result comprises result data for the final set of augmented reality objects (e.g., the final set as determined by operation1108or the alternative approach). As described herein, the result data for the final set of augmented reality objects can comprise various types of data (e.g., location data, model data, orientation data, etc.) for one or more of the augmented reality objects in the final set. Referring now toFIG.12, a method1200is illustrated for providing AR objects to a client device based on one or more rankers involving a bidding system. For some embodiments, operations1202through1206are respectively similar to operations1102through1106of the method1100described above with respect toFIG.11. At operation1208, the AR object query module602requests, from a bidding system (via the AR object bidding module608), priority information for the intermediate set of augmented reality objects determined at operation1204. For some embodiments, operation1208is performed based on at least one of the rankers determined at operation1206(e.g., the ranker uses priority information of augmented reality objects to filter or sort them). At operation1210, the AR object query module602receives, from the bidding system, priority data (or bidding data) that describes a priority for at least one augmented reality object in the intermediate set of augmented reality objects. For some embodiments, operations1212and1214are respectively similar to operations1108and1110of the method1100described above with respect toFIG.11. As described herein, the priority information obtained via operations1208and1210can enable a ranker applied to the intermediate set of augmented reality objects (by operation1212) to filter or sort the intermediate set of augmented reality objects. 
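The bidding-driven ranking of operations1208through1212can be sketched as fetching per-object priorities and sorting by them. The bidding-system interface is an assumption; here it is reduced to a mapping from AR object id to a bid-derived priority.

```python
# Illustrative sketch of operations 1208-1212: request priority (bid)
# data for the intermediate result set from a hypothetical bidding
# system, then apply a ranker that sorts so higher-priority (higher-bid)
# registrations come first.

def fetch_priorities(bidding_system, object_ids):
    """Operations 1208-1210: objects without a bid default to priority 0."""
    return {oid: bidding_system.get(oid, 0) for oid in object_ids}

def rank_by_priority(objects, priorities):
    """Operation 1212: a ranker that sorts by bid-derived priority."""
    return sorted(objects, key=lambda o: priorities[o["id"]], reverse=True)

bidding_system = {"ar-7": 10, "ar-3": 2}  # hypothetical bid data
intermediate = [{"id": "ar-3"}, {"id": "ar-7"}, {"id": "ar-9"}]
priorities = fetch_priorities(bidding_system, [o["id"] for o in intermediate])
final = rank_by_priority(intermediate, priorities)
assert [o["id"] for o in final] == ["ar-7", "ar-3", "ar-9"]
```

Defaulting unbid objects to priority 0 keeps them in the result set while letting boosted registrations surface first, matching the "adjust (e.g., boost)" behavior described for the bidding system.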
Referring now toFIG.13, a method1300is illustrated for registering a ranker to a ranker registry (which may be implemented as part of the AR object registry). At operation1302, the AR object registry module606receives, from a client device associated with a user, a request to register a given ranker on a ranker registry (e.g., in association with the given set of coordinates on the map, the marked area, with a specific client device, a client device type, user, type of user, time of day, date, season, etc.). In response to the request received at operation1302, at operation1304, the AR object registry module606determines, based on permission data, whether the user has permission to register the given ranker. For some embodiments, the permission data describes an association between at least one set of coordinates on the map and a set of permissions. The permission data can be associated with a marked area of the map that contains the given set of coordinates. Accordingly, for some embodiments, the permission data can be provided by zone data accessible through the logical topological data module612. Additionally, in response to the request received at operation1302, operation1306is performed. At operation1306, based on the determining whether the user has permission, the AR object registry module606registers the given ranker on the ranker registry (e.g., in association with the given set of coordinates on the map, the marked area, with a specific client device, a client device type, user, type of user, time of day, date, season, etc.). When doing so, the AR object registry module606can designate the user as the owner or controller of the registration. The ranker can be registered for use by the user only, or open for use by other users on the AR object system116. 
FIG.14is a block diagram illustrating an example software architecture1406, which may be used in conjunction with various hardware architectures herein described.FIG.14is a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture1406may execute on hardware such as machine1500ofFIG.15that includes, among other things, processors1504, memory/storage1506, and I/O components1518. A representative hardware layer1452is illustrated and can represent, for example, the machine1500ofFIG.15. The representative hardware layer1452includes a processing unit1454having associated executable instructions1404. Executable instructions1404represent the executable instructions of the software architecture1406, including implementation of the methods, components and so forth described herein. The hardware layer1452also includes memory/storage1456, which also has executable instructions1404. The hardware layer1452may also comprise other hardware1458. In the example architecture ofFIG.14, the software architecture1406may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1406may include layers such as an operating system1402, libraries1420, applications1416, and a presentation layer1414. Operationally, the applications1416or other components within the layers may invoke application programming interface (API) calls1408through the software stack and receive a response in the example form of messages1412to the API calls1408. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware1418, while others may provide such a layer. Other software architectures may include additional or different layers. 
The operating system1402may manage hardware resources and provide common services. The operating system1402may include, for example, a kernel1422, services1424and drivers1426. The kernel1422may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1422may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1424may provide other common services for the other software layers. The drivers1426are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1426include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries1420provide a common infrastructure that is used by the applications1416or other components or layers. The libraries1420provide functionality that allows other software components to perform tasks more easily than by interfacing directly with the underlying operating system1402functionality (e.g., kernel1422, services1424, or drivers1426). The libraries1420may include system libraries1444(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1420may include API libraries1446such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. 
The libraries1420may also include a wide variety of other libraries1448to provide many other APIs to the applications1416and other software components/modules. The frameworks/middleware1418(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications1416or other software components/modules. For example, the frameworks/middleware1418may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware1418may provide a broad spectrum of other APIs that may be used by the applications1416or other software components/modules, some of which may be specific to a particular operating system1402or platform. The applications1416include built-in applications1438or third-party applications1440. Examples of representative built-in applications1438may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, or a game application. Third-party applications1440may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications1440may invoke the API calls1408provided by the mobile operating system (such as operating system1402) to facilitate functionality described herein. The applications1416may use built-in operating system functions (e.g., kernel1422, services1424, or drivers1426), libraries1420, and frameworks/middleware1418to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as presentation layer1414. 
In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user. FIG.15is a block diagram illustrating components of a machine1500, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a computer-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.15shows a diagrammatic representation of the machine1500in the example form of a computer system, within which instructions1510(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1500to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions1510may be used to implement modules or components described herein. The instructions1510transform the general, non-programmed machine1500into a particular machine1500programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine1500operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1500may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. 
The machine1500may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1510, sequentially or otherwise, that specify actions to be taken by machine1500. Further, while only a single machine1500is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1510to perform any one or more of the methodologies discussed herein. The machine1500may include processors1504, memory/storage1506, and I/O components1518, which may be configured to communicate with each other such as via a bus1502. The memory/storage1506may include a memory1514, such as a main memory, or other memory storage, and a storage unit1516, both accessible to the processors1504such as via the bus1502. The storage unit1516and memory1514store the instructions1510embodying any one or more of the methodologies or functions described herein. The instructions1510may also reside, completely or partially, within the memory1514, within the storage unit1516, within at least one of the processors1504(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1500. Accordingly, the memory1514, the storage unit1516, and the memory of processors1504are examples of machine-readable media. The I/O components1518may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. 
The specific I/O components1518that are included in a particular machine1500will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1518may include many other components that are not shown inFIG.15. The I/O components1518are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various embodiments, the I/O components1518may include output components1526and input components1528. The output components1526may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1528may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further embodiments, the I/O components1518may include biometric components1530, motion components1534, environment components1536, or position components1538among a wide array of other components. 
For example, the biometric components1530may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components1534may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environment components1536may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1538may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1518may include communication components1540operable to couple the machine1500to a network1532or devices1520via coupling1522and coupling1524respectively. 
For example, the communication components1540may include a network interface component or other suitable device to interface with the network1532. In further examples, communication components1540may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1520may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components1540may detect identifiers or include components operable to detect identifiers. For example, the communication components1540may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1540, such as, location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. 
Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. The terms “a” or “an” should be read as meaning “at least one,” “one or more,” or the like. The use of words and phrases such as “one or more,” “at least,” “but not limited to,” or other like phrases shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. 
Boundaries between various resources, operations, components, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The description above includes systems, methods, devices, instructions, and computer media (e.g., computing machine program products) that embody illustrative embodiments of the disclosure. In the description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Glossary “AUGMENTED REALITY OBJECT” in this context can refer to a virtual object (e.g., two dimension or three dimensional virtual objects) that can be presented in a client device-generated view of a real-world environment (e.g., a view presented on a display of a mobile client device), where the virtual object can interact with or enhance a real-world physical object of the real-world environment presented in the view. 
For example, using a camera of a smartphone, a user can view their surrounding real-world environment through the smartphone's display and the smartphone can enhance that view by displaying (e.g., superimposing) one or more virtual objects (e.g., three dimensional virtual objects) in the view in connection with one or more particular real-world physical objects of the real-world environment. For instance, an augmented reality object can be combined with a live (e.g., real-time or near real-time) camera feed such that when the augmented reality object is presented, it appears situated within a live three-dimensional environment (e.g., the augmented reality object appears to occupy a consistent three-dimensional volume and to change dynamically in aspect responsive to movement of the camera in a manner similar to that which would have been the case were the AR object a real-world physical object). In addition to visual information, a client device can convey to a user other sensory information in association with a particular augmented reality object, such as auditory information (e.g., music) and haptic information. “MIXED REALITY” in this context can refer to a merger of a real-world environment and a virtual world environment (that can include one or more augmented reality objects) to generate new visualizations through a client device. The new visualizations can enhance one or more real-world physical objects of the real-world environment. The new visualizations can create a new mixed reality environment in which real-world physical objects and augmented reality objects can coexist and interact with each other in real time. Additionally, within mixed reality, a user can use the client device to interact in real time with the augmented reality objects. “CLIENT DEVICE” in this context can refer to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. 
A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smart phones, tablets, ultra books, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “COMMUNICATIONS NETWORK” in this context can refer to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. 
“EPHEMERAL” in this context can describe an item that is accessible for a time-limited duration. An ephemeral item may be an AR object, text, an image, a video, and the like. The access time for the ephemeral item may be set by the item owner or originator (e.g., the message sender or the user registering the AR object). Alternatively, the access time may be a default setting or a setting specified by the accessing user (e.g., the recipient or the user attempting to access the registered AR object). Regardless of the setting technique, the ephemeral item is transitory. “MACHINE-READABLE MEDIUM” in this context can refer to a component, device or other tangible media able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)) and/or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. 
“COMPONENT” in this context can refer to a device, physical entity or logic having boundaries defined by function or subroutine calls, branch points, application program interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. 
Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. 
In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
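The store-then-retrieve pattern above can be sketched in miniature; here a Python dict stands in for the shared memory device, which is purely an assumption of the sketch:

```python
# A dict stands in for the memory structure both components can access.
shared_memory = {}

def component_a(data):
    # One hardware component performs an operation and stores its output
    # in the memory device it is communicatively coupled to.
    shared_memory["a_output"] = sum(data)

def component_b():
    # A further component, at a later time, retrieves the stored output
    # and processes it.
    return shared_memory["a_output"] * 2

component_a([1, 2, 3])
assert component_b() == 12
```

The two functions never run at the same time, mirroring components that are configured or instantiated at different times yet still communicate.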
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “PROCESSOR” in this context can refer to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC) or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. “TIMESTAMP” in this context can refer to a sequence of characters or encoded information identifying when a certain event occurred, for example giving date and time of day, sometimes accurate to a small fraction of a second.
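As a concrete, non-limiting example of such a character sequence, an ISO 8601 rendering carries the date, the time of day, and a sub-second fraction:

```python
from datetime import datetime, timezone

# An event time rendered as a character sequence identifying when the
# event occurred, accurate here to microseconds.
event = datetime(2024, 1, 2, 3, 4, 5, 123456, tzinfo=timezone.utc)
timestamp = event.isoformat()
assert timestamp == "2024-01-02T03:04:05.123456+00:00"
```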
11943304

DESCRIPTION OF EMBODIMENTS

At least one embodiment provides an information processing method, a communication apparatus, and a communication system, to establish a shared tunnel between an access network node and a user plane function network element, so that a terminal device transmits data based on the shared tunnel. The following describes at least one embodiment with reference to the accompanying drawings. In the specification, claims, and accompanying drawings of this application, the terms “first”, “second”, and so on are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. The terms used in such a way are interchangeable in proper circumstances; they merely distinguish between objects having a same attribute, as described in at least one embodiment. In addition, the terms “include”, “contain” and any other variants mean to cover the non-exclusive inclusion, so that a process, method, system, product, or device that includes a series of units is not necessarily limited to those units, but includes other units not expressly listed or inherent to such a process, method, system, product, or device. The technical solutions in at least one embodiment are applied to various communication systems for data processing, for example, a code division multiple access (code division multiple access, CDMA) system, a time division multiple access (time division multiple access, TDMA) system, a frequency division multiple access (frequency division multiple access, FDMA) system, an orthogonal frequency division multiple access (orthogonal frequency division multiple access, OFDMA) system, a single-carrier frequency division multiple access (single-carrier FDMA, SC-FDMA) system, and another system. The terms “system” and “network” are interchangeable.
The CDMA system implements wireless technologies such as universal terrestrial radio access (universal terrestrial radio access, UTRA) and CDMA2000. The UTRA includes a wideband CDMA (wideband CDMA, WCDMA) technology and another variant technology of CDMA. The CDMA2000 covers the interim standard (interim standard, IS) 2000 (IS-2000), the IS-95, and the IS-856 standard. The TDMA system implements wireless technologies such as a global system for mobile communications (global system for mobile communications, GSM). The OFDMA system implements wireless technologies such as evolved universal terrestrial radio access (evolved UTRA, E-UTRA), ultra mobile broadband (ultra mobile broadband, UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, and Flash OFDMA. The UTRA is part of a universal mobile telecommunications system (UMTS), and the E-UTRA is an evolved version of the UMTS. Long term evolution (long term evolution, LTE) and the various versions evolved based on LTE in 3GPP are new versions of the UMTS that use the E-UTRA. A 5th generation (5th Generation, “5G” for short) communication system or new radio (New Radio, “NR” for short) is a next generation communication system under study. In addition, the communication system is further applicable to a future-oriented communication technology, and the technical solutions provided in at least one embodiment are applicable thereto. A system architecture and a service scenario described in at least one embodiment are intended to describe the technical solutions in at least one embodiment more clearly, and do not constitute a limitation on the technical solutions provided in at least one embodiment. A person of ordinary skill in the art may know that, with evolution of a system architecture and emergence of a new service scenario, the technical solutions provided in at least one embodiment are also applicable to similar technical problems. FIG. 1 shows a communication system 100 according to at least one embodiment.
The communication system 100 includes a core network control plane network element 101 and a session management network element 102. The core network control plane network element 101 is configured to: receive a tunnel establishment request from an application function entity, where the tunnel establishment request is used to request to establish a shared tunnel for transmitting data by a terminal device; after receiving the tunnel establishment request, determine a tunnel configuration policy corresponding to the tunnel establishment request; select the session management network element based on the tunnel establishment request; and send the tunnel configuration policy to the session management network element 102, where the tunnel configuration policy is used to indicate the session management network element to trigger establishment of the shared tunnel between an access network node and a user plane function network element. The session management network element 102 is configured to: after receiving the tunnel configuration policy, select the user plane function network element based on the tunnel configuration policy and user plane function network element capability information, and send first shared tunnel establishment information to the user plane function network element; and determine the access network node according to the tunnel configuration policy, and send second shared tunnel establishment information to the access network node, where the first shared tunnel establishment information and the second shared tunnel establishment information are used to establish the shared tunnel between the access network node and the user plane function network element. The communication system 100 includes a plurality of session management network elements 102.
The core network control plane network element 101 selects one session management network element 102 from the plurality of session management network elements 102 based on the tunnel establishment request from the application function entity, and then sends the tunnel configuration policy to the selected session management network element 102. The core network control plane network element 101 is a control entity having a policy decision function. For example, the core network control plane network element 101 is a policy control function entity (policy control function, PCF) in a core network of a 5G network. A main function of the PCF is to serve as a policy decision point, and the PCF provides rules such as detection that is based on a service data flow and an application, data transmission threshold control, quality of service (quality of service, QoS), and flow-based charging control. Specifically, the core network control plane network element 101 selects one session management network element 102 from the plurality of session management network elements 102, and sends the tunnel configuration policy to the session management network element 102, so that the session management network element 102 triggers the establishment of the shared tunnel. The core network control plane network element 101 may alternatively be another entity having a control decision function in another network (for example, a 6G network). This is not limited herein. The session management network element 102 may be any of various entities having a session management function. For example, the session management network element 102 is a session management function entity (session management function, SMF) in a 5G network. A main function of the SMF is to control establishment, modification, and deletion of a session, select a user plane node, and the like. Specifically, the session management network element 102 receives the tunnel configuration policy, and triggers the establishment of the shared tunnel.
The session management network element 102 may alternatively be another entity having a session management function in another network (for example, a 6G network). This is not limited herein. FIG. 2 shows a communication system 100 according to at least one embodiment. In addition to a core network control plane network element 101 and a session management network element 102, the communication system 100 further includes an access and mobility management network element 103. The access and mobility management network element 103 is configured to: receive a session establishment request from a terminal device; determine, based on the session establishment request, the session management network element 102 corresponding to the terminal device; and send a session management context establishment request to the session management network element 102. The session management network element 102 is configured to: after receiving the session management context establishment request, determine, based on the session management context establishment request, a shared tunnel established for the terminal device; send session establishment information to a user plane function network element, where the session establishment information includes an identifier of the terminal device and an identifier of the shared tunnel; and send shared tunnel configuration information to the access and mobility management network element 103, where the shared tunnel configuration information includes the identifier of the shared tunnel and an address of the user plane function network element. The access and mobility management network element 103 is an entity used for access and mobility management of the terminal device. For example, the access and mobility management network element 103 is an access and mobility management function entity (access and mobility management function, AMF) in a core network of a 5G network.
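As a rough, non-normative sketch of this flow (all identifiers and field names are invented for illustration), the session-management-side binding of a new session to an already-established shared tunnel might look like:

```python
def establish_session(ue_id, ue_groups, shared_tunnels):
    """Toy SMF-side flow: bind a UE's new session to the shared tunnel
    already established for its group (identifiers are invented)."""
    tunnel = shared_tunnels[ue_groups[ue_id]]
    # Session establishment information sent toward the UPF: the terminal
    # device's identifier plus the identifier of the shared tunnel.
    session_info = {"ue_id": ue_id, "tunnel_id": tunnel["tunnel_id"]}
    # Shared tunnel configuration information returned toward the AMF:
    # the tunnel identifier plus the UPF's address.
    tunnel_config = {"tunnel_id": tunnel["tunnel_id"],
                     "upf_address": tunnel["upf_address"]}
    return session_info, tunnel_config

tunnels = {"meters-bldg-7": {"tunnel_id": "st-001", "upf_address": "10.0.0.8"}}
groups = {"ue-42": "meters-bldg-7"}
info, cfg = establish_session("ue-42", groups, tunnels)
assert info == {"ue_id": "ue-42", "tunnel_id": "st-001"}
assert cfg == {"tunnel_id": "st-001", "upf_address": "10.0.0.8"}
```

Note that no tunnel is created here; the session merely references a tunnel that already exists.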
A main function of the AMF includes user registration management, accessibility detection, SMF selection, mobility status switching management, and the like. The access and mobility management network element 103 selects the session management network element 102 based on the session establishment request of the terminal device, and sends the session management context establishment request to the session management network element 102. The access and mobility management network element 103 may alternatively be an entity, having an access and mobility management function for the terminal device, in another network (for example, a 6G network). This is not limited herein. FIG. 3 shows a communication system 100 according to at least one embodiment. In addition to a core network control plane network element 101 and a session management network element 102, the communication system 100 further includes an application function entity 104. The application function entity 104 is configured to send a tunnel establishment request to the core network control plane network element 101, where the tunnel establishment request is used to request to establish a shared tunnel for transmitting data by a terminal device. The application function entity 104 is an entity that provides policy support for a core network. For example, the application function entity 104 is an application function entity (application function, AF) in a 5G network. A main function of the AF is to interact with a 3GPP core network to provide a service, to affect service flow routing, access network exposure, policy control, and the like. The application function entity 104 obtains user group information, service information, QoS information, network area information, and the like of the terminal device, and then the application function entity 104 requests to establish the shared tunnel for transmitting data by the terminal device.
The application function entity 104 may alternatively be an entity having an application function in another network (for example, a 6G network). This is not limited herein. FIG. 4 shows a communication system 100 according to at least one embodiment. In addition to a core network control plane network element 101 and a session management network element 102, the communication system 100 further includes an application function entity 104 and a network exposure network element 105. The application function entity 104 is configured to send a tunnel establishment request to the network exposure network element 105; and the network exposure network element 105 is configured to: after receiving the tunnel establishment request from the application function entity 104, send the tunnel establishment request to the core network control plane network element 101. The network exposure network element 105 is an entity having a network exposure function. For example, the network exposure network element 105 is a network exposure function entity (network exposure function, NEF) in a 5G network. The NEF has a capability of securely exposing a service provided by a 3GPP network function to, for example, a third party, an edge computing entity, or an AF. For example, the network exposure network element 105 provides a network exposure interface, so that the application function entity 104 sends the tunnel establishment request to the core network control plane network element 101 by using the network exposure network element 105. In at least one embodiment, the session management network element sends first shared tunnel establishment information to a user plane function network element; and the session management network element further sends second shared tunnel establishment information to an access network node.
The first shared tunnel establishment information and the second shared tunnel establishment information are used to establish the shared tunnel between the access network node and the user plane function network element. FIG. 5 is a schematic diagram of establishing a shared tunnel between a RAN and a UPF according to at least one embodiment. For example, a plurality of IoT UEs share a same data transmission tunnel, so that signaling overheads caused by establishing a separate data transmission tunnel for each IoT UE are avoided. For example, UE 1, UE 2, and UE 3 use the shared tunnel between the RAN and the UPF in response to transmitting data. The IoT UEs may be of a same device type (for example, water meter devices belonging to a same building), or may be IoT devices having a same service configuration, for example, meter reading devices that use a low data volume and operate without a strict transmission delay requirement, including IoT devices such as a water meter, an electricity meter, and a gas meter. Based on the communication systems shown in FIG. 1 to FIG. 4, the communication systems can be used to simplify a network architecture in which a large quantity of IoT terminals access a 5G network. In the communication systems, a shared tunnel dedicated to the terminals is established for the large quantity of IoT terminals, and the shared tunnel is dynamically created for an IoT service application, so that the 5G network can provide an IoT service more flexibly and effectively. At least one embodiment further provides an information processing method. As shown in FIG. 6, the method includes the following steps. 601: An application function entity sends a tunnel establishment request to a core network control plane network element, where the tunnel establishment request is used to request to establish a shared tunnel for transmitting data by a terminal device. The application function entity obtains information in response to the terminal device transmitting data.
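A minimal sketch of the sharing relationship in FIG. 5 (tunnel and UE identifiers are invented): several UEs map to one tunnel, so only one tunnel establishment is needed rather than one per UE.

```python
# One RAN<->UPF tunnel shared by several IoT UEs (identifiers illustrative).
shared_tunnel_id = "st-iot-01"
ue_to_tunnel = {ue: shared_tunnel_id for ue in ("UE 1", "UE 2", "UE 3")}

# All three UEs resolve to the same tunnel: one establishment, not three.
assert len(set(ue_to_tunnel.values())) == 1
assert len(ue_to_tunnel) == 3
```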
For example, the application function entity obtains at least one of the following information: user group information, service information, QoS information, and network area information of the terminal device. The application function entity generates the tunnel establishment request in response to the terminal device using the shared tunnel, where the tunnel establishment request is used to request to establish the shared tunnel for transmitting data by the terminal device. Then, the application function entity sends the tunnel establishment request to the core network control plane network element, to establish and update the shared tunnel. In at least one embodiment, the tunnel establishment request includes at least one of the following: first group information corresponding to the terminal device, service information corresponding to the shared tunnel, quality of service information corresponding to the shared tunnel, and network area information corresponding to the shared tunnel. The first group information is used to identify a user group corresponding to the shared tunnel, and is specifically a user group identifier (group ID). The service information is used to describe a service corresponding to the shared tunnel, and is information such as an internet protocol (internet protocol, IP) address and a port number of the service. The QoS information is used to describe a QoS configuration that is of a service application and that is on the shared tunnel, and is, for example, a data transmission rate (namely, a bandwidth) and an end-to-end delay provided by the shared tunnel. The network area information is used by the core network control plane network element to determine a range for establishing the shared tunnel, and is specific geographical area information. Information content carried by the tunnel establishment request is determined based on a specific scenario. This is not limited herein. 
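A non-normative sketch of such a request (field names are this sketch's own; every field is optional because, as stated, the carried content depends on the scenario):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TunnelEstablishmentRequest:
    """Possible request fields, all optional (names invented for illustration)."""
    group_id: Optional[str] = None            # first group information
    service_ip: Optional[str] = None          # service information: IP address
    service_port: Optional[int] = None        # service information: port number
    qos_bandwidth_kbps: Optional[int] = None  # QoS: data transmission rate
    qos_delay_ms: Optional[int] = None        # QoS: end-to-end delay
    network_area: Optional[str] = None        # geographical area of the tunnel

req = TunnelEstablishmentRequest(group_id="water-meters-bldg-7",
                                 qos_bandwidth_kbps=64,
                                 network_area="area-1")
assert req.group_id == "water-meters-bldg-7"
assert req.service_ip is None  # omitted fields simply stay unset
```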
In at least one embodiment, that an application function entity sends a tunnel establishment request to a core network control plane network element in step 601 includes: The application function entity sends the tunnel establishment request to a network exposure network element; and after receiving the tunnel establishment request from the application function entity, the network exposure network element sends the tunnel establishment request to the core network control plane network element. The network exposure network element provides a network exposure interface, and the application function entity sends the tunnel establishment request to the core network control plane network element by using the network exposure network element, so that the core network control plane network element obtains the tunnel establishment request. This implements information exchange between the application function entity and the core network control plane network element. In at least one embodiment, the core network control plane network element is determined by the network exposure network element based on the tunnel establishment request or configuration information of the network exposure network element. The communication system includes a plurality of core network control plane network elements. The network exposure network element selects one core network control plane network element from the plurality of core network control plane network elements, and then sends the tunnel establishment request to the selected core network control plane network element. The configuration information of the network exposure network element refers to a local configuration of the network exposure network element, and the core network control plane network element corresponding to the tunnel establishment request is determined based on the local configuration.
602: After receiving the tunnel establishment request, the core network control plane network element determines a tunnel configuration policy corresponding to the tunnel establishment request. In at least one embodiment, the core network control plane network element obtains the tunnel establishment request from the application function entity, and the core network control plane network element determines, by parsing the tunnel establishment request, that the shared tunnel is to be established for the terminal device. For example, the tunnel establishment request includes at least one of the following information: the user group information, the service information, the QoS information, and the network area information of the terminal device. Therefore, the core network control plane network element obtains, based on the user group information, information about a group in which the terminal device is located, and obtains, based on the service information, a type of a service that is transmitted in the shared tunnel. The core network control plane network element obtains, based on the QoS information, a QoS configuration that is of the service application and that is on the shared tunnel. The core network control plane network element obtains, based on the network area information, a range for establishing the shared tunnel. In at least one embodiment, the tunnel configuration policy includes at least one of the following: second group information corresponding to the terminal device, an identifier of the shared tunnel, the quality of service information corresponding to the shared tunnel, and the network area information corresponding to the shared tunnel. Specifically, after obtaining the tunnel establishment request, the core network control plane network element generates and determines the tunnel configuration policy corresponding to the tunnel establishment request.
The tunnel configuration policy is also referred to as a shared tunnel configuration policy, and includes policy information used for establishing the shared tunnel. For example, the tunnel configuration policy includes at least one of the following: the second group information of the terminal device, the identifier of the shared tunnel, the QoS information, and the network area information. The second group information corresponding to the terminal device includes at least one of the following: a group identifier, a data network name, and a network slice identifier that correspond to the terminal device. The user group information may be a user group identifier, or the core network control plane network element maps the user group identifier to a corresponding unique identifier based on a local configuration. For example, the unique identifier obtained through mapping by using the user group identifier includes a data network name (data network name, DNN), a network slice identifier (single network slice selection assistance information, S-NSSAI), or the like. In at least one embodiment, that the core network control plane network element determines a tunnel configuration policy corresponding to the tunnel establishment request includes: The core network control plane network element obtains a group configuration policy corresponding to the terminal device; and the core network control plane network element generates the tunnel configuration policy based on the tunnel establishment request and the group configuration policy. The group configuration policy corresponding to the terminal device is a group (group) policy of an operator. The core network control plane network element obtains, from the tunnel establishment request, a configuration (for example, QoS information and network area information) that is of the terminal device and that is on the shared tunnel.
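A toy version of this merge step (field names, the tunnel-identifier scheme, and the group-to-DNN/S-NSSAI mapping table are illustrative assumptions, not the claimed policy format): the AF's request is combined with the operator's group policy and the local mapping into a tunnel configuration policy.

```python
def make_tunnel_config_policy(request, group_policy, local_mapping):
    """Toy control-plane step: merge the AF's tunnel establishment request
    with the operator's group policy (field names are illustrative)."""
    group_id = request["group_id"]
    return {
        # Map the user group identifier to unique identifiers (DNN, S-NSSAI)
        # per the local configuration.
        "dnn": local_mapping[group_id]["dnn"],
        "s_nssai": local_mapping[group_id]["s_nssai"],
        "tunnel_id": f"st-{group_id}",
        # Request-supplied QoS wins; otherwise fall back to the group policy.
        "qos": request.get("qos", group_policy.get("default_qos")),
        "network_area": request["network_area"],
    }

mapping = {"g1": {"dnn": "iot.example", "s_nssai": "1-000001"}}
policy = make_tunnel_config_policy(
    {"group_id": "g1", "network_area": "area-1"},
    {"default_qos": {"bandwidth_kbps": 64}},
    mapping)
assert policy["dnn"] == "iot.example"
assert policy["qos"] == {"bandwidth_kbps": 64}
```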
The core network control plane network element determines, based on shared tunnel configuration information provided by the application function entity and the group policy of the operator, the tunnel configuration policy corresponding to the tunnel establishment request. For example, the tunnel configuration policy includes user group information, an identifier of the shared tunnel, the QoS information, and the network area information. 603: The core network control plane network element selects a session management network element based on the tunnel establishment request. The communication system includes a plurality of session management network elements. The core network control plane network element selects one session management network element from the plurality of session management network elements based on the tunnel establishment request from the application function entity. For example, the tunnel establishment request includes the network area information, and the core network control plane network element selects, from the plurality of session management network elements based on the network area information, a session management network element that matches the network area information. There is no fixed order between step 603 and step 604. Step 603 may be performed before step 604, or step 604 may be performed before step 603, or the two steps may be simultaneously performed. This is not limited herein. 604: The core network control plane network element sends the tunnel configuration policy to the session management network element, where the tunnel configuration policy is used to indicate the session management network element to trigger establishment of the shared tunnel between an access network node and a user plane function network element.
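The area-based selection in step 603 can be sketched as follows (a simplified model whose field names are invented; real selection may weigh additional criteria):

```python
def select_smf(smfs, network_area):
    """Toy step 603: pick the first SMF whose served areas match the
    request's network area information."""
    for smf in smfs:
        if network_area in smf["served_areas"]:
            return smf["id"]
    return None  # no SMF matches the requested area

smfs = [{"id": "smf-1", "served_areas": ["area-2"]},
        {"id": "smf-2", "served_areas": ["area-1", "area-3"]}]
assert select_smf(smfs, "area-1") == "smf-2"
```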
In at least one embodiment, after the core network control plane network element selects the session management network element based on the tunnel establishment request, the core network control plane network element sends the tunnel configuration policy to the selected session management network element, so that the session management network element obtains the tunnel configuration policy. The tunnel configuration policy is used to indicate the session management network element to trigger the establishment of the shared tunnel between the access network node and the user plane function network element. For example, the tunnel configuration policy includes at least one of the following: the user group information of the terminal device, the identifier of the shared tunnel, the QoS information, and the network area information. Therefore, the session management network element obtains, based on the user group information, the information about the group in which the terminal device is located, and obtains, based on the service information, the type of the service that is transmitted in the shared tunnel. The session management network element obtains, based on the QoS information, the QoS configuration that is of the service application and that is on the shared tunnel. The session management network element obtains, based on the network area information, the range for establishing the shared tunnel. In at least one embodiment, after the core network control plane network element sends the tunnel configuration policy to the session management network element in step 604, the information processing method provided in at least one embodiment further includes: The core network control plane network element receives a tunnel configuration result from the session management network element; and the core network control plane network element sends the tunnel configuration result to the application function entity.
After the establishment of the shared tunnel between the access network node and the user plane function network element is completed, the session management network element sends the tunnel configuration result, so that the core network control plane network element receives the tunnel configuration result from the session management network element. The core network control plane network element sends the tunnel configuration result to the application function entity, so that the application function entity obtains the tunnel configuration result. The application function entity determines, based on the tunnel configuration result, that the establishment of the shared tunnel between the access network node and the user plane function network element is completed. 605: After receiving the tunnel configuration policy, the session management network element selects the user plane function network element based on the tunnel configuration policy and user plane function network element capability information, and sends first shared tunnel establishment information to the user plane function network element. In at least one embodiment, the session management network element obtains the tunnel configuration policy from the core network control plane network element. The session management network element stores the user plane function network element capability information, and matches capability information of a plurality of user plane function network elements according to the tunnel configuration policy, to determine an available user plane function network element. For example, the user plane function network element capability information includes whether the user plane function network element supports the shared tunnel, and the session management network element selects the user plane function network element from a plurality of user plane function network elements that support the shared tunnel. 
For example, the tunnel configuration policy includes the network area information, and the session management network element alternatively selects the user plane function network element based on the network area information and the user plane function network element capability information. For example, the selected user plane function network element matches the network area information. In at least one embodiment, the session management network element sends the first shared tunnel establishment information to the user plane function network element. The first shared tunnel establishment information includes at least one of the following: the identifier of the shared tunnel and the quality of service information corresponding to the shared tunnel. The identifier of the shared tunnel and the quality of service information corresponding to the shared tunnel are used by the user plane function network element to establish the shared tunnel, so that the user plane function network element establishes the shared tunnel. 606: The session management network element determines the access network node according to the tunnel configuration policy, and sends second shared tunnel establishment information to the access network node, where the first shared tunnel establishment information and the second shared tunnel establishment information are used to establish the shared tunnel between the access network node and the user plane function network element. In at least one embodiment, the session management network element obtains the tunnel configuration policy from the core network control plane network element, and determines the access network node according to the tunnel configuration policy. For example, the tunnel configuration policy includes the network area information, and the session management network element alternatively determines the access network node based on the network area information.
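The user plane function selection in step 605, filtering first on capability and then on network area, can be sketched as follows. This is an illustrative Python sketch; the capability flag and area fields are assumed names, not defined by the description:

```python
def select_upf(upfs, network_area):
    """Keep only UPFs whose capability information indicates support for
    the shared tunnel, then match the network area information."""
    candidates = [u for u in upfs if u["supports_shared_tunnel"]]
    for upf in candidates:
        if upf["area"] == network_area:
            return upf
    return None

# Hypothetical capability information stored in the SMF.
upfs = [
    {"name": "UPF-1", "supports_shared_tunnel": False, "area": "area-A"},
    {"name": "UPF-2", "supports_shared_tunnel": True, "area": "area-A"},
]
chosen = select_upf(upfs, "area-A")

# First shared tunnel establishment information sent to the chosen UPF.
first_info = {"shared_tunnel_id": "tun-42", "qos": {"delay_ms": 20}}
```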
For example, the access network node matches the network area information. In at least one embodiment, the session management network element sends the second shared tunnel establishment information to the access network node. The first shared tunnel establishment information and the second shared tunnel establishment information are used to establish the shared tunnel between the access network node and the user plane function network element. Specifically, the first shared tunnel establishment information is used by the user plane function network element to establish the shared tunnel, and the second shared tunnel establishment information is used by the access network node to establish the shared tunnel. A shared tunnel establishment procedure is not described in detail herein. In at least one embodiment, the second shared tunnel establishment information includes at least one of the following: the identifier of the shared tunnel, the quality of service information corresponding to the shared tunnel, and an address of the user plane function network element. The access network node determines, based on the address of the user plane function network element, that the shared tunnel is established with the user plane function network element. The identifier of the shared tunnel and the quality of service information corresponding to the shared tunnel are used by the access network node to establish the shared tunnel, so that the access network node establishes the shared tunnel.
In at least one embodiment, after the session management network element sends the second shared tunnel establishment information to the access network node in step606, the information processing method provided in at least one embodiment further includes: The session management network element receives a tunnel establishment response from the access network node, where the tunnel establishment response includes an address of the access network node corresponding to the shared tunnel; and after receiving the tunnel establishment response from the access network node, the session management network element sends session configuration information to the user plane function network element, where the session configuration information includes at least one of the following: the identifier of the shared tunnel and the address of the access network node corresponding to the shared tunnel. After establishing the shared tunnel between the user plane function network element and the access network node, the access network node sends the tunnel establishment response to the session management network element. The tunnel establishment response includes the address of the access network node corresponding to the shared tunnel. The session management network element receives the tunnel establishment response from the access network node. The session management network element parses the tunnel establishment response, to obtain the address of the access network node corresponding to the shared tunnel. The session management network element then sends the session configuration information to the user plane function network element. The session configuration information includes at least one of the following: the identifier of the shared tunnel and the address of the access network node corresponding to the shared tunnel.
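The SMF-side handling of the tunnel establishment response, extracting the access network node address and building the session configuration information for the user plane function network element, can be sketched as follows. The dictionary keys and addresses are illustrative assumptions:

```python
def on_tunnel_establishment_response(response):
    """SMF-side handling: parse the tunnel establishment response from
    the access network node and build the session configuration
    information to send to the user plane function network element."""
    ran_address = response["access_network_node_address"]
    session_config = {
        "shared_tunnel_id": response["shared_tunnel_id"],
        "access_network_node_address": ran_address,
    }
    return session_config

# Hypothetical response received from the access network node.
response = {"shared_tunnel_id": "tun-42",
            "access_network_node_address": "10.0.0.7"}
cfg = on_tunnel_establishment_response(response)
```

With this configuration, the UPF learns where to send downlink traffic on the shared tunnel, which is the purpose the description attributes to the session configuration information.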
Therefore, the user plane function network element obtains, based on the received session configuration information, the address of the access network node corresponding to the shared tunnel, so that the user plane function network element communicates with the access network node. In at least one embodiment, after the session management network element sends the session configuration information to the user plane function network element, the method provided in at least one embodiment further includes: The session management network element sends the tunnel configuration result to the core network control plane network element. After the establishment of the shared tunnel between the access network node and the user plane function network element is completed, the session management network element sends the tunnel configuration result, so that the core network control plane network element receives the tunnel configuration result from the session management network element. The core network control plane network element sends the tunnel configuration result to the application function entity, so that the application function entity obtains the tunnel configuration result. The application function entity determines, based on the tunnel configuration result, that the establishment of the shared tunnel between the access network node and the user plane function network element is completed. In the example descriptions of the foregoing embodiment, the application function entity determines the terminal device that transmits data by using the shared tunnel, and the application function entity sends the tunnel establishment request to the core network control plane network element. The core network control plane network element sends the tunnel configuration policy to the session management network element, so that the session management network element determines the user plane function network element and the access network node.
The shared tunnel between the user plane function network element and the access network node is used by the terminal device to transmit data. In at least one embodiment, the shared tunnel requested by the application function entity to be established is used for transmitting data by the terminal device, so that the shared tunnel is dynamically established between the access network node and the user plane function network element based on a service configuration of the terminal device on the data transmission, and the dynamically established shared tunnel implements the data transmission of the terminal device. FIG.6describes at least one embodiment having the shared tunnel between the access network node and the user plane function network element established based on the tunnel establishment request. After the shared tunnel is established, the following describes how the terminal device sends and receives data based on the shared tunnel.FIG.7AandFIG.7Bshow an information processing method according to at least one embodiment. After the session management network element sends the second shared tunnel establishment information to the access network node in step606, the information processing method provided in at least one embodiment further includes the following steps. 701: An access and mobility management network element receives a session establishment request from the terminal device. After the shared tunnel is established between the access network node and the user plane function network element, to receive and send data by using the shared tunnel, the terminal device first triggers session establishment, and sends the session establishment request to the access and mobility management network element. In at least one embodiment, the session establishment request includes the user group information of the terminal device and a slice identifier.
The user group information of the terminal device is a data network name DNN of the terminal device, and the slice identifier is an identifier of a network slice used for data transmission. 702: The access and mobility management network element determines, based on the session establishment request, the session management network element corresponding to the terminal device. The communication system includes a plurality of session management network elements, and the access and mobility management network element selects one session management network element from the plurality of session management network elements based on the session establishment request. For example, the session establishment request includes the DNN, and the access and mobility management network element determines, based on the DNN, a session management network element that matches the DNN. In at least one embodiment, that the access and mobility management network element determines, based on the session establishment request, the session management network element corresponding to the terminal device in step702includes: The access and mobility management network element determines group information of the terminal device based on the session establishment request; the access and mobility management network element sends the group information of the terminal device to a session selection network element; and the access and mobility management network element receives session management network element information determined by the session selection network element. After receiving the session establishment request, the access and mobility management network element obtains the group information of the terminal device based on the session establishment request. 
For example, the access and mobility management network element obtains the user group information (which is referred to as group information for short) of the terminal device from a user subscription data management entity (unified data management, UDM). For example, the group information includes information such as a group identifier, the slice identifier, and the DNN. The session selection network element is an entity having a network element selection function. The session management network element corresponding to the user group information of the terminal device is preconfigured for the session selection network element, and the session selection network element sends the corresponding session management network element information to the access and mobility management network element based on the group information of the terminal device. The session selection network element is a network slice selection function entity (network slice selection function, NSSF) and/or a network repository function entity (network repository function, NRF). 703: The access and mobility management network element sends a session management context establishment request to the session management network element. In at least one embodiment, after the access and mobility management network element determines the session management network element, the access and mobility management network element sends the session management context establishment request to the session management network element. The session management context establishment request is used to trigger establishment of a session management (session management, SM) context. The session management context establishment request carries the user group information of the terminal device.
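The preconfigured mapping held by the session selection network element (NSSF and/or NRF) from group information to a session management network element can be sketched as follows. The mapping keys (group identifier, slice identifier, DNN) follow the description; the concrete values are hypothetical:

```python
# Preconfigured association between group information and an SMF, as
# held by the session selection network element in the description.
SMF_BY_GROUP = {
    ("group-1", "slice-1", "dnn.example"): "SMF-1",
    ("group-2", "slice-1", "dnn.example"): "SMF-2",
}

def resolve_smf(group_id, slice_id, dnn):
    """Return the session management network element information for the
    terminal device's group, or None if none is preconfigured."""
    return SMF_BY_GROUP.get((group_id, slice_id, dnn))

smf = resolve_smf("group-1", "slice-1", "dnn.example")
```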
In at least one embodiment, after the access and mobility management network element sends the session management context establishment request to the session management network element in step703, the information processing method provided in at least one embodiment further includes the following steps: The access and mobility management network element receives shared tunnel establishment information from the session management network element, where the shared tunnel establishment information includes the identifier of the shared tunnel and the address of the user plane function network element; and the access and mobility management network element sends the shared tunnel establishment information to the access network node. In response to the shared tunnel being established for the terminal device, the session management network element further obtains the identifier of the shared tunnel and the address of the user plane function network element, and then sends the shared tunnel establishment information to the access and mobility management network element, so that the access and mobility management network element sends the shared tunnel establishment information to the access network node. The access network node stores the shared tunnel establishment information for subsequent data transmission of the terminal device. 704: After receiving the session management context establishment request, the session management network element determines, based on the session management context establishment request, the shared tunnel established for the terminal device. In at least one embodiment, after receiving the session management context establishment request, the session management network element determines, based on the session management context establishment request, whether the shared tunnel has been established for the terminal device in advance.
In response to determining that the shared tunnel is established for the terminal device, subsequent step705is performed. In response to the shared tunnel not being established for the terminal device, the shared tunnel establishment procedure shown inFIG.6is performed again. In at least one embodiment, the session management context establishment request includes the group information of the terminal device. For example, the session management context establishment request carries the user group information of the terminal device. The session management network element queries, based on the user group information, the tunnel configuration policy stored in the session management network element, to determine whether the shared tunnel has been established for the terminal device. 705: The session management network element sends session establishment information to the user plane function network element, where the session establishment information includes an identifier of the terminal device and the identifier of the shared tunnel. In at least one embodiment, in response to the shared tunnel being established for the terminal device, the session management network element obtains the identifier of the terminal device and the identifier of the shared tunnel, and then sends the session establishment information to the user plane function network element, so that the user plane function network element obtains the identifier of the terminal device and the identifier of the shared tunnel. Therefore, the user plane function network element receives and sends data by using the shared tunnel. 706: The session management network element sends shared tunnel configuration information to the access and mobility management network element, where the shared tunnel configuration information includes the identifier of the shared tunnel and the address of the user plane function network element. 
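The query in step 704, where the session management network element checks its stored tunnel configuration policies for a shared tunnel matching the terminal device's user group, can be sketched as follows. The policy record layout is the same illustrative assumption used above:

```python
def find_shared_tunnel(stored_policies, user_group):
    """Query the tunnel configuration policies stored in the session
    management network element for a shared tunnel already established
    for this user group; None means the establishment procedure of
    FIG. 6 must be performed."""
    for policy in stored_policies:
        if policy["user_group"] == user_group:
            return policy["shared_tunnel_id"]
    return None

# Hypothetical stored policies after the FIG. 6 procedure completed.
stored = [{"user_group": "group-1", "shared_tunnel_id": "tun-42"}]
tunnel = find_shared_tunnel(stored, "group-1")
```

A hit leads to step 705 (session establishment information toward the UPF); a miss re-runs the establishment procedure.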
In at least one embodiment, in response to the shared tunnel being established for the terminal device, the session management network element further obtains the identifier of the shared tunnel and the address of the user plane function network element, and then sends the shared tunnel establishment information to the access and mobility management network element, so that the access and mobility management network element sends the shared tunnel establishment information to the access network node. The access network node stores the shared tunnel establishment information for subsequent data transmission of the terminal device. The example descriptions describe that, after the shared tunnel is established between the access network node and the user plane function network element, to receive and send data by using the shared tunnel, the terminal device first triggers session establishment, and sends the session establishment request to the access and mobility management network element. The access and mobility management network element interacts with the session management network element, to successfully establish a session context, so that the terminal device transmits data by using the shared tunnel. To better understand and implement at least one embodiment, the following uses corresponding scenarios as examples. As shown inFIG.8, at least one embodiment describes a schematic diagram of an interaction procedure between an AF, an NEF, a PCF, an SMF, a UPF, and a RAN. The PCF establishes a shared tunnel based on a tunnel establishment request from the AF. The following procedure is mainly included. S01: The AF sends the tunnel establishment request to the NEF. S02: The NEF sends the tunnel establishment request to the PCF.
Specifically, the AF provides the tunnel establishment request to the NEF, where the tunnel establishment request includes user group information, service information, QoS information (for example, a bandwidth and a delay), network area information, and the like. The user group information is used to identify a user group corresponding to the shared tunnel, and is specifically a user group identifier (group ID). The service information is used to describe a service corresponding to the shared tunnel, and is information such as an IP address and a port number of the service. The network area information is used to determine a range for establishing the shared tunnel, and is specific geographical area information. The QoS information is used to describe a QoS configuration that is of a service application and that is on the shared tunnel, and is, for example, a data transmission rate and an end-to-end delay that are provided by the shared tunnel. The NEF selects the PCF based on the network area information or local configuration information, and forwards the tunnel establishment request obtained from the AF. In actual network deployment, a plurality of PCFs exist. Different PCFs serve different network areas, that is, the NEF selects the PCF based on the network area information. Alternatively, an operator deploys some specific PCFs in advance, to specifically process establishment of the shared tunnel. In this case, the NEF preconfigures and determines address information of the specific PCFs. S03: The PCF generates a tunnel configuration policy. Specifically, the PCF determines, based on the tunnel establishment request provided by the AF and a group policy of the operator, the shared tunnel configuration policy corresponding to the tunnel establishment request of the AF. The shared tunnel configuration policy includes: the user group information, an identifier of the shared tunnel, the QoS information, and the network area information.
The user group information is a user group identifier, or the PCF maps the user group identifier to a corresponding data network name or network slice identifier based on a local configuration of the PCF. S04: The PCF sends the tunnel configuration policy to the SMF. The PCF selects the corresponding SMF based on the network area information, and sends the shared tunnel configuration policy to one or more corresponding SMFs, to trigger the SMF to establish the shared tunnel. S05a: The SMF sends first shared tunnel establishment information to the UPF. S05b: The UPF sends a first response message to the SMF. Specifically, the SMF stores the shared tunnel configuration policy, selects the corresponding UPF based on the network area information and UPF capability information (for example, whether the shared tunnel is supported), and sends the first shared tunnel establishment information. The first shared tunnel establishment information includes the identifier of the shared tunnel and the QoS information. The UPF returns acknowledgment (ACK) information to the SMF and stores the shared tunnel establishment information for subsequent data transmission. S06a: The SMF sends second shared tunnel establishment information to the RAN. S06b: The RAN sends a second response message to the SMF. The SMF determines the related RAN node based on the network area information, and sends the second shared tunnel establishment information to the RAN node. The second shared tunnel establishment information includes the identifier of the shared tunnel, the QoS information, and an address of the UPF. For example, the SMF first sends the second shared tunnel establishment information to the AMF, and the AMF forwards the second shared tunnel establishment information to the RAN. The RAN stores the second shared tunnel establishment information, and returns, to the SMF, an address of the RAN corresponding to the shared tunnel. S07: The SMF sends a tunnel establishment response to the UPF.
The SMF sends, to the UPF, the address of the RAN corresponding to the shared tunnel. S08: The SMF sends a tunnel configuration result to the PCF. S09: The PCF sends the tunnel configuration result to the NEF. S10: The NEF sends the tunnel configuration result to the AF. After the shared tunnel is established, the SMF sends the tunnel configuration result to the PCF, and the PCF returns the tunnel configuration result to the AF by using the NEF, so that the AF obtains the tunnel configuration result. In at least one embodiment, the AF provides the shared tunnel configuration information to a network, so that the shared tunnel is established. To be specific, during actual application, the AF updates configuration information of the shared tunnel to the network based on a service configuration. For example, the SMF determines whether to re-establish a new shared tunnel or modify a previously established shared tunnel. The PCF obtains the user group information, the service information, the QoS information, and the network area information of the terminal device, so that the PCF correspondingly updates the shared tunnel configuration policy and updates the establishment of the shared tunnel. In at least one embodiment, the PCF dynamically creates and updates the shared tunnel based on the service configuration. For example, the service configuration includes the user group information, the service information, the QoS information, the network area information, and the like. Therefore, the shared tunnel is applicable to terminal devices of various service types. As shown inFIG.9, after the shared tunnel establishment procedure shown inFIG.8is completed, the UE sends and receives data based on the created shared tunnel. The following process is mainly included. S11: The UE sends a session establishment request to an AMF. The UE initiates a PDU session establishment request message to the AMF, where the request message carries requested DNN and slice identifier information.
S12: The AMF selects an SMF based on user group information of the UE. The AMF obtains the user group information of the UE from a user subscription data management entity, for example, information such as a user group identifier corresponding to the UE and an identifier of a slice to which the UE subscribes. That the AMF selects a corresponding SMF based on user group information of the UE specifically includes: The AMF sends the user group information of the UE to a session selection network element, where the user group information includes information such as a group identifier, a slice identifier, and a DNN; and the SMF corresponding to the user group information of the UE is preconfigured for the session selection network element, and the session selection network element returns corresponding SMF information to the AMF based on the user group information that is provided by the AMF and that is of the UE. The session selection network element is an NSSF and/or an NRF. S13: The AMF sends a session management context establishment request to the SMF. Specifically, the AMF sends a PDU session SM context establishment request message to the selected SMF, where the PDU session SM context establishment request message carries the user group information of the UE. S14: The SMF determines, based on the user group information of the UE, that the shared tunnel has been established for the UE. Based on the user group information of the UE, the SMF queries the shared tunnel configuration policy stored in the SMF, for whether the shared tunnel has been established for the UE. S15: The SMF sends session establishment information to a UPF. The SMF sends the corresponding session establishment message to the corresponding UPF, where the session establishment message includes an identifier of the UE and an identifier of the corresponding shared tunnel. 
The UPF returns ACK information to the SMF, and the UPF and the SMF store shared tunnel establishment information for subsequent data transmission of the UE. S16: The SMF sends tunnel establishment information to the RAN. The SMF sends the tunnel establishment information to the RAN, where the tunnel establishment information includes the identifier of the shared tunnel and an address of the UPF. S17: The RAN sends the shared tunnel establishment information to the UE. The RAN sends the shared tunnel establishment information to the UE for subsequent data transmission of the UE. S18: The UE sends uplink data to the RAN. The UE sends the uplink data to the RAN, where the uplink data carries the identifier of the corresponding shared tunnel. S19: The RAN determines the address of the UPF based on the identifier of the shared tunnel, and forwards the uplink data. The RAN determines the address of the corresponding UPF based on the identifier of the shared tunnel, and correspondingly forwards the uplink data. S20: The RAN sends the uplink data to the UPF. The RAN forwards the uplink data to the corresponding UPF, where the uplink data carries the identifier of the shared tunnel. A downlink data transmission process is similar to the uplink data transmission process, and details are not described herein again. In at least one embodiment, the AMF selects the SMF based on the user group information of the UE, and the SMF determines, based on the user group information of the UE, whether the shared tunnel that is used for the UE has been pre-established, so that the UE is allocated to the corresponding shared tunnel for subsequent data transmission. In at least one embodiment, the SMF is selected based on the user group information of the UE, and the SMF allocates the shared tunnel based on the group information of the UE.
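The uplink forwarding of steps S18 to S20, where the RAN resolves the UPF address from the shared tunnel identifier carried in the uplink data, can be sketched as follows. The table layout, addresses, and packet fields are illustrative assumptions:

```python
def forward_uplink(ran_tunnel_table, packet):
    """RAN-side forwarding as in S19: look up the UPF address by the
    shared tunnel identifier carried in the uplink data, then forward
    the data toward that UPF with the identifier attached (S20)."""
    upf_address = ran_tunnel_table[packet["shared_tunnel_id"]]
    return {"to": upf_address,
            "shared_tunnel_id": packet["shared_tunnel_id"],
            "payload": packet["payload"]}

# Hypothetical table built from the stored shared tunnel establishment
# information (identifier of the shared tunnel -> address of the UPF).
ran_tunnel_table = {"tun-42": "192.0.2.10"}
out = forward_uplink(ran_tunnel_table, {"shared_tunnel_id": "tun-42",
                                        "payload": b"uplink"})
```

The downlink direction would mirror this with a table mapping the shared tunnel identifier to the RAN address held by the UPF.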
During actual application, the shared tunnel is flexibly configured, and an operator flexibly configures UE information based on which the shared tunnel is configured and allocated. The foregoing method embodiments are represented as a combination of a series of actions. However, a person skilled in the art appreciates that at least one embodiment is not limited to the described order of the actions, because some steps are performed in another order or simultaneously. A person skilled in the art further knows that the embodiments described herein are examples. To better implement at least one embodiment, a related apparatus is further provided below. Refer toFIG.10. A core network control plane network element1000provided in at least one embodiment includes a sending module1001, a receiving module1002, and a processing module1003. The receiving module is configured to receive a tunnel establishment request from an application function entity, where the tunnel establishment request is used to request to establish a shared tunnel for transmitting data by a terminal device; the processing module is configured to determine a tunnel configuration policy corresponding to the tunnel establishment request; the processing module is configured to select a session management network element based on the tunnel establishment request; and the sending module is configured to send the tunnel configuration policy to the session management network element, where the tunnel configuration policy is used to indicate the session management network element to trigger establishment of the shared tunnel between an access network node and a user plane function network element. In at least one embodiment, the receiving module is configured to receive the tunnel establishment request from a network exposure network element, where the network exposure network element receives the tunnel establishment request from the application function entity.
In at least one embodiment, the processing module is configured to obtain a group configuration policy corresponding to the terminal device; and generate the tunnel configuration policy based on the tunnel establishment request and the group configuration policy. In at least one embodiment, the tunnel establishment request includes at least one of the following: first group information corresponding to the terminal device, service information corresponding to the shared tunnel, quality of service information corresponding to the shared tunnel, and network area information corresponding to the shared tunnel. In at least one embodiment, the tunnel configuration policy includes at least one of the following: second group information corresponding to the terminal device, an identifier of the shared tunnel, the quality of service information corresponding to the shared tunnel, and the network area information corresponding to the shared tunnel. Refer toFIG.11. A session management network element1100provided in in at least one embodiment includes a sending module1101, a receiving module1102, and a processing module1103. 
The receiving module is configured to receive a tunnel configuration policy from a core network control plane network element; the processing module is configured to select a user plane function network element based on the tunnel configuration policy and user plane function network element capability information; the sending module is configured to send first shared tunnel establishment information to the user plane function network element; the processing module is configured to determine an access network node according to the tunnel configuration policy; and the sending module is configured to send second shared tunnel establishment information to the access network node, where the first shared tunnel establishment information and the second shared tunnel establishment information are used to establish a shared tunnel between the access network node and the user plane function network element. In at least one embodiment, the first shared tunnel establishment information includes at least one of the following: an identifier of the shared tunnel and quality of service information corresponding to the shared tunnel. In at least one embodiment, the second shared tunnel establishment information includes at least one of the following: the identifier of the shared tunnel, the quality of service information corresponding to the shared tunnel, and an address of the user plane function network element. In at least one embodiment, the receiving module is configured to: after the sending module sends the second shared tunnel establishment information to the access network node, receive a tunnel establishment response from the access network node, where the tunnel establishment response includes an address of the access network node corresponding to the shared tunnel. 
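The session management element's flow above (select a user plane function by capability, then send the two pieces of establishment information) can be sketched as follows. This is a hedged illustration; the field names and the capability check are assumptions, not the embodiments' actual message formats.

```python
# Sketch: select a user plane function network element whose capabilities
# cover the policy's QoS, then build the first establishment info (for the
# UPF) and the second establishment info (for the access network node).
# All dictionary keys are hypothetical.

def establish_shared_tunnel(policy, upf_candidates):
    # Selection based on user plane function capability information.
    upf = next(u for u in upf_candidates
               if policy["qos"] in u["supported_qos"])

    # First shared tunnel establishment information -> UPF
    first_info = {"tunnel_id": policy["tunnel_id"], "qos": policy["qos"]}

    # Second shared tunnel establishment information -> access network node,
    # which additionally carries the UPF's address.
    second_info = {"tunnel_id": policy["tunnel_id"],
                   "qos": policy["qos"],
                   "upf_address": upf["address"]}
    return first_info, second_info
```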
In at least one embodiment, the sending module is configured to: after the receiving module receives the tunnel establishment response from the access network node, send session configuration information to the user plane function network element, where the session configuration information includes at least one of the following: the identifier of the shared tunnel and the address of the access network node corresponding to the shared tunnel. In at least one embodiment, the receiving module is configured to: after the sending module sends the second shared tunnel establishment information to the access network node, receive a session management context establishment request from an access and mobility management network element; the processing module is configured to determine, based on the session management context establishment request, the shared tunnel established for a terminal device; the sending module is configured to send session establishment information to the user plane function network element, where the session establishment information includes an identifier of the terminal device and the identifier of the shared tunnel; and the sending module is configured to send shared tunnel configuration information to the access and mobility management network element, where the shared tunnel configuration information includes the identifier of the shared tunnel and the address of the user plane function network element. Refer to FIG. 12. An access and mobility management network element 1200 provided in at least one embodiment includes a sending module 1201, a receiving module 1202, and a processing module 1203.
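The post-establishment session flow (resolve the shared tunnel for a terminal device, then emit session establishment information and shared tunnel configuration information) can be sketched like this. The lookup by group information and all message keys are illustrative assumptions.

```python
# Sketch: given a session management context establishment request, find
# the shared tunnel already established for the terminal device, then build
# the two messages described above. Keys/names are hypothetical.

def handle_context_request(request, tunnels_by_group):
    terminal_id = request["terminal_id"]
    tunnel = tunnels_by_group[request["group_info"]]

    session_establishment_info = {        # -> user plane function element
        "terminal_id": terminal_id,
        "tunnel_id": tunnel["tunnel_id"],
    }
    shared_tunnel_config = {              # -> access & mobility mgmt element
        "tunnel_id": tunnel["tunnel_id"],
        "upf_address": tunnel["upf_address"],
    }
    return session_establishment_info, shared_tunnel_config
```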
The receiving module is configured to receive a session establishment request from a terminal device; the processing module is configured to determine, based on the session establishment request, a session management network element corresponding to the terminal device; and the sending module is configured to send a session management context establishment request to the session management network element. In at least one embodiment, the processing module is configured to determine group information of the terminal device based on the session establishment request; the sending module is configured to send the group information of the terminal device to a session selection network element; and the receiving module is configured to receive session management network element information determined by the session selection network element. In at least one embodiment, the receiving module is configured to: after the sending module sends the session management context establishment request to the session management network element, receive shared tunnel establishment information from the session management network element, where the shared tunnel establishment information includes an identifier of a shared tunnel and an address of a user plane function network element; and the sending module is configured to send the shared tunnel establishment information to an access network node. It is learned from the example descriptions of the foregoing embodiment that the application function entity determines the terminal device that transmits data by using the shared tunnel, and the application function entity sends the tunnel establishment request to the core network control plane network element. The core network control plane network element sends the tunnel configuration policy to the session management network element, so that the session management network element determines the user plane function network element and the access network node.
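The access and mobility management element's delegation to a session selection network element can be sketched as below. The mapping used by the toy selector is invented purely for illustration.

```python
# Sketch: derive group information from a session establishment request and
# ask a session selection function for the matching session management
# element. All names and the group->SMF mapping are hypothetical.

def select_session_management(session_request, session_selector):
    group = session_request["group_info"]   # from the session request
    return session_selector(group)          # session selection network element

# A toy session selection element mapping group information to SMF ids:
def toy_selector(group):
    return {"iot-group-1": "smf-a", "iot-group-2": "smf-b"}.get(group)
```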
The shared tunnel between the user plane function network element and the access network node is used by the terminal device to transmit data. In at least one embodiment, the shared tunnel requested by the application function entity to be established is used for transmitting data by the terminal device, so that the shared tunnel is dynamically established between the access network node and the user plane function network element based on a service configuration of the terminal device on the data transmission, and the shared tunnel that is dynamically established implements the data transmission of the terminal device. Content such as information exchange between the modules/units of the apparatus and the execution processes thereof is based on a same concept as the method embodiments, and achieves the same technical effects as the method embodiments. For specific content, refer to the foregoing descriptions in the method embodiments of this application. Details are not described herein again. At least one embodiment provides a computer storage medium. The computer storage medium stores a program. The program is executed to perform the steps recorded in the method embodiments. The following describes another core network control plane network element provided in at least one embodiment. Refer to FIG. 13. A core network control plane network element 1300 includes: a receiver 1301, a transmitter 1302, a processor 1303, and a memory 1304 (there are one or more processors 1303 in the core network control plane network element 1300, and one processor is used as an example in FIG. 13). In at least one embodiment, the receiver 1301, the transmitter 1302, the processor 1303, and the memory 1304 are connected through a bus or in another manner. In FIG. 13, connection through a bus is used as an example. The memory 1304 includes a read-only memory and a random access memory, and provides instructions and data for the processor 1303.
A part of the memory 1304 further includes a non-volatile random access memory (non-volatile random access memory, NVRAM). The memory 1304 stores an operating system and operation instructions, an executable module or a data structure, or a subset thereof, or an extended set thereof. The operation instructions include various operation instructions used to implement various operations. The operating system includes various system programs, to implement various basic services and process a hardware-based task. The processor 1303 controls an operation of the core network control plane network element, and the processor 1303 is also referred to as a central processing unit (central processing unit, CPU). During specific application, components of the core network control plane network element are coupled together through a bus system. In addition to a data bus, the bus system further includes a power bus, a control bus, a status signal bus, or the like. However, for clear description, various types of buses in the figure are referred to as the bus system. The method disclosed in the foregoing embodiments is applied to the processor 1303, or is implemented by the processor 1303. The processor 1303 is an integrated circuit chip, and has a signal processing capability. In an implementation process, steps in the foregoing methods are implemented by using a hardware integrated logical circuit in the processor 1303, or by using instructions in a form of software. The processor 1303 is a general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logic component, a discrete gate or transistor logic device, or a discrete hardware component.
The processor implements or performs the methods, the steps, and the logical block diagrams that are disclosed in at least one embodiment. The general-purpose processor is a microprocessor, or the processor is a central processing unit, a micro-controller, a digital signal processor, or the like. Steps of the methods disclosed with reference to at least one embodiment are directly executed and accomplished by using a hardware decoding processor, or are executed and accomplished by using a combination of hardware and software modules in the decoding processor. The software module is located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1304, and the processor 1303 reads information in the memory 1304 and completes the steps in the foregoing methods in combination with hardware in the processor 1303. The receiver 1301 is configured to receive input digit or character information, and generate signal input related to a related setting and function control of the core network control plane network element. The transmitter 1302 includes a display device such as a display screen, and the transmitter 1302 is configured to output digit or character information through an external interface. In at least one embodiment, the processor 1303 is configured to perform the information processing method performed by the core network control plane network element. The following describes another session management network element provided in at least one embodiment. Refer to FIG. 14. A session management network element 1400 includes: a receiver 1401, a transmitter 1402, a processor 1403, and a memory 1404 (there are one or more processors 1403 in the session management network element 1400, and one processor is used as an example in FIG. 14).
In at least one embodiment, the receiver 1401, the transmitter 1402, the processor 1403, and the memory 1404 are connected through a bus or in another manner. In FIG. 14, connection through a bus is used as an example. The memory 1404 includes a read-only memory and a random access memory, and provides instructions and data for the processor 1403. A part of the memory 1404 further includes an NVRAM. The memory 1404 stores an operating system and operation instructions, an executable module or a data structure, or a subset thereof, or an extended set thereof. The operation instructions include various operation instructions used to implement various operations. The operating system includes various system programs, to implement various basic services and process a hardware-based task. The processor 1403 controls an operation of the session management network element, and the processor 1403 is also referred to as a CPU. Components of the session management network element are coupled together through a bus system. In addition to a data bus, the bus system further includes a power bus, a control bus, a status signal bus, or the like. However, for clear description, various types of buses in the figure are referred to as the bus system. The method disclosed in the foregoing embodiments is applied to the processor 1403, or is implemented by the processor 1403. The processor 1403 is an integrated circuit chip, and has a signal processing capability. In at least one embodiment, steps in the foregoing methods are implemented by using a hardware integrated logical circuit in the processor 1403, or by using instructions in a form of software. The foregoing processor 1403 is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic component, a discrete gate or transistor logic device, or a discrete hardware component. The processor implements or performs the methods, the steps, and the logical block diagrams that are disclosed in at least one embodiment.
The general-purpose processor is a microprocessor, or the processor is a central processing unit, a micro-controller, a digital signal processor, or the like. Steps of the methods disclosed with reference to at least one embodiment are directly executed and accomplished by using a hardware decoding processor, or are executed and accomplished by using a combination of hardware and software modules in the decoding processor. The software module is located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1404, and the processor 1403 reads information in the memory 1404 and completes the steps in the foregoing methods in combination with hardware in the processor 1403. In at least one embodiment, the processor 1403 is configured to perform the information processing method performed by the session management network element. The following describes another access and mobility management network element provided in at least one embodiment. Refer to FIG. 15. An access and mobility management network element 1500 includes: a receiver 1501, a transmitter 1502, a processor 1503, and a memory 1504 (there are one or more processors 1503 in the access and mobility management network element 1500, and one processor is used as an example in FIG. 15). In at least one embodiment, the receiver 1501, the transmitter 1502, the processor 1503, and the memory 1504 are connected through a bus or in another manner. In FIG. 15, connection through a bus is used as an example. The memory 1504 includes a read-only memory and a random access memory, and provides instructions and data for the processor 1503. A part of the memory 1504 further includes an NVRAM. The memory 1504 stores an operating system and operation instructions, an executable module or a data structure, or a subset thereof, or an extended set thereof.
The operation instructions include various operation instructions used to implement various operations. The operating system includes various system programs, to implement various basic services and process a hardware-based task. The processor 1503 controls an operation of the access and mobility management network element, and the processor 1503 is also referred to as a CPU. During specific application, components of the access and mobility management network element are coupled together through a bus system. In addition to a data bus, the bus system further includes a power bus, a control bus, a status signal bus, or the like. However, for clear description, various types of buses in the figure are referred to as the bus system. The method disclosed in the foregoing embodiments is applied to the processor 1503, or is implemented by the processor 1503. The processor 1503 is an integrated circuit chip, and has a signal processing capability. In an implementation process, steps in the foregoing methods are implemented by using a hardware integrated logical circuit in the processor 1503, or by using instructions in a form of software. The foregoing processor 1503 is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic component, a discrete gate or transistor logic device, or a discrete hardware component. The processor implements or performs the methods, the steps, and the logical block diagrams that are disclosed in at least one embodiment. The general-purpose processor is a microprocessor, or the processor is a central processing unit, a micro-controller, a digital signal processor, or the like. Steps of the methods disclosed with reference to at least one embodiment are directly executed and accomplished by using a hardware decoding processor, or are executed and accomplished by using a combination of hardware and software modules in the decoding processor.
The software module is located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1504, and the processor 1503 reads information in the memory 1504 and completes the steps in the foregoing methods in combination with hardware in the processor 1503. In at least one embodiment, the processor 1503 is configured to perform the request processing method performed by the access and mobility management network element. In at least one embodiment in which the core network control plane network element, the session management network element, or the access and mobility management network element is a chip, the chip includes a processing unit and a communication unit. The processing unit is, for example, a processor, and the communication unit is, for example, an input/output interface, a pin, or a circuit. The processing unit executes computer-executable instructions stored in a storage unit, to enable the chip in the terminal device to perform the information processing method according to at least one embodiment. In at least one embodiment, the storage unit is a storage unit in the chip, for example, a register or a cache. Alternatively, the storage unit is a storage unit that is in the terminal and that is located outside the chip, for example, a read-only memory (read-only memory, ROM), another type of static storage device that stores static information and instructions, or a random access memory (random access memory, RAM). The processor mentioned anywhere above is a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits configured to control program execution of the method. In addition, embodiments described herein are merely examples.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, and may be located in one position or distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of embodiments. In addition, in the accompanying drawings of the apparatus provided in at least one embodiment, connection relationships between modules indicate that the modules have communication connections, which may be implemented as one or more communications buses or signal cables. Based on the description herein, a person skilled in the art clearly understands that at least one embodiment uses software in addition to hardware, or dedicated hardware, including a dedicated integrated circuit, a dedicated CPU, a dedicated memory, a dedicated component, and the like. Generally, any function performed by a computer program is easily implemented using corresponding hardware. Moreover, a hardware structure used to achieve a same function may be of various forms, for example, in a form of an analog circuit, a digital circuit, or a dedicated circuit. However, in at least one embodiment, a software program is used. Based on such an understanding, the technical solutions of at least one embodiment are implemented as a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc of a computer, and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in at least one embodiment. At least one embodiment is implemented using software, hardware, firmware, or any combination thereof. In response to software being used, at least one embodiment is in a form of a computer program product.
The computer program product includes one or more computer instructions. In response to the computer program instructions being loaded and executed on the computer, the procedure or functions according to at least one embodiment are generated. The computer is a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses. The computer instructions are stored in a computer-readable storage medium or are transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions are transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium is any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium is a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (Solid-State Drive, SSD)), or the like.
11943305 | DETAILED DESCRIPTION FIG. 1 is a high-level block diagram of an example digital twin software architecture 100. The architecture may be divided into client-side software 110 that executes on one or more computing devices local to an end-user (collectively "client devices") and cloud-based software 112 that is executed on one or more computing devices remote from the end-user (collectively "cloud computing devices") accessible via a network (e.g., the Internet). The client-side software 110 may include web applications 120 that operate within a virtual environment (e.g., a browser sandbox) provided by a web browser 121 (e.g., a Chrome® web browser), desktop applications 122 that operate under a desktop operating system (e.g., a Windows® operating system) and include an embedded web browser (e.g., a Chromium® browser) 123, and mobile applications 124 that operate under a mobile operating system (e.g., an iOS® or Android® operating system) that include a script engine (e.g., a JavaScript engine) 125. The applications 120, 122, 124 may be functionally divided into frontend modules 130 and backend modules 132, the functions of which are discussed in more detail below. For each type of application 120, 122, 124, the frontend module 130 is part of client-side software 110. For desktop applications 122 and mobile applications 124, the backend module 132 is also part of client-side software 110, resident on a client device and accessible to the frontend module 130 via inter-process communication (IPC) or function calls. For web applications 120, the backend module 132 is part of cloud-based software 112, executing on a virtual machine 134 on a cloud computing device and communicating with the frontend module 130 via HyperText Transfer Protocol Secure (HTTPS). Infrastructure modeling services 140 may be at the core of the cloud-based software 112. Such services software may provide centralized management and synchronization support for infrastructure models (e.g., iModel® models).
The term "infrastructure" refers to a physical structure or object that has been built, or is planned to be built, in the real-world. Examples of infrastructure include buildings, factories, roads, railways, utility networks, etc. The term "infrastructure model" refers to an information container that holds data associated with the lifecycle of infrastructure. Infrastructure models may be a constituent part of a digital twin of infrastructure that federates together data from one or more infrastructure models with data from other sources. Infrastructure modeling services 140 may interact with a number of other services in the cloud that perform information management and support functions. For example, information management services 144 may manage asset data, project data, reality data, Internet of Things (IoT) data, codes, and other features. Further, bridge services 146 may work together with infrastructure modeling services 140 to permit interoperation with other data sources (not shown), and incrementally align data using source-format-specific bridges that know how to read and interpret source data of other formats. A wide variety of additional services (not shown) may also be provided and interact with infrastructure modeling services 140 and the rest of the software architecture 100. Working with infrastructure modeling services 140, the frontend modules 130 and backend modules 132 of applications 120, 122, 124 may access and operate upon data of digital twins, including data included in infrastructure models. Frontend modules 130 may be primarily concerned with data visualization and user interaction. They may access data by making requests to backend modules 132. Backend modules 132 may be primarily concerned with administration, data synchronization, interacting with components of infrastructure models such as elements, aspects, and models, working with local file systems, and using native libraries.
As mentioned above, users typically interact with digital twins during sessions of applications 120, 122, 124. Sessions of applications may be customized based on settings and workspace resources configured by administrators of the application, the user's organization, a digital twin currently being used in the application, and/or an infrastructure model currently being used by that digital twin. A "setting" refers to a named parameter (e.g., configurable option) defined by an application but supplied at runtime that affects an aspect of the application's operation. A "workspace resource" refers to named reference data used by an application that affects an aspect of the application's operation. At least some workspace resources may specify settings. To address problems of prior workspace resource management schemes, the above-described infrastructure modeling software architecture 100 may utilize workspace databases. A "workspace database" refers to a database (e.g., a SQLite database) that holds workspace resources. Some workspace databases may be file-based such that they are held in local workspace files 176 in the local file system of a backend module 132 (e.g., on a client device in the case of a desktop application 122 or mobile application 124, or a virtual machine 134 in the case of a web application 120). Such a workspace database may be referred to as a "File Db". Other workspace databases are maintained in a cloud-based blob storage container (also referred to simply as a "cloud container") 170 of a storage account of a cloud storage system (e.g., Microsoft Azure (Azure), Amazon Web Services (AWS), etc.). Such a workspace database may be referred to as a "Cloud Db". Each cloud container may hold multiple Cloud Dbs.
To use a Cloud Db of a cloud container 170, a backend module 132 may create an in-memory cloud container object 172 (e.g., on the client device in the case of a desktop application 122 or mobile application 124, or a virtual machine 134 in the case of a web application 120) that represents a connection (e.g., a read-only or read-write connection) to the cloud container 170. The cloud container object 172 may be attached to an in-memory object configured to manage a local cache of blocks of workspace databases (referred to as a "cloud cache") 174. The local cache may be located in an associated local directory (e.g., on a temporary disk on the client device in the case of a desktop application 122 or mobile application 124, or a virtual machine 134 in the case of a web application 120). Access to a cloud container 170 may be managed by access tokens (e.g., shared access signature (SAS) tokens) provided by a container authority 178 of the cloud storage system. Modifications to existing workspace databases or addition of new workspace databases may be performed by administrators using a workspace editor 138. Where the modifications or additions are to a Cloud Db, write access may be managed using write locks. In more detail, FIG. 2 is a diagram of an example workspace database 200 that may be implemented in the infrastructure modeling software architecture 100 of FIG. 1. The workspace database 200 may be a SQLite database having a database name 202 and divided into three tables: a string table 210 that holds strings, a blob table 220 that holds arrays of unsigned (e.g., 8-bit) integers (blobs), and a file table 230 that holds arbitrary files. Each table 210-230 may have a column for a workspace resource name 240 and a column for a value 250. Workspace resource names generally are unique for each resource type, but may be duplicated across types (e.g., a string, a blob, and a file resource in the same workspace database 200 may have the same workspace resource name).
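The three-table layout described above can be sketched with Python's built-in SQLite driver. This is an illustrative sketch, assuming column names (`resource_name`, `value`) that the text does not specify; it is not the actual iModel workspace schema.

```python
# Sketch of a workspace database with string, blob, and file tables, each
# keyed by a workspace resource name. Column names are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
for table, value_type in (("strings", "TEXT"),
                          ("blobs", "BLOB"),
                          ("files", "BLOB")):
    db.execute(f"CREATE TABLE {table} "
               f"(resource_name TEXT, value {value_type})")

# The same resource name may appear in different tables (different types)...
db.execute("INSERT INTO strings VALUES ('units', 'metric')")
db.execute("INSERT INTO blobs VALUES ('units', x'0102')")

# ...but is generally unique within a single resource type.
count = db.execute(
    "SELECT COUNT(*) FROM strings WHERE resource_name='units'").fetchone()[0]
```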
Data may be compressed and broken into values in multiple rows of the tables 210-230. For example, data for a file resource may be compressed and broken into multiple rows of the file table 230. Typically, there is no limit on the number of workspace resources that may be held in a workspace database 200. However, it may be preferable to create multiple workspace databases 200 rather than utilize databases of large size, to avoid lengthy downloads. A workspace database 200 may be divided into a number of blocks 260, which are fixed-size pieces of the database. Such blocks 260 typically do not span across tables 210-230 (e.g., a block includes entries from just one table). In the case of a Cloud Db, blocks may be considered to be in one of three states. A block may be considered to be in a "remote" state if its contents exist only in the cloud container 170. A block may be considered to be in a "local" state if its contents have been downloaded into the cloud cache 174 and have not been changed, such that they exist in the same form in both the cloud cache 174 and the cloud container 170. A block may be considered to be in a "dirty" state if its contents have been downloaded into the cloud cache 174 and have been changed, such that the copy in the cloud cache 174 differs from that in the cloud container 170. Optionally, a workspace database 200 may be versioned. A version number that is incremented when contents change may be associated with the workspace database 200, for example, incorporated into the database name 202. The version number may be specified according to a semantic versioning (SemVer) format with portions that indicate major version, minor version, and patch. Typically, a major version is incremented when the content change breaks existing application program interfaces (APIs). A minor version is incremented when APIs may change, but there is backwards compatibility. Likewise, a patch is incremented when there is backwards compatibility and no API changes.
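The SemVer increment rules above can be stated as a small helper. This is a sketch of the rule as described, with an invented function name; it is not part of the described software.

```python
# Sketch of the versioning rule: major bumps for API-breaking changes,
# minor for compatible API changes, patch for everything else.

def bump_version(version, api_breaking=False, api_changed=False):
    major, minor, patch = (int(p) for p in version.split("."))
    if api_breaking:
        return f"{major + 1}.0.0"
    if api_changed:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```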
By default, an initial version may be marked "1.0.0". While a workspace database 200 may be versioned, individual workspace resources within the workspace database are typically not versioned. FIG. 3 is a diagram showing relations of an example cloud container 170 of a storage account of a cloud storage system to a cloud container object 172 and cloud cache 174 of a backend module 132. While only a single cloud container 170 is shown in this example, it should be remembered that a storage account may store many cloud containers 170. The cloud container 170 is identified by a container name 302, typically a non-human readable identifier (e.g., a globally unique identifier (GUID) with a prefix or suffix) unique within the storage account. The cloud container 170 may hold multiple Cloud Dbs 200. If versioning is implemented, the multiple Cloud Dbs may include multiple versions of the same Cloud Db. The cloud container 170 typically also holds a manifest 320 (i.e., a specially named blob) that includes a list of the Cloud Dbs 200 held in the cloud container 170. For each Cloud Db 200, the manifest 320 further includes a list of its blocks (e.g., identified by a checksum hash of the block's contents). Each cloud container 170 typically also holds a write lock blob 330 used in managing write locks, as is discussed in more detail below. A cloud container object 172 is created by a backend module 132 to allow use of Cloud Dbs 200. The cloud container object 172 represents a read-only or read-write connection to a cloud container 170. The cloud container object 172 may be created using a cloud storage type that indicates a type of the cloud storage system (e.g., Azure, AWS, etc.), a cloud storage account name that indicates the storage account that holds the container, and the container name 302 that identifies the specific container within the storage account.
Typically, in order to use a cloud container object 172 to read or write data from the cloud container 170, an access token (e.g., a SAS token) 310 is required from the container authority 178 of the cloud storage system. Use of access tokens may allow for fine-grained access permissions among different users of applications 120, 122, 124. An access token 310 typically provides access for a limited time (e.g., a few hours) and requires a refresh for a session that outlives it. An administrator may provide access tokens 310 to groups of users of applications 120, 122, 124 (e.g., via role-based access control (RBAC) rules). Typically, most users of applications 120, 122, 124 are provided access tokens 310 for read-only access. Only a small set of trusted administrators are typically granted access tokens 310 for read-write access (enabling them to use the workspace editor 138). In some cases, read access may be granted to a few special “public” cloud containers 170 absent any access token 310. The cloud container object 172 includes a local copy of the manifest 350 of the connected cloud container 170. The local copy of the manifest 350 includes a list of the Cloud Dbs 200 held in the cloud container 170, a list of the blocks (e.g., identified by a checksum hash of the block's contents) in each Cloud Db 200, and the state of each block (e.g., remote, local, or dirty). The cloud container object 172 is attached to a cloud cache 174 located in an associated local directory 340 that stores a local copy of at least some of the blocks of the cloud container 170. Typically, the local directory is a unique directory such that each cloud cache 174 has its own directory. A cloud container object 172 generally is attached to only one cloud container 170, but multiple cloud container objects 172 may be attached to the same cloud cache 174. In a first session of an application 120, 122, 124, the cloud cache 174 may be initialized.
Blocks may then be downloaded to the initialized cloud cache 174 from the cloud container 170 synchronously as they are accessed, or prefetched from the cloud container 170 asynchronously when there is idle time. In subsequent sessions, already downloaded blocks may be reused. Such an arrangement may allow for initial access without the need for a lengthy pre-download, but also provide fast subsequent access as a result of the local caching. Further, by prefetching all the blocks of a Cloud Db, the database may be accessible even if there is no Internet connectivity, allowing for fully offline use. To prevent consumption of excessive local resources, a cloud cache 174 may be constrained to a maximum size (e.g., 20 Gb). FIG. 4 is a flow diagram of an example sequence of steps 400 for configuring access to, and performing read operations on, a workspace database. The sequence of steps 400 may utilize a number of parameters. In the case of a File Db, the parameters may include the file name. In the case of a Cloud Db, the parameters may include the cloud storage type and cloud storage account name, the container name 302, and the database name 202 (e.g., including the version). At step 410, the backend module 132 obtains the parameters from workspace settings loaded at application startup (e.g., from a JSON string) or as part of an infrastructure model. For a File Db, at step 420, the backend module 132 opens a local workspace file 176 for read access using the file name. For a Cloud Db, at step 430, the backend module 132 obtains an access token (e.g., a SAS token) from the container authority 178 of the cloud storage system using the cloud storage type and cloud storage account name. At step 440, the backend module 132 creates a cloud container object 172 that represents a connection to the cloud container 170 using the container name 302.
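The download-on-access caching behavior described above can be sketched in a few lines. This is a simplified stand-in, assuming hypothetical names (`CloudCache`, `download_block`); the real system works at the level of fixed-size database blocks.

```python
# Minimal sketch of the read path: a block is served from the local cache when
# present ("local" state), otherwise downloaded from the container on first
# access ("remote" -> "local"). Names are illustrative.
class CloudCache:
    def __init__(self, download_block):
        self._download = download_block  # fetches a block from the container
        self._blocks = {}                # block id -> contents (local copies)

    def read(self, block_id: str) -> bytes:
        if block_id not in self._blocks:           # remote: fetch on access
            self._blocks[block_id] = self._download(block_id)
        return self._blocks[block_id]              # local: reuse cached copy

remote = {"b1": b"hello"}   # stand-in for the cloud container's blocks
calls = []                  # record how many downloads actually happen

def download(bid):
    calls.append(bid)
    return remote[bid]

cache = CloudCache(download)
cache.read("b1")
cache.read("b1")  # second read is served locally; no new download
```

Once every block of a database has been read (or prefetched), reads no longer touch the network, which is the basis for the offline use mentioned above.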
As part of step 440, at sub-step 442, a local copy of the manifest 350 may be created by the cloud container object 172 and synchronized with the manifest 320 of the connected cloud container 170. At step 450, the backend module 132 attaches a cloud cache 174 to the cloud container object 172. If this is the first session, the cloud cache 174 is initialized and begins empty. If this is a subsequent session, there may already be blocks 260 in the cloud cache 174 that were previously downloaded synchronously on access, or asynchronously as part of prefetch operations. At step 460, the backend module 132 reads a value 250 of a workspace resource from the workspace database using a workspace resource name 240. In the case of a File Db, the read is conducted on the local workspace file 176. In the case of a Cloud Db, if the block 260 that includes the workspace resource has already been downloaded to the cloud cache 174, the read is performed locally therefrom. If the block 260 that includes the workspace resource has not already been downloaded to the cloud cache 174, the access token (e.g., SAS token) 310 is utilized to access the cloud container 170, the block is downloaded to make it local at sub-step 462, and then the workspace resource is read. FIG. 5 is a flow diagram of an example sequence of additional steps 500 for performing write operations on a workspace database 200. The sequence of additional steps 500 may assume that parameters have already been loaded and, in the case of a Cloud Db, a cloud container object 172 and attached cloud cache 174 have already been created, using operations similar to those set forth above in FIG. 4. For a File Db, at step 510, the backend module 132 opens a local workspace file 176 for write access using the file name. In the case of a Cloud Db, at step 520, in response to input in the workspace editor 138 (e.g., an acquire lock command), the backend module 132 obtains a write lock 330 on the cloud container 170.
If another user attempts to obtain a write lock 330 while it is currently held, they are typically denied and provided with a notice of the identity of the current write lock holder. To obtain a write lock 330, the backend module 132, at sub-step 522, downloads a specially named blob (referred to as a “write lock blob”) maintained in the cloud container 170. At sub-step 524, the backend module 132 modifies the write lock blob's contents by adding a string that identifies the user and adds an expiration time (e.g., 1 hour). The expiration time may allow other users to eventually obtain write access even if the write lock is not explicitly released. If more time is required, a write lock may be re-obtained. However, if another user instead acquires the write lock, any changes that are still only local are abandoned. At sub-step 526, the backend module 132 uploads the modified write lock blob back to the cloud container 170. The upload may be performed conditionally (e.g., using an HTTP if-match request-type header) so that if more than one module attempts to obtain a write lock 330 simultaneously, only one will succeed. If a new workspace database is to be created, at step 530, the backend module 132 executes database commands (e.g., SQLite commands) to create a new database. This may be performed in response to input in the workspace editor 138 indicating creation of a new workspace database (e.g., a create database command). In the case of a Cloud Db, as part of step 530, the backend module 132 updates the local copy of the manifest 350 to add blocks of the new Cloud Db to the list of blocks (e.g., computing their checksum hashes and adding these to the list of blocks). At step 540, if a modification is to be made to an existing workspace database, the backend module executes database commands (e.g., SQLite commands) to make the modification.
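The download-modify-conditionally-upload lock sequence of sub-steps 522-526 can be sketched as follows. The in-memory blob store and version-tag check are illustrative stand-ins for a cloud blob and its HTTP if-match (ETag) precondition; all names here are assumptions.

```python
# Sketch of write-lock acquisition: download the write lock blob, add a holder
# string and expiration time, then upload conditionally so that concurrent
# acquirers cannot both succeed. blob_store stands in for the cloud container.
import time

class PreconditionFailed(Exception):
    """Raised when the blob changed between download and upload (if-match miss)."""

blob_store = {"write-lock": (b"", "etag-0")}  # contents, version tag

def acquire_write_lock(user: str, ttl_seconds: int = 3600) -> None:
    contents, etag = blob_store["write-lock"]              # sub-step 522: download
    expiry = int(time.time()) + ttl_seconds
    new_contents = f"{user};expires={expiry}".encode()     # sub-step 524: modify
    if blob_store["write-lock"][1] != etag:                # if-match precondition
        raise PreconditionFailed("another module acquired the lock first")
    blob_store["write-lock"] = (new_contents, etag + "'")  # sub-step 526: upload
```

In a real cloud store the precondition is evaluated server-side at upload time, so the race between two simultaneous acquirers is resolved by the storage service, not the client.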
The database commands (e.g., SQLite commands) may add a new workspace resource, replace an existing workspace resource with another different resource, or delete an existing workspace resource. For a Cloud Db, at sub-step 542, if versioning is employed, a new version of the Cloud Db may be created. This may be performed in response to input in the workspace editor 138 (e.g., a version database command). The backend module 132 may make a copy of the Cloud Db with a new name (e.g., including a new SemVer format version number) by making a new entry in the local copy of the manifest 350 with a duplicate list of blocks. Since the actual blocks are not duplicated (just the list of their names), creating the new version typically consumes negligible computing resources. For a Cloud Db, at step 544, the backend module 132 ensures that each modified block 260 of the Cloud Db is local in the cloud cache 174. Blocks 260 are typically immutable in the cloud container 170 itself and therefore are made local in the cloud cache 174 when they are to be modified. Also, for a Cloud Db, at sub-step 546, the local copy of the manifest 350 is updated to mark the state of each modified block as dirty. Once all changes are complete (i.e., any modifications to existing workspace databases or creation of new workspace databases are complete), for a File Db the backend module 132 locally saves the workspace file at step 550. For a Cloud Db, at step 560, the backend module 132 changes the identifiers of any dirty blocks in the local copy of the manifest 350 (e.g., by computing new checksum hashes based on the block's new contents). When blocks have been modified, their checksums should differ from prior values. The modified blocks with their new identifiers (e.g., checksum hashes) and blocks of any new Cloud Dbs may be considered “new” blocks. It should be noted that, at this point, changes can still be abandoned, as the cloud container 170 itself has not yet been modified.
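The inexpensive versioning of sub-step 542 amounts to duplicating a manifest entry's block list without copying any block data. A minimal sketch, with illustrative names (`local_manifest`, `make_version`) that are not from the patent:

```python
# Sketch of cheap versioning: a new version is a new manifest entry with a
# duplicated list of block names; the blocks themselves are never copied.
import copy

local_manifest = {"db-1.0.0": {"blocks": ["aaa", "bbb"]}}

def make_version(manifest: dict, old_name: str, new_name: str) -> None:
    # Duplicate only the entry (the list of block names), not the block data.
    manifest[new_name] = copy.deepcopy(manifest[old_name])

make_version(local_manifest, "db-1.0.0", "db-1.1.0")
```

When the new version is later modified, only its changed blocks get new checksum identifiers, so the two versions continue to share every unmodified block.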
In the case of abandonment, the new blocks may be discarded and the local copy of the manifest 350 resynchronized to match the manifest 320 in the cloud container 170. For a Cloud Db, at step 570, the backend module 132 uploads the new blocks to the cloud container 170, and they are added thereto to be maintained (at least for a time) alongside existing blocks. After all the new blocks have been uploaded, the backend module 132 uploads its local copy of the manifest 350 to the cloud container 170 to replace the manifest 320 in the cloud container 170. This may be performed in response to input in the workspace editor 138 indicating changes in the cloud cache 174 are to be imported to the cloud container 170 (e.g., an import command). Once the operations of step 570 are complete, the new blocks embodying the changes are now identified (e.g., by their checksums) in the list of blocks of the manifest 320 of the cloud container 170. Old blocks (i.e., blocks that were subject to modification) are no longer identified in the list of blocks of the manifest 320 (e.g., because their checksum hashes are no longer present). It should be noted that until upload of the local copy of the manifest 350, the changes were still invisible to other applications 120, 122, 124 of other users. Only after re-synchronizing their local copy of the manifest 350 with the updated manifest 320 of the cloud container 170 are other applications 120, 122, 124 able to see the changes. This re-synchronizing typically occurs periodically (e.g., upon expiration of a retention period). Prior to such time, other applications 120, 122, 124 may continue to use their out-of-date local copy of the manifest 350, and access old blocks already local in their cloud cache 174 or from the cloud container 170. Further, should operations be interrupted after the upload of new blocks but before the upload of the local copy of the manifest 350, there is no issue. The uploaded new blocks will simply not be used (and eventually collected by garbage collection).
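The publish-then-collect sequence above can be sketched with a simple in-memory stand-in for the cloud container. New blocks are uploaded first, the change becomes visible only when the manifest is replaced, and unreferenced old blocks are deleted later by garbage collection. All names are illustrative.

```python
# Sketch of the two-phase publish and later garbage collection described above.
container = {"blocks": {"old1": b"v1"}, "manifest": {"db": ["old1"]}}

def publish(new_blocks: dict, new_manifest: dict) -> None:
    container["blocks"].update(new_blocks)  # upload new blocks first
    # An interruption here is harmless: unreferenced new blocks are simply unused.
    container["manifest"] = new_manifest    # replacing the manifest makes changes visible

def garbage_collect() -> None:
    # Delete blocks no longer named in any database's block list.
    referenced = {b for blocks in container["manifest"].values() for b in blocks}
    for bid in list(container["blocks"]):
        if bid not in referenced:
            del container["blocks"][bid]

publish({"new1": b"v2"}, {"db": ["new1"]})  # old1 kept for late readers, for a time
garbage_collect()                           # run later, after a retention period
```

Keeping old blocks until collection is what lets other users keep reading through an out-of-date manifest copy without error.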
For a Cloud Db, at step 580, in response to input in the workspace editor 138 (e.g., a release lock command), the backend module 132 releases the write lock 330 on the cloud container 170. To release the write lock 330, the backend module 132 may remove the string that identifies the user and the expiration time from the write lock blob. Typically, all changes must be uploaded or abandoned before the write lock 330 on the cloud container is released. It should be remembered that if the write lock 330 is not explicitly released, it will eventually still be released when the expiration time expires. In such case, changes may be automatically abandoned. At step 590, after a period of time sufficient to ensure other applications 120, 122, 124 are no longer using old blocks (e.g., one day), a garbage collection process is executed on the cloud container 170 to delete old blocks that are not identified in the list of blocks of the manifest 320. In summary, techniques are described herein for creating and utilizing workspace databases. It should be understood that a wide variety of adaptations and modifications may be made to the techniques to suit various implementations and environments. While it is discussed above that many aspects of the techniques may be implemented by specific software processes and modules executing on specific hardware, it should be understood that some or all of the techniques may also be implemented by different software on different hardware. In addition to general-purpose computing devices, the hardware may include specially configured logic circuits and/or other types of hardware components. Above all, it should be understood that the above descriptions are meant to be taken only by way of example.
11943306 | DETAILED DESCRIPTION FIGS. 1 through 6, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. Hereinafter, an operating principle of the disclosure will be described in detail with reference to the accompanying drawings. In the following description, in the case that it is determined that a detailed description of a related well-known function or constitution may unnecessarily obscure the gist of the disclosure, a detailed description thereof will be omitted. Terms described below are terms defined in consideration of functions in the disclosure, which may vary according to intentions or customs of users and operators. Therefore, the definition should be made based on the content throughout this specification. Hereinafter, embodiments of the disclosure will be described with reference to the accompanying drawings. Hereinafter, a term identifying an access node used in the description, a term indicating network entities, a term indicating messages, a term indicating an interface between network objects, a term indicating various types of identification information, and the like are exemplified for convenience of description. Accordingly, the disclosure is not limited to the terms described below, and other terms indicating objects having equivalent technical meanings may be used.
A 5G mobile communication network is composed of a 5G user equipment (UE, terminal, and the like), a 5G radio access network (RAN, base station, 5G nodeB (gNB), evolved nodeB (eNB), and the like), and a 5G core network. The 5G core network is composed of an access and mobility management function (AMF) that provides a mobility management function for a UE, a session management function (SMF) that provides a session management function, a user plane function (UPF) that performs a data transmission role, a policy control function (PCF) that provides a policy control function, a unified data management (UDM) that provides a data management function for data such as subscriber data and policy control data, and network functions such as a unified data repository (UDR) that stores data of various network functions such as the UDM. In a 5G system, network slicing refers to a technology and structure that enables virtualized, independent, and multiple logical networks in one physical network. A network operator provides a service by constituting a virtual end-to-end network of a network slice in order to satisfy specialized requirements of a service/application. In this case, the network slice may be identified by an identifier of single-network slice selection assistance information (S-NSSAI). The network may transmit information (e.g., allowed NSSAI(s)) on an allowed slice set to a terminal during a terminal registration procedure (e.g., UE registration procedure), and the terminal may transmit and receive application data through a protocol data unit (PDU) session generated through one S-NSSAI (i.e., network slice) thereof. In the 5G system, there exists a network slice admission control (NSAC) function that ensures that neither the number of registered UEs per network slice nor the number of established PDU sessions per network slice exceeds a defined maximum value.
For admission control on the maximum number of registered UEs per network slice, whenever a change (addition or deletion of S-NSSAI) of the set of S-NSSAI allowed to the UE (allowed NSSAI) is required, the AMF may update (request to increase or decrease) the number of registered UEs for the corresponding slices with an NSAC function (NSACF). When the NSACF receives an increase update request from the AMF for an S-NSSAI that has reached the preconfigured maximum number of registered UEs per network slice, the NSACF may provide, to the AMF, information that the corresponding S-NSSAI has reached the maximum number of registered UEs. When the AMF receives the corresponding information, the AMF may exclude the corresponding S-NSSAI from the allowed S-NSSAI set. For admission control on the maximum number of PDU sessions per network slice, the SMF may update (request to increase or decrease) the number of PDU sessions established for the S-NSSAI with the NSAC function (NSACF) when performing a PDU session creation or release procedure for the S-NSSAI. When the NSACF receives an increase update request from the SMF for an S-NSSAI that has reached the preconfigured maximum number of established PDU sessions per network slice, the NSACF may provide, to the SMF, information that the S-NSSAI has reached the maximum number of established PDU sessions. When the SMF receives the corresponding information, the SMF may not perform PDU session creation through the corresponding S-NSSAI. NSAC target information for each S-NSSAI may be configured in the AMF and the SMF, and the AMF and the SMF may perform an NSAC procedure only for an S-NSSAI which is an NSAC target. FIG. 1 illustrates a method in which an AMF receives alternative S-NSSAI information in a registration procedure when alternative S-NSSAI information is defined and provided in access and mobility (AM) subscription data of a UDM, according to an embodiment of the present disclosure.
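The NSACF admission check described above can be sketched as a per-slice counter with a configured maximum: an INCREASE update is rejected once a slice is full, while a DECREASE frees capacity. The class name, flag strings, and return values are assumptions for illustration, not the 3GPP service API.

```python
# Illustrative sketch of NSACF counting per S-NSSAI. An INCREASE update on a
# slice at its configured maximum returns a rejection indication.
class NSACF:
    def __init__(self, max_per_slice: dict):
        self.max = max_per_slice                  # S-NSSAI -> configured maximum
        self.count = {s: 0 for s in max_per_slice}

    def update(self, s_nssai: str, flag: str) -> str:
        if flag == "INCREASE":
            if self.count[s_nssai] >= self.max[s_nssai]:
                return "MAX_REACHED"              # requester must not admit here
            self.count[s_nssai] += 1
        else:                                     # DECREASE
            self.count[s_nssai] = max(0, self.count[s_nssai] - 1)
        return "OK"
```

The same counter shape applies whether the tracked quantity is registered UEs (updated by the AMF) or established PDU sessions (updated by the SMF).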
With reference to FIG. 1, a UE 101 may transmit a message to a base station (RAN, gNB) 102 for a UE registration procedure (step 110). In this case, the message for the UE registration procedure may be an AN message (AN parameter, registration request). Here, the AN message represents a message between the UE 101 and the RAN 102. In this case, the registration request message may include at least one of information such as a UE identifier (e.g., subscription concealed identifier (SUCI), 5G-globally unique temporary identity (5G-GUTI), or permanent equipment identifier (PEI)), requested NSSAI, and UE mobility management (MM) core network capability. The RAN 102 may select an AMF 103 based on information in the AN message received from the UE 101 (step 120). The RAN 102 may transmit an N2 message (which may include at least one of N2 parameters or a registration request) to the AMF 103 (step 130). The N2 parameters may include a selected PLMN ID, UE position information, a UE context request, and the like. When determining that UE authentication is required, the AMF 103 may select an authentication server function (AUSF) 104 based on at least one of a UE identifier (e.g., SUCI or subscription permanent identifier (SUPI)) (step 140). The AMF 103 may perform an authentication procedure for the UE 101 through the selected AUSF 104 (step 145). Further, when there is no non-access stratum (NAS) security context for the UE 101, a procedure for obtaining the NAS security context may be performed. The AMF 103 may select a UDM 105 based on SUPI when subscription information for the UE 101 is required, and the UDM 105 may select a UDR (not illustrated) in which subscription information on the UE 101 is stored (step 150). The AMF 103 may request access and mobility subscription information on the UE 101 from the UDM 105 through an Nudm_SDM_Get request (which may include at least one of SUPI or access and mobility subscription data) message (step 160).
In this case, information requesting alternative slice information (e.g., alternative S-NSSAI(s)) may be included in the Nudm_SDM_Get request message in the following cases: determination according to a local configuration of the AMF 103, the case that a UE registration procedure in progress is a registration due to a movement from an EPS to 5GS, and the like. The UDM 105 may transmit subscription information including subscribed S-NSSAI(s) and alternative slice information (e.g., alternative S-NSSAI(s) for each S-NSSAI in subscribed S-NSSAI(s)) in an Nudm_SDM_Get response message to the AMF 103 (step 165). In this case, the alternative S-NSSAI information may be defined in access and mobility (AM) subscription data of the Nudm_SDM_Get response message and transmitted to the AMF 103. The UDM 105 may include alternative S-NSSAI(s) in the Nudm_SDM_Get response message in the following cases: determination according to a local configuration of the UDM 105, the case that the AMF 103 requests alternative S-NSSAI(s), and the like. In this case, according to an embodiment, the UDM 105 may obtain the information to transmit to the AMF 103 from the UDR and transmit the information to the AMF 103. The AMF 103 may calculate allowed NSSAI in consideration of subscribed S-NSSAI(s) and store the allowed NSSAI in the UE context (step 170). Further, for S-NSSAIs which are NSAC targets among S-NSSAIs in the allowed NSSAI, the AMF 103 may store alternative S-NSSAI(s) for each S-NSSAI in the UE context. In the case that the AMF 103 intends to use an alternative slice in a situation in which a slice becomes unavailable (e.g., a slice congestion situation occurs, and the like) due to NSAC rejection or other reasons, the AMF 103 may store an alternative slice (alternative S-NSSAI(s) for each S-NSSAI) in the UE context for subscribed S-NSSAI(s) or allowed NSSAI.
The AMF 103 may utilize the stored alternative S-NSSAI(s) for each S-NSSAI for determining a slice (i.e., target slice) to be used instead of an unavailable slice, and move PDU sessions to a target slice when PDU session generation/modification and handover to an unavailable slice are requested. The remaining registration procedure of the UE 101 may proceed (step ). In one embodiment illustrated in FIG. 1, the UE 101 receives an allowed set of slices through a registration procedure (UE registration procedure), then selects a slice within the corresponding set for each PDU session to be created, and creates a PDU session through a PDU session creation procedure. When an alternative slice (e.g., alternative S-NSSAI) is provided through the registration procedure, as illustrated in FIG. 1, because signaling for providing an alternative slice for each PDU session creation procedure is not required, there is an advantage that the signaling load is small. FIG. 2 illustrates a method in which an AMF selects again an S-NSSAI for a PDU session through alternative S-NSSAI information in a PDU session establishment procedure when alternative S-NSSAI information is defined and provided in access and mobility (AM) subscription data of a UDM, according to an embodiment of the present disclosure. With reference to FIG. 2, in a UE registration procedure, an AMF 203 may store allowed NSSAI and alternative S-NSSAI(s) information for each S-NSSAI which is an NSAC target in the allowed NSSAI in a UE context in the AMF (step 210). This may be performed according to an embodiment described in relation to FIG. 1. A UE 201 may transmit a request message to the AMF 203 (through a base station 202) for a PDU session establishment procedure (step 215). The message may be a PDU session establishment request message, and the PDU session establishment request message may be included in a non-access stratum (NAS) message and transmitted to the AMF 203. The NAS message means a message between the UE 201 and the AMF 203.
According to an embodiment, the NAS message may include at least one of S-NSSAI, a data network name (DNN), or a PDU session ID. In the case that the S-NSSAI included in the message received from the UE 201 is currently unavailable in step 250, the AMF 203 may not select an SMF 205; in this case, steps 230, 240, and 250 may be omitted and the procedure may proceed from step 260. The AMF 203 may select the SMF 205 based on at least one of the DNN or the S-NSSAI (step 220). The AMF 203 may transmit an Nsmf_PDUSession_CreateSMContext request to the selected SMF 205. According to an embodiment, the Nsmf_PDUSession_CreateSMContext request message may include at least one of S-NSSAI, a DNN, a PDU session ID, or a PDU session establishment request message (step 230). When the S-NSSAI included in the message received from the AMF 203 is a network slice admission control (NSAC) target, the SMF 205 may transmit an Nnsacf_NSAC_NumOfPDUsUpdate request (which may include at least one of S-NSSAI, a UE ID, a PDU session ID, or an update flag=INCREASE) message to an NSAC function (NSACF) 206 (step 240). In the case that the update flag value of the received message is INCREASE and the number of PDU sessions established for the S-NSSAI of the received message has already reached the maximum number of PDU sessions established for the S-NSSAI, the NSACF 206 may include a value indicating that the maximum number of established PDU sessions has already been reached in the result value and transmit the result value to the SMF 205 (step 245). In this case, the message may be an Nnsacf_NSAC_NumOfPDUsUpdate response message. The SMF 205 may transmit an Nsmf_PDUSession_CreateSMContext response message to the AMF 203 (step 250). In this case, a cause of the message may include information indicating that session management (SM) context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI.
When information indicating that SM context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI is included in a cause value of the message from the SMF 205, or in the case that the AMF 203 determines that the S-NSSAI included in the message from the UE 201 is unavailable in step 220, the AMF 203 may select one of the S-NSSAI(s) included in the allowed NSSAI among the alternative S-NSSAI(s) for the failed or unavailable S-NSSAI to attempt PDU session creation again (step 260). The alternative S-NSSAI(s) may be stored in the AMF 203 in the following forms: stored in an AMF local configuration, stored in the UE context of the AMF 203, and the like. The AMF 203 may select again an SMF based on the newly selected S-NSSAI (step 270). Although the drawing illustrates that the same SMF as that selected in step 220 is selected in step 270, the SMF selected in step 220 and the SMF selected in step 270 may be the same or different. The AMF 203 may transmit an Nsmf_PDUSession_CreateSMContext request message to the newly selected SMF (step 280). FIG. 2 illustrates several SMFs as one SMF for convenience of description. The SMF newly selected by the AMF 203 in step 270 may be an SMF different from that selected by the AMF 203 in step 220. The remaining PDU session creation procedure may proceed (step 290). In one embodiment illustrated in FIG. 2, in a situation in which an alternative slice (e.g., alternative S-NSSAI) is provided in advance through a registration procedure (e.g., by the method illustrated in FIG. 1), the alternative slice is used for a PDU session, and because signaling for providing an alternative slice for each PDU session creation procedure is not required, there is an advantage that the signaling load is small.
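The AMF fallback of steps 260-270 reduces to: when a slice is full or unavailable, pick an alternative S-NSSAI that is also within the allowed NSSAI and retry session creation on it. A minimal sketch with illustrative names and slice identifiers:

```python
# Sketch of alternative-slice selection: choose the first stored alternative
# for the failed S-NSSAI that is also in the allowed NSSAI, else give up.
def pick_alternative(failed: str, alternatives: dict, allowed: set):
    for candidate in alternatives.get(failed, []):
        if candidate in allowed:
            return candidate
    return None  # no usable alternative: PDU session creation fails

alternatives = {"slice-a": ["slice-b", "slice-c"]}  # per-S-NSSAI, from UE context
allowed = {"slice-a", "slice-c"}                    # allowed NSSAI for this UE
target = pick_alternative("slice-a", alternatives, allowed)
```

The AMF would then re-run SMF selection (step 270) with the returned target slice instead of the failed one.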
FIG. 3 illustrates a method in which an AMF selects again an S-NSSAI for a PDU session through alternative S-NSSAI information in a PDU session establishment procedure when alternative S-NSSAI information is defined and provided in session management (SM) subscription data of a UDM, according to an embodiment of the present disclosure. With reference to FIG. 3, a UE 301 may include a PDU session establishment request message in a NAS message and transmit it to an AMF 303 (through a base station 302) for a PDU session establishment procedure (step 310). The NAS message means a message between the UE 301 and the AMF 303. The NAS message may include at least one of S-NSSAI, a DNN, or a PDU session ID. The AMF 303 may select an SMF 305 based on at least one of the DNN or the S-NSSAI (step 320). The AMF 303 may transmit an Nsmf_PDUSession_CreateSMContext request (which may include at least one of S-NSSAI, a DNN, a PDU session ID, or a PDU session establishment request message) to the SMF 305 (step 330). When the S-NSSAI included in the message received from the AMF 303 is an NSAC target, the SMF 305 may transmit an Nnsacf_NSAC_NumOfPDUsUpdate request (which may include at least one of S-NSSAI, a UE ID, a PDU session ID, or an update flag=INCREASE) message to an NSACF 306 (step 340). When the update flag value of the received message is INCREASE, and the number of PDU sessions established for the S-NSSAI of the received message has already reached the maximum number of PDU sessions established for the S-NSSAI, the NSACF 306 may include, in the result value, information indicating that the maximum number of PDU sessions established for the corresponding S-NSSAI has already been reached and the S-NSSAI is unavailable, and transmit the information to the SMF 305 (step 345). In this case, the message may be an Nnsacf_NSAC_NumOfPDUsUpdate response message.
The SMF 305 may transmit an Nudm_SDM_Get request (which may include at least one of SUPI, a DNN, S-NSSAI, or SM subscription data) to a UDM 308 to request SM subscription data of the UE 301 (step 350). In this case, information requesting alternative S-NSSAI(s) may be included in the Nudm_SDM_Get request message in the following cases: determination according to a local configuration of the SMF 305, the case that a UE registration procedure in progress is a registration due to a movement from a first system (e.g., EPS) to a second system (e.g., 5GS), the case that a response message in step 345 includes information that the S-NSSAI is unavailable, and the like. The UDM 308 may transmit subscription information including alternative S-NSSAI(s) for the S-NSSAI included in the message received from the SMF 305 in the Nudm_SDM_Get response message to the SMF 305 (step 355). The UDM 308 may include alternative S-NSSAI(s) in the following cases: determination according to a local configuration of the UDM 308, the case that the SMF 305 requests alternative S-NSSAI(s), and the like. In this case, according to an embodiment, the UDM 308 may obtain the information to transmit to the SMF 305 from the UDR (not illustrated) and transmit the information to the SMF 305. The SMF 305 may transmit an Nsmf_PDUSession_CreateSMContext response message to the AMF 303 (step 360). In this case, a cause of the message may include information indicating that SM context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI, and alternative S-NSSAI(s). When information indicating that SM context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI and alternative S-NSSAI(s) are included in a cause value of the message from the SMF 305, the AMF 303 may select one of the S-NSSAI(s) included in the allowed NSSAI among the alternative S-NSSAI(s) for the failed S-NSSAI to attempt PDU session creation again (step 370).
The AMF 303 may reselect an SMF based on the newly selected S-NSSAI (step 380). The AMF 303 may transmit an Nsmf_PDUSession_CreateSMContext request message to the newly selected SMF 305 (step 390). FIG. 3 illustrates several SMFs as one SMF for convenience of description. The SMF newly selected by the AMF 303 in step 380 may be an SMF different from that selected by the AMF 303 in step 320. The remaining PDU session creation procedure may then proceed (step 395). In the embodiment illustrated in FIG. 3, because an alternative slice is provided through the UDM 308, which provides subscriber information for the session in the PDU session procedure, there is an advantage that an alternative slice may be provided based on information on a subdivided session of the subscriber.

FIG. 4 illustrates a method in which an AMF reselects S-NSSAI for a PDU session using alternative S-NSSAI information in a PDU session establishment procedure, when alternative S-NSSAI information is defined and provided in an NSACF, according to an embodiment of the present disclosure. With reference to FIG. 4, a UE 401 may include a PDU session establishment request message in a NAS message and transmit it to an AMF 403 (through a base station 402) for a PDU session establishment procedure (step 410). The NAS message means a message between the UE 401 and the AMF 403. The NAS message may include at least one of S-NSSAI, a DNN, or a PDU session ID. The AMF 403 may select an SMF 405 based on at least one of the DNN or the S-NSSAI (step 420). The AMF 403 may transmit an Nsmf_PDUSession_CreateSMContext request (which may include at least one of S-NSSAI, a DNN, a PDU session ID, or the PDU session establishment request message) to the selected SMF 405 (step 430). When the S-NSSAI included in the message received from the AMF 403 is an NSAC target, the SMF 405 may transmit an Nnsacf_NSAC_NumOfPDUsUpdate request (which may include at least one of S-NSSAI, a UE ID, a PDU session ID, or an update flag=INCREASE) message to an NSACF 406 (step 440).
In this case, information requesting alternative S-NSSAI(s) may be included in the Nnsacf_NSAC_NumOfPDUsUpdate request message in the following cases: determination according to a local configuration of the SMF 405; the case that a UE registration procedure in progress is a registration due to a movement from a first system (e.g., EPS) to a second system (e.g., 5GS); and the like. When the update flag value of the received message is INCREASE, and the number of PDU sessions established for the S-NSSAI of the Nnsacf_NSAC_NumOfPDUsUpdate request message has already reached the maximum number of PDU sessions that may be established for the S-NSSAI, the NSACF 406 may include a value indicating that the maximum number of established PDU sessions has already been reached in the result value and transmit the result value to the SMF 405 (step 445). Further, the NSACF 406 may include alternative S-NSSAI(s) in the following cases: the case that the S-NSSAI included in the message from the SMF 405 is unavailable; determination according to a local configuration of the NSACF 406; the case that alternative S-NSSAI(s) are requested by the SMF 405; and the like. The NSACF 406 may include, in the alternative S-NSSAI(s) of the message transmitted to the SMF 405, S-NSSAI(s) that have not reached the maximum number of established PDU sessions among the stored alternative S-NSSAI(s) for the corresponding S-NSSAI, or S-NSSAI(s) that are not targets of NSAC. The SMF 405 transmits an Nsmf_PDUSession_CreateSMContext response message to the AMF 403 (step 450). In this case, a cause of the message may include information indicating that SM context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI, together with the alternative S-NSSAI(s) received in step 445.
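The filtering described above, in which the NSACF returns only alternative S-NSSAI(s) that either are not NSAC targets or have not reached their maximum number of established PDU sessions, may be sketched as follows. The names and data structures are illustrative assumptions:

```python
# Hedged sketch of the NSACF's candidate filtering: from the stored
# alternative S-NSSAIs for a congested slice, return only those that are
# not subject to NSAC at all, or that are below their session cap.

def usable_alternatives(stored_alternatives, nsac_targets, current, caps):
    """stored_alternatives: list of candidate S-NSSAIs.
    nsac_targets: set of S-NSSAIs subject to NSAC.
    current/caps: per-S-NSSAI established-session counts and maxima."""
    result = []
    for snssai in stored_alternatives:
        if snssai not in nsac_targets:
            result.append(snssai)       # not an NSAC target: always usable
        elif current.get(snssai, 0) < caps.get(snssai, 0):
            result.append(snssai)       # NSAC target but below its maximum
    return result
```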
When information indicating that SM context creation has failed due to reaching the maximum number of PDU sessions established for the S-NSSAI, together with alternative S-NSSAI(s), is included in a cause value of the message from the SMF 405, the AMF 403 may select one of the S-NSSAI(s) included in allowed NSSAI among the alternative S-NSSAI(s) for the failed S-NSSAI to reattempt PDU session creation (step 460). The AMF 403 may reselect an SMF based on the newly selected S-NSSAI (step 470). The AMF 403 may transmit an Nsmf_PDUSession_CreateSMContext request message to the newly selected SMF 405 (step 480). FIG. 4 illustrates several SMFs as one SMF for convenience of description. The SMF newly selected by the AMF 403 in step 470 may be an SMF different from that selected by the AMF 403 in step 420. The remaining PDU session creation procedure may then proceed (step 490). In the embodiment illustrated in FIG. 4, because the NSACF 406, which knows load information on network slices, provides an alternative slice (e.g., alternative S-NSSAI) for the PDU session, there is an advantage that it is possible to determine and provide an alternative slice in consideration of the load on the network slices. For example, the NSACF 406 may provide, to the SMF 405, an alternative slice with a sufficiently large remaining number of allowable PDU sessions.

FIG. 5 illustrates a configuration of a UE according to an embodiment of the present disclosure. With reference to FIG. 5, the UE according to an embodiment of the disclosure may include a transceiver 520 and a controller 510 for controlling overall operations thereof. The transceiver 520 may include a transmitter 525 and a receiver 523. The transceiver 520 may transmit and receive signals to and from other network entities. The controller 510 may control the UE to perform any one operation of the above-described embodiments.
The controller 510 and the transceiver 520 do not necessarily have to be implemented as separate modules, and may be implemented as a single component in the form of a single chip. The controller 510 and the transceiver 520 may be electrically connected. For example, the controller 510 may be a circuit, an application-specific circuit, or at least one processor. Further, operations of the UE may be realized by providing a memory device storing the corresponding program code in an arbitrary component in the UE.

FIG. 6 illustrates a configuration of a network entity according to an embodiment of the disclosure. The network entity of the disclosure is a concept including a network function according to system implementation. With reference to FIG. 6, a network entity according to an embodiment of the disclosure may include a transceiver 620 and a controller 610 for controlling overall operations of the network entity. The transceiver 620 may include a transmitter 625 and a receiver 623. The transceiver 620 may transmit and receive signals to and from other network entities. The controller 610 may control the network entity to perform any one operation of the above-described embodiments. The controller 610 and the transceiver 620 do not necessarily have to be implemented as separate modules, and may be implemented as a single component in the form of a single chip. The controller 610 and the transceiver 620 may be electrically connected. For example, the controller 610 may be a circuit, an application-specific circuit, or at least one processor. Further, operations of the network entity may be realized by providing a memory device storing the corresponding program code in an arbitrary component in the network entity. The network entity may be any one of a base station (RAN), an AMF, an SMF, a UPF, a PCF, an NSACF, a UDM, and a UDR.
It should be noted that the configuration diagrams illustrated in FIGS. 1 to 6, the diagrams of the control/data signal transmission methods, and the operation procedure diagrams are not intended to limit the scope of the disclosure. That is, not all components, entities, or steps of operation described in FIGS. 1 to 6 should be construed as essential components for implementation of the disclosure, and the disclosure may be implemented, even with only some components, within a range that does not impair its essence. The operations of the network entity or the UE described above may be realized by providing a memory device storing the corresponding program code in an arbitrary component in the network entity or the UE device. That is, a controller of the UE device or the network entity may execute the above-described operations by reading and executing the program code stored in the memory device by means of a processor or a central processing unit (CPU). Various components and modules of the network entity, base station, or UE device described in this specification may be operated using a hardware circuit such as a complementary metal oxide semiconductor-based logic circuit, firmware, software, and/or a combination of hardware, firmware, and/or software embedded in a machine-readable medium. For example, various electrical structures and methods may be implemented using electrical circuits such as transistors, logic gates, and application-specific integrated circuits. In the detailed description of the disclosure, although specific embodiments have been described, various modifications are possible without departing from the scope of the disclosure. Therefore, the scope of the disclosure should not be limited to the described embodiments and should be defined by the claims described below as well as by those equivalent to the claims.
According to an embodiment of the disclosure, a situation in which a protocol data unit (PDU) session creation request is rejected due to network slice admission control (NSAC) can be alleviated or prevented through alternative network slice technology in a 5G system. Alternative network slice technology may be utilized as shown in the following examples. In one example, a network manager can create a session through an alternative slice in the case that an initially requested network slice is unavailable. Therefore, there is an advantage that availability of the network slice is improved. In another example, when sessions are concentrated in a specific network slice, the network manager can obtain the effect of distributing load between network slices by enabling new session requests for the corresponding network slice to use an alternative slice. In yet another example, the network manager can prevent session creation requests from being rejected by providing an alternative slice for session creation requests that should not be rejected for a reason of network slice unavailability (e.g., an emergency call service, a national security/regulation related service, and the case that a session established in an environment in which NSAC is not supported is moved to an environment in which NSAC is supported). Effects obtainable in the disclosure are not limited to the above-mentioned effects, and other effects not mentioned will be clearly understood by those of ordinary skill in the art to which the disclosure belongs from the description below. Although the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
11943307

DETAILED DESCRIPTION

In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized, and structural and functional modifications may be made, without departing from the scope of the present disclosure. Various aspects are capable of other embodiments and of being practiced or being carried out in various different ways. It is noted that various connections between elements are discussed in the following description. It is noted that these connections are general and, unless specified otherwise, may be direct or indirect, wired or wireless, and that the specification is not intended to be limiting in this respect. It is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of "including" and "comprising" and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof. In some arrangements, current web traffic routing protocols might not offer web traffic routing methods that reduce network latency, identify similarities between a plurality of web traffic requests generated by the plurality of computing devices, and/or assign similar web traffic requests to particular web servers such that communication between the computing devices is routed through a common web server. Accordingly, proposed herein is a solution to the problem described above that includes generating a modified web farm framework for routing web traffic based on similarities between web traffic requests.
For example, a computing platform may receive a request from an enterprise organization computing device to establish a server connection to host a web-based, collaborative work environment (e.g., a virtual workspace). The computing platform may monitor a volume of web traffic associated with a plurality of web servers, wherein the plurality of web servers may be divided into server pods and each server pod may comprise a subset of the plurality of web servers. The computing platform may identify, based on the monitoring, a web server associated with the lowest volume of web traffic and may assign the identified web server and the corresponding server pod to host the virtual workspace. The computing platform may generate and store a cookie comprising details and/or instructions for accessing the virtual workspace. Based on receiving a request from a consumer computing device to access the virtual workspace, the computing platform may transmit the cookie to the consumer computing device and may establish a network connection between the consumer computing device and the virtual workspace. The computing platform may monitor the virtual workspace to determine whether the enterprise organization computing device is connected to the virtual workspace. Based on determining that the enterprise organization computing device lost the network connection to the virtual workspace, the computing platform may determine whether to resume the server connection between the enterprise organization computing device and the virtual workspace, or to identify a second web server with network availability to host the virtual workspace. Based on determining that the enterprise organization computing device terminated access to the virtual workspace, the computing platform may deactivate the virtual workspace (e.g., soft-delete the virtual workspace from the network) and may transmit a notification to the consumer computing device indicating termination of access to the virtual workspace.
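The identification step above, selecting the web server associated with the lowest volume of web traffic, reduces to a minimum search over the monitored per-server volumes. A minimal sketch, with assumed names and data layout:

```python
# Hypothetical sketch: pick the web server with the lowest monitored traffic
# volume to host a new virtual workspace. The dictionary layout is an
# assumption for illustration.

def lowest_traffic_server(traffic_by_server):
    """traffic_by_server: dict web_server_id -> current web traffic volume.
    Returns the server id with the lowest volume, or None if empty."""
    if not traffic_by_server:
        return None
    return min(traffic_by_server, key=traffic_by_server.get)
```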
Alternatively, based on determining that the enterprise organization computing device is present within the virtual workspace, the computing platform may maintain the network connections to the virtual workspace. In further arrangements, current web traffic routing procedures might not permit computing devices to identify a particular web server with which additional computing devices may be associated, transmit a request to establish a network connection with the particular web server, and/or collaborate in real-time or near real-time with the additional computing devices within a virtual workspace hosted by the particular web server. Accordingly, proposed herein is a solution to the problem described above that includes accessing and interacting with requested web content using a modified web farm framework. For example, a computing platform may receive, from a consumer computing device, a request to access a virtual workspace. The computing platform may parse a log of virtual workspaces that are currently active within the network to locate a cookie associated with the requested virtual workspace, wherein the cookie may comprise details and instructions for connecting to the server pod and the web server that host the virtual workspace. The computing platform may transmit a copy of the cookie to the consumer computing device and may receive, from the consumer computing device, a request to establish a network connection with the server pod and the web server assigned to host the virtual workspace. Based on determining that the consumer computing device lost access to the virtual workspace, the computing platform may determine whether the cookie that corresponds to the virtual workspace was updated. The computing platform may transmit a copy of the updated cookie to the consumer computing device and may receive, from the consumer computing device, a request to re-connect to the virtual workspace based on the updated connection details within the updated cookie.
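The cookie described above carries the connection details a consumer computing device needs to reach the assigned host. One way to sketch such a payload, with field names that are assumptions rather than taken from the disclosure, is:

```python
# Minimal sketch of a workspace connection cookie: it records the workspace
# identifier plus the server pod and web server assigned to host it, so a
# consumer device can be routed to the same host. Field names are assumed.

import json

def make_workspace_cookie(workspace_id, pod_id, server_id):
    """Serialize the connection details for the assigned pod and web server."""
    return json.dumps({
        "workspace_id": workspace_id,
        "server_pod": pod_id,
        "web_server": server_id,
    })

def read_workspace_cookie(cookie):
    """Recover the connection details from a stored or transmitted cookie."""
    return json.loads(cookie)
```

When the platform re-hosts a workspace on a second pod or server, regenerating the cookie with the new identifiers is what lets a returning consumer device re-connect using the updated details.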
Based on determining that an enterprise organization computing device associated with the virtual workspace terminated access to the virtual workspace, the computing platform may transmit a notification to the consumer computing device indicating termination of access to the virtual workspace.

Computer Architecture

FIG. 1A depicts an illustrative example of a computer system 100 that may be used for generating, in real-time or near real-time, a modified web farm framework for routing web traffic based on similarities between web traffic requests, in accordance with one or more aspects described herein. Computer system 100 may comprise one or more computing devices including at least computing platform 110, enterprise organization computing devices 120a-120c, and/or consumer computing devices 130a-130c. While FIG. 1A depicts more than one enterprise organization computing device (e.g., enterprise organization computing devices 120a-120c) and/or more than one consumer computing device (e.g., consumer computing devices 130a-130c), each of enterprise organization computing devices 120a-120c and/or consumer computing devices 130a-130c may be configured in accordance with the features described herein. While the description herein may refer to enterprise organization computing device 120 and/or consumer computing device 130, the functions described herein may also be performed by any one of enterprise organization computing devices 120a-120c and/or consumer computing devices 130a-130c. While FIG. 1A depicts enterprise organization computing devices 120a-120c and consumer computing devices 130a-130c, more or fewer than three enterprise organization computing devices and/or consumer computing devices may exist within computer system 100. Three enterprise organization computing devices and three consumer computing devices are depicted in FIG. 1A for illustration purposes only and are not meant to be limiting.
Enterprise organization computing device 120 may transmit, to computing platform 110, a plurality of web traffic requests (e.g., requests to establish a network connection and/or server connection to host a virtual workspace), and may receive access to the virtual workspace based on the network connection and/or server connection. At least one of enterprise organization computing devices 120a-120c (e.g., enterprise organization computing device 120a) may collaborate with a different one of enterprise organization computing devices 120a-120c (e.g., enterprise organization computing devices 120b-120c) and/or consumer computing devices 130a-130c within the virtual workspace. Enterprise organization computing device 120 may influence whether the network connection and/or server connection to the virtual workspace may remain active (e.g., based on whether at least one of enterprise organization computing devices 120a-120c is present within the virtual workspace, based on whether at least one of enterprise organization computing devices 120a-120c terminates access to the virtual workspace, based on whether at least one of enterprise organization computing devices 120a-120c loses the network connection and/or server connection to the virtual workspace, or the like). Consumer computing device 130 may also transmit, to computing platform 110, a plurality of web traffic requests (e.g., requests to establish a server connection and/or network connection with a server pod and a web server assigned to host the virtual workspace). Consumer computing device 130 may receive access to the virtual workspace based on the network connection and/or server connection with the web server. At least one of consumer computing devices 130a-130c (e.g., consumer computing device 130a) may collaborate with a different one of consumer computing devices 130a-130c (e.g., consumer computing devices 130b-130c) and/or enterprise organization computing devices 120a-120c within the virtual workspace.
In some instances, consumer computing device 130 may lose the network connection and/or the server connection to the virtual workspace and may either re-connect to the web server assigned to host the virtual workspace or establish a second server connection and/or network connection with a second web server assigned to host the virtual workspace. Consumer computing device 130 may terminate (e.g., intentionally) access to the virtual workspace. In some instances, consumer computing device 130 may receive a notification indicating termination of access to the virtual workspace based on at least one of enterprise organization computing devices 120a-120c terminating access to the virtual workspace. Each one of enterprise organization computing devices 120a-120c and consumer computing devices 130a-130c may be configured to communicate with computing platform 110 through network 150. Network 150 may include one or more sub-networks (e.g., local area networks (LANs), wide area networks (WANs), or the like). In some arrangements, computer system 100 may include additional computing devices and networks that are not depicted in FIG. 1A, which may also be configured to interact with computing platform 110. Computer system 100 may include a local network configured to interconnect each of the computing devices comprising computing platform 110. Computing platform 110 may be associated with a distinct entity such as an enterprise organization, company, school, government, and the like, and may comprise one or more personal computer(s), server computer(s), hand-held or laptop device(s), multiprocessor system(s), microprocessor-based system(s), set top box(es), programmable user electronic device(s), network personal computer(s) (PC), minicomputer(s), mainframe computer(s), distributed computing environment(s), and the like.
Computing platform 110 may include computing hardware and software that may host various data and applications for performing tasks of the centralized entity and interacting with enterprise organization computing device 120, consumer computing device 130, and/or additional computing devices. Computing platform 110 may receive, from enterprise organization computing device 120, a request to establish a server connection and/or network connection to host the virtual workspace. Computing platform 110 may identify a server pod and a web server with network availability to host the virtual workspace, and may assign the server pod and the web server to host the virtual workspace. Computing platform 110 may monitor the presence (or absence) of at least one of enterprise organization computing devices 120a-120c within the virtual workspace, and may determine whether to deactivate the virtual workspace based on the monitoring. In some instances, computing platform 110 may determine, based on the monitoring, whether to elect a second server pod and/or a second web server to host the virtual workspace. Computing platform 110 may further monitor the performance of network 150, the server pod (or the second server pod), and the web server (or the second web server) assigned to host the virtual workspace. Based on the monitoring, computing platform 110 may determine whether to terminate access to the virtual workspace. Additionally or alternatively, computing platform 110 may receive, from consumer computing device 130, a request to access the virtual workspace. Computing platform 110 may transmit, to consumer computing device 130, a cookie comprising connection details corresponding to the virtual workspace.
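The monitoring decisions described in this section can be summarized as a small dispatch on the enterprise organization computing device's state. The state and action names below are illustrative assumptions, not terms from the disclosure:

```python
# Non-authoritative sketch of the platform's monitoring decision: terminated
# access deactivates the workspace and notifies consumer devices; a lost
# connection triggers either resuming the connection or electing a second
# pod/server; otherwise the existing connections are maintained.

def workspace_action(enterprise_state):
    """enterprise_state: one of 'present', 'lost_connection', 'terminated'."""
    if enterprise_state == "terminated":
        return "deactivate_and_notify"
    if enterprise_state == "lost_connection":
        return "resume_or_elect_new_server"
    return "maintain_connections"
```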
Computing platform 110 may establish a server connection and/or network connection between consumer computing device 130 and the virtual workspace, and may monitor the virtual workspace to determine whether to transmit, to consumer computing device 130, a notification indicating termination of access to the virtual workspace. In some arrangements, computing platform 110 may include and/or be part of enterprise information technology infrastructure and may host a plurality of enterprise applications, enterprise databases, and/or other enterprise resources. Such applications may be executed on one or more computing devices included in computing platform 110 using distributed computing technology and/or the like. In some instances, computing platform 110 may include a relatively large number of servers that may support operations of the enterprise organization, such as a financial institution. Computing platform 110, in this embodiment, may generate a single centralized ledger, which may be stored in a database, for data received from at least one of enterprise organization computing device 120 and/or consumer computing device 130. In some instances, at least one of enterprise organization computing device 120 and/or consumer computing device 130 may be configured to receive and transmit information through particular channels and/or applications associated with computing platform 110. The requests submitted by at least one of enterprise organization computing device 120 and/or consumer computing device 130 may initiate the performance of particular computational functions at computing platform 110, such as the analysis of at least one enterprise organization computing device request to establish a virtual workspace and/or at least one consumer computing device request to access the virtual workspace.
FIG. 1B depicts the components of computing platform 110 that may be used for generating, in real-time or near real-time, a modified web farm framework for routing web traffic based on similarities between web traffic requests, in accordance with one or more aspects described herein. Computing platform 110 may comprise global server load balancer 111, data centers 112a-112b, local server load balancers 113a-113b, server pods 114a-114d, web servers 115a-115h, connection database 116, audit database 117, database 118, and/or processor(s) 119. While FIG. 1B depicts data centers 112a-112b, local server load balancers 113a-113b, server pods 114a-114d, and web servers 115a-115h, more or fewer data centers, local server load balancers, server pods, and web servers may exist within computer system 100. Data centers 112a-112b, local server load balancers 113a-113b, server pods 114a-114d, and web servers 115a-115h are depicted in FIG. 1B for illustration purposes only and are not meant to be limiting. Each computing device within computing platform 110 may contain database 118 and processor(s) 119, which may be stored in memory of the one or more computing devices of computing platform 110. Through execution of computer-readable instructions stored in memory, the computing devices of computing platform 110 may be configured to perform functions of the centralized entity and store the data generated during the performance of such functions in database 118. The computing devices within computing platform 110 may work together to generate a modified web farm framework. A web farm framework may comprise a data routing protocol wherein communication between a plurality of computing devices may be routed to at least one web server of a plurality of web servers. To do so, the web farm framework may use a backplane to facilitate communication between the plurality of web servers.
In particular, the backplane may receive a volume of communication (e.g., messages, or the like) to be transmitted between at least two computing devices, parse each received communication, identify at least one recipient web server, and transmit the received communication(s) to the at least one recipient web server. However, a modified web farm framework may be configured to facilitate communication between the plurality of computing devices using a plurality of web server clusters (e.g., server pods) and/or a plurality of load balancers in place of the backplane. The modified web farm framework may determine an overall network processing capacity and/or a current processing capacity for each server pod and/or each web server therein. The modified web farm framework may use the determined processing capacities to assign at least one virtual workspace to a particular server pod and a particular web server. Based on receiving a web request to establish the virtual workspace (e.g., from enterprise organization computing device 120, or the like), the modified web farm framework may elect the server pod and the web server with the greatest current processing capacity to host the virtual workspace. Based on receiving a web request to access the virtual workspace (e.g., from consumer computing device 130, or the like), the modified web farm framework may route consumer computing device 130 to the server pod and the web server assigned to host the virtual workspace. Global server load balancer 111 may receive, from enterprise organization computing device 120, a request to establish a server connection and/or network connection to host the virtual workspace. In some instances, global server load balancer 111 may receive, from consumer computing device 130, a request to access the virtual workspace. Global server load balancer 111 may compare a current processing availability associated with data centers 112a-112b to identify a data center that may host the virtual workspace.
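The election described above, choosing the server pod and web server with the greatest current processing capacity, can be sketched as a search over per-pod, per-server capacities. The data layout and names below are assumptions for illustration:

```python
# Hypothetical sketch of host election in the modified web farm framework:
# among all server pods and their web servers, pick the (pod, server) pair
# with the most free processing capacity to host a new virtual workspace.

def elect_host(pods):
    """pods: dict pod_id -> dict server_id -> free processing capacity.
    Returns the (pod_id, server_id) with the most free capacity, or None."""
    best = None
    for pod_id, servers in pods.items():
        for server_id, free in servers.items():
            if best is None or free > best[2]:
                best = (pod_id, server_id, free)
    return (best[0], best[1]) if best else None
```

Routing all later access requests for the same workspace to the elected pair is what makes the pods a substitute for the backplane: collaborating devices meet at a common web server instead of having messages relayed between servers.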
In some instances, global server load balancer 111 may use the current processing availability associated with data centers 112a-112b to identify a data center that may handle the consumer computing device request to access the virtual workspace. Global server load balancer 111 may transmit the enterprise organization computing device request and/or the consumer computing device request to the identified data center. Data centers 112a-112b may receive, from global server load balancer 111, at least one of the enterprise organization computing device request and/or the consumer computing device request. Data centers 112a-112b may further process the received request using at least one of local server load balancers 113a-113b, server pods 114a-114d, and/or web servers 115a-115h. Local server load balancers 113a-113b may parse the at least one request received by data centers 112a-112b. Local server load balancers 113a-113b may also determine a current processing availability associated with each of server pods 114a-114d and/or web servers 115a-115h. Local server load balancers 113a-113b may use the current processing availabilities to identify at least one of server pods 114a-114d and/or web servers 115a-115h with processing availability to host the virtual workspace. In some instances, local server load balancers 113a-113b may transmit the at least one request to at least one of server pods 114a-114d and/or web servers 115a-115h. Server pods 114a-114d may comprise clusters of web servers 115a-115h, wherein each cluster may correspond to a subset of web servers 115a-115h. Server pods 114a-114d may receive, from local server load balancers 113a-113b, the at least one request. In some instances, server pods 114a-114d may also receive, from local server load balancers 113a-113b, an indication of at least one of web servers 115a-115h that may have processing availability to host the virtual workspace.
Server pods114a-114dmay transmit the enterprise organization computing device request and/or the consumer computing device request to at least one of web servers115a-115h. Web servers115a-115hmay receive, from server pods114a-114d, the enterprise organization computing device request and/or the consumer computing device request. Based on parsing the enterprise organization computing device request and/or the consumer computing device request, web servers115a-115hmay establish a plurality of server connections and/or network connections between the virtual workspace and at least one of enterprise organization computing devices120a-120cand/or consumer computing devices130a-130c. Web servers115a-115hmay monitor the plurality of server connections and/or network connections and may determine, based on the monitoring, whether at least one server connection and/or network connection should be modified (e.g., terminated, hosted by a different web server, or the like).

Generating a Modified Web Farm Framework for Routing Web Traffic Based on Similarities Between Web Traffic Requests/Accessing and Interacting with Requested Web Content Using a Modified Web Farm Framework

FIGS.2A-2Cdepict an illustrative event sequence for generating, in real-time or near real-time, a modified web farm framework for routing web traffic based on similarities between the web traffic requests, in accordance with one or more aspects described herein. While aspects described with respect toFIGS.2A-2Cinclude the evaluation of a single enterprise organization computing device request (e.g., a request to establish a server connection and/or network connection to host a virtual workspace, or the like), a plurality of enterprise organization computing device requests may be received and evaluated (e.g., in parallel) without departing from the present disclosure.
One or more processes performed inFIGS.2A-2Cmay be performed in real-time or near real-time and one or more steps or processes may be added, omitted, or performed in a different order, without departing from the present disclosure. Referring toFIG.2A, at step201, enterprise organization computing device120may generate a request to establish a server connection and/or network connection to host a virtual workspace. A virtual workspace may correspond to a web-based environment (e.g., an enterprise organization webpage, an enterprise organization web portal, a consumer portal associated with an enterprise organization, or the like) within which enterprise organization computing device120may collaborate with at least one additional computing device (e.g., consumer computing device130, or the like). The server connection and/or network connection associated with the virtual workspace may identify a server pod (e.g., a cluster of web servers within a web farm framework, or the like) and a web server within the server pod with network availability to host the virtual workspace. In some instances, the request to establish the server connection and/or network connection may comprise a unique identifier associated with the virtual workspace (e.g., a webpage address that corresponds to the enterprise organization webpage, an Internet address that corresponds to the enterprise organization web portal, an Internet address that corresponds to the consumer portal, or the like). Enterprise organization computing device120may transmit the request to global server load balancer111of computing platform110. At step202, computing platform110(e.g., via global server load balancer111) may receive the request from enterprise organization computing device120and may parse the request to identify a data center (e.g., one of data centers112a-112b, or the like) that may have network availability to host the server connection and/or network connection to the virtual workspace.
The data center may comprise at least one local server load balancer (e.g., one of local server load balancers113a-113b, or the like), at least one server pod (e.g., at least one of server pods114a-114d, or the like), and/or at least one web server (e.g., one of web servers115a-115h, or the like). Computing platform110(e.g., via global server load balancer111) may consider the network availability of server pods114a-114dand/or web servers115a-115hto determine the network availability of data centers112a-112b. Computing platform110(e.g., via global server load balancer111) may compare the network availability of each data center (e.g., data centers112a-112b, or the like) and may identify a data center that may host the server connection and/or network connection to the virtual workspace (e.g., a data center with the greatest network availability, or the like). Based on determining data center112amay have network availability to host the virtual workspace (e.g., may have the greatest network availability to host the server connection and/or network connection to the virtual workspace, or the like), computing platform110(e.g., via global server load balancer111) may route the request to data center112a. At step203, computing platform110may identify a server pod (e.g., at least one of server pods114a-114bof data center112a, or the like), and/or a web server (e.g., at least one of web servers115a-115dof data center112a, or the like) that may host the virtual workspace. To do so, computing platform110(e.g., via local server load balancer113a) may determine a network capacity and a network availability of each server pod and each web server therein.
The network capacity may indicate a maximum operational functionality of each server pod and each web server based on a volume of network traffic that each server pod and each web server may receive (e.g., an operational functionality measure beyond which the server pod and/or web server might not have sufficient network processing resources to host the virtual workspace, a maximum volume of network traffic that each server pod may handle, a maximum volume of network traffic that each web server may handle, or the like). The network availability may indicate a current measure of the operational functionality of each server pod and each web server (e.g., a difference between the maximum volume of network traffic associated with each server pod and a current volume of network traffic associated with each server pod, a difference between the maximum volume of network traffic associated with each web server and a current volume of network traffic associated with each web server, or the like). Computing platform110(e.g., via local server load balancer113a) may compare the network capacity (e.g., the maximum volume of network traffic, or the like) of each server pod to the network availability (e.g., the difference between the maximum volume of network traffic and the current volume of network traffic, or the like) of each server pod to identify a server pod (e.g., server pod114a, or the like) with the greatest network availability to host the virtual workspace (e.g., a server pod associated with the least volume of current network traffic, or the like). 
Based on determining server pod114amay be associated with the greatest network availability to host the virtual workspace, computing platform110(e.g., via local server load balancer113a) may compare the network capacity (e.g., the maximum volume of network traffic, or the like) of each web server therein to the network availability (e.g., the difference between the maximum volume of network traffic and the current volume of network traffic, or the like) of each web server therein to identify the web server (e.g., web server115a, or the like) with the greatest network availability to host the virtual workspace. In some instances, computing platform110(e.g., via local server load balancer113a) may determine that more than one server pod and/or web server may have network availability to host the virtual workspace and, as such, may elect the server pod and the web server to host the virtual workspace based on considering criteria received from the enterprise organization (e.g., a processing history of each server pod, a processing history of each web server, or the like). Additionally or alternatively, computing platform110(e.g., via local server load balancer113a) may determine that the server pods and/or the web servers associated with local server load balancer113amight not have network availability to host the virtual workspace and, as such, may transmit a notification to global server load balancer111. The notification may request that a different local server load balancer (e.g., local server load balancer113b, or the like) handle the request from enterprise organization computing device120. At step204, computing platform110(e.g., via local server load balancer113a) may route the request to the identified web server (e.g., web server115a, or the like) associated with the identified server pod with network availability to host the virtual workspace (e.g., server pod114a, or the like).
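The two-stage election described above (the server pod with the greatest availability, then the web server with the greatest availability within that pod) may be sketched as follows. This is a minimal illustrative sketch; all function names, identifiers, and data shapes are assumptions for illustration rather than details from this disclosure.

```python
# Hypothetical sketch: elect a server pod and web server by greatest network
# availability, where availability is the difference between the maximum
# volume of network traffic a node may handle and its current volume.

def availability(node):
    # Network availability = network capacity minus current network traffic.
    return node["capacity"] - node["current_traffic"]

def elect_host(pods):
    # First elect the pod with the greatest availability, then the web
    # server within that pod with the greatest availability.
    pod = max(pods, key=availability)
    server = max(pod["web_servers"], key=availability)
    return pod["id"], server["id"]

pods = [
    {"id": "114a", "capacity": 100, "current_traffic": 30, "web_servers": [
        {"id": "115a", "capacity": 50, "current_traffic": 10},
        {"id": "115b", "capacity": 50, "current_traffic": 40},
    ]},
    {"id": "114b", "capacity": 100, "current_traffic": 80, "web_servers": [
        {"id": "115c", "capacity": 50, "current_traffic": 5},
        {"id": "115d", "capacity": 50, "current_traffic": 45},
    ]},
]
best = elect_host(pods)  # pod 114a (availability 70) and web server 115a
```

In this sketch a tie-break on enterprise-supplied criteria (e.g., processing history) could replace the plain `max` when more than one candidate has equal availability.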
At step205, computing platform110may establish the server connection and/or network connection to the virtual workspace. To do so, computing platform110(e.g., via web server115a) may parse the request to extract at least one unique identifier associated with the virtual workspace (e.g., a workspaceID, a webpage address that corresponds to the enterprise organization webpage, an Internet address that corresponds to the enterprise organization web portal, an Internet address that corresponds to the consumer portal, or the like). Computing platform110(e.g., via web server115a) may use the at least one unique identifier to launch the virtual workspace. To launch the virtual workspace, computing platform110(e.g., via web server115a) may assign a unique network address to the virtual workspace (e.g., an IP address, an Internet address, or the like) such that the virtual workspace may be located within network150. Computing platform110(e.g., via web server115a) may also publish the virtual workspace such that the unique network address associated with the virtual workspace may be identified by at least one computing device within computing platform110(e.g., thereby allowing the at least one computing device within computing platform110to locate and/or request access to the virtual workspace within network150, or the like). Computing platform110(e.g., via web server115a) may store, within connection database116, the at least one unique identifier and/or unique network address associated with the virtual workspace. 
Connection database116may comprise connection details associated with the virtual workspace (e.g., unique identifier that may be used to identify the virtual workspace, unique identifiers that may be used to launch the virtual workspace, unique identifiers that correspond to the server pod and the web server therein that may host the virtual workspace, a unique identifier that corresponds to the local server load balancer that hosts the server pod and the web server therein, or the like). The data within connection database116may be stored dynamically, such that the data may be modified based on at least one change to the server connection and/or network connection to the virtual workspace (e.g., based on determining at least one of the server pod and/or the web server assignment should be changed to a different server pod and/or different web server, or the like). Access to connection database116may differ depending on the computing device that is requesting access (e.g., a hierarchy of accessibility). Web servers115a-115hmay be associated with a first level of accessibility (e.g., a least restrictive level of accessibility). Web servers115a-115hmay perform functions on the data stored within connection database116(e.g., access the data, add data, remove data, modify data, or the like). The remaining computing devices within computing platform110may be associated with a second level of accessibility (e.g., a more restrictive level of accessibility than the first level of accessibility). The remaining computing devices within computing platform110may access the data stored within connection database116, but might not be permitted to add, remove, and/or modify the data stored within connection database116. Furthermore, computing platform110(e.g., via web server115a) may store, within audit database117, the workspaceID that corresponds to the virtual workspace.
The workspaceID may correspond to a unique identifier that describes the virtual workspace (e.g., an enterprise organization number associated with the virtual workspace, an enterprise organization curated name associated with the virtual workspace, or the like). The workspaceID might not be repeated and/or shared among a plurality of virtual workspaces (e.g., each workspaceID may be used once and may correspond to a single virtual workspace, or the like). Audit database117may comprise a log of each virtual workspace established by each of web servers115a-115h(e.g., virtual workspaces that might not be accessible to computing devices, virtual workspaces to which an enterprise organization computing device terminated access, virtual workspaces to which a consumer computing device terminated access, virtual workspaces that may be accessible to all computing devices, or the like). The data within audit database117may comprise static data (e.g., the data might not change dynamically to mirror modifications to the server connection and/or network connection to the virtual workspace, or the like). Access to audit database117may differ depending on the computing device that is requesting access (e.g., a hierarchy of accessibility). Web servers115a-115hmay be associated with a first level of accessibility (e.g., a least restrictive level of accessibility). Web servers115a-115hmay perform functions on the data stored within audit database117(e.g., access the data, add data, or the like). The remaining computing devices within computing platform110may be associated with a second level of accessibility (e.g., a more restrictive level of accessibility than the first level of accessibility). The remaining computing devices within computing platform110may access the data stored within audit database117, but might not be permitted to add data to audit database117. 
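The two-level accessibility hierarchy described above for connection database116and audit database117may be sketched as follows. The class, method, and device names are illustrative assumptions; the sketch only shows that web servers occupy the least restrictive level (full access for the dynamic connection database, append-only for the static audit database) while remaining devices are read-only.

```python
# Hypothetical sketch of a two-tier accessibility hierarchy for the
# connection database (dynamic) and the audit database (static, append-only).

class TieredStore:
    def __init__(self, writers, append_only=False):
        self.data = {}
        self.writers = set(writers)      # first (least restrictive) level of accessibility
        self.append_only = append_only   # audit database: add data, but never remove/modify

    def read(self, device, key):
        # Every computing device may access the stored data.
        return self.data.get(key)

    def add(self, device, key, value):
        if device not in self.writers:
            raise PermissionError(f"{device} may not add data")
        self.data[key] = value

    def remove(self, device, key):
        if device not in self.writers or self.append_only:
            raise PermissionError(f"{device} may not remove data")
        del self.data[key]

web_servers = {"115a", "115b"}
connection_db = TieredStore(writers=web_servers)               # dynamic connection details
audit_db = TieredStore(writers=web_servers, append_only=True)  # static workspaceID log

connection_db.add("115a", "workspace-1", {"pod": "114a"})
audit_db.add("115a", "workspaceID-1", "workspace record")
```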
At step206, computing platform110(e.g., via web server115a) may generate a cookie comprising server and/or network connection details that correspond to the virtual workspace. The cookie may indicate the server pod and the web server to which local server load balancer113amay assign the virtual workspace. The connection details within the cookie may comprise a unique identifier that corresponds to the local server load balancer (e.g., local server load balancer113a, or the like) that may host the server pod and the web server assigned to host the virtual workspace, a unique identifier that corresponds to the server pod assigned to host the virtual workspace, a unique identifier that corresponds to the web server assigned to host the virtual workspace, a network address that corresponds to the virtual workspace, or the like. As discussed in detail below, the details within the cookie may be used to identify the network location of the virtual workspace and/or to identify the server pod and the web server that may receive subsequent requests (e.g., from consumer computing devices130a-130c, or the like) to connect to the virtual workspace. Web server115amay store the cookie within connection database116. At step207, computing platform110(e.g., via web server115a) may establish a server connection and/or network connection to the virtual workspace and may share the server connection and/or network connection with enterprise organization computing device120. To do so, computing platform110(e.g., via web server115a) may parse connection database116to identify a network address associated with the virtual workspace. In some instances, computing platform110(e.g., via web server115a) may transmit, to enterprise organization computing device120, the network address such that enterprise organization computing device120may access the virtual workspace at a time in the future. 
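The connection-details cookie generated at step206may be sketched as below. The field names and serialization are illustrative assumptions; the sketch only shows a cookie recording the local server load balancer, server pod, web server, and network address assigned to the virtual workspace.

```python
# Hypothetical sketch of the step 206 cookie: server/network connection
# details identifying which balancer, pod, and web server host the workspace.
import json

def generate_connection_cookie(balancer_id, pod_id, server_id, network_address):
    # Field names are assumptions; each value is a unique identifier or the
    # network address assigned to the virtual workspace.
    return json.dumps({
        "local_server_load_balancer": balancer_id,
        "server_pod": pod_id,
        "web_server": server_id,
        "network_address": network_address,
    })

cookie = generate_connection_cookie("113a", "114a", "115a", "10.0.0.7")
```

Such a cookie could then be stored in the connection database and parsed later to route subsequent requests to the assigned web server.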
Additionally or alternatively, transmitting the network address to enterprise organization computing device120may cause the virtual workspace to be displayed via a display device on enterprise organization computing device120. As such, enterprise organization computing device120may use the virtual workspace to collaborate with at least one additional computing device (e.g., at least a different one of enterprise organization computing devices120a-120c, consumer computing devices130a-130c, or the like). At step208, consumer computing device130may transmit, to computing platform110(e.g., to global server load balancer111), a request to access the virtual workspace. The consumer request may comprise at least one unique identifier associated with the virtual workspace (e.g., a workspaceID, a webpage address that corresponds to the enterprise organization webpage, an Internet address that corresponds to the enterprise organization web portal, an Internet address that corresponds to the consumer portal, or the like). Referring toFIG.2B, at step209, computing platform110(e.g., via web server115a) may establish a server connection and/or network connection between the virtual workspace and consumer computing device130. As discussed in detail in connection withFIGS.3A-3B, computing platform110(e.g., via global server load balancer111) may receive and parse the consumer request. Based on parsing the consumer request, global server load balancer111may identify a unique identifier that corresponds to the virtual workspace and may use the unique identifier to parse connection database116. Global server load balancer111may identify, within connection database116, the cookie comprising the connection details associated with the virtual workspace and may transmit the consumer request to the web server indicated in the cookie (e.g., web server115a, or the like). 
Computing platform110(e.g., via web server115a) may receive the consumer request from global server load balancer111, and may parse the consumer request and connection database116to gather further details associated with the virtual workspace (e.g., the network address that corresponds to the virtual workspace, or the like). Web server115amay use the details to establish a server connection and/or network connection between the virtual workspace and consumer computing device130. To do so, web server115amay transmit the network address to consumer computing device130and, in some instances, may cause the virtual workspace to be displayed via a display device on consumer computing device130. As such, consumer computing device130may use the virtual workspace to collaborate with at least one additional computing device (e.g., at least one of enterprise organization computing devices120a-120c, a different one of consumer computing devices130a-130c, or the like). At step210, enterprise organization computing device120may lose the server connection and/or network connection to the virtual workspace and may initiate a re-connection loop to access the virtual workspace. In some instances, the loss of the server connection and/or network connection may be caused by a glitch within network150(e.g., a webpage timeout, a web browser timeout, or the like). Additionally or alternatively, the loss of the server connection and/or network connection may be based on reduced functionality of the web server that hosts the virtual workspace based on software and/or hardware complications (e.g., reduced functionality of web server115a, loss of power to web server115awhich may result in loss of functionality of web server115a, or the like). The re-connection loop may comprise a plurality of steps that, when executed, may re-establish the server connection and/or network connection to the virtual workspace, as discussed in detail below.
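The cookie-based routing of a consumer request (steps208-209) may be sketched as follows. The lookup structure and names are illustrative assumptions; the sketch only shows parsing the workspace's unique identifier from the request, locating its connection details, and returning the assigned web server and network address.

```python
# Hypothetical sketch of cookie-based routing: the global server load
# balancer parses the consumer request, looks up the connection details by
# the workspace's unique identifier, and routes to the assigned web server.

connection_database = {
    "workspaceID-42": {"web_server": "115a", "network_address": "10.0.0.7"},
}

def route_consumer_request(request):
    workspace_id = request["workspace_id"]          # unique identifier in the request
    details = connection_database.get(workspace_id) # cookie/connection details
    if details is None:
        return None                                 # no such virtual workspace
    # Route to the assigned web server, which may transmit the network
    # address so the consumer device can display the workspace.
    return details["web_server"], details["network_address"]

result = route_consumer_request({"workspace_id": "workspaceID-42"})
```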
To initiate the re-connection loop, enterprise organization computing device120may transmit, to computing platform110, a re-connection request (e.g., a request to re-establish the server connection and/or network connection to the virtual workspace, or the like). At step211, computing platform110(e.g., local server load balancer113aassociated with the server pod and the web server that host the virtual workspace, or the like) may receive a network alert (e.g., from network150, or the like) indicating that enterprise organization computing device120lost the server connection and/or network connection to the virtual workspace. Computing platform110may continuously identify the computing devices associated with at least one server connection and/or network connection to at least one virtual workspace, and may log the identified computing devices within connection database116. Based on detecting at least one change in the computing devices associated with the at least one virtual workspace, network150may transmit the network alert to the local server load balancer associated with the virtual workspace (e.g., local server load balancer113a). The network alert may identify at least one computing device that may have established a connection with (or disconnected from) at least one virtual workspace. At step212, the computing platform110may identify a server pod (e.g., one of server pods114a-114d, or the like) and a web server (e.g., one of web servers115a-115h, or the like) that may be used to re-establish the server connection and/or network connection to the virtual workspace.
To do so, computing platform110(e.g., via local server load balancer113a) may compare the network capacity of each corresponding server pod to the current network availability of each corresponding server pod to identify a second server pod with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace (e.g., one of server pods114a-114d, or the like). In some instances, and based on the comparison, computing platform110may determine that the server pod indicated in the cookie (e.g., server pod114a) may comprise network capacity to re-establish the server connection and/or network connection to the virtual workspace. Therefore, in some instances, computing platform110may determine, based on the comparison, that the second server pod with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace may be the server pod assigned to host the virtual workspace, as indicated in the cookie. Based on identifying the second server pod that may be associated with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace, computing platform110(e.g., via local server load balancer113a) may compare the network capacity of each web server therein to the current network availability of each web server therein to identify a second web server with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace (e.g., one of web servers115a-115h). In some instances, and based on the comparison, computing platform110may determine that the web server indicated in the cookie (e.g., web server115a) may comprise the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace.
Therefore, in some instances, computing platform110may determine, based on the comparison, that the second web server with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace may be the web server assigned to host the virtual workspace, as indicated in the cookie. In some instances, computing platform110(e.g., via local server load balancer113a) may determine that the server pods and/or the web servers associated with local server load balancer113amight not have network availability to re-establish the server connection and/or network connection to the virtual workspace and, as such, may transmit, to global server load balancer111, a request for a different local server load balancer (e.g., local server load balancer113b, or the like) to handle the re-connection request from enterprise organization computing device120. At step213, computing platform110(e.g., via local server load balancer113a) may compare the second server pod and the second web server that may have network availability to re-establish the server connection and/or network connection to the virtual workspace to the server pod and the web server assigned to host the virtual workspace (e.g., server pod114aand web server115aindicated in the cookie, or the like). To do so, local server load balancer113amay parse connection database116to identify the unique identifier associated with the second server pod and/or the unique identifier associated with the second web server. Local server load balancer113amay compare the unique identifier associated with the second server pod and the unique identifier associated with the second web server to the unique identifier associated with the server pod indicated in the cookie and the unique identifier associated with the web server indicated in the cookie.
Local server load balancer113amay determine, based on comparing the unique identifiers, whether the server pod and the web server assigned to host the virtual workspace correspond to (e.g., are the same as, or the like) the second server pod and the second web server, respectively. Local server load balancer113amay use the comparison to determine whether to change the web server assignment indicated in the cookie. If, at step213, the computing platform110(e.g., via local server load balancer113a) determines that the second server pod and the second web server may correspond to (e.g., are the same as, or the like) the server pod and the web server assigned to host the virtual workspace, then, at step214a, computing platform110may re-establish the server connection and/or network connection to the virtual workspace via the second server pod and the second web server, in accordance with the processes described herein. For example, based on the comparison described in step213, computing platform110(e.g., via local server load balancer113a) may determine that server pod114aand web server115amay have network availability to re-establish the server connection and/or network connection to the virtual workspace. Local server load balancer113amay further determine that server pod114aand web server115amay have been assigned to host the server connection and/or network connection to the virtual workspace (e.g., as indicated in the cookie, or the like). As such, local server load balancer113amay instruct server pod114aand web server115ato re-establish the server connection and/or network connection to the virtual workspace. Based on determining the second server pod and the second web server may correspond to the server pod and the web server that were assigned to host the server connection and/or network connection to the virtual workspace, computing platform110(e.g., via web server115a) may determine that the connection details within the cookie might not need an update.
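The step213comparison of unique identifiers against the cookie may be sketched as follows. Names and return values are illustrative assumptions; the sketch only shows the branch between re-establishing on the assigned host (step214a, cookie unchanged) and deferring to an authorization check when the hosts differ (step214b).

```python
# Hypothetical sketch of the step 213 decision: compare the second (most
# available) pod and web server to the assignment recorded in the cookie.

def reconnect_decision(cookie, second_pod_id, second_server_id):
    same_assignment = (cookie["server_pod"] == second_pod_id and
                       cookie["web_server"] == second_server_id)
    if same_assignment:
        # Step 214a: re-establish via the assigned pod/server; the
        # connection details in the cookie need no update.
        return "re-establish", cookie
    # Step 214b: the hosts differ; an authorization check determines
    # whether the cookie's host assignment may be modified.
    return "check-authorization", cookie

cookie = {"server_pod": "114a", "web_server": "115a"}
action, _ = reconnect_decision(cookie, "114a", "115a")
action2, _ = reconnect_decision(cookie, "114b", "115c")
```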
However, if, at step213, computing platform110determines that the second server pod and the second web server might not correspond to (e.g., may be different from, or the like) the server pod and the web server assigned to host the virtual workspace, then, at step214b, the computing platform (e.g., via local server load balancer113a) may determine whether enterprise organization computing device120is authorized to modify a host web server (e.g., the web server assigned to host the server connection and/or network connection to the virtual workspace, or the like). For example, based on the comparison described in step213, computing platform110(e.g., via local server load balancer113a) may determine that server pod114band web server115cmay have network availability to re-establish the server connection and/or network connection to the virtual workspace. Local server load balancer113amay further determine that server pod114aand web server115amay have been assigned to host the server connection and/or network connection to the virtual workspace (e.g., as indicated in the cookie, or the like). Based on determining the second server pod and the second web server might not correspond to (e.g., might be different from, or the like) the server pod and the web server indicated in the cookie, computing platform110(e.g., via local server load balancer113a) may further analyze enterprise organization computing device120to determine whether the cookie may be updated (e.g., whether enterprise organization computing device120may modify the host web server). To do so, computing platform110(e.g., via local server load balancer113a) may consider criteria received from the enterprise organization (e.g., a level of clearance associated with enterprise organization computing device120, a rank within the enterprise organization associated with enterprise organization computing device120, authorization credentials associated with enterprise organization computing device120, or the like). 
In some instances, local server load balancer113amay receive, from the enterprise organization, a list comprising at least one enterprise organization computing device that may be authorized to modify the host web server associated with the virtual workspace. If, at step214b, computing platform110determines that enterprise organization computing device120may be authorized to modify the host web server associated with the virtual workspace, then, referring toFIG.2Cand at step215a, computing platform110(e.g., via web server115c) may re-establish the server connection and/or network connection to the virtual workspace. To do so, web server115cmay parse connection database116to extract the at least one unique identifier associated with the virtual workspace. The second web server may use the at least one unique identifier to re-launch the virtual workspace and/or re-assign a unique network address to the virtual workspace (e.g., an IP address, an Internet address, or the like) such that the virtual workspace may be located within network150. Web server115cmay publish the virtual workspace such that the unique network address associated with the virtual workspace may be identified by at least one computing device within computing platform110. Publishing the virtual workspace may cause the virtual workspace to be displayed via a display device of enterprise organization computing device120(e.g., in response to the request to re-establish the server connection and/or network connection to the virtual workspace, or the like). Computing platform110(e.g., via web server115c) may also update the cookie that corresponds to the virtual workspace. 
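The authorization check (step214b) and the authorized re-establishment with a cookie update (step215a) may be sketched together as below. The authorized-device list, function name, and cookie fields are illustrative assumptions.

```python
# Hypothetical sketch of steps 214b/215a-215b: consult the enterprise-
# supplied list of devices authorized to modify the host web server; if
# authorized, update the cookie to the new balancer, pod, and web server.

authorized_devices = {"120a", "120b"}   # list received from the enterprise organization

def handle_host_change(device_id, cookie, new_balancer, new_pod, new_server):
    if device_id not in authorized_devices:
        # Step 215b: reject the request to re-establish the connection.
        return "rejected", cookie
    # Step 215a: re-establish on the second pod/server and update the
    # cookie's unique identifiers (original cookie left intact here).
    updated = dict(cookie, local_server_load_balancer=new_balancer,
                   server_pod=new_pod, web_server=new_server)
    return "re-established", updated

cookie = {"local_server_load_balancer": "113a",
          "server_pod": "114a", "web_server": "115a"}
status, new_cookie = handle_host_change("120a", cookie, "113a", "114b", "115c")
```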
In particular, web server115cmay update the unique identifiers within the cookie to include at least an updated unique identifier that corresponds to the local server load balancer assigned to host the virtual workspace, an updated unique identifier that corresponds to the server pod assigned to host the virtual workspace, and/or an updated unique identifier that corresponds to the web server assigned to host the virtual workspace. In instances where computing platform110(e.g., via local server load balancer113a) requested that the re-connection request be handled by a different local server load balancer (e.g., local server load balancer113b, or the like), computing platform110(e.g., via local server load balancer113b) may determine that the second server pod and the second web server with network availability to re-establish the server connection and/or network connection to the virtual workspace may be associated with local server load balancer113b(e.g., server pod114c, web server115e, or the like). Computing platform110(e.g., via web server115e) may store, within the cookie, a unique identifier that corresponds to the different local server load balancer. Web server115emay also update the network address that corresponds to the virtual workspace in instances where the second web server assigns a network address that may be different from the network address originally assigned to host the virtual workspace. Web server115emay store the updated cookie within connection database116. In instances where the network address that corresponds to the virtual workspace may be different from the network address originally assigned to host the virtual workspace, web server115emay add the network address that corresponds to the virtual workspace to audit database117.
Alternatively, if, at step214b, computing platform110determines that enterprise organization computing device120might not be authorized to modify the host web server associated with the virtual workspace, then, at step215b, computing platform110(e.g., via web server115c) may reject the request to re-establish the server connection and/or network connection to the virtual workspace. By doing so, web server115cmay indicate that the second server pod and the second web server (e.g., server pod114band web server115e, or the like) might not be assigned to re-establish the server connection and/or network connection to the virtual workspace. Computing platform110(e.g., via web server115c) may transmit a notification to enterprise organization computing device120indicating rejection of the request to re-establish the server connection and/or network connection to the virtual workspace. In some instances, the notification may indicate at least one reason why the request to re-establish the server connection and/or network connection to the virtual workspace was rejected (e.g., enterprise organization computing device120might not be authorized to modify the host web server, or the like). In such instances, enterprise organization computing device120may continuously re-submit the re-connection request (e.g., for a pre-determined amount of time, until a server pod and web server with network availability to re-establish the server connection and/or network connection to the virtual workspace may be identified, until the server pod and the web server indicated in the cookie have network availability to re-establish the server connection and/or network connection to the virtual workspace, or the like). At step216, enterprise organization computing device120may disconnect from the virtual workspace and, by doing so, may terminate the server connection and/or network connection to the virtual workspace. 
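The continuous re-submission described above (retrying for a pre-determined amount of time until a host with availability is identified) can be sketched as follows. `submit_request` is a hypothetical stand-in for the actual transmission to computing platform110.

```python
# Illustrative retry loop: resubmit a re-connection request until a host with
# availability accepts it or a pre-determined window elapses.
import time

def retry_reconnect(submit_request, max_seconds=30.0, interval=0.01,
                    clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + max_seconds
    while clock() < deadline:
        accepted, reason = submit_request()
        if accepted:
            return True, reason
        sleep(interval)
    return False, "timed out waiting for an available server pod / web server"

# Example: the third attempt succeeds once capacity frees up.
attempts = iter([(False, "no availability"), (False, "no availability"),
                 (True, "assigned web server")])
ok, detail = retry_reconnect(lambda: next(attempts), max_seconds=5.0)
```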
The server connection and/or network connection may remain active while enterprise organization computing device120is present within the virtual workspace since enterprise organization computing device120may be a virtual workspace leader. As such, the absence of the virtual workspace leader (e.g., at least one of enterprise organization computing devices120a-120c, or the like) may result in a soft deletion of the virtual workspace from network150. However, the presence (or absence) of consumer computing device130within the virtual workspace might not affect the server connection and/or network connection of the virtual workspace since consumer computing device130may be a virtual workspace follower. As such, the presence (or absence) of the virtual workspace follower (e.g., at least one of consumer computing devices130a-130c, or the like) might not result in the soft deletion of the virtual workspace from network150. The soft deletion of the virtual workspace may indicate that computing devices might not be permitted to access the virtual workspace, but a record of the virtual workspace may exist within the running log of virtual workspaces within audit database117. At step217, the computing platform110(e.g., via web server115c) may determine whether the server connection and/or network connection between enterprise organization computing device120and the virtual workspace was lost due to a software and/or hardware failure within network150(e.g., as opposed to intentional termination of the server connection and/or network connection by enterprise organization computing device120, or the like). 
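The leader/follower rule above can be sketched as follows: the workspace is soft deleted only when no leader remains, and the soft deletion blocks access while preserving a record in the running log. All names are illustrative, not part of the disclosure.

```python
# Minimal sketch of the leader/follower rule for soft deletion.
def should_soft_delete(participants):
    """Soft-delete when no participant has the 'leader' role."""
    return not any(p["role"] == "leader" for p in participants)

def soft_delete(workspace, audit_log):
    workspace["active"] = False          # no device may access it anymore...
    audit_log.append(                    # ...but a record survives in the log
        {"workspace_id": workspace["id"], "event": "soft_deleted"})

audit_log = []
workspace = {"id": "ws-1", "active": True}
# A follower leaving does not trigger deletion while a leader remains:
assert not should_soft_delete([{"role": "leader"}])
# The last leader leaving does:
if should_soft_delete([{"role": "follower"}]):
    soft_delete(workspace, audit_log)
```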
Enterprise organization computing device120may lose the server connection and/or network connection to the virtual workspace based on at least one network issue associated with at least one webpage that may be displayed within the virtual workspace and/or at least one web browser that may be used in the virtual workspace (e.g., a webpage refresh, a webpage timeout, a web browser timeout, or the like). Enterprise organization computing device120may terminate (e.g., intentionally, or the like) the server connection and/or network connection to the virtual workspace (e.g., based on determining a collaboration between a different one of enterprise organization computing devices120a-120cand/or at least one of consumer computing devices130a-130cmay have been completed, determining consumer computing devices130a-130cterminated access to the virtual workspace, or the like). If, at step217, computing platform110(e.g., via web server115c) determines that the server connection and/or network connection between enterprise organization computing device120and the virtual workspace might not have been lost (e.g., may have been intentionally terminated by enterprise organization computing device120, or the like), then, at step218a, computing platform110(e.g., via web server115c) may monitor the virtual workspace to determine whether enterprise organization computing device120resumes access to the virtual workspace. In some instances, enterprise organization computing device120may resume access to the virtual workspace and the second web server may continue to host the server connection and/or network connection to the virtual workspace. However, in some instances, enterprise organization computing device120might not resume access and, as such, computing platform110(e.g., via web server115c) may compare an amount of time that enterprise organization computing device120lost the connection to a timeout period, as discussed in detail below.
If, at step217, computing platform110(e.g., via web server115c) determines that the server connection and/or network connection between enterprise organization computing device120and the virtual workspace may have been lost (e.g., might not have been intentionally terminated by enterprise organization computing device120, or the like), then, at step218b, computing platform110(e.g., via web server115c) may determine whether the timeout period passed. A timeout threshold value may correspond to a pre-determined amount of time (e.g., determined by the enterprise organization, or the like) during which enterprise organization computing device120may be expected to resume access to the virtual workspace (e.g., an amount of time that enterprise organization computing device120may need to complete the webpage refresh, an amount of time that enterprise organization computing device120may need to recover from the webpage timeout, an amount of time that enterprise organization computing device120may need to recover from the web browser timeout, or the like). Web server115cmay monitor an amount of time since the server connection and/or network connection between enterprise organization computing device120and the virtual workspace was interrupted. Web server115cmay compare the amount of time to the timeout threshold value to determine whether the timeout period has passed. Computing platform110(e.g., via web server115c) may determine that the timeout period may have passed based on determining the amount of time since the server connection and/or network connection between enterprise organization computing device120and the virtual workspace was interrupted may be equal to or greater than the timeout threshold value. 
Alternatively, web server115cmay determine that the timeout period might not have passed based on determining the amount of time since the server connection and/or network connection between enterprise organization computing device120and the virtual workspace was interrupted may be less than the timeout threshold value. If, at step218b, computing platform110determines that the timeout period might not have passed, then, at step219a, computing platform110(e.g., via web server115c) may continue to monitor the virtual workspace to determine whether enterprise organization computing device120may have resumed access to the virtual workspace and/or whether enterprise organization computing device120may have established a network connection and/or server connection to the second web server. In some instances, enterprise organization computing device120may resume access to the virtual workspace and web server115cmay continue to host the server connection and/or network connection to the virtual workspace. However, in some instances, enterprise organization computing device120might not resume access and, as such, web server115cmay soft delete the virtual workspace from network150, as discussed in detail below. However, if, at step218b, the computing platform110determines that the timeout period may have passed, then, at step219b, the computing platform110(e.g., via web server115c) may soft delete the virtual workspace from network150. The soft deletion of the virtual workspace may indicate that the virtual workspace might not be associated with a virtual workspace leader (e.g., enterprise organization computing devices120a-120cmight not be present within the virtual workspace, or the like). The soft deletion of the virtual workspace may also terminate the server connection and/or network connection between enterprise organization computing device120and at least one web server (e.g., web server115a, web server115c, or the like). 
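The timeout check in steps218bcan be sketched as a simple comparison: the period has passed when the time since the interruption is equal to or greater than the pre-determined threshold. The threshold value below is illustrative only.

```python
# Sketch of the timeout check: compare the interruption duration against a
# pre-determined threshold set by the enterprise organization.
def timeout_passed(interrupted_at, now, threshold_seconds):
    elapsed = now - interrupted_at
    return elapsed >= threshold_seconds

THRESHOLD = 120.0  # e.g., time allowed for a webpage/browser refresh to finish
passed = timeout_passed(interrupted_at=0.0, now=120.0,
                        threshold_seconds=THRESHOLD)
not_passed = timeout_passed(interrupted_at=0.0, now=119.9,
                            threshold_seconds=THRESHOLD)
```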
As a result, computing devices might not be permitted to access the virtual workspace. Computing platform110(e.g., via web server115c) may update connection database116and audit database117accordingly. In particular, web server115cmay indicate, in connection database116, at least one explanation for the soft deletion of the virtual workspace from network150. Web server115cmay also indicate, in audit database117, that the virtual workspace existed within network150, but may have been soft deleted from network150due to the failure of at least one virtual workspace leader to appear within the virtual workspace. Web server115cmay also indicate, within audit database117, that neither virtual workspace leaders nor virtual workspace followers may be permitted to access the virtual workspace. Since audit database117may comprise a running log of the virtual workspaces hosted on network150, the data within audit database117might not be removed. Instead, web server115cmay insert, within audit database117, at least one note indicating updates to at least one virtual workspace. Computing platform110(e.g., via web server115c) may also transmit, to consumer computing device130, a notification indicating that access to the virtual workspace may be terminated (e.g., based on enterprise organization computing device120failing to resume access to the virtual workspace, or the like). Since the virtual workspace might not remain active within network150in the absence of a virtual workspace leader (e.g., at least one of enterprise organization computing devices120a-120c, or the like), consumer computing device130might not maintain access to the virtual workspace (e.g., access to the virtual workspace may be terminated, or the like). The notification may further indicate at least one reason for terminating access to the virtual workspace. 
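Since the data within audit database117might not be removed, the running log behaves like an append-only store where updates are recorded as additional notes. A minimal sketch, with illustrative names:

```python
# Append-only audit log sketch: entries are never removed; updates to a
# workspace are recorded as additional notes.
class AuditLog:
    def __init__(self):
        self._entries = []

    def note(self, workspace_id, message):
        self._entries.append({"workspace_id": workspace_id, "note": message})

    def history(self, workspace_id):
        return [e["note"] for e in self._entries
                if e["workspace_id"] == workspace_id]

audit = AuditLog()
audit.note("ws-1", "workspace created")
audit.note("ws-1", "soft deleted: no leader resumed within timeout")
```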
Moreover, computing platform110(e.g., via web server115c) may transmit, to enterprise organization computing device120, a notification indicating that access to the virtual workspace may be terminated (e.g., based on the soft deletion of the virtual workspace from network150, based on enterprise organization computing device120disconnecting from the virtual workspace, or the like). The notification may also indicate that a second request to establish a second server connection and/or network connection may be needed to establish a second virtual workspace within network150. In some instances, computing platform110may receive, from consumer computing device130, at least one request to access at least one virtual workspace established by enterprise organization computing device120. FIGS.3A-3Bdepict an illustrative event sequence for accessing and interacting, in real-time or near real-time, with requested web content using a modified web farm framework, in accordance with one or more aspects described herein. While aspects described with respect toFIGS.3A-3Binclude the evaluation of a single consumer request to access an established virtual workspace, a plurality of consumer requests may be evaluated (e.g., in parallel) without departing from the present disclosure. One or more processes performed inFIGS.3A-3Bmay be performed in real-time or near real-time and one or more steps or processes may be added, omitted, or performed in a different order, without departing from the present disclosure. Referring toFIG.3A, at step301, consumer computing device130may generate and transmit, to computing platform110, a request to access a virtual workspace established by at least one of web servers115a-115h.
The consumer request may comprise at least one unique identifier associated with the virtual workspace (e.g., a workspaceID, a webpage address that corresponds to an enterprise organization webpage, an Internet address that corresponds to an enterprise organization web portal, an Internet address that corresponds to a consumer portal, or the like). In some instances, consumer computing device130may receive, from computing platform110(e.g., from global server load balancer111of computing platform110), a list indicating a plurality of virtual workspaces that may be active within network150. The list may further indicate at least one unique identifier that corresponds to each virtual workspace. Consumer computing device130may parse the plurality of virtual workspaces and may elect at least one virtual workspace to connect to, and may generate the request based on the elected virtual workspace and the corresponding unique identifier. At step302, computing platform110(e.g., via global server load balancer111) may receive the request from consumer computing device130and may identify at least one data center (e.g., at least one of data centers112a-112b, or the like) that may analyze the consumer request. For instance, global server load balancer111may consider a network availability of server pods114a-114dand/or web servers115a-115hto determine a network availability of data centers112a-112b. Global server load balancer111may compare the network availability of each data center (e.g., data centers112a-112b, or the like) and may identify a data center that may analyze the consumer request. Based on determining data center112amay have network availability to analyze the consumer request, global server load balancer111may route the consumer request to data center112aof computing platform110. 
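The routing decision at step302can be sketched as follows: the global server load balancer derives each data center's availability from its web servers and routes the consumer request to the data center with the greatest availability. The capacity figures below are made up for illustration.

```python
# Illustrative routing decision for the global server load balancer:
# aggregate per-web-server availability into a per-data-center figure and
# route to the data center with the greatest availability.
def pick_data_center(data_centers):
    """data_centers maps a name to its per-web-server available capacity."""
    availability = {name: sum(servers.values())
                    for name, servers in data_centers.items()}
    return max(availability, key=availability.get)

centers = {
    "112a": {"115a": 10, "115b": 4, "115c": 7, "115d": 1},
    "112b": {"115e": 2, "115f": 3, "115g": 5, "115h": 0},
}
target = pick_data_center(centers)  # 112a has the greater total availability
```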
At step303, computing platform110(e.g., via local server load balancer113a) may parse the consumer request to identify a cookie that may correspond to the virtual workspace indicated in the consumer request. To do so, local server load balancer113amay extract at least one unique identifier that corresponds to the virtual workspace (e.g., the workspaceID, the webpage address that corresponds to the enterprise organization webpage, the Internet address that corresponds to the enterprise organization web portal, the Internet address that corresponds to the consumer portal, or the like). Local server load balancer113amay use the at least one extracted unique identifier to parse connection database116and to identify the virtual workspace that corresponds to the at least one extracted unique identifier. Local server load balancer113amay identify, based on the parsing, the virtual workspace that corresponds to the unique identifiers indicated in the consumer request and may extract a copy of the cookie associated with the virtual workspace. At step304, computing platform110(e.g., via local server load balancer113a) may transmit the cookie to consumer computing device130. The transmitted cookie may comprise connection details that correspond to the virtual workspace and that may be used to generate a request to access the virtual workspace. At step305, consumer computing device130may receive the cookie from computing platform110, and may generate and transmit a request to access a server pod and a web server assigned to host the virtual workspace. To do so, consumer computing device130may parse the cookie and may identify unique identifiers associated with each computing device assigned to host the virtual workspace (e.g., data center112a, local server load balancer113a, server pod114a, web server115a, or the like). Consumer computing device130may generate a request to connect to at least one of the computing devices indicated in the cookie. 
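The cookie parsing at step305can be sketched as follows: the consumer device reads the identifiers of each device assigned to host the workspace out of the cookie and builds a connect request from them. The field names are assumptions, not the disclosed format.

```python
# Sketch of building a connect request from the identifiers in the cookie.
def build_connect_request(cookie):
    required = ("data_center_id", "load_balancer_id",
                "server_pod_id", "web_server_id")
    missing = [k for k in required if k not in cookie]
    if missing:
        raise ValueError(f"cookie missing identifiers: {missing}")
    request = {k: cookie[k] for k in required}
    request["workspace_id"] = cookie["workspace_id"]
    return request

cookie = {"workspace_id": "ws-1", "data_center_id": "112a",
          "load_balancer_id": "113a", "server_pod_id": "114a",
          "web_server_id": "115a"}
request = build_connect_request(cookie)
```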
Consumer computing device130may transmit the request comprising the unique identifiers to computing platform110. At step306, computing platform110(e.g., via global server load balancer111) may receive, from consumer computing device130, the request comprising the unique identifiers that correspond to the computing devices assigned to host the virtual workspace. For instance, global server load balancer111may parse the consumer request and may extract the unique identifiers contained therein. Global server load balancer111may parse the unique identifiers to identify the computing devices assigned to host the virtual workspace and may transmit the request to the identified local server load balancer (e.g., local server load balancer113a, or the like). At step307, computing platform110(e.g., via local server load balancer113a) may receive and parse the request to identify the remaining computing devices assigned to host the virtual workspace. Based on the parsing, local server load balancer113amay transmit the request to the identified server pod (e.g., server pod114a, or the like) and the identified web server (e.g., web server115a, or the like). Computing platform110(e.g., via web server115a) may establish a server connection and/or network connection to the virtual workspace and may share the server connection and/or network connection with consumer computing device130. To do so, web server115amay parse connection database116to identify a network address associated with the virtual workspace. In some instances, web server115amay transmit the network address to consumer computing device130. Transmitting the network address to consumer computing device130may cause the virtual workspace to be displayed via a display device on consumer computing device130.
At step308, consumer computing device130may use the virtual workspace to collaborate with at least one additional computing device (e.g., at least one of enterprise organization computing devices120a-120c, a different one of consumer computing devices130a-130c, or the like). Consumer computing device130and enterprise organization computing device120may participate in a shared, web-based experience, wherein edits and/or modifications generated by consumer computing device130(or enterprise organization computing device120) may be reflected in real-time, or near real-time, via the display device of enterprise organization computing device120(or consumer computing device130). At step309, consumer computing device130may lose the server connection and/or network connection to the virtual workspace. In some instances, consumer computing device130may lose the server connection and/or network connection based on a software and/or hardware issue associated with the virtual workspace (e.g., consumer computing device130may experience a webpage timeout based on receiving webpage modifications generated by enterprise organization computing device120, consumer computing device130may experience a web browser timeout based on transmitting webpage modifications to enterprise organization computing device120, or the like). Therefore, the loss of the server connection and/or network connection may be based on a configuration strength associated with network150and components therein.
Additionally or alternatively, consumer computing device130may lose the server and/or network connection based on disconnecting (e.g., intentionally, or the like) from the virtual workspace, based on updates to the connection details associated with the virtual workspace (e.g., enterprise organization computing device120may modify the host web server assigned to host the virtual workspace, or the like), and/or based on enterprise organization computing devices120a-120cdisconnecting from the virtual workspace (e.g., based on the lack of a virtual workspace leader within the virtual workspace, or the like). However, the server connection and/or network connection to the virtual workspace may remain active while at least one virtual workspace leader (e.g., at least one of enterprise organization computing devices120a-120c, or the like) is present within the virtual workspace. Referring toFIG.3B, at step310, consumer computing device130may initiate a re-connection loop to re-connect to the virtual workspace. To do so, consumer computing device130may transmit, to computing platform110(e.g., to global server load balancer111), a request to re-connect to the virtual workspace, wherein the request may comprise connection details from the cookie received from computing platform110(e.g., from local server load balancer113a). At step311, computing platform110(e.g., via global server load balancer111) may receive the re-connection request from consumer computing device130and may parse the re-connection request to identify the remaining computing devices assigned to host the virtual workspace (e.g., data center112a, local server load balancer113a, server pod114a, web server115a, or the like). Global server load balancer111may transmit the re-connection request to the data center and local server load balancer indicated in the cookie.
Local server load balancer113amay receive the re-connection request and may route the re-connection request to the server pod and the web server assigned to host the virtual workspace. Web server115amay receive and parse the re-connection request to extract the connection details within the cookie. Web server115amay also parse connection database116and may extract the cookie that corresponds to the virtual workspace. Web server115amay compare the connection details within the cookie received from consumer computing device130to the connection details within the cookie extracted from connection database116. Web server115amay determine whether the connection details within the cookie received from consumer computing device130correspond to (e.g., are the same as, are within a pre-determined range of, or the like) the connection details within the cookie extracted from connection database116. In some instances, the cookie stored within connection database116may comprise updated connection details that correspond to the virtual workspace (e.g., an updated data center assigned to host the virtual workspace, an updated server pod assigned to host the virtual workspace, an updated web server assigned to host the virtual workspace, or the like). For example, the updated connection details may indicate that enterprise organization computing device120modified the host web server (e.g., the web server assigned to host the virtual workspace, or the like). In such instances, consumer computing device130may lose the server connection and/or network connection to the virtual workspace based on enterprise organization computing device120changing the host web server from a first web server to a second web server. As such, the server connection and/or network connection between consumer computing device130and the virtual workspace may terminate since the virtual workspace might not be associated with the web server that was previously assigned to host the virtual workspace.
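The comparison of the two cookies can be sketched as follows: the host identifiers presented by the consumer device are checked against the (possibly updated) cookie stored in the connection database, and any mismatched fields explain why the connection was lost. Field names are illustrative assumptions.

```python
# Sketch of comparing the consumer's cookie against the stored cookie.
HOST_FIELDS = ("data_center_id", "load_balancer_id",
               "server_pod_id", "web_server_id")

def stale_fields(received, stored):
    """Return the host identifiers on which the two cookies disagree."""
    return [f for f in HOST_FIELDS if received.get(f) != stored.get(f)]

received = {"data_center_id": "112a", "load_balancer_id": "113a",
            "server_pod_id": "114a", "web_server_id": "115a"}
# The leader has since moved the workspace to a different host:
stored = {"data_center_id": "112b", "load_balancer_id": "113b",
          "server_pod_id": "114d", "web_server_id": "115g"}
mismatches = stale_fields(received, stored)
needs_updated_cookie = bool(mismatches)  # True -> send the updated cookie back
```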
Therefore, in some instances, the comparison of the connection details within the cookie received from consumer computing device130and the connection details within the cookie extracted from connection database116may indicate at least one reason why consumer computing device130lost the server connection and/or network connection to the virtual workspace. If, at step311, computing platform110(e.g., via web server115a) determines the connection details within the cookie received from consumer computing device130might not correspond to (e.g., may be different from, or the like) the connection details within the cookie extracted from connection database116, then, at step312a, computing platform110(e.g., via web server115a) may extract a copy of an updated cookie from connection database116and may transmit the copy of the updated cookie to consumer computing device130. Computing platform110(e.g., via web server115a) may also transmit, to consumer computing device130, a notification indicating that the connection details within the cookie received from consumer computing device130might not reflect updated connection details. Web server115amay instruct consumer computing device130to generate a revised re-connection request (e.g., based on the updated cookie, or the like). At step313, consumer computing device130may receive, from computing platform110, the updated cookie and the instructions, and may generate a revised re-connection request based on the updated connection details within the updated cookie (e.g., an updated unique identifier associated with an updated server pod assigned to host the virtual workspace, an updated unique identifier associated with an updated web server assigned to host the virtual workspace, or the like). Consumer computing device130may transmit the revised re-connection request to computing platform110(e.g., to global server load balancer111). 
In some instances, and similar to the process described above, computing platform110(e.g., via global server load balancer111) may parse the revised re-connection request to identify the revised computing devices assigned to host the virtual workspace. Global server load balancer111may transmit the revised re-connection request to the data center and local server load balancer indicated in the revised re-connection request (e.g., data center112b, local server load balancer113b, or the like). Local server load balancer113bmay parse the revised re-connection request to identify the revised server pod and revised web server assigned to host the virtual workspace (e.g., server pod114d, web server115g, or the like). Local server load balancer113bmay transmit the revised re-connection request to the updated web server assigned to host the virtual workspace (e.g., web server115g, or the like). Computing platform110(e.g., via web server115g) may parse the revised re-connection request to identify a unique identifier associated with the virtual workspace, and may parse connection database116to extract an updated cookie that corresponds to the virtual workspace. Web server115gmay compare the revised connection details indicated in the revised re-connection request to the revised connection details extracted from the updated cookie. Based on determining the revised connection details indicated in the revised re-connection request might not correspond to (e.g., may be different from, or the like) the revised connection details extracted from the updated cookie, web server115gmay extract further updated connection details from connection database116(e.g., a second updated cookie, or the like) and may transmit the second updated cookie to consumer computing device130with instructions to revise the re-connection request based on the updated connection details within the second updated cookie. 
Web server115gmay repeat the process described herein for a period of time (e.g., until the connection details within the re-connection request correspond to the connection details indicated in connection database116, until enterprise organization computing device120terminates access to the virtual workspace, or the like). Alternatively, based on determining the revised connection details indicated in the revised re-connection request correspond to (e.g., may be the same as, or the like) the revised connection details extracted from the updated cookie, computing platform110(e.g., via web server115g) may re-establish the server connection and/or network connection between consumer computing device130and the virtual workspace. In particular, web server115gmay extract, from connection database116, an updated network address that corresponds to the virtual workspace and may transmit the updated network address to consumer computing device130. In some instances, the transmission may cause the virtual workspace to be displayed via a display device on consumer computing device130. However, if, at step311, computing platform110(e.g., via web server115a) determines the connection details within the cookie received from consumer computing device130correspond to (e.g., may be the same as, or the like) the connection details within the cookie extracted from connection database116, then, at step312b, computing platform110(e.g., via web server115a) may re-establish the server connection and/or network connection between consumer computing device130and the virtual workspace. Web server115amay parse connection database116to identify the network address associated with the virtual workspace. In some instances, web server115amay transmit the network address to consumer computing device130and may cause the virtual workspace to be displayed via the display device on consumer computing device130.
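The repeat-until-match behavior described above can be sketched as a reconciliation loop: each round, the platform returns the current stored cookie, and the consumer re-requests with it until the details agree (or a bound is reached). `lookup_stored` is a hypothetical stand-in for the connection database lookup.

```python
# Sketch of the reconciliation loop between the consumer's cookie and the
# connection database's current cookie.
def reconcile(consumer_cookie, lookup_stored, max_rounds=10):
    """Re-request with updated cookies until the details agree."""
    for _ in range(max_rounds):
        stored = lookup_stored()
        if consumer_cookie == stored:
            return True, consumer_cookie   # re-establish the connection
        consumer_cookie = dict(stored)     # platform sends the updated cookie
    return False, consumer_cookie

# The stored details differ once, then the consumer's revised request matches:
versions = iter([{"web_server_id": "115g"}, {"web_server_id": "115g"}])
ok, final = reconcile({"web_server_id": "115a"}, lambda: next(versions))
```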
Computing platform110(e.g., via web server115a) may further transmit, to consumer computing device130, a notification indicating restoration of the server connection and/or network connection to the virtual workspace, and indicating restoration of access to the virtual workspace. Based on the restored connection to the virtual workspace, consumer computing device130may collaborate with at least one of enterprise organization computing devices120a-120cwithin the virtual workspace. At step314, enterprise organization computing device120may terminate the server connection and/or network connection to the virtual workspace (e.g., enterprise organization computing device120may disconnect from the virtual workspace, or the like). As a virtual workspace leader, the presence of at least one of enterprise organization computing devices120a-120cmay be needed for the server connection and/or network connection to the virtual workspace to remain active within network150. Therefore, the absence of the virtual workspace leader within the virtual workspace may result in a soft deletion of the virtual workspace from network150. The soft deletion of the virtual workspace may indicate that computing devices might not be permitted to access the virtual workspace, but a record of the virtual workspace may exist within a running log of virtual workspaces within audit database117. At step315, computing platform110(e.g., via web server115a(or web server115g) assigned to host the virtual workspace) may receive a network alert (e.g., from network150, or the like) indicating that enterprise organization computing device120terminated access to the virtual workspace. 
Since the absence of enterprise organization computing devices120a-120cwithin the virtual workspace may result in the soft deletion of the virtual workspace from network150, computing platform110(e.g., via web server115a(or web server115g) assigned to host the virtual workspace) may determine that consumer computing device130might not access the virtual workspace based on the virtual workspace experiencing the soft deletion from network150. As such, computing platform110(e.g., via web server115a(or web server115g) assigned to host the virtual workspace) may transmit, to consumer computing device130, a notification indicating termination of access to the virtual workspace (e.g., consumer computing device130might not be permitted to access the virtual workspace, or the like). At step316, consumer computing device130may receive the notification from computing platform110(e.g., from web server115a(or web server115g) assigned to host the virtual workspace). Based on enterprise organization computing device120disconnecting from the virtual workspace, at step316, consumer computing device130may lose access to the virtual workspace. FIG.4depicts a flow diagram illustrating one example method for generating, in real-time or near real-time, a modified web farm framework for routing web traffic based on similarities between web traffic requests, in accordance with one or more aspects described herein. The processes illustrated inFIG.4are merely sample processes and functions. The steps shown may be performed in the order shown, in a different order, more steps may be added, or one or more steps may be omitted, without departing from the disclosure. In some examples, one or more steps may be performed simultaneously with other steps shown and described. Further, one or more steps described with respect toFIG.4may be performed in real-time or near real-time. 
Referring toFIG.4, at step401, global server load balancer111may receive, from enterprise organization computing device120, a request to establish a server connection and/or network connection to host a virtual workspace, and may parse the request to identify a data center (e.g., one of data centers112a-112b, or the like) that may have network availability to host the server connection and/or network connection to the virtual workspace. At step402, based on determining data center112amay have network availability to host the virtual workspace (e.g., may have the greatest network availability to host the server connection and/or network connection to the virtual workspace, or the like), global server load balancer111may route the request to data center112a. At step403, data center112amay receive the request from global server load balancer111and may identify a server pod (e.g., at least one of server pods114a-114bof data center112a, or the like), and/or a web server (e.g., at least one of web servers115a-115dof data center112a, or the like) that may host the virtual workspace. Local server load balancer113amay compare a network capacity of each server pod to a network availability of each server pod to identify a server pod (e.g., server pod114a, or the like) with the greatest network availability to host the virtual workspace (e.g., a server pod associated with the least volume of current network traffic, or the like). Based on determining server pod114amay be associated with the greatest network availability to host the virtual workspace, local server load balancer113amay compare the network capacity of each web server therein to the network availability of each web server therein to identify the web server (e.g., web server115a, or the like) with the greatest network availability to host the virtual workspace. 
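The two-level selection in steps 401 through 403 can be sketched as follows. The function names and data fields here are illustrative assumptions, not part of the disclosure; network availability is modeled as capacity minus current traffic volume, so the pod with the least current traffic has the greatest availability:

```python
def availability(node):
    # Availability modeled as capacity minus current network traffic volume.
    return node["capacity"] - node["current_traffic"]

def select_host(server_pods):
    # First pick the server pod with the greatest network availability,
    # then the web server within that pod with the greatest availability.
    pod = max(server_pods, key=availability)
    web_server = max(pod["web_servers"], key=availability)
    return pod["id"], web_server["id"]
```

The same capacity-versus-availability comparison is applied at both levels, mirroring how the local server load balancer first narrows to a server pod and only then to a web server within it.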
At step404, local server load balancer113amay route the request to the identified web server (e.g., web server115a, or the like) associated with the identified server pod with network availability to host the virtual workspace (e.g., server pod114a, or the like), and may assign the identified server pod and the identified web server to host the virtual workspace. At step405, web server115amay receive the request from local server load balancer113aand may establish the server connection and/or network connection to the virtual workspace based on extracting, from the request, at least one unique identifier associated with the virtual workspace (e.g., a workspaceID, a webpage address that corresponds to the enterprise organization webpage, an Internet address that corresponds to the enterprise organization web portal, an Internet address that corresponds to the consumer portal, or the like). Web server115amay use the extracted data to launch and publish the virtual workspace. At step406, web server115amay generate a cookie comprising server and/or network connection details that correspond to the virtual workspace. The cookie may indicate the server pod and the web server to which local server load balancer113amay assign the virtual workspace. The connection details within the cookie may comprise a unique identifier that corresponds to the local server load balancer (e.g., local server load balancer113a, or the like) that may host the server pod and the web server assigned to host the virtual workspace, a unique identifier that corresponds to the server pod assigned to host the virtual workspace, a unique identifier that corresponds to the web server assigned to host the virtual workspace, a network address that corresponds to the virtual workspace, or the like. Web server115amay store the cookie within connection database116. 
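A minimal sketch of the cookie generated at step 406, assuming a JSON encoding and illustrative field names (the disclosure does not specify a serialization format):

```python
import json

def make_connection_cookie(balancer_id, pod_id, server_id, workspace_addr):
    # Step 406: the cookie records which local server load balancer, server
    # pod, and web server were assigned to host the virtual workspace, plus
    # the network address that corresponds to the workspace.
    return json.dumps({
        "local_server_load_balancer": balancer_id,
        "server_pod": pod_id,
        "web_server": server_id,
        "workspace_address": workspace_addr,
    })
```

Because the cookie carries the full assignment, any later request that presents it can be routed straight to the assigned pod and web server without repeating the availability comparison.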
At step407, web server115amay establish a server connection and/or network connection to the virtual workspace and may share the server connection and/or network connection with enterprise organization computing device120. At step408, global server load balancer111may receive a request, from consumer computing device130, to access the virtual workspace. Based on parsing the consumer request, global server load balancer111may identify a unique identifier that corresponds to the virtual workspace. Global server load balancer111may identify, within connection database116, the cookie comprising the connection details associated with the virtual workspace and may transmit the cookie to consumer computing device130. Global server load balancer111may receive, from consumer computing device130, a request to establish a network connection and/or server connection with the server pod and the web server assigned to host the virtual workspace, as indicated in the cookie. At step409, global server load balancer111may route the consumer request to the data center (e.g., data center112a, or the like) that may host the server pod and the web server assigned to host the virtual workspace. Local server load balancer113aof data center112amay parse the consumer request and may route the consumer request to the server pod and the web server assigned to host the virtual workspace (e.g., server pod114a, web server115a, or the like). Web server115amay receive the consumer request and may establish a server connection and/or network connection between consumer computing device130and the virtual workspace. At step410, enterprise organization computing device120may lose the server connection and/or network connection to the virtual workspace and local server load balancer113amay receive a network alert (e.g., from network150, or the like) indicating that enterprise organization computing device120lost the server connection and/or network connection to the virtual workspace. 
At step411, local server load balancer113amay identify a web server (e.g., a second web server, or the like) that may be used to re-establish the server connection and/or network connection to the virtual workspace. Local server load balancer113amay compare the network capacity of each corresponding server pod to the current network availability of each corresponding server pod to identify a second server pod with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace. At step412, local server load balancer113amay compare the second server pod and the second web server that may have network availability to re-establish the server connection and/or network connection to the virtual workspace to the server pod and the web server assigned to host the virtual workspace (e.g., indicated in the cookie, or the like) to determine whether the server pod and the web server indicated in the cookie correspond to (e.g., are the same as, or the like) the second server pod and the second web server. In some instances, and based on the comparison, computing platform110may determine that the server pod indicated in the cookie (e.g., server pod114a) may comprise network capacity to re-establish the server connection and/or network connection to the virtual workspace. Therefore, in some instances, computing platform110may determine, based on the comparison, that the second server pod with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace may be the server pod assigned to host the virtual workspace, as indicated in the cookie. Further, in some instances and based on the comparison, computing platform110may determine that the web server indicated in the cookie (e.g., web server115a) may comprise the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace.
Therefore, in some instances, computing platform110may determine, based on the comparison, that the second web server with the greatest network availability to re-establish the server connection and/or network connection to the virtual workspace may be the web server assigned to host the virtual workspace, as indicated in the cookie. If, at step412, local server load balancer113adetermines that the second server pod and the second web server (e.g., server pod114a, web server115a, or the like) correspond to (e.g., are the same as, or the like) the server pod and the web server assigned to host the virtual workspace (e.g., server pod114aand web server115aas indicated in the cookie, or the like), then, at step413, server pod114aand web server115amay re-establish the server connection and/or network connection to the virtual workspace, in accordance with the processes described herein. However, if, at step412, local server load balancer113adetermines that the second server pod and the second web server (e.g., server pod114b, web server115c, or the like) might not correspond to (e.g., may be different from, or the like) the server pod and the web server assigned to host the virtual workspace (e.g., server pod114aand web server115aas indicated in the cookie, or the like), then, at step414, web server115cmay determine whether enterprise organization computing device120is authorized to modify a host web server (e.g., the web server assigned to host the server connection and/or network connection to the virtual workspace, or the like). If, at step414, local server load balancer113adetermines that enterprise organization computing device120may be authorized to modify the host web server associated with the virtual workspace, then, at step415, web server115cmay re-establish the server connection and/or network connection to the virtual workspace. At step416, web server115cmay update the cookie that corresponds to the virtual workspace.
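The branch at steps 412 through 417 can be sketched as follows; the function name, cookie fields, and return strings are hypothetical, introduced only to illustrate the three outcomes (re-establish on the assigned host, re-establish on a new host with a cookie update, or reject):

```python
def handle_reconnection(cookie, second_pod, second_server, authorized):
    # Step 412: compare the freshly selected ("second") pod and web server
    # against the assignment recorded in the cookie.
    if (cookie["server_pod"], cookie["web_server"]) == (second_pod, second_server):
        return "re-establish on assigned host"            # step 413
    # Step 414: a different host is only permitted if the requester is
    # authorized to modify the host web server.
    if authorized:
        # Steps 415-416: re-establish on the new host and update the cookie.
        cookie["server_pod"], cookie["web_server"] = second_pod, second_server
        return "re-establish on new host"
    return "reject connection"                            # step 417
```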
In particular, web server115cmay update the unique identifiers within the cookie to include at least an updated unique identifier that corresponds to the local server load balancer assigned to host the virtual workspace, an updated unique identifier that corresponds to the server pod assigned to host the virtual workspace, and/or an updated unique identifier that corresponds to the web server assigned to host the virtual workspace. However, if, at step414, local server load balancer113adetermines that enterprise organization computing device120might not be authorized to modify the host web server associated with the virtual workspace, then, at step417, web server115cmay reject the server connection and/or network connection to the virtual workspace. In such instances, the process described herein may return to step411in that local server load balancer113amay identify a different web server that may have network processing availability to re-establish a server connection and/or network connection to the virtual workspace. In some instances, enterprise organization computing device120may continuously transmit re-connection requests (e.g., for a pre-determined amount of time, until a server pod and web server with network availability to re-establish the server connection and/or network connection to the virtual workspace may be identified, or the like). At step418, enterprise organization computing device120may disconnect from the virtual workspace and, by doing so, may terminate the server connection and/or network connection to the virtual workspace. At step419, web server115cmay determine whether enterprise organization computing device120lost the server connection and/or network connection due to a software and/or hardware failure within network150(e.g., as opposed to intentional termination of the server connection and/or network connection by enterprise organization computing device120, or the like). 
If, at step419, web server115cdetermines that the server connection and/or network connection between enterprise organization computing device120and the virtual workspace might not have been lost (e.g., may have been intentionally terminated by enterprise organization computing device120, or the like), then, at step420, web server115cmay monitor the virtual workspace to determine whether enterprise organization computing device120resumes access to the virtual workspace. However, if, at step419, web server115cdetermines that the server connection and/or network connection between enterprise organization computing device120and the virtual workspace may have been lost (e.g., might not have been intentionally terminated by enterprise organization computing device120, or the like), then, at step421, web server115cmay determine whether a timeout period passed. A timeout threshold value may correspond to a pre-determined amount of time (e.g., determined by the enterprise organization, or the like) during which enterprise organization computing device120may be expected to resume access to the virtual workspace. Web server115cmay monitor an amount of time since the server connection and/or network connection between enterprise organization computing device120and the virtual workspace was interrupted. Web server115cmay compare the amount of time to the timeout threshold value to determine whether the timeout period has passed. If, at step421, web server115cdetermines that the timeout period might not have passed, then the process described herein may return to step420and the second web server may continue to monitor the virtual workspace to determine whether enterprise organization computing device120may have resumed access to the virtual workspace and/or whether enterprise organization computing device120may have re-established a network connection and/or server connection to the second web server. 
Alternatively, if, at step421, web server115cdetermines that the timeout period may have passed, then, at step422, web server115cmay soft delete the virtual workspace from network150. The soft deletion of the virtual workspace may indicate that at least one of enterprise organization computing devices120a-120cmight not be present within the virtual workspace. At step423, web server115cmay transmit, to consumer computing device130, a notification indicating that access to the virtual workspace may be terminated (e.g., based on enterprise organization computing device120failing to resume access to the virtual workspace, or the like). Web server115cmay also transmit, to enterprise organization computing device120, a notification indicating that access to the virtual workspace may be terminated (e.g., based on the soft deletion of the virtual workspace from network150, based on enterprise organization computing device120disconnecting from the virtual workspace, or the like). The notification may also indicate that an additional request to establish a second server connection and/or network connection may be needed to establish a second virtual workspace within network150. FIG.5depicts a flow diagram illustrating one example method for accessing and interacting with, in real-time or near real-time, requested web content using a modified web farm framework, in accordance with one or more aspects described herein. The processes illustrated inFIG.5are merely sample processes and functions. The steps shown may be performed in the order shown, in a different order, more steps may be added, or one or more steps may be omitted, without departing from the disclosure. In some examples, one or more steps may be performed simultaneously with other steps shown and described. Further, one or more steps described with respect toFIG.5may be performed in real-time or near real-time. 
Referring toFIG.5, at step501, consumer computing device130may generate and transmit, to global server load balancer111, a request to access a virtual workspace established by at least one of web servers115a-115h. Global server load balancer111may process the consumer request based on the method described herein. At step502, consumer computing device130may receive, from local server load balancer113a, a cookie comprising connection details that correspond to the virtual workspace and that may be used to generate a request to access the virtual workspace. At step503, consumer computing device130may generate and transmit a request to access a server pod and a web server assigned to host the virtual workspace (e.g., server pod114a, web server115a, or the like). At step504, consumer computing device130may receive, from web server115a, a server connection and/or network connection to the virtual workspace. In some instances, consumer computing device130may receive, from web server115a, a network address that corresponds to the virtual workspace. Receipt of the network address may cause the virtual workspace to be displayed via a display device on consumer computing device130. At step505, consumer computing device130may use the virtual workspace to collaborate with at least one additional computing device (e.g., at least one of enterprise organization computing devices120a-120c, a different one of consumer computing devices130a-130c, or the like). Consumer computing device130and enterprise organization computing device120may participate in a shared, web-based experience, wherein edits and/or modifications generated by consumer computing device130(or enterprise organization computing device120) may be reflected in real-time, or near real-time, via the display device of enterprise organization computing device120(or consumer computing device130). At step506, consumer computing device130may lose the server connection and/or network connection to the virtual workspace. 
In some instances, consumer computing device130may lose the server connection and/or network connection based on a software and/or hardware issue associated with the virtual workspace. At step507, consumer computing device130may initiate a re-connection loop to re-connect to the virtual workspace by transmitting, to global server load balancer111, a request to re-connect to the virtual workspace, wherein the request may comprise connection details from the cookie. At step508, consumer computing device130may determine whether access to the virtual workspace was restored (e.g., whether the server connection and/or network connection to the virtual workspace was restored, whether a new server connection and/or network connection to the virtual workspace was established with a different server pod and/or web server, or the like). If, at step508, consumer computing device130determines that access to the virtual workspace might not have been restored, then at step509, consumer computing device130may receive (e.g., from web server115a, or the like) a copy of an updated cookie comprising updated connection details associated with the virtual workspace. Consumer computing device130may also receive instructions to generate and transmit a revised re-connection request (e.g., based on the updated cookie, or the like). At step510, consumer computing device130may generate a revised re-connection request based on the updated connection details within the updated cookie (e.g., an updated unique identifier associated with an updated server pod assigned to host the virtual workspace, an updated unique identifier associated with an updated web server assigned to host the virtual workspace, or the like). Consumer computing device130may transmit the revised re-connection request to global server load balancer111. 
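The re-connection loop of steps 507 through 510 can be sketched as follows, assuming a hypothetical `send_request` callback that stands in for the round trip through global server load balancer111and returns whether access was restored along with any updated cookie:

```python
def reconnect_loop(send_request, max_attempts):
    # Steps 507-510: keep transmitting re-connection requests, regenerating
    # each revised request from any updated cookie the server returns,
    # until access is restored or the retry budget is exhausted.
    cookie = None
    for _ in range(max_attempts):
        restored, updated_cookie = send_request(cookie)
        if restored:
            return True            # step 511: access received
        cookie = updated_cookie    # steps 509-510: revise from updated cookie
    return False
```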
Consumer computing device130may continuously generate and transmit revised requests to re-connect to the virtual workspace based on receiving an updated cookie (e.g., for a pre-determined amount of time, until consumer computing device130receives access to the virtual workspace, or the like). In such instances, the process described herein may return to step508in that consumer computing device130may determine, based on transmitting a revised re-connection request, whether access to the virtual workspace was restored. However, if, at step508, consumer computing device130determines that access to the virtual workspace may have been restored, then at step511, consumer computing device130may receive access to the virtual workspace. At step512, consumer computing device130may receive a notification indicating termination of access to the virtual workspace (e.g., based on enterprise organization computing devices120a-120cterminating access to the virtual workspace, based on the absence of enterprise organization computing devices120a-120cwithin the virtual workspace, or the like). Based on receiving the notification, consumer computing device130may lose access to the virtual workspace. As a result, the proposed solution may provide the following benefits: 1) real-time, or near real-time, monitoring of network capacity associated with a plurality of servers; 2) real-time, or near real-time, identification of a pod and/or a server within the pod with network capacity to host a server connection to a virtual workspace; 3) real-time, or near real-time, assignment of the virtual workspace to the identified pod and server; and 4) real-time, or near real-time, routing of subsequent requests to access the virtual workspace to the identified pod and server assigned to the virtual workspace.
As a result, the proposed solution may provide the following benefits: 1) real-time, or near real-time, reception of a cookie indicating server connection details associated with a virtual workspace; 2) real-time, or near real-time, transmission of a request to connect to a pod and server that host the virtual workspace; 3) real-time, or near real-time, reception of access to the virtual workspace; and 4) real-time, or near real-time, initiation of a re-connection loop based on receiving a notification indicating at least one of loss and/or termination of the server connection. One or more aspects of the disclosure may be embodied in computer-usable data or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices to perform the operations described herein. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types when executed by one or more processors in a computer or other data processing device. The computer-executable instructions may be stored as computer-readable instructions on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. The functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer executable instructions and computer-usable data described herein.
Various aspects described herein may be embodied as a method, an enterprise computing platform, or as one or more non-transitory computer-readable media storing instructions. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, or an embodiment combining software, hardware, and firmware aspects in any combination. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of light or electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, or wireless transmission media (e.g., air or space). As described herein, the various methods and acts may be operative across one or more computing servers and one or more networks. The functionality may be distributed in any manner, or may be located in a single computing device (e.g., a server, a user computer, and the like). For example, in alternative embodiments, one or more of the computing platforms discussed above may be combined into a single computing platform, and the various functions of each computing platform may be performed by the single computing platform. In such arrangements, any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the single computing platform. Additionally or alternatively, one or more of the computing platforms discussed above may be implemented in one or more virtual machines that are provided by one or more physical computing devices. 
In such arrangements, the various functions of each computing platform may be performed by the one or more virtual machines, and any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the one or more virtual machines. Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure. For example, one or more of the steps depicted in the illustrative figures may be performed in other than the recited order, and one or more depicted steps may be optional in accordance with aspects of the disclosure.
DESCRIPTION OF EMBODIMENTS Dynamically modifying an HTTP connection is described. A condition is determined to exist that triggers an HTTP server to make a change that affects one or more HTTP connections (e.g., one or more HTTP/2 and/or one or more HTTP/3 connections). The trigger condition may be a hardware resource condition of the server (e.g., server CPU usage greater than a threshold, server memory usage greater than a threshold), a characteristic of a request (e.g., user agent, source IP address), a threat score of the request and/or client being above a threshold, a malicious client detected, malicious behavior detected, and/or protocol misuse detected. The trigger condition may reflect an increased server workload and/or an expectation of future increased server workload. Responsive to determining the condition exists, the HTTP server makes a modification that affects one or more HTTP connections. The modification may affect existing HTTP connection(s) and/or new HTTP connection(s). The modification may include dynamically modifying HTTP connection resource parameter(s) and/or modifying one or more runtime behaviors for the HTTP connection(s). The modification(s) may reduce the workload and/or future expected workload. An HTTP connection that does not respect the modification may be closed with an error. Dynamically modifying HTTP connections provides advantages. For example, dynamically modifying HTTP connections responsive to detecting a condition may allow the server to dynamically reduce its workload in cases of burst or prolonged workload. Dynamically modifying HTTP connections reduces HTTP resource allocation to help reduce the burden of existing or new HTTP connections. This may help avoid or mitigate, for example, denial of service attacks by reducing server load so that HTTP requests can be timely served. Further, applying HTTP connection resource constraints allows for pacing of HTTP requests over that connection.
This prevents a client from engaging in bursts of activity that can overwhelm a server. Slowing the rate of HTTP activity allows server capacity usage to be more evenly distributed. In some aspects, the dynamic modification involves the client in the resource control by sending the client different parameters, which makes the client indirectly aware of the situation in accordance with the interoperable mechanisms supported by HTTP/2 and HTTP/3. If the client is unaware that it can be causing problems, the client may continue to invoke retransmission and/or retry behavior that causes excess load on the server. In some aspects, instead of failing requests by resetting streams or closing connections, which can lead to data loss or interruption at the client (e.g., a web page can stop working because a critical resource was not delivered), dynamically modifying the connection allows for the data to be transmitted to the client albeit at a slower pace. In some aspects, the dynamic modification is performed for connections associated with personal or small business accounts and may not be performed for connections associated with enterprise accounts. This reduces the resources available for the personal or small business accounts to free resources for enterprise level accounts. In some aspects, the dynamic modification is performed differently within a connection. For instance, flow control of individual streams can exert back pressure on the client, so controlling those differently for streams in the same connection is a dynamic method that supports many types of control. For example, a fraction of streams may be allowed higher bandwidth than others (perhaps based on the nature of the HTTP request itself). FIG.1illustrates an exemplary system for dynamically modifying an HTTP connection according to an embodiment. The system100includes the HTTP clients110A-N connecting to the HTTP server120over the HTTP connections115A-N respectively.
The HTTP connections can use different HTTP versions including HTTP/1.1, HTTP/2, and/or HTTP/3. Each HTTP client is typically executed on a client computing device such as a laptop, desktop, smartphone, tablet, gaming system, set top box, wearable device, etc., that can transmit and receive HTTP traffic. Each HTTP client can be a client network application, such as a browser, or other native application that transmits and receives HTTP traffic. The HTTP server120accepts connections from the HTTP clients110A-N. The HTTP server120can be implemented on a server computing device. The HTTP server120may include multiple components including a component for terminating TLS connections and a component to terminate HTTP connections. The HTTP server120includes a condition detector122and an HTTP connection manager124. Although not illustrated inFIG.1, the server may include one or more other components including one or more security components and/or one or more performance components. The one or more security components may include a denial-of-service (DoS) detection and mitigation component, a web application firewall, a threat detecting and blocking component, an access control component, and/or a rate limiting component. The one or more performance components may include components to perform performance services including a content delivery network, caching, video delivery, website optimizations (e.g., asynchronous loading, image optimizations, mobile optimizations), load balancing, intelligent routing, availability, and/or protocol management (e.g., IPv4/v6 gateway). The condition detector122detects whether a condition exists that triggers the HTTP server120to modify an HTTP connection. These conditions are sometimes referred to herein as trigger conditions. The trigger condition(s) may be defined in the HTTP connection modification policy126.
The HTTP connection modification policy126defines the criteria for a trigger condition and may define the action(s) to take upon the criteria being met. For instance, the trigger condition(s) may be a hardware resource condition of the server, a characteristic of an HTTP request, a threat score of the request and/or client being above a threshold, a malicious client detected, malicious behavior detected, and/or detected protocol misuse. The previous trigger conditions are examples and the condition detector122may detect different trigger conditions based on any defined policy. The trigger condition(s) and the action(s) may be defined by an administrator of the HTTP server120and/or a customer of the HTTP server120. Although not illustrated inFIG.1, the server and/or the system may include a configuration component that provides an interface to allow an administrator to configure the system such as the HTTP connection modification policy126. Example hardware resource trigger conditions include CPU usage greater than a threshold, memory usage greater than a threshold, disk usage greater than a threshold, a data queue greater than a threshold, and/or network activity greater than a threshold. The threshold(s) may be defined by an administrator of the server. The condition detector122may continually check whether a hardware resource trigger condition exists. Example characteristics of an HTTP request include specific user agents (e.g., user agents that are suspected as being malicious bots), and/or specific source IP addresses (e.g., source IP addresses that are suspected of sending malicious traffic). The request characteristics may be defined by an administrator of the server. The condition detector122may check for these characteristics responsive to the HTTP server120receiving a request. An HTTP request may be associated with a threat score and/or the HTTP client may be associated with a threat score.
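A sketch of how the condition detector122might evaluate a policy against current metrics and an incoming request; the policy schema and field names below are illustrative assumptions, not the disclosure's format:

```python
def trigger_conditions(metrics, request, policy):
    # Hardware thresholds are checked continually; request characteristics
    # and threat scores are checked responsive to receiving a request.
    triggered = []
    if metrics["cpu"] > policy["cpu_threshold"]:
        triggered.append("cpu")
    if metrics["memory"] > policy["memory_threshold"]:
        triggered.append("memory")
    if request["user_agent"] in policy["suspect_user_agents"]:
        triggered.append("user_agent")
    if request["source_ip"] in policy["suspect_ips"]:
        triggered.append("source_ip")
    if request["threat_score"] > policy["threat_threshold"]:
        triggered.append("threat_score")
    return triggered
```

Any non-empty result would prompt the HTTP connection manager124to apply the action(s) the policy associates with those conditions.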
The threat score represents the likelihood of whether the request and/or client is malicious. The server may include a component to generate the threat score or may receive such a threat score. The thresholds may be defined by an administrator of the server. The condition detector122may check whether the threat score of the request and/or the HTTP client is above a threshold responsive to the HTTP server120receiving a request. Malicious behavior of the client may be detected as a pattern of behavior based on a certain traffic pattern (e.g., many concurrent requests that have been determined to be an attack to force the server to consume excess memory and/or CPU). The condition detector122may detect the pattern of behavior and/or another component of the HTTP server120may detect the pattern of behavior and notify the condition detector122of the pattern detection. Protocol misuse may include a malicious client requesting the HTTP server120generate a response where the client has no intention of reading the response, to force the HTTP server120to consume excess memory and/or CPU.
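The policy-driven detection described above can be sketched as a function that evaluates server metrics and request attributes against configured criteria. This is a minimal illustration only; the metric names, policy structure, and every threshold value below are assumptions made for the example and are not part of the described system.

```python
# Minimal sketch of a policy-driven trigger-condition check. All metric
# names, policy keys, and threshold values are illustrative assumptions.

HTTP_CONNECTION_MODIFICATION_POLICY = {
    # Hardware resource trigger conditions (assumed units: percent).
    "cpu_usage_pct": 90.0,
    "memory_usage_pct": 85.0,
    # Request characteristic trigger conditions (example values).
    "suspect_user_agents": {"bad-bot/1.0"},
    "suspect_source_ips": {"203.0.113.7"},
    # Threat-score trigger condition (assumed 0.0-1.0 scale).
    "threat_score_threshold": 0.8,
}

def detect_trigger_conditions(metrics, request,
                              policy=HTTP_CONNECTION_MODIFICATION_POLICY):
    """Return the list of trigger conditions whose criteria are met."""
    triggered = []
    if metrics.get("cpu_usage_pct", 0.0) > policy["cpu_usage_pct"]:
        triggered.append("cpu_usage")
    if metrics.get("memory_usage_pct", 0.0) > policy["memory_usage_pct"]:
        triggered.append("memory_usage")
    if request.get("user_agent") in policy["suspect_user_agents"]:
        triggered.append("suspect_user_agent")
    if request.get("source_ip") in policy["suspect_source_ips"]:
        triggered.append("suspect_source_ip")
    if request.get("threat_score", 0.0) > policy["threat_score_threshold"]:
        triggered.append("threat_score")
    return triggered
```

Hardware conditions would be checked continually, while request characteristics and threat scores would be checked responsive to receiving a request, as described above.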
Example protocol misuse includes: the client requesting a large amount of data from a resource over multiple streams while manipulating the window size and stream priority; the client continually transmitting pings to the server to build an excessive response queue; the client creating multiple request streams and shuffling the priority of the streams to cause churn to the priority tree; the client opening streams and sending an invalid request over each stream to solicit a stream of RST_STREAM frames from the server; the client sending a stream of SETTINGS frames to the server to force the server to acknowledge each frame; the client transmitting a stream of headers with a zero length header frame and a zero length header value; the client opening an HTTP/2 window but leaving the TCP window closed so the server cannot write most of the data and then sending a stream of requests for a large response object; and the client transmitting a stream of frames with an empty payload and without the end-of-stream flag. The condition detector122may detect the protocol misuse and/or another component of the HTTP server120may detect protocol misuse and notify the condition detector122of the detected protocol misuse. The HTTP connection modification policy126may define the action(s) to take upon the criteria of a trigger condition being met. These action(s) are performed by the HTTP connection manager124and they affect one or more HTTP connections. The HTTP connection manager124dynamically modifies HTTP connection resource parameter(s) for HTTP connection(s) and/or modifies one or more runtime behaviors for the HTTP connection(s). Dynamically controlling HTTP connection resource parameters may include signaling existing HTTP connections with dynamically modified connection resource parameter(s) and/or setting new connections with dynamically modified connection resource parameter(s) (e.g., new HTTP/2 connections and/or new HTTP/3 connections).
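One of the misuse patterns above, a client sending a stream of SETTINGS frames to force per-frame acknowledgments, can be detected with a simple sliding-window rate check. A minimal sketch, assuming an illustrative frame-rate threshold and window length (neither is specified by the described system):

```python
# Hedged sketch of detecting a SETTINGS-frame flood. The max_frames and
# window_seconds values are illustrative assumptions, not specified limits.

from collections import deque

class SettingsFloodDetector:
    def __init__(self, max_frames=20, window_seconds=1.0):
        self.max_frames = max_frames
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def on_settings_frame(self, now):
        """Record one SETTINGS frame at time `now`; return True on misuse."""
        self.timestamps.append(now)
        # Drop frames that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_frames
```

A component detecting this pattern would then notify the condition detector, as described above.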
The modified connection resource parameter(s) may reduce or restrict properties of an HTTP connection. The connection resource parameters can include a setting (sometimes known as a settings parameter) as defined in HTTP/2 and typically carried in a SETTINGS frame, a QUIC transport parameter used as defined in HTTP/3, a setting (sometimes known as a settings parameter) as defined in HTTP/3 and typically carried in a SETTINGS frame, and/or other frames that control runtime limits such as the MAX_STREAMS frame defined in QUIC. The one or more runtime behaviors include: limiting or stopping the use of server push; stopping use or limiting the number of hinted link headers included in a 103 Early Hints response; limiting runtime flow control; using a concurrent response prioritization method that is tuned towards lower resource usage; adjustment of denial-of-service (DoS) mitigation thresholds; modifying the TCP congestion control algorithm or any parameter used to send data to the client; limiting or stopping using dynamic header compression; controlling idle timeouts; and/or limiting or stopping the use of HTTP/2 and/or HTTP/3 extensions such as frame-based extensions (e.g., ORIGIN frame) or the WebTransport protocol. In an embodiment, the HTTP connection manager124applies the action(s) for all HTTP/2 and/or HTTP/3 connections. In another embodiment, the HTTP connection manager124applies the action(s) for selected HTTP/2 and/or HTTP/3 connections. The HTTP connection modification policy126may define specific zones, specific accounts, specific account types, and/or specific HTTP clients in which to apply the action(s). As an example, the HTTP connection manager124may perform the action(s) for HTTP/2 and/or HTTP/3 connections associated with personal or small business accounts and may not perform the action(s) for HTTP/2 and/or HTTP/3 connections associated with enterprise accounts.
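Signaling reduced HTTP/2 settings to an existing connection means emitting a SETTINGS frame on stream 0. The wire format (a 9-octet frame header followed by 16-bit identifier / 32-bit value pairs) and the setting identifiers below follow RFC 9113; the particular reduced values are illustrative assumptions only.

```python
# Sketch of encoding an HTTP/2 SETTINGS frame carrying reduced connection
# resource parameters. Frame layout and setting identifiers per RFC 9113;
# the chosen reduced values are made-up examples.

import struct

SETTINGS_HEADER_TABLE_SIZE = 0x1
SETTINGS_MAX_CONCURRENT_STREAMS = 0x3
SETTINGS_INITIAL_WINDOW_SIZE = 0x4
SETTINGS_MAX_HEADER_LIST_SIZE = 0x6
SETTINGS_FRAME_TYPE = 0x4

def encode_settings_frame(settings):
    """Encode {identifier: value} into a SETTINGS frame on stream 0."""
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in sorted(settings.items()))
    length = len(payload)
    # Frame header: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id.
    header = struct.pack("!BHBBI", (length >> 16) & 0xFF, length & 0xFFFF,
                         SETTINGS_FRAME_TYPE, 0, 0)
    return header + payload

# Example reduced parameters a server might push under a trigger condition:
reduced = {
    SETTINGS_MAX_CONCURRENT_STREAMS: 16,   # reduce stream concurrency
    SETTINGS_INITIAL_WINDOW_SIZE: 16384,   # limit initial request data
    SETTINGS_MAX_HEADER_LIST_SIZE: 8192,   # limit request header size
}
frame = encode_settings_frame(reduced)
```

The same frame, with different values, would also establish the restricted parameters on new connections.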
As another example, if a certain zone has become the target of a selective attack, the HTTP connection manager124may perform the action(s) only for HTTP/2 and/or HTTP/3 connections associated with that zone. As another example, if the trigger condition is specific to a particular client or request (e.g., the threat score of the request and/or client is above a threshold, a malicious client is detected, malicious behavior of the client is detected, protocol misuse by the client is detected), the HTTP connection manager124may perform the action(s) only for the HTTP/2 and/or HTTP/3 connections specific to that client or request. When the trigger condition no longer exists, the HTTP connection manager124may revert the modifications. As an example, if the trigger condition was that the server CPU usage was greater than a threshold and the server CPU usage fell below that threshold, the HTTP connection manager124may revert the modifications it made to the HTTP/2 and/or HTTP/3 connections. In an embodiment, the HTTP connection manager124supports different classes or profiles of parameter configuration depending on the lifecycle of the trigger condition and the actions, and the different classes or profiles could be phased out differently. For example, for connection resource parameter(s) that applied to new connections while a condition trigger was active (e.g., stream concurrency of HTTP/2 connections, initial window size of HTTP/2 connections, and/or header list size of HTTP/2 connections), the HTTP connection manager124may not revert those connection resource parameter(s) for those HTTP connections but otherwise revert the connection resource parameter(s) for other HTTP connections (e.g., those HTTP connections that were opened before the trigger condition, new HTTP connections).
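Scoping the action(s) to selected connections, for example by account type or zone, amounts to a filter over connection records. A minimal sketch, where the connection record fields, account types, and zone names are all hypothetical:

```python
# Illustrative sketch of selecting which connections a modification action
# applies to. Record fields and scope values are made-up examples.

def select_connections(connections, policy_scope):
    """Return the connections the modification action(s) should apply to."""
    selected = []
    for conn in connections:
        if conn["account_type"] in policy_scope.get("account_types", set()):
            selected.append(conn)
        elif conn["zone"] in policy_scope.get("zones", set()):
            selected.append(conn)
    return selected

connections = [
    {"id": 1, "account_type": "personal", "zone": "example.com"},
    {"id": 2, "account_type": "enterprise", "zone": "example.org"},
    {"id": 3, "account_type": "small_business", "zone": "example.net"},
]
# Apply actions to personal/small-business accounts but not enterprise ones.
scope = {"account_types": {"personal", "small_business"}, "zones": set()}
targets = select_connections(connections, scope)
```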
As another example, after a trigger condition no longer exists, the HTTP connection manager124transitions to a post-trigger class of connection resource parameters that fall between the original connection resource parameters and the modified connection resource parameters. This can help alleviate cases where an attack is periodically turned on and off. Although this description refers to dynamically modifying HTTP/2 and/or HTTP/3 connections, in an embodiment, the HTTP server120can dynamically control TCP window sizes to control data receive rates or send rates on a single HTTP/1.1 connection. FIG.2is a flow diagram that illustrates exemplary operations for dynamically modifying an HTTP connection according to an embodiment. The operations ofFIG.2are described with respect to the exemplary embodiment ofFIG.1. However, the operations ofFIG.2can be performed by different embodiments from that ofFIG.1, and the embodiment ofFIG.1can perform different operations from that ofFIG.2. At operation210, a determination is made that a condition exists that triggers the HTTP server120to modify an HTTP connection (e.g., an HTTP/2 and/or HTTP/3 connection). The condition detector122may detect whether the condition exists. For instance, the trigger condition(s) may be a hardware resource condition of the server, a characteristic of an HTTP request, a threat score of the request and/or client being above a threshold, a malicious client detected, malicious behavior detected, and/or detected protocol misuse, as described herein. The trigger conditions may be defined in the HTTP connection modification policy126. Next, at operation215, the HTTP server120dynamically modifies one or more HTTP connections. For instance, the HTTP connection manager124may dynamically modify one or more HTTP connections. The HTTP connection manager124may use the HTTP connection modification policy126for determining the modification(s) to make.
The modification may include dynamically modifying HTTP connection resource parameter(s) for HTTP connection(s) and/or modifying one or more runtime behaviors for the HTTP connection(s). Dynamically modifying HTTP connection resource parameters may include signaling modified connection resource parameters for existing HTTP connections (e.g., existing HTTP/2 connections) and/or setting new connections with modified connection resource parameter(s) (e.g., new HTTP/2 connections and/or new HTTP/3 connections). The modified connection resource parameter(s) may reduce or restrict properties of the HTTP connection. The connection resource parameters can include a setting (sometimes known as a settings parameter) as defined in HTTP/2 and typically carried in a SETTINGS frame, a QUIC transport parameter used as defined in HTTP/3, a setting (sometimes known as a settings parameter) as defined in HTTP/3 and typically carried in a SETTINGS frame, and/or other frames that control runtime limits such as the MAX_STREAMS frame defined in QUIC. As examples, the HTTP server120may perform one or more of the following for existing HTTP/2 connections and/or to apply to new HTTP/2 connections: dynamically reduce the maximum concurrent streams that the HTTP client can open; dynamically change the initial window size to limit the initial amount of data that a client can send as part of the request; dynamically change header list sizes to limit the size of request headers that the client can send to the HTTP server; and dynamically control the resources for dynamic decompression of client requests.
As another example, the HTTP server120may perform one or more of the following for new HTTP/3 connections: dynamically reducing the value for the max idle timeout transport parameter; dynamically tuning max UDP payload sizes; dynamically reducing the initial flow control limit for peer-initiated bidirectional streams; dynamically reducing the initial flow control limit for unidirectional streams; dynamically reducing the initial maximum number of bidirectional streams the client is permitted to initiate; dynamically reducing the initial maximum number of unidirectional streams the client is permitted to initiate; and dynamically controlling the resources for dynamic decompression of client requests. Stream concurrency of HTTP/2 connections is managed by the setting SETTINGS_MAX_CONCURRENT_STREAMS. To dynamically reduce the maximum concurrent streams that the HTTP client can open, the HTTP server120can send a reduced value for this setting to clients on new HTTP/2 connections and/or send a SETTINGS frame for existing HTTP/2 connections with a reduced value for this setting. Alternatively, the HTTP server120can operate a virtual concurrency limit while not communicating the value of this limit. This virtual concurrency limit can be enforced by rejecting streams (e.g., using RST_STREAM or serving with an HTTP response such as 5xx) that cause the limit to be met or exceeded. The initial window size of HTTP/2 connections is managed by the setting SETTINGS_INITIAL_WINDOW_SIZE. To dynamically change the initial window size to limit the initial amount of data that a client can send as part of the request, the HTTP server120can send a reduced value for this setting to clients on new HTTP/2 connections and/or send a SETTINGS frame for existing HTTP/2 connections with a reduced value for this setting. The header list size of HTTP/2 connections is managed by the setting SETTINGS_MAX_HEADER_LIST_SIZE.
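The virtual concurrency limit alternative described above, enforcing a lower cap without advertising it, can be sketched as a small stream-tracking structure. The class and method names are illustrative; "reject" stands in for resetting the stream or serving a 5xx response:

```python
# Sketch of a virtual concurrency limit: the server enforces a reduced
# stream-concurrency cap without communicating it in SETTINGS, rejecting
# streams that would exceed the cap. Names here are illustrative.

class VirtualConcurrencyLimiter:
    def __init__(self, limit):
        self.limit = limit
        self.open_streams = set()

    def on_stream_open(self, stream_id):
        """Return 'accept' or 'reject' (e.g., RST_STREAM or a 5xx response)."""
        if len(self.open_streams) >= self.limit:
            return "reject"
        self.open_streams.add(stream_id)
        return "accept"

    def on_stream_close(self, stream_id):
        self.open_streams.discard(stream_id)
```

The same shape would also work for the virtual max header list size limit described below, with the check applied to decoded header sizes instead of stream counts.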
To dynamically change the header list sizes to limit the size of request headers that the client can send to the HTTP server120, the HTTP server120can send a reduced value for this setting to clients on new HTTP/2 connections and/or send a SETTINGS frame for existing HTTP/2 connections with a reduced value for this setting. Alternatively, the HTTP server120can operate a virtual max header list size limit while not communicating the value of this limit. This virtual max header list size limit can be enforced by rejecting streams (e.g., using RST_STREAM or serving with an HTTP response such as 5xx) that cause the limit to be met or exceeded. In all approaches, the size of request headers that can be transmitted to the HTTP server120by the HTTP client can be limited. Dynamic decompression of client requests of HTTP/2 connections is managed by the setting SETTINGS_HEADER_TABLE_SIZE. This setting notifies the client of the size of the table supported by the HTTP server120. A zero size disables the dynamic decompression. To dynamically change the dynamic decompression, the HTTP server120can send a reduced value for this setting to clients on new HTTP/2 connections and/or send a SETTINGS frame for existing HTTP/2 connections with a reduced value for this setting. The idle timeout for HTTP/3 connections is managed by the max idle timeout transport parameter. The HTTP server120can reduce the value for this transport parameter for new HTTP/3 connections to reduce resource burdens by shedding clients earlier. The maximum UDP payload size for HTTP/3 connections is managed by the max_udp_payload_size transport parameter. This parameter limits the size of UDP payloads that the HTTP server is willing to receive, and payloads that are larger than this limit may not be processed by the HTTP server120.
The HTTP server120can dynamically tune the value of this transport parameter up or down for new HTTP/3 connections to alter the workload of the services which may lead to reduced resource usage. The initial flow control limit for peer-initiated bidirectional streams for HTTP/3 connections is managed by the initial_max_stream_data_bidi_remote transport parameter. The HTTP server120can dynamically reduce the value for this transport parameter for new HTTP/3 connections to limit the initial amount of data that an HTTP client can send on request streams. The initial flow control limit for unidirectional streams for HTTP/3 connections is managed by the initial_max_stream_data_uni transport parameter. The HTTP server120can dynamically reduce this value for new HTTP/3 connections to limit the initial amount of data that an HTTP client can send on unidirectional streams. These unidirectional streams can be special cases for new use cases or benign “grease streams” that exercise extension mechanisms. A grease stream uses a range of values for the stream type that are reserved for the purpose of exercising unidirectional streams. New use cases can be an HTTP/3 extension that has a need to send data unidirectionally such as the WebTransport protocol. The initial maximum number of bidirectional streams the client is permitted to initiate for an HTTP/3 connection is managed by the initial_max_streams_bidi transport parameter. The HTTP server120can dynamically reduce this value for new HTTP/3 connections to limit the initial bidirectional stream concurrency. The initial maximum number of unidirectional streams the client is permitted to initiate for an HTTP/3 connection is managed by the initial_max_streams_uni transport parameter. The HTTP server120can dynamically reduce this value for new HTTP/3 connections to limit the initial unidirectional stream concurrency. Dynamic decompression of client requests of HTTP/3 connections is managed by QPACK_MAX_TABLE_CAPACITY. 
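The HTTP/3 transport parameters discussed above can be viewed as a default profile that is scaled down into a restricted profile for connections opened while a trigger condition is active. The parameter names below follow the QUIC naming used in the text, but every numeric value and the uniform scale factor are assumptions made for this sketch, not recommendations:

```python
# Illustrative default vs. restricted QUIC transport parameter profiles for
# new HTTP/3 connections. All numeric values and the scale factor are
# assumed examples.

DEFAULT_TRANSPORT_PARAMS = {
    "max_idle_timeout": 30_000,                       # milliseconds
    "max_udp_payload_size": 1_452,                    # bytes
    "initial_max_stream_data_bidi_remote": 1_048_576, # bytes
    "initial_max_stream_data_uni": 262_144,           # bytes
    "initial_max_streams_bidi": 100,
    "initial_max_streams_uni": 100,
}

def restricted_transport_params(defaults, scale=0.25, floor=1):
    """Scale each parameter down for connections opened under a trigger condition."""
    return {name: max(floor, int(value * scale))
            for name, value in defaults.items()}

restricted = restricted_transport_params(DEFAULT_TRANSPORT_PARAMS)
```

In practice each parameter might be tuned individually (the text notes, for example, that max_udp_payload_size may be tuned up or down); a uniform scale is used here only to keep the sketch short.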
This value notifies the client of the size of the table supported by the HTTP server120. A zero size disables the dynamic decompression. To dynamically change the dynamic decompression, the HTTP server120can send a reduced value for this setting to clients on new HTTP/3 connections. The change that affects one or more HTTP connections may include changing one or more runtime behaviors for the HTTP connection(s). The one or more runtime behaviors include: limiting or stopping the use of server push; stopping use or limiting the number of hinted link headers included in a 103 Early Hints response; limiting runtime flow control; using a concurrent response prioritization method that is tuned towards lower resource usage; adjustment of denial-of-service (DoS) mitigation thresholds; modifying the TCP congestion control algorithm or any parameter used to send data to the client; limiting or stopping using dynamic header compression; controlling idle timeouts; and/or limiting or stopping the use of HTTP/2 and/or HTTP/3 extensions such as frame-based extensions (e.g., ORIGIN frame) or the WebTransport protocol. Regarding server push, if server push is implemented by the HTTP server120(e.g., as described in section 8.4 of RFC 9113 and section 4.6 of RFC 9114), the HTTP server120can stop or limit the use of server push even if a connection supports it (e.g., limit the number of pushes or send no pushes). Regarding 103 Early Hints, if 103 Early Hints is implemented by the HTTP server (e.g., as described in RFC 8297), the HTTP server120can stop issuing the 103 Early Hints or limit the number of hinted link headers in a 103 Early Hints response. Regarding limiting runtime flow control, the HTTP server120can tune the window credits given to clients (e.g., WINDOW_UPDATE frames in HTTP/2 and MAX_DATA and MAX_STREAM_DATA frames in HTTP/3) to limit the data rate of client upload requests.
Regarding the concurrent response prioritization method, the HTTP server120can use a concurrent response prioritization method that is tuned towards lower resource usage, as compared to normal circumstances where such a prioritization scheme may be tuned towards fast web site loading. Regarding adjusting DoS mitigation thresholds, the HTTP server120can adjust a DoS mitigation threshold such that some clients are rejected or restricted. Regarding modifying the TCP congestion control algorithm or other TCP related resources, the HTTP server120can modify the TCP congestion control algorithm to slow down the sending rate to the client to reduce egress bandwidth and potentially reduce the rate of inbound requests due to responses taking longer to complete. Regarding limiting or stopping using dynamic header compression, the HTTP server120can stop compressing headers or use a less intensive compression. Regarding controlling idle timeouts, the HTTP server120can reduce the timeout periods for various messages and events (e.g., the period between receiving HTTP headers and HTTP body, the period between chunks of HTTP body). Regarding limiting or stopping the use of HTTP/2 and/or HTTP/3 extensions, the HTTP server120can stop these extensions or limit their use to reduce workload (these extensions may require additional work compared to conventional HTTP), to force clients to use certain protocols, and/or to react to targeted attacks at particular extensions. In an embodiment, the HTTP server120makes the change(s) for all HTTP/2 and/or HTTP/3 connections. In another embodiment, the HTTP server120makes the change(s) only for selected HTTP/2 and/or HTTP/3 connections. For instance, the HTTP server may make the change(s) only for HTTP/2 and/or HTTP/3 connections for specific zones, specific accounts, specific account types, and/or specific HTTP clients.
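The runtime flow control limiting described above, granting smaller window credits to slow client uploads, can be sketched as a credit-granting decision. The normal and restricted credit sizes below are assumed example values; the only design constraint taken from flow control generally is that the server should not grant back more credit than the client actually consumed:

```python
# Hedged sketch of tuning window credits (e.g., WINDOW_UPDATE in HTTP/2,
# MAX_STREAM_DATA in HTTP/3) under a trigger condition. Credit sizes are
# illustrative assumptions.

def next_window_credit(bytes_consumed, trigger_active,
                       normal_credit=65_535, restricted_credit=8_192):
    """Decide how many bytes of window credit to grant back to the client."""
    credit = restricted_credit if trigger_active else normal_credit
    # Never grant more than was actually consumed, keeping the client's
    # effective window bounded at the chosen size.
    return min(credit, bytes_consumed)
```

With the restricted credit, a client uploading a large request body receives credit back in smaller increments, which caps its upload data rate as described above.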
As an example, the HTTP server120may make the change(s) for personal or small business accounts and may not make the change(s) for enterprise accounts. As another example, if a certain zone has become the target of a selective attack, the HTTP server120may make the change(s) for the HTTP/2 and/or HTTP/3 connections for that zone. As another example, if the trigger condition is specific to a particular client or request (e.g., the threat score of the request and/or client is above a threshold, behavior of the client, protocol misuse by the client), the HTTP server120may make the change(s) only for the HTTP connections specific to that client or request. Next, at operation220, which may not be performed in all embodiments, the HTTP server120monitors the protocol behavior of HTTP clients that connect through the modified HTTP/2 and/or HTTP/3 connections. The monitoring depends on the change that was made to the HTTP connection. For example, if the change is to reduce the number of concurrent streams, the HTTP server monitors the number of concurrent streams. If the change is to the initial window size, the HTTP server monitors the initial window size. Next, at operation225, which also may not be performed in all embodiments, the HTTP server120determines whether the HTTP client is complying with the modifications. If the HTTP client is not complying with the modifications, then operation230is performed where the HTTP server120takes one or more mitigation actions. The one or more mitigation actions may include closing the HTTP connection or applying rate limiting to the HTTP client. If the client reconnects after closing the HTTP connection, the HTTP server120may apply rate limiting to the HTTP client. As an example, if the modification is to reduce the number of concurrent streams and the HTTP client is not complying with the modification, the HTTP server120may terminate the connection with an error message. 
For an existing HTTP connection, complying with the modification may not happen immediately. For instance, if the change is on an existing HTTP/2 connection and is to reduce the maximum number of concurrent streams, the expectation is that the HTTP client will reduce the number of concurrent streams until the target value is hit but does not need to close streams prematurely. If the HTTP client is complying with the modifications, then flow moves back to operation220. The operations220,225, and230may not be performed if the change does not require the HTTP client to make a modification or otherwise require the HTTP client to comply with the desired protocol behavior. For instance, a change that only affects the runtime behavior of an HTTP connection at the server does not need to be monitored. As an example, if the change includes stopping the use of server push or 103 Early Hints, the HTTP server does not need to monitor the client for the stopping of the server push or 103 Early Hints. FIG.3is a block diagram that illustrates exemplary operations for reverting an HTTP connection back to a default state according to an embodiment. The operations ofFIG.3are described with respect to the exemplary embodiment ofFIG.1. However, the operations ofFIG.3can be performed by different embodiments from that ofFIG.1, and the embodiment ofFIG.1can perform different operations from that ofFIG.3. At operation310, a determination is made that the trigger condition no longer exists. The condition detector122may determine that the trigger condition no longer exists. For example, if the trigger condition was that the server CPU usage was greater than a threshold, the condition detector122determines when the server CPU usage falls below the threshold. The condition detector122may notify or instruct the HTTP connection manager124that a trigger condition no longer exists.
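The compliance monitoring of operations 220-230 can be sketched as a check over successive concurrency observations: a client on an existing connection is allowed time to drain down to the target rather than being required to close streams prematurely. The grace period and the specific mitigation chosen are illustrative assumptions:

```python
# Sketch of compliance monitoring for a reduced concurrent-stream limit.
# The grace_checks count and mitigation names are illustrative assumptions.

def check_compliance(observed_streams, target_limit, grace_checks=3):
    """Given successive concurrency observations, pick a mitigation action.

    Returns 'ok' if the client reached the target, 'wait' while it may
    still be converging, or 'close_connection' if it never complies.
    """
    if observed_streams and observed_streams[-1] <= target_limit:
        return "ok"
    # Existing connections need not close streams prematurely, so allow a
    # few observations for concurrency to drain down to the target.
    if len(observed_streams) < grace_checks:
        return "wait"
    return "close_connection"
```

As described above, rate limiting is an alternative mitigation to closing the connection, and may also be applied if the client reconnects after a close.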
At operation315, the HTTP server120(e.g., the HTTP connection manager124) reverts to the default HTTP connection properties (e.g., the properties for the HTTP connections prior to the modification). As an example, if the trigger condition was that the server CPU usage was greater than a threshold and the server CPU usage fell below that threshold, the HTTP connection manager124may revert the modifications it made to the HTTP/2 and/or HTTP/3 connections. In an embodiment, the HTTP connection manager124supports different classes or profiles of parameter configuration depending on the lifecycle of the trigger condition and the actions and the different classes or profiles could be phased out differently. For example, for connection resource parameter(s) that applied to new connections while a condition trigger was active (e.g., stream concurrency of HTTP/2 connections, initial window size of HTTP/2 connections, and/or header list size of HTTP/2 connections), the HTTP connection manager124may not revert those connection resource parameter(s) for those HTTP connections but otherwise revert the connection resource parameter(s) for other HTTP connections (e.g., those HTTP connections that were opened before the trigger condition, new HTTP connections). As another example, after a trigger condition no longer exists, the HTTP connection manager124transitions to a post-trigger class of connection resource parameters that fall between the original connection resource parameters and the modified connection resource parameters. FIG.4illustrates a block diagram for an exemplary data processing system400that may be used in some embodiments. One or more such data processing systems400may be utilized to implement the embodiments and operations described with respect to an HTTP client110and/or HTTP server120. Data processing system400includes a processing system420(e.g., one or more processors and connected system components such as multiple connected chips). 
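The post-trigger class of parameters described above, values that fall between the original and modified configurations, can be sketched as a blend of the two profiles. The midpoint blend factor and the parameter values are assumptions made for this example:

```python
# Sketch of computing a "post-trigger" parameter class between the modified
# and original profiles. The blend factor and values are assumed examples.

def post_trigger_params(original, modified, blend=0.5):
    """Blend each parameter between its modified and original values."""
    return {name: int(modified[name] + (original[name] - modified[name]) * blend)
            for name in original}

original = {"max_concurrent_streams": 128, "initial_window_size": 65_535}
modified = {"max_concurrent_streams": 16, "initial_window_size": 16_384}
post = post_trigger_params(original, modified)
```

Holding connections at such intermediate values for a while, instead of snapping straight back to the defaults, is what helps alleviate attacks that are periodically turned on and off.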
The data processing system400is an electronic device that stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media410(e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals—such as carrier waves, infrared signals), which is coupled to the processing system420. For example, the depicted machine-readable storage media410may store program code430that, when executed by the processor(s)420, causes the data processing system400to execute the condition detector122, the HTTP connection manager124, and/or any of the operations described herein. The data processing system400also includes one or more network interfaces440(e.g., wired and/or wireless interfaces) that allow the data processing system400to transmit data and receive data from other computing devices, typically across one or more networks (e.g., Local Area Networks (LANs), the Internet, etc.). The data processing system400may also include one or more input or output (“I/O”) components450such as a mouse, keypad, keyboard, a touch panel or a multi-touch input panel, camera, frame grabber, optical scanner, an audio input/output subsystem (which may include a microphone and/or a speaker), other known I/O devices or a combination of such I/O devices. Additional components, not shown, may also be part of the system400, and, in certain embodiments, fewer components than those shown inFIG.4may be used. One or more buses may be used to interconnect the various components shown inFIG.4.
The techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., a client device, a server). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. In the preceding description, numerous specific details are set forth to provide a more thorough understanding. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail to not obscure understanding. 
Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention. While the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
11943309

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

The subject matter disclosed herein relates to the distribution of notifications to a user based on the user's subscriptions to various notification categories. In some implementations, these notifications can be associated with notification categories that relate to a function performed by a device that sent the notification or a location of the device that sent the notification. FIG.1is a system diagram illustrating a computing landscape100within a healthcare environment such as a hospital. Various devices and systems, both local to the healthcare environment and remote from the healthcare environment, can interact via at least one computing network105. This computing network105can provide any form or medium of digital communication connectivity (i.e., wired or wireless) amongst the various devices and systems. Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. In some cases, one or more of the various devices and systems can interact directly via peer-to-peer coupling (either via a hardwired connection or via a wireless protocol such as Bluetooth or WiFi). In addition, in some variations, one or more of the devices and systems communicate via a cellular data network. In particular, aspects of the computing landscape100can be implemented in a computing system that includes a back-end component (e.g., as a data server110), or that includes a middleware component (e.g., an application server115), or that includes a front-end component (e.g., a client computer121having a graphical user interface or a Web browser through which a user may interact with an implementation of the subject matter described herein), or any combination of such back-end, middleware, or front-end components.
Clients121,122, and123and servers110and115are generally remote from each other and typically interact through the communications network105. The relationship of the clients121-123and servers110and115arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Clients121-123can be any of a variety of computing platforms that include local applications for providing various functionality within the healthcare environment. Example clients include, but are not limited to, desktop computers, laptop computers, tablets, and other computers with touch-screen interfaces. The local applications can be self-contained in that they do not require network connectivity and/or they can interact with one or more of the servers110and115(e.g., a web browser). A variety of applications can be executed on the various devices and systems within the computing landscape such as electronic health record applications, medical device monitoring, operation, and maintenance applications, scheduling applications, data set editor applications, billing applications, and the like. The network105can be coupled to one or more data storage systems125. The data storage systems125can include databases providing physical data storage within the healthcare environment or within a dedicated facility. In addition, or in the alternative, the data storage systems125can include cloud-based systems providing remote storage of data in, for example, a multi-tenant computing environment. The data storage systems125can also comprise non-transitory computer readable media. Mobile communications devices (MCDs)130can also form part of the computing landscape100. The MCDs130can communicate directly via the network105and/or they can communicate with the network105via an intermediate network such as a cellular data network135. 
Various types of communication protocols can be used by the MCDs130including, for example, messaging protocols such as SMS and MMS. Various types of medical devices140,141,142, and143can be used as part of the computing landscape100. These medical devices140-143can comprise, unless otherwise specified, any type of device or system with a communications interface that characterizes one or more physiological measurements of a patient and/or that characterizes treatment of a patient. In some cases, the medical devices140-143communicate via peer-to-peer wired or wireless communications with another medical device (as opposed to communicating with the network105). For example, the medical device140can comprise a bedside vital signs monitor that is connected to medical devices141and142, namely a wireless pulse oximeter and a wired blood pressure monitor. One or more operational parameters of the medical devices140-143can be locally controlled by a clinician, controlled remotely by a clinician via the network105, and/or they can be controlled by one or more of a server110and/or115, clients121-123, MCDs130, and/or another medical device. Application server115can run a system event notification application that distributes notifications sent from medical devices141-143, clients121-123, MCDs130, or backend server110to various devices connected to network105. These notifications can provide information relating to a function provided by a particular device or the status of the device. For example, a pharmacist can use one of client computers121-123to create or modify a data set using a data set editor application. This data set can contain device configurations, drug libraries, clinical advisories and other important information for medical devices140-143. The system event notification application can send data encapsulating a notification to various care providers to solicit comments or approval of the data set before the data set is deployed to medical devices140-143.
In another example, medical devices140-143can collect data characterizing one or more physiological measurements of a patient and/or treatment of a patient (e.g., medical devices140-143can be an infusion management system, etc.). Medical devices140-143can transmit data encapsulating a notification with these measurements to application server115which, in turn, can distribute data comprising this notification to a particular set of users. For example, medical device140can correspond to an infusion pump that infuses a patient with medication. As the amount of medication in the infusion pump depletes below a predetermined threshold, the infusion pump can send data encapsulating a notification to application server115to indicate that a new supply of medication is needed. Application server115can then distribute data comprising this notification to alert the appropriate personnel. Before notifications are distributed to users, the system event notification application can associate each notification with one or more notification categories. A notification category can be based on different characteristics associated with the notification including, for example, a function performed by the device that sent the notification. For example, the system event notification application can automatically associate messages sent by backend server110with an information technology (IT) notification category because the backend server performs IT related functions. If backend server110, for example, sends a notification to application server115that one of its storage devices has failed, then the system event notification application can associate the message with an IT notification category and distribute this notification to users having an IT notification category subscription.
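The infusion-pump threshold check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class, field, and threshold names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    device_id: str
    message: str

@dataclass
class InfusionPump:
    device_id: str
    medication_ml: float
    threshold_ml: float = 50.0  # predetermined threshold (assumed value)

    def check_supply(self):
        # Emit a notification only when the supply depletes below the threshold
        if self.medication_ml < self.threshold_ml:
            return Notification(self.device_id, "new supply of medication needed")
        return None

pump = InfusionPump("pump-140", medication_ml=20.0)
notification = pump.check_supply()
```

In a fuller sketch, the returned notification would be transmitted to the application server, which would then distribute it to the appropriate personnel.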
In another example, the system event notification application can automatically associate notifications received from a ventilator or electrocardiogram (ECG) machine (e.g., medical device140) with a cardiopulmonary notification category because the ventilator and ECG machine perform cardiopulmonary functions. As application server115receives data measurements from either the ventilator or ECG machine, it can transmit notifications with these measurements to users having a cardiopulmonary notification category subscription. Alternatively or additionally, the notification category can be based on a location of the device that sent the notification. If, for example, the ECG machine described above is used in the intensive care unit of a hospital, the system event notification application can associate messages sent from the ECG with an intensive care unit notification category. The system event notification application can then transmit notifications from the ECG in the intensive care unit to users having a subscription to an intensive care unit notification category. The association process can be based on other notification categories including, for example, a caregiver team (which associates messages from a particular device with a notification category that identifies the caregivers that monitor the device), configuration settings (which associates messages from a device operating outside of preset guardrails with a notification category that identifies the members of a configuration settings group), and the like. As described above, the system event notification application can distribute data that includes these notifications to users based on the user's subscriptions to one or more notification categories. These subscriptions can be based on a role performed by the user and can be automatically assigned to a user based on the user's position within a hospital or a department that the user belongs to.
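The function- and location-based association just described can be illustrated with a small sketch. The category strings follow the examples in the text, but the mapping table and function names are assumptions.

```python
# Map device function to a notification category, as in the ventilator/ECG
# and backend-server examples above.
FUNCTION_CATEGORIES = {
    "ventilator": "cardiopulmonary",
    "ecg": "cardiopulmonary",
    "backend server": "IT",
}

def categorize(device_type, location=None):
    # Function-based category first, then an optional location-based category
    categories = []
    if device_type in FUNCTION_CATEGORIES:
        categories.append(FUNCTION_CATEGORIES[device_type])
    if location is not None:
        categories.append(location)
    return categories

ecg_categories = categorize("ecg", location="intensive care unit")
```

An ECG in the intensive care unit would thus be associated with both a cardiopulmonary and an intensive care unit notification category.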
Each role can be a container for various permissions associated with the user. These permissions can designate the types of notifications that a user can receive. The system event notification application can be configured to retrieve position and department information and corresponding user permissions from an employee database. For example, a nurse belonging to the emergency room department of a hospital can be automatically subscribed to a nursing staff notification category and an emergency room notification category. If the nurse leaves the emergency room to work in the intensive care unit, the system event notification program can automatically update the nurse's subscriptions. During this update process, the nurse can be automatically unsubscribed from the emergency room notification category and automatically subscribed to the intensive care unit notification category. In some implementations, the nurse can unsubscribe himself/herself from a notification category by sending a removal request to the system event notification application. The system event notification application can process the removal request and delete the corresponding subscription. FIG.2illustrates a table200that can be maintained by the system event notification application and stored at data storage systems125. Table200can identify different notification categories (in column205) used by the system event notification application and the users subscribed to each notification category (in column210). The system event notification application can update table200as users are subscribed and unsubscribed to different notification categories as described above. In addition, the system event notification application can use table200to determine which users to send a notification to.
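The automatic subscription update for the nurse moving from the emergency room to the intensive care unit can be sketched as below. The role and department tables are assumptions made for illustration.

```python
# Assumed mappings from role and department to notification categories
ROLE_CATEGORIES = {"nurse": ["nursing staff"]}
DEPARTMENT_CATEGORIES = {
    "emergency room": ["emergency room"],
    "intensive care unit": ["intensive care unit"],
}

def subscriptions_for(role, department):
    return ROLE_CATEGORIES.get(role, []) + DEPARTMENT_CATEGORIES.get(department, [])

def transfer(subscriptions, old_department, new_department):
    # Unsubscribe from the old department's categories, subscribe to the new one's
    kept = [s for s in subscriptions
            if s not in DEPARTMENT_CATEGORIES.get(old_department, [])]
    return kept + DEPARTMENT_CATEGORIES.get(new_department, [])

subs = subscriptions_for("nurse", "emergency room")
subs = transfer(subs, "emergency room", "intensive care unit")
```

The role-based subscription (nursing staff) survives the transfer; only the department-based subscription changes.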
If, for example, the system event notification application associates a received notification with an emergency room notification category, the application can refer to rows215of table200to identify the users that have a subscription to this notification category (i.e., user1, user3, and user4) and distribute the notification to these subscribed users. As illustrated in table200, some users can have multiple subscriptions. For example, as indicated by rows220, user4can receive notifications associated with emergency room, intensive care unit, and data set review notification categories. Once the system event notification application has identified which users should receive a particular notification, it can distribute data that includes the notification to these users. The system event notification application can distribute these notifications in their original form or modify them to include less information or additional information. The additional information can include, for example, the date/time that the application server received the notification from the device, and the like. Distribution can occur via different modalities. For example, the system event notification application can send data including the notification to an address associated with the user (e.g., an e-mail address or a text message phone number). In some implementations, these notifications can be displayed on a user's notifications page300as illustrated inFIG.3. A user can access his/her notification page by logging onto the system event notification application using, for example, computer clients121-123or MCDs130. Notifications page300can display all of the notifications for user4from table200. For each of these notifications, notifications page300can display the notification category (in column305), a status associated with the notification (in column310), and the date/time that the notification was distributed from application server115(in column315). 
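The table lookup and distribution step can be sketched with a dictionary standing in for table 200. The emergency room row mirrors the users named in the text; the remaining rows and the `send` callback are assumptions.

```python
# Stand-in for table 200: notification category -> subscribed users
SUBSCRIPTIONS = {
    "emergency room": ["user1", "user3", "user4"],
    "intensive care unit": ["user4"],
    "data set review": ["user4"],
}

def distribute(notification, category, send):
    # Deliver the notification to every user subscribed to its category
    for user in SUBSCRIPTIONS.get(category, []):
        send(user, notification)

delivered = []
distribute("infusion pump A needs a refill", "emergency room",
           lambda user, note: delivered.append(user))
```

An emergency room notification thus reaches user1, user3, and user4, matching rows 215 of table 200.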
In some implementations, column315can also display the date/time that application server115received the notification from the originating device. As explained above with respect toFIG.2, user4can have subscriptions to emergency room, intensive care unit, and data set review notification categories, and these notifications can be displayed at rows320,325, and330, respectively. The status description (in column310) can provide information regarding the device that sent the notification. This information can include, for example, whether the device is connected to network105, when the device is turned on, when the device is turned off, and the like. The status description (in column310) can also indicate whether a notification requires user action. For example, the status associated with notification320can indicate that infusion pump A in the emergency room needs to be refilled with medication. This notification can appear on the notifications page of all users that have an emergency room notification category subscription. Once infusion pump A has been refilled, the pump can report its status to application server115. Upon receiving this status message, the system event notification application can delete the original notification from the notification page of the designated subscribers and/or send a follow-up notification to the subscribers to indicate that action is no longer required. In another example, the status associated with notification330can indicate that data set C is awaiting review by user4. This notification can appear when an entity designates user4as a reviewer for data set C and can remain on notifications page300until user4completes the review. A user can customize his/her notifications page by grouping notifications according to their priority levels. This feature can be useful, for example, when a user has subscriptions to numerous notification categories and wants to review notifications from the most important categories first. 
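The clean-up behavior described above (deleting the original notification and optionally posting a follow-up once the pump reports it has been refilled) can be sketched as follows; the page structure and identifiers are assumptions.

```python
# Assumed notification pages for the subscribers of the refill notification
pages = {
    "user1": [{"id": "refill-pump-a", "status": "action required"}],
    "user4": [{"id": "refill-pump-a", "status": "action required"}],
}

def resolve(notification_id, follow_up=None):
    # Delete the original notification from every page and, optionally,
    # append a follow-up indicating that action is no longer required.
    for page in pages.values():
        page[:] = [n for n in page if n["id"] != notification_id]
        if follow_up is not None:
            page.append({"id": notification_id, "status": follow_up})

resolve("refill-pump-a", follow_up="no action required")
```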
The user can designate the relative importance of different notification categories by assigning a priority level to each category. A user can, for example, use window400ofFIG.4Ato assign a high priority level405or a low priority level410to a notification category. Window400can appear when a user, for example, clicks on or hovers over a notification category305on notifications page300. In the example ofFIG.3, user4can use window400to assign a high priority level to the intensive care unit notification category and a low priority level to the emergency room and data set review notification categories. Although the implementation ofFIG.4Aillustrates two priority levels (i.e., low and high), any number of priority levels can be used (e.g., low, medium, high, and urgent). As a user assigns priority levels to each notification category, the system event notification program can modify the appearance of notifications page300. Based on the priority levels assigned by user4, notifications window450inFIG.4Bcan display notifications in a high priority notification category (i.e., intensive care unit) before notifications in a low priority notification category (i.e., emergency room and data set review). FIG.5illustrates a flowchart500for distributing notifications to a user based on the user's subscriptions to various notification categories. At505, application server115can receive via a network data encapsulating a notification from a device connected to the network. The device can be configured to provide a health related treatment for a patient and can be, for example, any one of medical devices140-143, clients121-123, MCDs130, or backend server110. In some implementations, the device can be a computing device that hosts a data set editor application that generates data sets to be used by medical devices140-143. The data encapsulating the notification sent by the device can provide information relating to the status of the device. 
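The priority-ordered display of FIG. 4B can be sketched as a stable sort over the notifications page; the numeric ranks per level are assumptions.

```python
# Assumed numeric rank per priority level (smaller sorts first)
PRIORITY_RANK = {"high": 0, "low": 1}

def order_page(notifications, category_priority):
    # Unassigned categories default to low; Python's stable sort preserves
    # the original order within each priority group.
    return sorted(notifications,
                  key=lambda n: PRIORITY_RANK[category_priority.get(n["category"], "low")])

page = [
    {"category": "emergency room"},
    {"category": "intensive care unit"},
    {"category": "data set review"},
]
ordered = order_page(page, {"intensive care unit": "high"})
```

With user4's assignments, the intensive care unit notifications move to the top while the two low-priority categories keep their relative order.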
At510, the system event notification application running on application server115can associate the notification received at505with one or more notification categories. Notification categories can be associated with a function performed by the device that sent the notification. In some implementations, notification categories can be associated with a location of the device that sent the notification. At515, the system event notification application can access a table of notification categories and subscribed users. This table can be stored at data storage systems125and can identify which users are subscribed to a particular notification category. The system event notification application can automatically assign subscriptions to users based on one or more roles associated with the user. Each of these roles can be associated with various permissions that designate the type of notifications that a user can receive. This role can be based on, for example, the user's position or a department that the user is a member of. At520, the system event notification application can identify which users to distribute the notification to. A user can be eligible to receive a notification if he/she has a subscription to the notification category associated with the notification received at505. The system event notification application can determine whether there is a notification category—user subscription match by referring to the table accessed at515. If there is a match, then the system event notification application can distribute the notification to the user. At525, the system event notification application can distribute data comprising the notification to the users identified at520. In some implementations, the notification can be loaded onto a notifications page associated with the user or sent to an address associated with the user (e.g., an e-mail address or a text message phone number).
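The steps of flowchart 500 can be sketched end to end: receive (505), associate with categories (510), consult the subscription table (515), identify eligible users (520), and distribute (525). All names and the sample table below are illustrative, not from the patent.

```python
def handle_notification(notification, categorize, table, send):
    categories = categorize(notification)                      # 510
    recipients = {user                                         # 515/520
                  for category in categories
                  for user in table.get(category, [])}
    for user in sorted(recipients):                            # 525
        send(user, notification)
    return recipients

delivered = []
recipients = handle_notification(
    {"device": "ecg", "location": "intensive care unit"},      # 505
    lambda n: ["cardiopulmonary", n["location"]],
    {"cardiopulmonary": ["user2"], "intensive care unit": ["user4"]},
    lambda user, note: delivered.append(user),
)
```

Using a set for recipients ensures a user subscribed to several matching categories receives the notification only once.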
In some implementations, the user can assign a priority level to each notification category, and the notifications page can display notifications in accordance with these assigned priority levels. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device (e.g., mouse, touch screen, etc.), and at least one output device. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. 
Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flow(s) depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.
DETAILED DESCRIPTION Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. This description is not intended as an extensive or detailed discussion of known concepts. Details that are known generally to those of ordinary skill in the relevant art may have been omitted, or may be handled in summary fashion. The following subject matter may be embodied in a variety of different forms, such as methods, devices, components, and/or systems. Accordingly, this subject matter is not intended to be construed as limited to any example embodiments set forth herein. Rather, example embodiments are provided merely to be illustrative. Such embodiments may, for example, take the form of hardware, software, firmware or any combination thereof. 1. Computing Scenario The following provides a discussion of some types of computing scenarios in which the disclosed subject matter may be utilized and/or implemented. 1.1. Networking FIG.1is an interaction diagram of a scenario100illustrating a service102provided by a set of servers104to a set of client devices110via various types of networks. The servers104and/or client devices110may be capable of transmitting, receiving, processing, and/or storing many types of signals, such as in memory as physical memory states. The servers104of the service102may be internally connected via a local area network106(LAN), such as a wired network where network adapters on the respective servers104are interconnected via cables (e.g., coaxial and/or fiber optic cabling), and may be connected in various topologies (e.g., buses, token rings, meshes, and/or trees). The servers104may be interconnected directly, or through one or more other networking devices, such as routers, switches, and/or repeaters.
The servers104may utilize a variety of physical networking protocols (e.g., Ethernet and/or Fiber Channel) and/or logical networking protocols (e.g., variants of an Internet Protocol (IP), a Transmission Control Protocol (TCP), and/or a User Datagram Protocol (UDP)). The local area network106may include, e.g., analog telephone lines, such as a twisted wire pair, a coaxial cable, full or fractional digital lines including T1, T2, T3, or T4 type lines, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communication links or channels, such as may be known to those skilled in the art. The local area network106may be organized according to one or more network architectures, such as server/client, peer-to-peer, and/or mesh architectures, and/or a variety of roles, such as administrative servers, authentication servers, security monitor servers, data stores for objects such as files and databases, business logic servers, time synchronization servers, and/or front-end servers providing a user-facing interface for the service102. Likewise, the local area network106may comprise one or more sub-networks, such as may employ differing architectures, may be compliant or compatible with differing protocols and/or may interoperate within the local area network106. Additionally, a variety of local area networks106may be interconnected; e.g., a router may provide a link between otherwise separate and independent local area networks106. In the scenario100ofFIG.1, the local area network106of the service102is connected to a wide area network108(WAN) that allows the service102to exchange data with other services102and/or client devices110. The wide area network108may encompass various combinations of devices with varying levels of distribution and exposure, such as a public wide-area network (e.g., the Internet) and/or a private network (e.g., a virtual private network (VPN) of a distributed enterprise).
In the scenario100ofFIG.1, the service102may be accessed via the wide area network108by a user112of one or more client devices110, such as a portable media player (e.g., an electronic text reader, an audio device, or a portable gaming, exercise, or navigation device); a portable communication device (e.g., a camera, a phone, a wearable or a text chatting device); a workstation; and/or a laptop form factor computer. The respective client devices110may communicate with the service102via various connections to the wide area network108. As a first such example, one or more client devices110may comprise a cellular communicator and may communicate with the service102by connecting to the wide area network108via a wireless local area network106provided by a cellular provider. As a second such example, one or more client devices110may communicate with the service102by connecting to the wide area network108via a wireless local area network106provided by a location such as the user's home or workplace (e.g., a WiFi (Institute of Electrical and Electronics Engineers (IEEE) Standard 802.11) network or a Bluetooth (IEEE Standard 802.15.1) personal area network). In this manner, the servers104and the client devices110may communicate over various types of networks. Other types of networks that may be accessed by the servers104and/or client devices110include mass storage, such as network attached storage (NAS), a storage area network (SAN), or other forms of computer or machine readable media. 1.2. Server Configuration FIG.2presents a schematic architecture diagram200of a server104that may utilize at least a portion of the techniques provided herein. Such a server104may vary widely in configuration or capabilities, alone or in conjunction with other servers, in order to provide a service such as the service102. The server104may comprise one or more processors210that process instructions. 
The one or more processors210may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. The server104may comprise memory202storing various forms of applications, such as an operating system204; one or more server applications206, such as a hypertext transport protocol (HTTP) server, a file transfer protocol (FTP) server, or a simple mail transport protocol (SMTP) server; and/or various forms of data, such as a database208or a file system. The server104may comprise a variety of peripheral components, such as a wired and/or wireless network adapter214connectible to a local area network and/or wide area network; one or more storage components216, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader. The server104may comprise a mainboard featuring one or more communication buses212that interconnect the processor210, the memory202, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; a Universal Serial Bus (USB) protocol; and/or a Small Computer System Interface (SCSI) bus protocol. In a multibus scenario, a communication bus212may interconnect the server104with at least one other server. Other components that may optionally be included with the server104(though not shown in the schematic diagram200ofFIG.2) include a display; a display adapter, such as a graphical processing unit (GPU); input peripherals, such as a keyboard and/or mouse; and a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the server104to a state of readiness. The server104may operate in various physical enclosures, such as a desktop or tower, and/or may be integrated with a display as an “all-in-one” device.
The server104may be mounted horizontally and/or in a cabinet or rack, and/or may simply comprise an interconnected set of components. The server104may comprise a dedicated and/or shared power supply218that supplies and/or regulates power for the other components. The server104may provide power to and/or receive power from another server and/or other devices. The server104may comprise a shared and/or dedicated climate control unit220that regulates climate properties, such as temperature, humidity, and/or airflow. Many such servers104may be configured and/or adapted to utilize at least a portion of the techniques presented herein. 1.3. Client Device Configuration FIG.3presents a schematic architecture diagram300of a client device110whereupon at least a portion of the techniques presented herein may be implemented. Such a client device110may vary widely in configuration or capabilities, in order to provide a variety of functionality to a user such as the user112. The client device110may be provided in a variety of form factors, such as a desktop or tower workstation; an “all-in-one” device integrated with a display308; a laptop, tablet, convertible tablet, or palmtop device; a wearable device mountable in a headset, eyeglass, earpiece, and/or wristwatch, and/or integrated with an article of clothing; and/or a component of a piece of furniture, such as a tabletop, and/or of another device, such as a vehicle or residence. The client device110may serve the user in a variety of roles, such as a workstation, kiosk, media player, gaming device, and/or appliance. The client device110may comprise one or more processors310that process instructions. The one or more processors310may optionally include a plurality of cores; one or more coprocessors, such as a mathematics coprocessor or an integrated graphical processing unit (GPU); and/or one or more layers of local cache memory. 
The client device110may comprise memory301storing various forms of applications, such as an operating system303; one or more user applications302, such as document applications, media applications, file and/or data access applications, communication applications such as web browsers and/or email clients, utilities, and/or games; and/or drivers for various peripherals. The client device110may comprise a variety of peripheral components, such as a wired and/or wireless network adapter306connectible to a local area network and/or wide area network; one or more output components, such as a display308coupled with a display adapter (optionally including a graphical processing unit (GPU)), a sound adapter coupled with a speaker, and/or a printer; input devices for receiving input from the user, such as a keyboard311, a mouse, a microphone, a camera, and/or a touch-sensitive component of the display308; and/or environmental sensors, such as a global positioning system (GPS) receiver319that detects the location, velocity, and/or acceleration of the client device110, a compass, accelerometer, and/or gyroscope that detects a physical orientation of the client device110. Other components that may optionally be included with the client device110(though not shown in the schematic architecture diagram300ofFIG.3) include one or more storage components, such as a hard disk drive, a solid-state storage device (SSD), a flash memory device, and/or a magnetic and/or optical disk reader; and/or a flash memory device that may store a basic input/output system (BIOS) routine that facilitates booting the client device110to a state of readiness; and a climate control unit that regulates climate properties, such as temperature, humidity, and airflow. 
The client device110may comprise a mainboard featuring one or more communication buses312that interconnect the processor310, the memory301, and various peripherals, using a variety of bus technologies, such as a variant of a serial or parallel AT Attachment (ATA) bus protocol; the Universal Serial Bus (USB) protocol; and/or the Small Computer System Interface (SCSI) bus protocol. The client device110may comprise a dedicated and/or shared power supply318that supplies and/or regulates power for other components, and/or a battery304that stores power for use while the client device110is not connected to a power source via the power supply318. The client device110may provide power to and/or receive power from other client devices. In some scenarios, as a user112interacts with a software application on a client device110(e.g., an instant messenger and/or electronic mail application), descriptive content in the form of signals or stored physical states within memory (e.g., an email address, instant messenger identifier, phone number, postal address, message content, date, and/or time) may be identified. Descriptive content may be stored, typically along with contextual content. For example, the source of a phone number (e.g., a communication received from another user via an instant messenger application) may be stored as contextual content associated with the phone number. Contextual content, therefore, may identify circumstances surrounding receipt of a phone number (e.g., the date or time that the phone number was received), and may be associated with descriptive content. Contextual content may, for example, be used to subsequently search for associated descriptive content. For example, a search for phone numbers received from specific individuals, received via an instant messenger application or at a given date or time, may be initiated.
The client device110may include one or more servers that may locally serve the client device110and/or other client devices of the user112and/or other individuals. For example, a locally installed webserver may provide web content in response to locally submitted web requests. Many such client devices110may be configured and/or adapted to utilize at least a portion of the techniques presented herein.
2. Presented Techniques
One or more computing devices and/or techniques for determining activity patterns based upon user activity and/or performing operations based upon the activity patterns are provided. For example, a user may access and/or interact with a communication interface (e.g., an email interface, a messaging interface, a social network interface, etc.) for sending and/or receiving emails, uploading social media posts, and/or performing communications via messaging, voice calls, video calls, etc. In some examples, the communication interface may be accessed and/or interacted with using a plurality of devices associated with a user account of the user. In accordance with one or more of the techniques presented herein, activity performed using the communication interface may be analyzed to determine a plurality of activity patterns associated with the activity. For example, each activity pattern may be associated with a set of conditions of a plurality of sets of conditions. For example, each activity pattern of the plurality of activity patterns may correspond to one or more interactions with the communication interface (e.g., a navigational interaction where a message and/or a portion of the communication interface is opened, one or more interactions associated with transmitting an email of a first type, etc.) and/or one or more actions using the communication interface that occur when a set of conditions of a plurality of sets of conditions are met.
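The relationship just described, an activity pattern tied to a set of conditions and to the interactions that tend to follow when those conditions are met, can be sketched as a simple record. This is a minimal illustration only; the class and field names (ActivityPattern, trigger_actions, observed_actions) are assumptions for exposition and are not part of any interface described herein.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityPattern:
    # Conditions (e.g., time of day, location) under which the pattern applies.
    conditions: dict = field(default_factory=dict)
    # The "one or more second actions" that precede the pattern's actions.
    trigger_actions: list = field(default_factory=list)
    # The "one or more first actions" the user tends to perform next.
    observed_actions: list = field(default_factory=list)

    def matches_trigger(self, recent_actions):
        """True if the most recent activity ends with the trigger sequence."""
        n = len(self.trigger_actions)
        return n > 0 and recent_actions[-n:] == self.trigger_actions

# An illustrative pattern: opening the interface in the morning tends to
# be followed by opening the newest unread message.
pattern = ActivityPattern(
    conditions={"time_of_day": "morning"},
    trigger_actions=["open_interface"],
    observed_actions=["open_newest_unread_message"],
)
```

A stored profile of such records would let the system check incoming activity against each pattern's trigger and conditions before performing the associated operations.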
In some examples, it may be determined that a first set of conditions corresponding to a first activity pattern of the plurality of activity patterns are met. Responsive to determining that the first set of conditions are met, one or more operations associated with the first activity pattern may be performed (automatically), such that the user may not have to spend time and/or energy to perform the one or more operations manually. An embodiment of determining activity patterns based upon user activity and/or performing operations based upon the activity patterns is illustrated by an example method400ofFIG.4A. A first user, such as user Jill, and/or a first client device associated with the first user may access and/or interact with a first communication interface (e.g., an email interface, a messaging interface, a social network interface, etc.) for sending and/or receiving emails, uploading social media posts, and/or performing communications via messaging, voice calls, video calls, etc. The first communication interface may be associated with a communication system (e.g., an email service provider, a communication service provider, etc.). For example, a first user account (e.g., one or more of an email account, a messaging account, a social media account, etc.) of the communication system may be accessed and/or interacted with via the first communication interface. The first communication interface may be an email client, a messaging client, a web interface accessed via a browser (e.g., a web email interface, a web messaging interface, a web social media interface, etc.), an application (e.g., an email application, a messaging application, a social media application, etc.), etc. A graphical user interface of the first client device may be controlled to display the first communication interface. For example, a plurality of messages (e.g., a plurality of emails, a plurality of instant messages, etc.) 
associated with the first user account may be accessed using the first communication interface. For example, a portion of the plurality of messages may be messages that are transmitted by the first user account to one or more user accounts (e.g., sent messages). Alternatively and/or additionally, a portion of the plurality of messages may be messages that are received by the first user account from one or more user accounts (e.g., received messages). The plurality of messages may be displayed and/or consumed using the first communication interface (e.g., messages may be displayed responsive to selections of selectable inputs of the first communication interface). In some examples, messages (e.g., emails, instant messages, social media posts, etc.) may be composed and/or transmitted to one or more user accounts using the first communication interface. Alternatively and/or additionally, settings associated with the first communication interface and/or the first user account may be managed and/or modified using the first communication interface (e.g., interface template settings, color settings, reply-to settings, privacy settings, etc. may be managed and/or modified using the first communication interface). At402, first activity performed using the first communication interface may be detected. For example, the first activity may comprise selectable inputs of the first communication interface being selected (e.g., clicked, pressed, etc.) using one or more of a touchscreen of the first client device, one or more switches (e.g., one or more buttons) of the first client device, a conversational interface (e.g., a voice recognition and natural language interface) of the first client device, etc. For example, the selectable inputs may correspond to one or more of one or more messages of the plurality of messages (e.g., emails, instant messages, multimedia messages, social media posts, etc. 
may be opened using the selectable inputs), one or more settings associated with the first user account and/or the first communication interface (e.g., interface template settings, color settings, reply-to settings, etc.), one or more actions (e.g., composing a message, transmitting a message, deleting a message, replying to a message, forwarding a message, opening a message, initiating an audio call, initiating a video call, etc.), etc. In some examples, the first activity may include activity performed using one or more communication interfaces different than the first communication interface. For example, the one or more communication interfaces and/or the first communication interface may be associated with the communication system. Each communication interface may be associated with a service, of a plurality of services, provided by the system. For example, the system may be an internet system providing a plurality of communication interfaces, where each communication interface of the plurality of communication interfaces may provide a service of the plurality of services (e.g., an email service, a messaging service, a social media service, a video calling service, an audio calling service, etc.). Alternatively and/or additionally, the first activity may include activity performed using one or more client devices, different than the first client device. For example, a plurality of client devices, comprising the one or more client devices and/or the first client device, may be associated with the first user account. For example, each client device of the one or more client devices may have the first communication interface installed (e.g., a version of the first communication interface associated with a client device of the one or more client devices may be installed on the client device).
Alternatively and/or additionally, the first communication interface may be a web interface accessed via a browser of the first client device and/or the one or more client devices. At404, the first activity may be analyzed to determine a first activity pattern associated with a first set of conditions. In some examples, the first activity pattern may be indicative of user behavior associated with the first user. For example, the first activity pattern may be indicative of one or more first actions that are performed (by the first user) using the first communication interface when the first set of conditions are met. For example, the one or more first actions may be performed one or more times during the first activity. For example, the one or more first actions may correspond to one or more first interactions with the first communication interface. For example, the one or more first interactions may comprise a selection of a first exemplary selectable input of the first communication interface. The first exemplary selectable input may be a selectable input corresponding to an inbox of the first user account (e.g., the inbox may be opened responsive to a selection of the selectable input). Alternatively and/or additionally, the first exemplary selectable input may be a selectable input corresponding to composing a message (e.g., a message drafting interface may be opened responsive to a selection of the selectable input). Alternatively and/or additionally, the first exemplary selectable input may be a selectable input corresponding to a request to display a message (e.g., the message may be displayed responsive to a selection of the selectable input).
Alternatively and/or additionally, the first exemplary selectable input may be a selectable input corresponding to labelling a message (e.g., placing the message in an organization folder, such as a folder associated with a type of the message, a subject of the message, a user account that the message is received from, etc.). Alternatively and/or additionally, the one or more first interactions may comprise a first navigational interaction. The first navigational interaction may correspond to accessing a part of the first communication interface (e.g., switching screens from a first exemplary part of the first communication interface to a second exemplary part of the first communication interface) and/or accessing a resource associated with the first user account (e.g., opening a screen comprising the resource). For example, the first navigational interaction may correspond to navigating to the inbox of the first user account (e.g., opening a screen comprising the inbox of the first user account). Alternatively and/or additionally, the first navigational interaction may correspond to navigating to a message drafting interface of the first communication interface (e.g., opening a screen comprising an interface for composing a message). Alternatively and/or additionally, the first navigational interaction may correspond to navigating to a message associated with (e.g., received by and/or transmitted by) the first user account (e.g., opening a screen comprising the message). Alternatively and/or additionally, the one or more first actions may comprise a first drafting action. For example, the first drafting action may be performed using a message drafting interface of the first communication interface. In some examples, the message drafting interface may be used to draft messages (e.g., draft instant messages, draft text messages, draft social media posts, draft (e.g., compose) emails). 
Alternatively and/or additionally, the message drafting interface may be used to transmit messages to user accounts. For example, a drafted message may be transmitted to one or more user accounts using the message drafting interface. The first drafting action may correspond to inputting (e.g., typing, copy and pasting, etc.) a set of text into the message drafting interface. Alternatively and/or additionally, the first drafting action may correspond to inputting one or more content items (e.g., images, videos, animations, graphics interchange format (GIF) animations, audio files, files, etc.) into the message drafting interface. Alternatively and/or additionally, the first drafting action may correspond to attaching one or more files (e.g., email attachments) to a message using the message drafting interface. Alternatively and/or additionally, the one or more first actions may comprise a first transmitting action. For example, the first transmitting action may be performed using the message drafting interface. For example, the first transmitting action may correspond to transmitting a message to one or more user accounts. Alternatively and/or additionally, the one or more first actions may comprise a first forwarding action. For example, the first forwarding action may correspond to forwarding a message to one or more user accounts. Alternatively and/or additionally, the one or more first actions may comprise a first deleting action. For example, the first deleting action may correspond to deleting a message of the first user account. In some examples, the first set of conditions may correspond to one or more second actions performed using the first communication interface. For example, the first activity may comprise the one or more second actions being performed using the first communication interface. 
The first activity pattern may correspond to the one or more second actions being performed prior to the one or more first actions (e.g., the one or more first actions may typically be performed upon completion of the one or more second actions). For example, the one or more second actions may comprise a selection of a second exemplary selectable input (e.g., the second exemplary selectable input may correspond to opening the inbox of the first user account, composing a message, a request to display a message, etc.). Alternatively and/or additionally, the one or more second actions may comprise a second navigational action. The second navigational action may correspond to accessing a part of the first communication interface and/or accessing a resource associated with the first user account. Alternatively and/or additionally, the second navigational action may correspond to the first communication interface being opened (e.g., a request to open the first communication interface and/or access the first user account may be received from the first client device). Alternatively and/or additionally, the second navigational action may correspond to navigating to the inbox of the first user account, navigating to a message drafting interface of the first communication interface, navigating to a message associated with the first user account, etc. Alternatively and/or additionally, the one or more second actions may comprise a second drafting action. For example, the second drafting action may be performed using the message drafting interface of the first communication interface. The second drafting action may correspond to inputting a set of text into the message drafting interface, inputting one or more content items into the message drafting interface, attaching one or more files (e.g., email attachments) to a message using the message drafting interface, etc. 
Alternatively and/or additionally, the one or more second actions may comprise transmitting a message to one or more user accounts, forwarding a message to one or more user accounts, deleting a message of the first user account, etc. In some examples, the first activity pattern may be indicative of a first sequence of actions comprising the one or more second actions (corresponding to the first set of conditions) and/or the one or more first actions. For example, the first sequence of actions may correspond to the one or more second actions preceding the one or more first actions. For example, a plurality of pattern instances corresponding to the first activity pattern may be detected (during the first activity). For example, each pattern instance of the plurality of pattern instances may correspond to a set of (continuous) activity, (performed during the first activity) corresponding to the first sequence of actions. For example, each pattern instance of the plurality of pattern instances may correspond to a set of activity where the one or more second actions are performed and/or after the one or more second actions are performed, the one or more first actions are performed. In a first example, the one or more second actions (corresponding to the first set of conditions) may comprise the first communication interface being opened. In some examples, a list of messages (e.g., an inbox of the first user account) may be displayed responsive to the first communication interface being opened. Alternatively and/or additionally, the one or more first actions may comprise an exemplary message that is a first type of message being opened. For example, an exemplary message that is the first type of message may be opened responsive to a selection of the exemplary message from the list of messages. 
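Detecting the pattern instances just described, stretches of activity in which the one or more second actions are followed by the one or more first actions, can be sketched over a flat activity log. The list-of-strings log representation and the function name are illustrative assumptions, not an implementation described herein.

```python
def find_pattern_instances(activity_log, second_actions, first_actions):
    """Return start indices of stretches where the second actions are
    immediately followed by the first actions (a simplified sketch)."""
    sequence = second_actions + first_actions
    n = len(sequence)
    return [i for i in range(len(activity_log) - n + 1)
            if activity_log[i:i + n] == sequence]

# The first example above: opening the communication interface, followed
# by opening the newest unread message, occurring twice in the log.
log = ["open_interface", "open_newest_unread",
       "delete_message",
       "open_interface", "open_newest_unread"]
instances = find_pattern_instances(log, ["open_interface"], ["open_newest_unread"])
```

The strict adjacency here corresponds to the "immediately following" case; the text also describes relaxing adjacency with threshold quantities of intervening actions or threshold durations of time.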
The first type of message may correspond to a most recently received unread message of the plurality of messages associated with the first user account (e.g., a newest unread message). For example, the first sequence of actions associated with the first activity pattern may comprise the first communication interface being opened, followed by the exemplary message that is the first type of message being opened. In some examples, one or more sets of activity, corresponding to the first sequence of actions, may be detected while monitoring the second activity. In a second example, the one or more second actions may comprise a first exemplary set of text being inputted into a first input field (e.g., a subject line field, a message body field, etc.) of the message drafting interface. Alternatively and/or additionally, the one or more first actions may be associated with a second exemplary set of text being inputted into a second input field of the message drafting interface. Alternatively and/or additionally, the one or more first actions may be associated with an exemplary content item (e.g., an image, a video, an animation, a GIF animation, an audio file, etc.) being inputted into a third input field (and/or the second input field) of the message drafting interface. In some examples, the first exemplary set of text, the second exemplary set of text and/or the exemplary content item may be associated with a first exemplary topic (e.g., the first exemplary topic may be associated with one or more of a birthday, a holiday, an occasion, condolences, etc.). The first activity pattern and/or the first exemplary topic may be determined based upon a first set of messages transmitted to one or more first user accounts. For example, the first activity may comprise the first set of messages being transmitted to the one or more first user accounts.
In some examples, messages of the first set of messages may comprise sets of text associated with the first exemplary topic. For example, the first set of messages may be analyzed to determine the first exemplary set of text (associated with the one or more second actions). For example, the first exemplary topic may be associated with a birthday message. The first exemplary set of text may comprise “Happy Birthday”, the second exemplary set of text may comprise text associated with the first exemplary topic (e.g., “I wish you best wishes on your birthday”) and/or the exemplary content item may comprise an image associated with the first exemplary topic (e.g., an image of a birthday cake, for example). Accordingly, the first sequence of actions associated with the first activity pattern may comprise the first exemplary set of text being inputted into the message drafting interface, followed by the second exemplary set of text and/or the exemplary content item being inputted into the message drafting interface. In a third example, the one or more second actions may comprise an exemplary message that is a second type of message being opened. For example, the second type of message may correspond to a work-related message. Alternatively and/or additionally, the one or more second actions may (further) comprise a selection of an exemplary reply selectable input corresponding to drafting and/or transmitting an exemplary reply message as a response to the exemplary message. Alternatively and/or additionally, the one or more second actions may (further) comprise the message drafting interface being opened for drafting the exemplary reply message. Alternatively and/or additionally, the one or more first actions may comprise opening an attachment interface and/or attaching one or more files to the exemplary reply message using the attachment interface. 
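The birthday example above, where a first set of text implies a topic and the topic in turn implies follow-up text and a content item, can be sketched as a lookup. The topic table, trigger phrase, and file name below are purely hypothetical placeholders for associations that would be learned from the user's transmitted messages.

```python
# Hypothetical learned associations between an inputted set of text,
# a topic, and the follow-up drafting actions observed for that topic.
TOPIC_PATTERNS = {
    "birthday": {
        "trigger_text": "happy birthday",
        "suggested_text": "I wish you best wishes on your birthday",
        "suggested_content_item": "birthday_cake.png",  # assumed file name
    },
}

def suggest_followup(inputted_text):
    """Match inputted text against learned topics and return the associated
    follow-up suggestions, or None if no topic matches."""
    lowered = inputted_text.lower()
    for topic, entry in TOPIC_PATTERNS.items():
        if entry["trigger_text"] in lowered:
            return (topic, entry["suggested_text"],
                    entry["suggested_content_item"])
    return None
```

A real system would derive the trigger text and suggestions by analyzing the first set of messages, rather than from a hand-written table.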
Accordingly, the first sequence of actions associated with the first activity pattern may comprise the second type of message being opened, the selection of the exemplary reply selectable input and/or the message drafting interface being opened for drafting the exemplary reply message, followed by the attachment interface being opened and/or the one or more files being attached to the exemplary reply message. In some examples, the first set of conditions may correspond to one or more first conditions associated with the first activity pattern. For example, it may be determined that the one or more first actions are performed at times determined to meet the one or more first conditions. In some examples, the one or more first conditions may comprise a first condition associated with a first time of day. For example, the one or more first actions may be performed during the first time of day (e.g., early morning (5:00 AM-8:00 AM), morning (8:00 AM-11:00 AM), noon (11:00 AM-1:00 PM), afternoon (1:00 PM-5:00 PM), early evening (5:00 PM-7:00 PM), evening (7:00 PM-11:00 PM), late night (11:00 PM-5:00 AM), etc.). Alternatively and/or additionally, the one or more first conditions may comprise a second condition associated with a first set of weather characteristics. For example, the one or more first actions may be performed while the first client device (and/or a different client device associated with the first user account) is in a region determined to have the first set of weather characteristics (e.g., rainy, sunny, cloudy, snowy, cold, warm, windy, etc.). Alternatively and/or additionally, the one or more first conditions may comprise a third condition associated with a first location. 
For example, the one or more first actions may be performed while the first client device (and/or a different client device associated with the first user account) is at the first location (e.g., the first location may be associated with a workplace of the first user, the first location may be associated with a home of the first user, the first location may be associated with a shopping center, the first location may be associated with a park, the first location may be associated with outside of the workplace of the first user, the first location may be associated with outside of the home of the first user, etc.). Alternatively and/or additionally, the one or more first conditions may comprise a fourth condition associated with an occasion (e.g., a birthday, a holiday, an event, etc.). For example, the one or more first actions may be performed at times associated with the occasion (e.g., birthday messages may be transmitted during birthdays associated with user accounts, new year messages may be transmitted at a time associated with New Year's Day, etc.). In some examples, the first set of conditions may be indicative of a combination of the one or more first conditions and the one or more second actions. For example, the first set of conditions may be met when the one or more first conditions are met and the one or more second actions are performed. In a fourth example, the one or more second actions may comprise the first communication interface being opened. Alternatively and/or additionally, the one or more first conditions may be associated with the first time of day (e.g., morning (8:00 AM-11:00 AM)). Alternatively and/or additionally, the one or more first actions may comprise an exemplary message that is the first type of message being opened. 
Accordingly, the first activity pattern may be indicative of the first communication interface being opened, followed by the exemplary message being opened, during the first time of day (e.g., the first sequence of actions may comprise the first communication interface being opened, followed by the exemplary message being opened, during the first time of day). For example, the first activity pattern may be associated with the one or more first conditions, the one or more second actions and the one or more first actions. Alternatively and/or additionally, the first set of conditions may be indicative of merely the one or more first conditions. For example, the first set of conditions may be met when the one or more first conditions are met. In a fifth example, the one or more first conditions may be indicative of an exemplary occasion (e.g., an exemplary birthday) associated with an exemplary user account that is a first type of user account. For example, the first type of user account may correspond to a type of relationship between the first user and an exemplary user associated with the exemplary user account (e.g., familial relationship, social relationship, coworker relationship, business relationship, etc.). Alternatively and/or additionally, the first type of user account may correspond to a level of communication between the first user account and the exemplary user account. Alternatively and/or additionally, the one or more first actions may comprise drafting an exemplary occasion-related message (e.g., happy birthday message) and/or transmitting the exemplary occasion-related message to the exemplary user account. For example, the first activity may comprise a second set of messages being transmitted to one or more user accounts.
The second set of messages (e.g., content of the second set of messages) may be analyzed to determine that messages of the second set of messages are associated with a second exemplary topic (e.g., the second set of messages may be occasion-related messages). Alternatively and/or additionally, times of transmission of the second set of messages may be analyzed to determine that messages of the second set of messages are transmitted during occasions (e.g., birthdays) associated with the one or more user accounts. Alternatively and/or additionally, the one or more user accounts may be analyzed (e.g., messages associated with the one or more user accounts, social media profiles associated with the one or more user accounts, communications between the first user account and the one or more user accounts, etc. may be analyzed) to determine the first type of user account that the second set of messages are transmitted to. Accordingly, the first activity pattern may be indicative of the exemplary occasion-related message (e.g., happy birthday message) being drafted and/or transmitted to an exemplary user account that is the first type of user account, during a time associated with an exemplary occasion associated with the exemplary user account (e.g., the first sequence of actions may comprise the exemplary occasion-related message being drafted and/or transmitted to an exemplary user account that is the first type of user account, during a time associated with an exemplary occasion associated with the exemplary user account). For example, the first activity pattern may be associated with the one or more first conditions and the one or more first actions. Alternatively and/or additionally, the first set of conditions may be indicative of merely the one or more second actions. For example, the first set of conditions may be met when the one or more second actions are performed (e.g., the first set of conditions may be met upon completion of the one or more second actions). 
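Evaluating whether a first set of conditions is met, whether a time-of-day condition, a location condition, some other contextual condition, or a combination, might be sketched as below. The bucket boundaries follow the example ranges given earlier (e.g., morning as 8:00 AM-11:00 AM); the context dictionary and its key names are assumptions, and the late night bucket (11:00 PM-5:00 AM) is omitted because it wraps past midnight.

```python
from datetime import time

# Time-of-day buckets from the examples in the text.
TIME_BUCKETS = {
    "early_morning": (time(5), time(8)),
    "morning": (time(8), time(11)),
    "noon": (time(11), time(13)),
    "afternoon": (time(13), time(17)),
    "early_evening": (time(17), time(19)),
    "evening": (time(19), time(23)),
}

def conditions_met(conditions, context):
    """True if every condition in the set holds in the given context, e.g.
    conditions={"time_of_day": "morning", "location": "workplace"} against
    context={"now": time(9, 30), "location": "workplace"}."""
    for key, expected in conditions.items():
        if key == "time_of_day":
            start, end = TIME_BUCKETS[expected]
            if not (start <= context["now"] < end):
                return False
        elif context.get(key) != expected:
            # Any other condition (location, weather, occasion) is compared
            # directly against the context in this sketch.
            return False
    return True
```

Under this sketch, a set of conditions comprising only the one or more first conditions is met whenever the context satisfies every entry; the combination case additionally requires the one or more second actions to have been performed.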
For example, the first activity pattern may be associated with the one or more second actions and the one or more first actions. In some examples, a set of activity (e.g., a portion of the first activity) may be determined to be a pattern instance (of the plurality of pattern instances) corresponding to the first activity pattern if the set of activity comprises the one or more second actions being performed prior to the one or more first actions. For example, a set of activity may be determined to be a pattern instance corresponding to the first activity pattern if the set of activity comprises the one or more second actions and the one or more first actions being performed consecutively (and/or that the one or more second actions are performed prior to the one or more first actions). Alternatively and/or additionally, a set of activity may be determined to be a pattern instance corresponding to the first activity pattern if the set of activity comprises the one or more first actions being performed immediately following the one or more second actions. For example, it may be determined that the set of activity comprises the one or more first actions being performed immediately following the one or more second actions if 0 actions (different than the one or more second actions and/or the one or more first actions) are performed between the one or more second actions and the one or more first actions. In an example, the first user may open the first communication interface. Then, the first user may open a message that is the first type of message. Between the first user opening the first communication interface and the first user opening the message, 0 different actions may be performed (by the first user). 
Alternatively and/or additionally, a set of activity may be determined to be a pattern instance corresponding to the first activity pattern if the set of activity comprises the one or more first actions being performed following the one or more second actions and/or a quantity of different actions performed in between the one or more second actions and the one or more first actions being performed is less than a threshold quantity of different actions. For example, the quantity of different actions may correspond to a quantity of actions (e.g., actions different than the one or more second actions and/or different than the one or more first actions) that are performed after the one or more second actions are performed and/or before the one or more first actions are performed. In an example, the first user may open the first communication interface. Then, the first user may open a message that is the first type of message. Between the first user opening the first communication interface and the first user opening the message, one or more different actions may be performed (by the first user). A quantity of actions of the one or more different actions may be less than the threshold quantity of different actions. In an example, the threshold quantity of different actions may be 3. A first exemplary set of activity of the first activity may comprise the one or more second actions being performed. Following the one or more second actions being performed, 2 actions, different than the one or more second actions and/or the one or more first actions, may be performed. Following the 2 actions being performed, the one or more first actions may be performed.
Accordingly, because the threshold quantity of different actions is 3 and/or because the quantity of different actions corresponding to different actions performed between the one or more second actions and the one or more first actions is 2, the first exemplary set of activity may be determined to be a first exemplary pattern instance of the plurality of pattern instances corresponding to the first activity pattern. In a different example, the threshold quantity of different actions may be 2. A second exemplary set of activity of the first activity may comprise the one or more second actions being performed. Following the one or more second actions being performed, 3 actions, different than the one or more second actions and/or the one or more first actions, may be performed. Following the 3 actions being performed, the one or more first actions may be performed. Accordingly, because the threshold quantity of different actions is 2 and/or because the quantity of different actions corresponding to different actions performed between the one or more second actions and the one or more first actions is 3, the second exemplary set of activity may be determined not to be a pattern instance corresponding to the first activity pattern. Alternatively and/or additionally, a set of activity may be determined to be a pattern instance corresponding to the first activity pattern if the set of activity comprises the one or more first actions being performed following the one or more second actions and/or a duration of time between the one or more second actions being performed and the one or more first actions being performed is less than a threshold duration of time (e.g., 30 seconds, 1 minute, 5 minutes, etc.). In an example, the threshold duration of time may be 30 seconds. A third exemplary set of activity of the first activity may comprise the one or more second actions being performed. 
After 20 seconds, the one or more first actions may be performed (e.g., the one or more first actions may begin to be performed 20 seconds after the one or more second actions are performed). Accordingly, because the threshold duration of time is 30 seconds and/or because the duration of time between the one or more second actions being performed and the one or more first actions being performed is 20 seconds (which is less than the threshold duration of time), the third exemplary set of activity may be determined to be a second exemplary pattern instance of the plurality of pattern instances corresponding to the first activity pattern. In a different example, the threshold duration of time may be 20 seconds. A fourth exemplary set of activity of the first activity may comprise the one or more second actions being performed. After 25 seconds, the one or more first actions may be performed (e.g., the one or more first actions may begin to be performed 25 seconds after the one or more second actions are performed). Accordingly, because the threshold duration of time is 20 seconds and/or because the duration of time between the one or more second actions being performed and the one or more first actions being performed is 25 seconds (which is greater than the threshold duration of time), the fourth exemplary set of activity may be determined not to be a pattern instance corresponding to the first activity pattern. Alternatively and/or additionally, a set of activity may be determined to be a pattern instance corresponding to the first activity pattern if the set of activity is performed at a time determined to meet the one or more first conditions. At406, the first activity pattern may be stored in a first user profile associated with the first user account. For example, the first user profile may comprise a plurality of activity patterns associated with the first user account. 
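The pattern-instance tests described above (the one or more first actions following the one or more second actions, subject to an intervening-action threshold and a time threshold) can be sketched as follows. This is a minimal illustration only; the `Action` record, the action names, and the default threshold values (3 intervening actions, 30 seconds) are assumptions taken from the examples, not part of any claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    timestamp: float  # seconds since some reference time (illustrative)

def is_pattern_instance(activity, second_actions, first_actions,
                        max_intervening=3, max_gap_seconds=30.0):
    """Return True if `first_actions` follow `second_actions` within the
    set of activity, with fewer than `max_intervening` different actions
    and a gap shorter than `max_gap_seconds` between them."""
    names = [a.name for a in activity]
    n2, n1 = len(second_actions), len(first_actions)
    for i in range(len(names) - n2 + 1):
        if names[i:i + n2] != second_actions:
            continue
        for j in range(i + n2, len(names) - n1 + 1):
            if names[j:j + n1] != first_actions:
                continue
            # Quantity of different actions performed in between.
            intervening = j - (i + n2)
            # Duration between the second actions and the first actions.
            gap = activity[j].timestamp - activity[i + n2 - 1].timestamp
            if intervening < max_intervening and gap < max_gap_seconds:
                return True
    return False
```

With `max_intervening=1`, the test reduces to the "immediately following" case above (0 different actions in between); with a larger value it captures the threshold-quantity case.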
Alternatively and/or additionally, each activity pattern of the plurality of activity patterns may be associated with a set of conditions of a plurality of sets of conditions. In some examples, the plurality of activity patterns may be determined based upon the first activity, other activity different than the first activity and/or messages received and/or transmitted by the first user account. In some examples, a first quantity of pattern instances of the plurality of pattern instances (associated with sets of activity corresponding to the first activity pattern) may be determined. In some examples, the first activity pattern may be stored in the first user profile responsive to determining that the first quantity of pattern instances of the plurality of pattern instances exceeds a threshold quantity of pattern instances (e.g., the first activity pattern may be stored in the first user profile responsive to determining that the first quantity of pattern instances exceeds 50 pattern instances, the first quantity of pattern instances exceeds 500 pattern instances, etc.). Alternatively and/or additionally, a first pattern instance rate at which pattern instances corresponding to the first activity pattern occur may be determined. For example, the first pattern instance rate may correspond to a quantity of pattern instances that occur per unit of time (e.g., per minute, per hour, per day, per week, etc.). Alternatively and/or additionally, the first activity pattern may be stored in the first user profile responsive to determining that the first pattern instance rate exceeds a threshold pattern instance rate (e.g., the first activity pattern may be stored in the first user profile responsive to determining that the first pattern instance rate exceeds 5 pattern instances corresponding to the first activity pattern per day, the first pattern instance rate exceeds 5 pattern instances corresponding to the first activity pattern per week, etc.). 
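The storage decision based upon the threshold quantity of pattern instances and the threshold pattern instance rate can be sketched as below. The function name, the use of days as the unit of time, and the default thresholds (50 instances, 5 per day) are illustrative assumptions drawn from the examples above.

```python
def should_store_pattern(instance_timestamps, min_instances=50,
                         min_rate_per_day=5.0):
    """instance_timestamps: times (in days) at which pattern instances
    occurred, sorted ascending."""
    count = len(instance_timestamps)
    # Store when the quantity of pattern instances exceeds the threshold
    # quantity of pattern instances...
    if count > min_instances:
        return True
    # ...or when the pattern instance rate (instances per unit of time)
    # exceeds the threshold pattern instance rate.
    span_days = (instance_timestamps[-1] - instance_timestamps[0]
                 if count > 1 else 0.0)
    return span_days > 0 and count / span_days > min_rate_per_day
```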
In some examples, a pattern precision associated with the first activity pattern may be determined. For example, the pattern precision may be determined based upon the first quantity of pattern instances and/or a total quantity of instances of the first set of conditions being met. The total quantity of instances of the first set of conditions being met may correspond to a quantity of instances that the one or more second actions are performed. In some examples, the pattern precision may be determined by combining the first quantity of pattern instances (corresponding to instances where the one or more first actions are performed when the first set of conditions are met) and the total quantity of instances (corresponding to instances where the first set of conditions are met). In some examples, the pattern precision may be determined by dividing the first quantity of pattern instances by the total quantity of instances. The pattern precision having a low value may be indicative of a low probability of the one or more first actions being performed when the first set of conditions are met. For example, the pattern precision having a low value may be indicative of a low probability that, after detecting the one or more second actions being performed, the one or more first actions will be performed. Alternatively and/or additionally, the pattern precision having a low value may be indicative of a low probability that the one or more first actions will be performed at a time determined to meet the one or more first conditions. Alternatively and/or additionally, the pattern precision having a high value may be indicative of a high probability of the one or more first actions being performed when the first set of conditions are met. For example, the pattern precision having a high value may be indicative of a high probability that, after detecting the one or more second actions being performed, the one or more first actions will be performed. 
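The pattern-precision computation just described (dividing the first quantity of pattern instances by the total quantity of instances of the first set of conditions being met) can be sketched as follows. The threshold value of 0.6 used when deciding whether to keep the pattern is an assumption for illustration.

```python
def pattern_precision(instance_quantity, condition_quantity):
    """Fraction of condition-met instances in which the first actions
    were also performed; 0.0 when the conditions were never met."""
    if condition_quantity == 0:
        return 0.0
    return instance_quantity / condition_quantity

def keep_pattern(instance_quantity, condition_quantity, threshold=0.6):
    # Store the pattern only when its precision exceeds the threshold
    # pattern precision; otherwise it may be discarded.
    return pattern_precision(instance_quantity, condition_quantity) > threshold
```

A high return value indicates a high probability that, after the second actions are detected, the first actions will follow; a low value indicates the opposite.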
Alternatively and/or additionally, the pattern precision having a high value may be indicative of a high probability that the one or more first actions will be performed at a time determined to meet the one or more first conditions. In some examples, the first activity pattern may be stored in the first user profile responsive to determining that the pattern precision exceeds a threshold pattern precision. Alternatively and/or additionally, responsive to a determination that the pattern precision is less than the threshold pattern precision, the first activity pattern may not be stored in the first user profile (and/or the first activity pattern may be discarded). At408, a determination may be made that the first set of conditions are met. It may be determined that the first set of conditions are met by determining that the one or more first conditions are met. Alternatively and/or additionally, it may be determined that the first set of conditions are met by detecting second activity, comprising the one or more second actions (being performed using the first communication interface). For example, the plurality of sets of conditions may be analyzed based upon the second activity to determine that the second activity (comprising the one or more second actions) is associated with the first set of conditions of the plurality of sets of conditions. In the first example (where the first sequence of actions comprises the first communication interface being opened, followed by the exemplary message that is the first type of message being opened), it may be determined that the first set of conditions are met by detecting the first communication interface being opened. In some examples, an indication that the first communication interface is opened may be received. It may be determined that the first set of conditions are met based upon the indication that the first communication interface is opened. 
For example, a request to access the first user account and/or the first communication interface may be received from the first client device (and/or a different client device associated with the first user account). It may be determined that the first set of conditions are met based upon the request to access the first user account and/or the first communication interface. In the second example (where the first sequence of actions comprises the first exemplary set of text being inputted into the message drafting interface, followed by the second exemplary set of text and/or the exemplary content item being inputted into the message drafting interface), it may be determined that the first set of conditions are met by detecting a third exemplary set of text, that is associated with the first exemplary topic associated with the first activity pattern, being inputted into the message drafting interface (e.g., detecting the third exemplary set of text being inputted into the subject line field, detecting the third exemplary set of text being inputted into the message body field, etc.). For example, the third exemplary set of text may be determined to be associated with the first exemplary topic by analyzing the third exemplary set of text and/or by comparing the third exemplary set of text with messages of the first set of messages associated with the first exemplary topic. 
In the third example (where the first sequence of actions comprises the second type of message being opened, the selection of the exemplary reply selectable input and/or the message drafting interface being opened for drafting the exemplary reply message, followed by the attachment interface being opened and/or the one or more files being attached to the exemplary reply message), it may be determined that the first set of conditions are met by detecting an exemplary message that is the second type of message (e.g., a work-related message) being opened and/or a selection of an exemplary reply selectable input corresponding to drafting and/or transmitting an exemplary reply message as a response to the exemplary message. For example, it may be determined that the exemplary message is the second type of message (e.g., a work-related message) by analyzing content of the exemplary message. In the fourth example (where the first sequence of actions comprises the first communication interface being opened, followed by the exemplary message that is the first type of message being opened, during the first time of day), it may be determined that the first set of conditions are met by detecting the first communication interface being opened during the first time of day (e.g., 8:00 AM-11:00 AM). In the fifth example (where the first sequence of actions comprises the exemplary occasion-related message (e.g., happy birthday message) being drafted and/or transmitted to an exemplary user account that is the first type of user account, during a time associated with an exemplary occasion (e.g., birthday) associated with the exemplary user account), it may be determined that the first set of conditions are met by determining that a current time matches an exemplary occasion (e.g., birthday) associated with an exemplary user account and/or that the exemplary user account is the first type of user account. 
For example, it may be determined that the first set of conditions are met by determining that the current time is within an exemplary threshold duration of time before the exemplary occasion (e.g., birthday), that the current time is during the exemplary occasion (e.g., birthday) and/or that the current time is within the exemplary threshold duration of time after the exemplary occasion (e.g., birthday). Alternatively and/or additionally, it may be determined that the exemplary user account is the first type of user account based upon a determination that an exemplary type of relationship between the first user and an exemplary user associated with the exemplary user account is a first type of relationship associated with the first type of user account (e.g., familial relationship). For example, it may be determined that the exemplary type of relationship is the first type of relationship based upon a social media profile associated with the first user account, a social media profile associated with the exemplary user account, content within messages associated with the first user account and/or the exemplary user account, etc. Alternatively and/or additionally, it may be determined that the exemplary user account is the first type of user account based upon a determination that an exemplary level of communication between the first user account and the exemplary user account exceeds a threshold level of communication associated with the first type of user account. For example, the exemplary level of communication may correspond to an amount of communication between the first user account and the exemplary user account. For example, the exemplary level of communication may be determined based upon a quantity of communications (e.g., messages, emails, phone calls, video calls, voice calls, etc.) between the first user account and the exemplary user account. 
Alternatively and/or additionally, the exemplary level of communication may be determined based upon a frequency of communications (e.g., a number of communications per day, a number of communications per week, etc.) between the first user account and the exemplary user account. Alternatively and/or additionally, the exemplary level of communication may be determined based upon lengths of communications (e.g., number of characters in emails and/or messages, lengths of time associated with phone calls, calls using the communication app and/or video calls, etc.) between the first user account and the exemplary user account. At410, responsive to determining that the first set of conditions are met, one or more first operations associated with the first activity pattern may be performed. For example, the one or more first operations may be associated with the one or more first actions of the first activity pattern. In the first example (where the first sequence of actions comprises the first communication interface being opened, followed by the exemplary message that is the first type of message being opened), rather than displaying the list of messages (e.g., the inbox of the first user account), an exemplary message that is the first type of message may be displayed (automatically without a selection of the exemplary message from the list of messages) responsive to the first communication interface being opened. For example, responsive to determining that the first set of conditions are met (e.g., responsive to receiving a request to open the first communication interface), the plurality of messages associated with the first user account (e.g., messages received and/or transmitted by the first user account) may be analyzed based upon the first type of message to identify an exemplary message that is the first type of message (e.g., a most recently received unread message of the plurality of messages associated with the first user account). 
For example, responsive to identifying the exemplary message that is the first type of message, the exemplary message may be displayed using the first communication interface (automatically). In the second example (where the first sequence of actions comprises the first exemplary set of text being inputted into the message drafting interface, followed by the second exemplary set of text and/or the exemplary content item being inputted into the message drafting interface), responsive to detecting the third exemplary set of text associated with the first exemplary topic being inputted into the message drafting interface, content associated with the first exemplary topic may be generated. The content may be generated based upon the first set of messages (comprising sets of text associated with the first exemplary topic). For example, first text, associated with the first exemplary topic, may be extracted from the first set of messages. A first message body (e.g., an email body) may be generated based upon the first text. Alternatively and/or additionally, the content may be generated based upon one or more messages transmitted to a second user account. For example, the second user account may be an intended recipient of a message being drafted using the message drafting interface (e.g., an indication of the second user account may be entered into a recipient input field of the message drafting interface). For example, second text may be extracted from the one or more messages transmitted to the second user account. The first message body may be generated based upon the second text. For example, one or more characteristics of the one or more messages transmitted to the second user account by the first user account may be determined (e.g., a nickname used in the one or more messages for addressing a second user associated with the second user account, a level of formality of the one or more messages, etc.). 
For example, the first message body may be generated based upon the second text and/or the one or more characteristics. Alternatively and/or additionally, content items (e.g., images, videos, animations, GIF animations, audio files, files, etc.) may be extracted from the first set of messages. For example, one or more content items may be selected from the content items for inclusion into the content. In some examples, the content, comprising the first message body and/or the one or more content items may be (automatically) entered into one or more input fields (e.g., a message body field) of the message drafting interface. Alternatively and/or additionally, a first notification, associated with entering the content into the message drafting interface, may be displayed using the first communication interface. For example, the first notification may comprise a first selectable input corresponding to entering the content into the one or more input fields of the message drafting interface. For example, responsive to a selection of the first selectable input, the content may be entered into the one or more input fields. Alternatively and/or additionally, the first notification may comprise a second selectable input corresponding to (automatically) generating a message comprising the content and/or (automatically) transmitting the message to the second user account. For example, responsive to a selection of the second selectable input, the message may be generated (using the first message body) and/or transmitted to the second user account. 
In the third example (where the first sequence of actions comprises the second type of message (e.g., a work related message) being opened, the selection of the exemplary reply selectable input and/or the message drafting interface being opened for drafting the exemplary reply message, followed by the attachment interface being opened and/or the one or more files being attached to the exemplary reply message), the attachment interface may be opened (automatically) responsive to a selection of the exemplary reply selectable input corresponding to drafting and/or transmitting an exemplary reply message as a response to the exemplary message. For example, responsive to determining that the first set of conditions are met (and/or responsive to the selection of the exemplary reply selectable input), the attachment interface may be opened. Alternatively and/or additionally, responsive to determining that the first set of conditions are met and/or receiving a request to transmit the exemplary reply message as a response to the exemplary message, the exemplary reply message may be analyzed to determine whether the exemplary reply message comprises one or more attachments. Responsive to a determination that the exemplary reply message does not comprise one or more attachments, a second notification may be displayed using the first communication interface. For example, the second notification may be indicative of the exemplary reply message not comprising one or more attachments. The second notification may comprise a third selectable input corresponding to opening the attachment interface for selecting one or more files as attachments to the exemplary reply message. For example, responsive to a selection of the third selectable input, the attachment interface may be displayed. 
In the fourth example (where the first sequence of actions comprises the first communication interface being opened, followed by the exemplary message that is the first type of message being opened, during the first time of day), rather than displaying the list of messages (e.g., the inbox of the first user account), an exemplary message that is the first type of message may be displayed (automatically without a selection of the exemplary message from the list of messages) responsive to the first communication interface being opened during the first time of day. In the fifth example (where the first sequence of actions comprises the exemplary occasion-related message (e.g., happy birthday message) being drafted and/or transmitted to an exemplary user account that is the first type of user account, during a time associated with an exemplary occasion (e.g., birthday) associated with the exemplary user account), a third notification may be displayed using the first communication interface responsive to determining that a current time matches an occasion (e.g., birthday) of a third user account and/or that the third user account is the first type of user account. Alternatively and/or additionally, the third notification may be displayed responsive to the first communication interface being opened. In some examples, the third notification may comprise a fourth selectable input corresponding to opening the message drafting interface for drafting a message corresponding to the occasion (e.g., birthday) of the third user account. For example, responsive to a selection of the fourth selectable input, the message drafting interface may be displayed using the first communication interface. Alternatively and/or additionally, an indication of the third user account (e.g., an email address associated with the third user account) may be (automatically) entered into a recipient input field of the message drafting interface. 
Alternatively and/or additionally, the third notification may comprise a fifth selectable input corresponding to generating second content and/or entering the second content into the message drafting interface. For example, responsive to receiving a selection of the fifth selectable input, the second content may be generated based upon the second set of messages (e.g., the second content may be generated based upon text of the second set of messages, content items of the second set of messages, etc.). Alternatively and/or additionally, the second content may be generated based upon one or more second messages transmitted by the first user account to the third user account (e.g., the second content may be generated based upon one or more second characteristics of the one or more second messages). In some examples, the second content may be entered into the message drafting interface. Alternatively and/or additionally, the third notification may comprise a sixth selectable input corresponding to automatically generating an occasion-related message (e.g., happy birthday message) comprising the second content and/or automatically transmitting the occasion-related message (e.g., happy birthday message) to the third user account. For example, responsive to a selection of the sixth selectable input, the occasion-related message (e.g., happy birthday message) may be generated and/or transmitted to the third user account (e.g., the happy birthday message may comprise the second content). Alternatively and/or additionally, the third notification may not be displayed. For example, rather than displaying the third notification, the occasion-related message (e.g., happy birthday message) (comprising the second content) may automatically be generated and/or transmitted to the third user account (without notifying the first user). Alternatively and/or additionally, it may be determined that merely a portion of the first set of conditions are met. 
For example, it may be determined that a current time matches an occasion (e.g., birthday) associated with a fourth user account and/or that the fourth user account is not the first type of user account (e.g., a level of communication between the first user account and the fourth user account may be less than the threshold level of communication and/or a type of relationship between the first user and a fourth user associated with the fourth user account may be a type of relationship different than the first type of relationship). In some examples, responsive to detecting an indication of the fourth user account (e.g., an email address associated with the fourth user account, a name associated with the fourth user account, etc.) being entered into an input field (e.g., a recipient field) of the message drafting interface, a fourth notification may be displayed. For example, the fourth notification may comprise a seventh selectable input corresponding to adding a set of text (e.g., one or more sentences, one or more words, etc.) associated with the occasion (e.g., birthday) to an input field (e.g., a message body field) of the message drafting interface (e.g., the set of text may be a summarized and/or shorter version of the second content used to generate the happy birthday message). In some examples, responsive to a selection of the seventh selectable input, the set of text may be added to the input field of the message drafting interface. Alternatively and/or additionally, rather than displaying the fourth notification, the set of text may be added to the input field of the message drafting interface automatically (without a selection of the seventh selectable input and/or without notifying the first user). An embodiment of determining unanswered emails and/or displaying notifications indicative of the unanswered emails is illustrated by an example method450ofFIG.4B. 
A first user, such as user Jack, and/or a first client device associated with the first user may access and/or interact with a communication system (and/or an email system, messaging system, etc.) for sending and/or receiving emails and/or performing communications via messaging, voice calls, video calls, etc. For example, a first email account (and/or a different type of user account, such as a messaging user account, a social media user account, etc.) of the first user with the communication system may be accessed and/or interacted with via a first email interface (and/or a different type of communication interface), such as an email client, a web email interface accessed via a browser, an email application, etc. on the first client device. In some examples, the communication system (and/or the first email interface) may be associated with an email service provider. At452, a first email received by the first email account may be identified. For example, the first email may be received from a second email account. The first email may comprise first email text (e.g., a subject line, an email body, etc.) and/or one or more content items (e.g., one or more images, one or more videos, one or more animations, one or more GIF animations, etc.). In some examples, responsive to the first email being received by the first email account, the first email may be added to a list of emails associated with the first email account (e.g., an inbox of the first email account). Alternatively and/or additionally, the first email may be accessed using the first email interface. At454, the first email may be analyzed to determine whether the first email is unanswered (e.g., whether the first email is unresolved, whether the first email is open, etc.). 
In some examples, the first email may be analyzed to determine whether the first email is unanswered responsive to a determination that a first duration of time since the first email was received (by the first email account) exceeds a first threshold duration of time. Alternatively and/or additionally, the first email may periodically (e.g., once per hour, once per day, etc.) be analyzed to determine whether the first email is unanswered. For example, the first email may be analyzed to determine whether the first email is unanswered by analyzing the first email to determine whether the first email comprises one or more questions. For example, it may be determined that the first email comprises one or more first questions. It may be determined that the first email comprises the one or more first questions by using one or more text analysis techniques to identify the one or more first questions. Alternatively and/or additionally, it may be determined that the first email comprises the one or more first questions by using one or more vector analysis techniques (e.g., analyzing vectors associated with the first email). Responsive to determining that the first email comprises the one or more first questions, emails associated with the first email account may be analyzed to determine whether an email comprising one or more answers to the one or more first questions has been transmitted to the second email account (e.g., the one or more answers may correspond to responses directed towards the one or more first questions, the one or more answers may correspond to information related to the one or more first questions, etc.). For example, a first set of emails transmitted by the first email account to the second email account may be identified. The first set of emails may be analyzed (based upon the one or more first questions) to determine whether the first set of emails comprises one or more emails that comprise one or more answers to the one or more first questions. 
For example, one or more first emails, comprising one or more first answers to the one or more first questions, may be identified. In some examples, the one or more first emails and/or the one or more first answers may be identified using one or more text analysis techniques and/or one or more machine learning techniques. In an example, the one or more first questions may comprise a first exemplary question “Do you know when you'll be able to work on this project with me?” and/or the one or more first answers may comprise a first exemplary answer “I'll be free next week. Call me on Monday”. Responsive to identifying the one or more first emails and/or the one or more first answers, it may be determined that the first email is not unanswered (e.g., it may be determined that the first email is answered, it may be determined that the first email is resolved, it may be determined that the first email is closed, etc.). Alternatively and/or additionally, responsive to determining that the first set of emails (transmitted to the second email account by the first email account) does not comprise one or more emails comprising one or more answers to the one or more first questions, it may be determined that the first email is unanswered. For example, responsive to determining that an email comprising an answer to the one or more first questions was not transmitted to the second email account by the first email account, it may be determined that the first email is unanswered. Alternatively and/or additionally, the first email may be analyzed to determine whether the first email is unanswered by analyzing the first email to determine whether the first email comprises one or more requests (e.g., one or more requests for information, a request for a report, a request for a file, etc.). For example, it may be determined that the first email comprises one or more first requests. 
It may be determined that the first email comprises the one or more first requests by using one or more text analysis techniques to identify the one or more first requests. Alternatively and/or additionally, it may be determined that the first email comprises the one or more first requests by using one or more vector analysis techniques (e.g., analyzing vectors associated with the first email). Responsive to determining that the first email comprises the one or more first requests, emails associated with the first email account may be analyzed to determine whether an email comprising information associated with the one or more first requests has been transmitted to the second email account (e.g., the information may be requested information associated with the one or more first requests, responses related to the one or more first requests, etc.). For example, the first set of emails may be analyzed (based upon the one or more first requests) to determine whether the first set of emails comprise one or more emails that comprise information associated with the one or more first requests. For example, one or more second emails, comprising first information associated with the one or more first requests, may be identified. In some examples, the one or more second emails and/or the first information may be identified using one or more text analysis techniques and/or one or more machine learning techniques. In an example, the one or more first requests may comprise a first exemplary request “Please send me times you'll be available next week” and/or the first information may comprise a set of text “I'll be free Monday 6 PM-10 PM and Tuesday 4 PM til late”. In a different example, the one or more first requests may comprise a second exemplary request “Send me a report by the end of the week” and/or the first information may comprise a file attached to an email of the one or more second emails entitled “Project Report”. 
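The request-detection and request-fulfillment checks above (including matching an attached file such as the "Project Report" example) might look roughly like the following; the phrase patterns, stop words, and dict shapes are illustrative assumptions:

```python
import re

REQUEST_PATTERNS = [r"\bplease send\b", r"\bsend me\b", r"\bplease provide\b"]

def extract_requests(body):
    """Flag sentences containing common imperative request phrases; a
    stand-in for the text/vector analysis techniques referenced above."""
    return [s for s in re.split(r"(?<=[.!?])\s+", body)
            if any(re.search(p, s, re.IGNORECASE) for p in REQUEST_PATTERNS)]

def fulfills_request(request, email):
    """An email may satisfy a request via its body text or via an attachment
    whose title shares keywords with the request (e.g. a requested 'report')."""
    keywords = {w.strip(".,!?").lower() for w in request.split()} - {
        "please", "send", "me", "a", "by", "the", "of", "end", "week"}
    body_words = {w.strip(".,!?").lower() for w in email["body"].split()}
    if keywords & body_words:
        return True
    for title in email.get("attachments", []):
        if keywords & {w.lower() for w in title.split()}:
            return True
    return False

# The second exemplary request, answered only by the attachment title.
request = "Send me a report by the end of the week"
email = {"body": "Here you go.", "attachments": ["Project Report"]}
```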
Responsive to identifying the one or more second emails and/or the first information associated with the one or more first requests, it may be determined that the first email is not unanswered. Alternatively and/or additionally, responsive to determining that the first set of emails (transmitted to the second email account by the first email account) does not comprise one or more emails comprising information associated with the one or more first requests, it may be determined that the first email is unanswered. For example, responsive to determining that an email comprising information associated with the one or more first requests was not transmitted to the second email account by the first email account, it may be determined that the first email is unanswered. Alternatively and/or additionally, a second set of reply emails transmitted by the first email account to the second email account may be identified. For example, the second set of reply emails may be transmitted in response to the first email (e.g., the second set of reply emails and/or the first email may be a part of a single email conversation). In some examples, rather than analyzing the first set of emails, merely the second set of reply emails may be analyzed (based upon the one or more first questions and/or the one or more first requests) to determine whether the first email is unanswered. For example, responsive to identifying one or more reply emails, of the second set of reply emails, that comprise one or more answers to the one or more first questions, it may be determined that the first email is not unanswered. Alternatively and/or additionally, responsive to determining that the second set of reply emails does not comprise one or more reply emails comprising one or more answers to the one or more first questions, it may be determined that the first email is unanswered. 
Alternatively and/or additionally, responsive to identifying one or more reply emails, of the second set of reply emails, that comprise information associated with the one or more first requests, it may be determined that the first email is not unanswered. Alternatively and/or additionally, responsive to determining that the second set of reply emails does not comprise one or more reply emails comprising information associated with the one or more first requests, it may be determined that the first email is unanswered. In some examples, rather than analyzing the first set of emails and/or the second set of reply emails, it may be determined that the first email is not unanswered by (merely) identifying a reply email transmitted by the first email account to the second email account in response to the first email (regardless of whether the reply email comprises information associated with the one or more first requests and/or one or more answers to the one or more first questions). Alternatively and/or additionally, it may be determined that the first email is unanswered by determining that a reply email was not transmitted by the first email account to the second email account in response to the first email. At456, responsive to determining that the first email is unanswered, a first notification may be transmitted to the first client device (associated with the first email account). The first notification may be indicative of the first email being unanswered. Alternatively and/or additionally, responsive to determining that the first email is unanswered, an indication of the first email may be stored in a list of unanswered emails. For example, the list of unanswered emails may comprise indications of a plurality of emails that are unanswered. 
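The simplest mode described above — treating the email as unanswered merely when no reply was transmitted, regardless of content — could be sketched as a threading check (the field names `id`/`in_reply_to` are illustrative assumptions, loosely modeled on standard email threading headers):

```python
def is_unanswered_simple(first_email_id, sent_emails):
    """The first email is 'unanswered' when no transmitted email is
    threaded as a reply to it, regardless of what the reply says."""
    return not any(e.get("in_reply_to") == first_email_id for e in sent_emails)

# One sent email, replying to message "m0" but not to "m1".
sent = [{"id": "m2", "in_reply_to": "m0"}]
```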
In some examples, the first notification may be transmitted to the first client device responsive to a determination that a second duration of time since the first email was received (by the first email account) exceeds a second threshold duration of time. In an example, 1 week after the first email account received the first email from the second email account, the first notification may be transmitted (automatically). Alternatively and/or additionally, the first notification may be transmitted to the first client device responsive to a determination that a level of communication between the first email account and the second email account exceeds a threshold level of communication. For example, the level of communication may correspond to an amount of communication between the first email account and the second email account (and/or between a third account associated with the user of the first email account and a fourth account associated with the user of the second email account, wherein the third account and/or the fourth account may be email accounts or other types of accounts, such as messaging accounts, phone numbers, etc.). For example, the level of communication may be determined based upon a quantity of communications (e.g., messages, emails, phone calls, video calls, voice calls, etc.) between the first email account and the second email account. Alternatively and/or additionally, the level of communication may be determined based upon a frequency of communications (e.g., a number of communications per day, a number of communications per week, etc.) between the first email account and the second email account (and/or between the third account associated with the user of the first email account and the fourth account associated with the user of the second email account). 
Alternatively and/or additionally, the level of communication may be determined based upon lengths of communications (e.g., number of characters in emails and/or messages, lengths of time associated with phone calls, calls using the communication app and/or video calls, etc.) between the first email account and the second email account. In some examples, the first notification may be transmitted responsive to detecting an email address of the second email account being inputted into one or more input fields of the first email interface (using the first client device). For example, an email drafting interface of the first email interface may be opened (responsive to a selection of a compose selectable input of the first email interface corresponding to drafting and/or transmitting an email). The email drafting interface may be displayed using the first email interface. The email drafting interface may comprise one or more email header fields. For example, the email drafting interface may comprise a first email header field “To:”, a second email header field “CC:” and/or a third email header field “BCC:”. The first email header field, the second email header field and/or the third email header field may correspond to recipients of the email being drafted using the email drafting interface. In some examples, it may be detected that the email address of the second email account is inputted into one or more of the first email header field, the second email header field and/or the third email header field. Responsive to detecting the email address of the second email account being inputted into the one or more email header fields, the first notification may be transmitted to the first client device. 
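The communication-level heuristic described above combines quantity, frequency, and length of communications. One possible scoring function (the weights, window, and threshold are entirely illustrative assumptions; the description does not fix a formula):

```python
from datetime import datetime, timedelta

def communication_level(events, window_days=30, now=None):
    """Score recent communications between two accounts by quantity,
    frequency (per day), and average length. Weights are assumptions."""
    now = now or datetime.now()
    recent = [e for e in events if now - e["time"] <= timedelta(days=window_days)]
    quantity = len(recent)
    frequency = quantity / window_days            # communications per day
    avg_length = (sum(e["length"] for e in recent) / quantity) if quantity else 0
    return quantity + 10 * frequency + 0.01 * avg_length

# Three 200-character communications within the 30-day window.
now = datetime(2024, 1, 31)
events = [{"time": datetime(2024, 1, d), "length": 200} for d in (10, 20, 30)]
level = communication_level(events, now=now)
should_notify = level > 2.0   # assumed threshold level of communication
```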
Alternatively and/or additionally, responsive to detecting the email address of the second email account being inputted into the one or more email header fields, emails received by the first email account from the second email account may be analyzed to determine whether one or more emails, received by the first email account from the second email account, are unanswered. Alternatively and/or additionally, the list of unanswered emails may be analyzed to determine whether one or more emails, received by the first email account from the second email account, are unanswered. For example, it may be determined that the first email is unanswered (and/or the first notification may be transmitted to the first client device). In some examples, the first notification may be displayed using the first email interface. For example, the first notification may be opened (automatically) by the first email interface and/or the first client device. The first notification may be overlaid onto at least a portion of the email drafting interface. In an example, the first notification may comprise “[email protected] already sent you an email which contains a question which you have not responded to. Would you like to respond to that email rather than sending a new email?”. In some examples, the first notification may comprise a first selectable input corresponding to composing (e.g., drafting and/or transmitting) a first reply email in response to the first email. Alternatively and/or additionally, the first notification may comprise a second selectable input corresponding to displaying the first email. For example, responsive to receiving a selection of the second selectable input, the first email may be displayed using the first email interface. 
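The recipient-input trigger above — consulting the list of unanswered emails when an address is typed into a header field — might be sketched as follows (the entry structure and the hypothetical address `alice@example.com` are assumptions; the notification wording mirrors the example in the description):

```python
def notification_for_recipient(address, unanswered_list):
    """When an address is entered into To:/CC:/BCC:, check the stored list
    of unanswered emails and, on a match, build the notification text."""
    for entry in unanswered_list:
        if entry["sender"] == address:
            return (address + " already sent you an email which contains a "
                    "question which you have not responded to. Would you like "
                    "to respond to that email rather than sending a new email?")
    return None

unanswered = [{"sender": "alice@example.com", "email_id": "m1"}]
msg = notification_for_recipient("alice@example.com", unanswered)
```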
Alternatively and/or additionally, in an example where the first email is answered, emails received by the first email account from the second email account may be analyzed to identify a received email (and/or an email conversation) received from the second email account (e.g., the received email may correspond to the most recently received email from the second email account). For example, responsive to identifying the received email, a second notification may be displayed using the first client device. The second notification may comprise a representation of the received email (e.g., the second notification may comprise a body of the received email) and/or an email conversation associated with the received email. Alternatively and/or additionally, the second notification may comprise an indication of a time that the received email was received by the first email account. Alternatively and/or additionally, the second notification may comprise an indication of a subject line associated with the received email. Alternatively and/or additionally, the second notification may comprise a selectable input corresponding to displaying the received email. At458, a request to compose the first reply email in response to the first email may be received via a selection of the first selectable input of the first notification. At460, responsive to receiving the request to compose the first reply email in response to the first email, the email drafting interface may be displayed. One or more of the first email header field, the second email header field and/or the third email header field of the email drafting interface may comprise the email address of the second email account. Alternatively and/or additionally, a fourth email header field of the email drafting interface, corresponding to an email subject of the first reply email, may comprise a first subject line indicative of the first reply email being a response to the first email. 
For example, if a second subject line of the first email is “Report Needed”, then the first subject line may be “Re: Report Needed”. Alternatively and/or additionally, an indication of an email body of the first email may be (automatically) entered into an email body field of the email drafting interface. Alternatively and/or additionally, the first notification may comprise a third selectable input corresponding to composing a second email for transmission to the second email account (e.g., the second email may correspond to an email that is not transmitted as a response to the first email). For example, a request to compose the second email may be received via a selection of the third selectable input of the first notification. Responsive to receiving the request to compose the second email, the email drafting interface may be displayed. One or more of the first email header field, the second email header field and/or the third email header field of the email drafting interface may comprise the email address of the second email account. Alternatively and/or additionally, the fourth email header field of the email drafting interface may not comprise the first subject line. Alternatively and/or additionally, the email body of the first email may not be entered into the email body field of the email drafting interface. In some examples, one or more operations may be performed (automatically) using the email drafting interface based upon a user profile associated with the first email account (as described in method400). For example, the user profile may comprise a plurality of activity patterns associated with the first email account. Responsive to detecting activity corresponding to an activity pattern of the plurality of activity patterns, one or more operations associated with the activity pattern may be performed. 
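The reply-prefill behavior above (recipient carried over, "Re:" prepended to the subject, and an indication of the original body entered into the body field) could be sketched as (field names are illustrative assumptions):

```python
def prefill_reply(original):
    """Prefill the email drafting interface for a reply: address the sender,
    prepend 'Re:' unless already present, and quote the original body."""
    subject = original["subject"]
    if not subject.lower().startswith("re:"):
        subject = "Re: " + subject
    quoted = "\n".join("> " + line for line in original["body"].splitlines())
    return {"to": original["from"], "subject": subject, "body": quoted}

draft = prefill_reply({"from": "alice@example.com",
                       "subject": "Report Needed",
                       "body": "Please send me the report."})
```

Composing a fresh (non-reply) email, per the third selectable input described above, would simply skip the subject and body prefill while keeping the recipient.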
For example, responsive to a first set of text being entered into the email body field of the email drafting interface, the first set of text may be analyzed to identify an activity pattern of the plurality of activity patterns associated with the first set of text being inputted. For example, it may be determined that the first set of text (e.g., “Happy New Year's”) is associated with a first activity pattern of the plurality of activity patterns (e.g., a topic of the first set of text, one or more words and/or characters of the first set of text, etc. may be associated with the first activity pattern). In some examples, responsive to the first set of text being entered into the email body field, a first set of conditions associated with the first activity pattern may be met. Alternatively and/or additionally, responsive to the first set of conditions being met, one or more operations associated with the first activity pattern may be performed. For example, a second set of text (e.g., “Melissa and I wish you and Greg the best for the New Year”) may be generated based upon the first activity pattern and/or the first set of text. The second set of text may be entered into the email body field (automatically). Alternatively and/or additionally, a third notification may be displayed. The third notification may comprise a fourth selectable input corresponding to entering the second set of text into the email body field (and/or one or more input fields of the email drafting interface). For example, responsive to a selection of the fourth selectable input, the second set of text may be entered into the email body field. FIGS.5A-5Fillustrate examples of a system501for determining activity patterns based upon user activity and/or performing operations based upon the activity patterns. 
A user, such as user Thomas, (e.g., and/or a first client device500associated with the user) may access and/or interact with a first email interface for sending and/or receiving emails, performing communications via messaging, voice calls, video calls, etc. In some examples, the first client device500may comprise a microphone504, a speaker506and/or a button502(e.g., a switch). In some examples, a first email account, associated with the user, may be accessed using the first email interface. First activity performed using the first email interface on the first client device500may be detected. For example, the first activity may comprise selectable inputs of the first email interface being selected (e.g., clicked, pressed, etc.) using the first client device500. For example, the selectable inputs may correspond to one or more of one or more emails of a plurality of emails associated with the first email account, one or more settings associated with the first email account, one or more actions (e.g., transmitting a composed email, deleting an email, replying to an email, forwarding an email, opening an email, opening the first email interface, etc.), etc. FIG.5Aillustrates the first client device500being used to open the first email interface. For example, the first client device500may display a list of application selectable inputs (e.g., the list of application selectable inputs may be displayed within an exemplary home-screen of the first client device500). For example, the list of application selectable inputs may comprise a first selectable input508corresponding to the first email interface. For example, responsive to a selection of the first selectable input508, a request to access the first email interface may be received and/or the first email interface may be opened using the first client device500. FIG.5Billustrates a graphical user interface of the first client device500being controlled to display the first email interface. 
For example, the graphical user interface of the first client device500may be controlled to display the first email interface responsive to the selection of the first selectable input508. In some examples, the first email interface may comprise a list of emails comprising a plurality of emails received by the first email account. A portion of the plurality of emails may be unread and/or recently received emails. For example, the list of emails may comprise a first email514that is associated with a first type of email. For example, the first type of email may correspond to a most recently received unread email of the plurality of emails (e.g., a newest unread email). In some examples, a selection of the first email514may be received and/or the first email514may be opened (and/or displayed). In some examples, the first email interface being opened using the first client device500followed by the first email514being opened may correspond to a first pattern instance associated with a first activity pattern524(illustrated inFIG.5C) determined based upon the first activity. For example, the first activity may comprise a plurality of sets of activity, corresponding to a plurality of pattern instances, where the first email interface is opened using the first client device500followed by an email that is the first type of email being opened. For example, the first activity pattern524may be determined based upon the plurality of pattern instances. FIG.5Cillustrates the first activity pattern524being stored in a user profile526associated with the first email account. For example, the first activity pattern524may be stored in the user profile526responsive to determining the first activity pattern. In some examples, the first activity pattern524may be indicative of a first set of conditions (e.g., receiving a request to open the first email interface) and/or one or more actions (e.g., open an email that is the first type of email). 
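The pattern-instance mining described above (repeated occurrences of "interface opened, then newest unread email opened") could be sketched as counting consecutive event pairs in a per-user activity log; the event vocabulary and support threshold are illustrative assumptions:

```python
from collections import Counter

def mine_patterns(event_log, min_support=3):
    """Count consecutive event pairs and keep those seen at least
    `min_support` times as (condition, action) activity patterns."""
    pairs = Counter(zip(event_log, event_log[1:]))
    return {pair for pair, n in pairs.items() if n >= min_support}

# Three pattern instances of opening the app and then the newest unread email.
log = ["open_app", "open_newest_unread", "close_app"] * 3
patterns = mine_patterns(log)
```

A stored pattern such as `("open_app", "open_newest_unread")` then maps directly onto the condition ("receiving a request to open the first email interface") and action ("open an email that is the first type of email") described above.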
FIG.5Dillustrates the first client device500being used to open the first email interface. For example, responsive to a selection of the first selectable input508, a request to access the first email interface may be received.FIG.5Eillustrates a backend system550performing one or more operations associated with the first activity pattern524. For example, responsive to receiving the request to access the first email interface, the user profile526may be analyzed to select the first activity pattern524from a plurality of activity patterns comprised within the user profile526. For example, the first activity pattern524may be selected based upon the request to access the first email interface. Alternatively and/or additionally, a second email associated with the first email account may be selected for being displayed based upon a determination that the second email is the first type of email (e.g., the second email may be a most recently received unread email associated with the first email account).FIG.5Fillustrates the graphical user interface of the first client device500being controlled to display the second email. For example, the second email may be displayed automatically responsive to the first email interface being opened (e.g., the user may not be required to select the second email). FIGS.6A-6Cillustrate examples of a system601for determining activity patterns based upon user activity and/or performing operations based upon the activity patterns. A user, such as user Tiffany, (e.g., and/or a first client device600associated with the user) may access and/or interact with a first email interface for sending and/or receiving emails, performing communications via messaging, voice calls, video calls, etc. In some examples, the first client device600may comprise a microphone604, a speaker606and/or a button602(e.g., a switch). In some examples, a first email account, associated with the user, may be accessed using the first email interface. 
First activity performed using the first email interface on the first client device600may be detected. For example, the first activity may comprise emails being transmitted to one or more email accounts. For example, a first activity pattern may be determined based upon the first activity. The first activity may be associated with a first set of conditions and/or one or more actions. For example, the one or more actions may comprise drafting a happy birthday email and/or transmitting the happy birthday email to an exemplary email account. Alternatively and/or additionally, the first set of conditions may be associated with an occasion (e.g., a birthday) associated with an email account that is a first type of email account. For example, it may be determined that the first set of conditions are met responsive to a determination that a current time matches a birthday associated with an email account that is the first type of email account. FIG.6Aillustrates the first client device600being used to open the first email interface. For example, the first client device600may display a list of application selectable inputs (e.g., the list of application selectable inputs may be displayed within an exemplary home-screen). For example, the list of application selectable inputs may comprise a first selectable input608corresponding to the first email interface. For example, responsive to a selection of the first selectable input608, a request to access (and/or open) the first email interface may be received and/or the first email interface may be opened using the first client device600. In some examples, the request to access the first email interface may be received during a first birthday associated with a second email account. 
Alternatively and/or additionally, it may be determined that the second email account is the first type of email account based upon a determination that a type of relationship between the user and a second user associated with the second email account is a first type of relationship (e.g., a social relationship) associated with the first type of email account. Alternatively and/or additionally, it may be determined that the second email account is the first type of email account based upon a determination that a level of communication between the first email account and the second email account exceeds a threshold level of communication associated with the first type of email account. In some examples, it may be determined that the first set of conditions associated with the first activity pattern are met responsive to a determination that a current time matches the first birthday associated with the second email account. Alternatively and/or additionally, it may be determined that the first set of conditions associated with the first activity pattern are met responsive to a determination that the second email account is the first type of email account. In some examples, responsive to determining the first set of conditions associated with the first activity pattern are met, a first notification614may be transmitted to the first client device600. FIG.6Billustrates a graphical user interface of the first client device600being controlled to display the first notification614. For example, the first notification614may be displayed responsive to the first email interface being opened (and/or responsive to determining the first set of conditions associated with the first activity pattern are met). In some examples, the first notification614may comprise a second selectable input616corresponding to opening an email drafting interface for drafting and/or transmitting a first email (e.g., a happy birthday email) associated with the second email account. 
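The two-part condition check above (current date matches the contact's birthday, and the contact qualifies as the first type of email account via its communication level) might be sketched as follows; the field names, threshold, and the hypothetical address are assumptions:

```python
from datetime import date

def birthday_conditions_met(contact, today, level, threshold=5.0):
    """True when today matches the contact's stored birthday and the
    communication level marks the account as a social ('first type')
    contact. Threshold is an assumed value."""
    month, day = contact["birthday"]
    return (today.month, today.day) == (month, day) and level > threshold

contact = {"email": "greg@example.com", "birthday": (3, 14)}
met = birthday_conditions_met(contact, date(2024, 3, 14), level=9.0)
```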
In some examples, responsive to a selection of the second selectable input616, content may be generated for inclusion in the first email. For example, a first set of emails associated with the first activity pattern (e.g., happy birthday emails) may be identified and/or first text may be extracted from the first set of emails. A first set of text634(illustrated inFIG.6C) may (automatically, for example) be generated based upon the first set of emails. Alternatively and/or additionally, content items may be extracted from the first set of emails. For example, a first content item636(illustrated inFIG.6C) may be selected from the content items for inclusion in the content. FIG.6Cillustrates the graphical user interface of the first client device600being controlled to display the email drafting interface. For example, an email address associated with the second email account may (automatically) be entered into a first email header field626corresponding to recipients of the first email. Alternatively and/or additionally, a set of text may (automatically) be entered into a second email header field630corresponding to a subject line of the first email. Alternatively and/or additionally, the content (comprising the first set of text634and/or the first content item636) may be entered into an email body field632. FIGS.7A-7Dillustrate examples of a system701for determining unanswered emails and/or displaying notifications indicative of the unanswered emails. A user, such as user Thomas, (e.g., and/or a first client device700associated with the user) may access and/or interact with a first email interface for sending and/or receiving emails, performing communications via messaging, voice calls, video calls, etc. In some examples, the first client device700may comprise a microphone704, a speaker706and/or a button702(e.g., a switch). FIG.7Aillustrates a first email710being received by a first email account712associated with the user. 
For example, the first email710may be transmitted by a second email account708. In some examples, the first email710may comprise a first subject line “Experiment Results” and/or a first email body “Hello Thomas, Please send me the results of your last experiment.”. In some examples, the first email710may be analyzed to determine whether the first email is unanswered (e.g., whether the first email is unresolved, whether the first email is open, etc.). In some examples, the first email710may be analyzed to determine whether the first email710is unanswered responsive to a determination that a first duration of time since the first email710was received (by the first email account) exceeds a first threshold duration of time. Alternatively and/or additionally, the first email710may periodically (e.g., once per hour, once per day, etc.) be analyzed to determine whether the first email710is unanswered. In some examples, the first email710may be analyzed to determine whether the first email710is unanswered by analyzing the first email710to determine whether the first email710comprises one or more requests (e.g., one or more requests for information, a request for a report, a request for a file, etc.). For example, it may be determined that the first email710comprises a first request “Please send me the results of your last experiment”. Responsive to determining that the first email710comprises the first request, emails associated with the first email account712may be analyzed to determine whether an email comprising information associated with the first request has been transmitted to the second email account708. In some examples, responsive to determining that an email comprising information associated with the first request was not transmitted to the second email account708by the first email account712, it may be determined that the first email710is unanswered. 
Alternatively and/or additionally, responsive to determining that a reply email was not transmitted to the second email account708in response to the first email710, it may be determined that the first email710is unanswered. FIG.7Billustrates a graphical user interface of the first client device700being controlled to display an email drafting interface for composing an email. For example, the email drafting interface may comprise a first email header field720(e.g., “TO”) corresponding to first recipients of an email. Alternatively and/or additionally, the email drafting interface may comprise a second email header field722(e.g., “CC”) corresponding to second recipients of an email. Alternatively and/or additionally, the email drafting interface may comprise a third email header field724corresponding to a subject line field of an email. Alternatively and/or additionally, the email drafting interface may comprise an email body field726corresponding to an email body of an email. In some examples, it may be detected that an email address of the second email account708is inputted into the first email header field720. Responsive to detecting the email address of the second email account708being inputted into the first email header field720, a first notification732(illustrated inFIG.7C) may be transmitted to the first client device700. In some examples, the first notification732may be indicative of the first email710being unanswered. FIG.7Cillustrates the graphical user interface of the first client device700being controlled to display the first notification732. In some examples, the first notification732may comprise a first selectable input734corresponding to composing (e.g., drafting and/or transmitting) a first reply email in response to the first email710. For example, a request to compose the first reply email may be received via a selection of the first selectable input734. 
FIG.7Dillustrates the graphical user interface of the first client device700being controlled to display the email drafting interface for composing the first reply email. In some examples, the first email header field720may comprise the email address of the second email account708. Alternatively and/or additionally, the third email header field724may comprise a second subject line (e.g., “RE: Experiment Results”) indicative of the first reply email being a response to the first email710. Alternatively and/or additionally, an indication740of the first email body of the first email710may (automatically) be entered into the email body field726of the email drafting interface. It may be appreciated that the disclosed subject matter may assist a user (e.g., and/or one or more client devices associated with the user) in interacting with a communication interface more conveniently and/or performing actions and/or tasks using the communication interface more quickly. Alternatively and/or additionally, it may be appreciated that the disclosed subject matter may assist the user in keeping track of received emails and/or unanswered emails. Implementation of at least some of the disclosed subject matter may lead to benefits including, but not limited to, a reduction in screen space and/or an improved usability of a display (of the client device) (e.g., as a result of determining activity patterns based upon user activity with the communication interface, as a result of automatically performing actions associated with an activity pattern responsive to detecting activity associated with the activity pattern and/or determining that conditions associated with the activity pattern are met, wherein the user may not need to perform the actions manually, etc.). 
Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including the user's experience being improved (e.g., as a result of automatically performing actions responsive to detecting activity associated with activity patterns, which may make it easier for the user to perform tasks by improving an operating efficiency of the communication interface and/or the user, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including a reduction in screen space and/or an improved usability of the display (of the client device) (e.g., as a result of automatically identifying unanswered messages, as a result of displaying a notification of an unanswered message automatically, wherein the user may not need to open a separate window and/or navigate through messages to identify unanswered messages, etc.). Alternatively and/or additionally, implementation of at least some of the disclosed subject matter may lead to benefits including reducing a probability that the user forgets about emails comprising requests for information and/or questions (e.g., as a result of automatically identifying unanswered messages, as a result of displaying a notification of an unanswered message automatically, etc.). In some examples, at least some of the disclosed subject matter may be implemented on a client device, and in some examples, at least some of the disclosed subject matter may be implemented on a server (e.g., hosting a service accessible via a network, such as the Internet). FIG. 8 is an illustration of a scenario 800 involving an example non-transitory machine readable medium 802. The non-transitory machine readable medium 802 may comprise processor-executable instructions 812 that when executed by a processor 816 cause performance (e.g., by the processor 816) of at least some of the provisions herein (e.g., embodiment 814).
The non-transitory machine readable medium 802 may comprise a memory semiconductor (e.g., a semiconductor utilizing static random access memory (SRAM), dynamic random access memory (DRAM), and/or synchronous dynamic random access memory (SDRAM) technologies), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disc (such as a compact disc (CD), digital versatile disc (DVD), or floppy disk). The example non-transitory machine readable medium 802 stores computer-readable data 804 that, when subjected to reading 806 by a reader 810 of a device 808 (e.g., a read head of a hard disk drive, or a read operation invoked on a solid-state storage device), express the processor-executable instructions 812. In some embodiments, the processor-executable instructions 812, when executed, cause performance of operations, such as at least some of the example method 400 of FIG. 4A, and/or the example method 450 of FIG. 4B, for example. In some embodiments, the processor-executable instructions 812 are configured to cause implementation of a system, such as at least some of the example system 501 of FIGS. 5A-5F, the example system 601 of FIGS. 6A-6C, and/or the example system 701 of FIGS. 7A-7D, for example.

3. Usage of Terms

As used in this application, “component,” “module,” “system,” “interface,” and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
Unless specified otherwise, “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object. Moreover, “example” is used herein to mean serving as an instance, illustration, etc., and not necessarily as advantageous. As used herein, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. In addition, “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Also, at least one of A and B and/or the like generally means A or B or both A and B. Furthermore, to the extent that “includes”, “having”, “has”, “with”, and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims. Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Various operations of embodiments are provided herein. In an embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer and/or machine readable media, which if executed will cause the operations to be performed. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments. Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
| 119,741 |
11943311

It is to be appreciated that elements in the figures are illustrated for simplicity and clarity. Common but well-understood elements, which may be useful or necessary in a commercially feasible embodiment, are not necessarily shown in order to facilitate a less hindered view of the illustrated embodiments.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments or aspects. It will be evident, however, to one skilled in the art, that an example embodiment may be practiced without all of the disclosed specific details. The system and methods disclosed herein may be used for hidden-identity location sharing, including in social networking and instant messaging environments, on a mobile communication device, other computing device, engine, or processor-based system or device. In accordance with the above-described desired system and method, the disclosed system and method addresses issues that social media face with existing applications. Many users at times may want to know who may be close by or, perhaps, who is in a certain location (e.g., mall, restaurant, store, etc.) or geographical area (i.e., outside of the country) while maintaining one's identity private and/or forgoing location sharing, at least temporarily. With existing systems and methods, known or standard forms of location sharing between friends can be, or have proven to be in certain circumstances, an intrusive and/or uncomfortable form of communication. A user's location may be set to either “on” or “off”, and there is generally no grey or hybrid form of communication with respect to guarding one's privacy of information. The disclosed system and method are directed to novel location-sharing-based software, especially for circumstances in which, generally, a majority of users would be or are uncomfortable with sharing their location.
For example, Snapmap is usable, but it is set either “on” or “off”. When it is set to “off”, it renders itself non-useful and, at times, even an intrusive service, and when it is set to “on”, it divulges your location to everyone, sparking in users the feeling, sense, or at least the perception of intrusion. Hence, there is a desire for a novel system and method in which users can comfortably share their location and communicate. In accordance with an embodiment of the disclosed system and method, using the Wave Dynamic Communication Protocol (WDCP), users may establish dynamic flows of communication over a mapped or other regional/locale-based layout. In certain aspects or embodiments, each user begins as an anonymous “marker” on the map. It is noted that the user's location may still be shared; however, the identity of the user is not (at least yet, or at the outset) displayed or known to other user(s). In an embodiment, one or more other friends would send a protocol request to the user in order to reveal the identity of the anonymous user and to enter the flow/protocol using the WDCP. If the user accepts, they are both entered into a flow of communication that lasts for a dynamic period of time. Once the timer expires for the ephemeral-based communication, the flow terminates along with all of its contents. In certain embodiments, users may wish to find and chat with a specific friend. Using WDCP, users may also send protocol requests in which they can specifically select individual friends via, for example, a friends list. In certain embodiments, the WDCP is not limited to anonymity, and users can also send protocol requests to specific friends. One mode implements, for example, BAPR (Being Anonymous Protocol Requests) for fun and comfort, and the other mode is BSPR (Being Specified Protocol Requests), used for fun, comfort, and any specific interest in talking with a specific friend.
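The ephemeral flow described above (a protocol request is accepted, a timed flow of communication begins, and on expiry the flow terminates along with all of its contents) can be sketched as follows. This is a hedged illustration under assumed names (`Flow`, `send`, `active`); the patent does not prescribe any particular implementation, and a simulated clock is used so the sketch is deterministic:

```python
import time

class Flow:
    """An ephemeral WDCP-style flow: created on acceptance of a protocol
    request, holds messages, and discards all contents on expiry."""
    def __init__(self, user_a, user_b, duration_s, now=time.monotonic):
        self.participants = {user_a, user_b}   # identities revealed to each other
        self.expires_at = now() + duration_s
        self.messages = []
        self._now = now

    def active(self):
        return self._now() < self.expires_at

    def send(self, sender, text):
        if not self.active():
            self.messages.clear()   # flow terminated: contents are gone
            raise RuntimeError("flow expired")
        self.messages.append((sender, text))

# Simulated clock so the behavior is reproducible.
clock = [0.0]
flow = Flow("alice", "bob_anonymous_marker", duration_s=900,
            now=lambda: clock[0])
flow.send("alice", "hey, is that you by the cafe?")
clock[0] = 901.0            # the dynamic (here, 15-minute) timer elapses
print(flow.active())        # False
```

A production system would enforce deletion server-side as well, since the ephemerality of the contents is the point of the flow.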
In accordance with the WDCP system and method, users can easily and comfortably gain access to their friends' locations and communicate without having to worry about the possibility of being intrusively, unknowingly, or unwillingly “tracked” by others, as when using known social media platforms. Since the known systems and methods associated with location sharing and communication over a mapped layout are generally considered standard/basic in such social media platforms, there is generally no room left for users to explore a more refined and creative world of social media interactions. The disclosed system and method associated with WDCP communications is directed to refined, creative, and hybrid forms of social media interactions that are not conventional to such known social media platforms. In certain aspects or embodiments, friends or users can communicate in the way that they always wanted but never were able to, depending upon their desired level of communications and privacy at a certain moment in time. For example, many possible “meetup” situations are passed up on simply because users implementing known systems and methods decide to keep their location “off” while depending on others to keep their locations “on”. Often when this is the case, it creates and snowballs into a situation where everyone turns their location “off”, hoping that the next person will have their location “on”. This produces a potential “stalker-like” mentality or perception between and among users, which may lead to the downside of social media communications, i.e., potentially toxic and undesired behavior and/or communications, which are often not intended as such. Therefore, in accordance with an embodiment of the disclosed system and method, users are able to communicate dynamically, on a more refined level or caliber of wave dynamic communication protocol social media communications.
In certain aspects or embodiments, this includes sending conditionally-based communications, and/or a graduated or calibrated revelation of one's identity and/or other calibrated information sharing. Since a user generally can begin as an anonymous user on the map, users will be inclined to keep their location on when they would otherwise be inclined not to keep the location sharing feature in “on” mode. Hence, this creates and facilitates a positive environment in which users will be able to send out protocol requests in accordance with their comfort level, in order to start a dynamically timed location sharing session. In addition, the users can progress in accordance with the social calibration states, and eventually reveal the user's identity while sending calibrated information communications for the duration of the flow/protocol. As mentioned above, when the flow terminates, all of the associated content in such WDCP protocol will also terminate with it. This is designed to enable and facilitate a user's comfort level while location or other information sharing occurs in a calibrated “social media state”, in accordance with an embodiment of the disclosed system and method. It is noted that users are generally and naturally more comfortable when such communications are not permanent; hence, users will be more inclined to communicate on such a social media platform with greater integrity of social communications and in accordance with their desired comfort level. Essentially, the calibrated social states of the disclosed system and method mimic a natural human-to-human interaction, which usually includes a mystery factor when not initially knowing a person very well. Mystery/curiosity has been a driver for the human brain since the beginning of time. Because of this, the user will naturally want to know which friend might be nearby (or perhaps in another geographical region or locality) when the user finds the anonymous map marker.
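The graduated, calibrated revelation of identity described above can be sketched as an ordered set of states, each exposing more profile information to the peer. The state names and field names below are illustrative assumptions; the specification does not enumerate specific states:

```python
from enum import IntEnum

class SocialState(IntEnum):
    """Graduated wave states: higher values reveal more to the peer."""
    ANONYMOUS = 0          # marker visible, identity hidden
    NAME_ONLY = 1          # display name revealed
    NAME_AND_LOCATION = 2  # precise location also revealed
    FULL_PROFILE = 3       # full identity/profile revealed

def visible_fields(state):
    """Which profile fields a peer may see at each calibrated state.
    Field names here are hypothetical, not from the specification."""
    fields = ["marker"]
    if state >= SocialState.NAME_ONLY:
        fields.append("name")
    if state >= SocialState.NAME_AND_LOCATION:
        fields.append("precise_location")
    if state >= SocialState.FULL_PROFILE:
        fields.append("profile_details")
    return fields

print(visible_fields(SocialState.NAME_ONLY))  # ['marker', 'name']
```

Using an ordered enumeration makes the "graduated" property explicit: moving to a higher state is monotone, only ever adding fields, which matches the idea of a user progressing through calibration states at their own comfort level.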
In accordance with an embodiment of the disclosed system and method, the user can simply send a protocol request and easily get in touch/communicate with their “friend” more comfortably, or when ready to reveal their social state and, hence, calibrated level of identity. The disclosed system and method are directed to facilitating both a fun and mysterious form of social media communication that provides greater confidence and overall positivity to users thereof while avoiding general social media pitfalls. As an example, if there is a person that a user wants to meet up and/or communicate with, the user may not (in some cases) want to text them directly. However, with the WDCP platform, a user can send protocol requests to the anonymous friends on the map, anticipating or hoping that one of these anonymous (i.e., mystery) map markers is the person they wish to communicate with. It also works in reverse as well. The requesting user can also send the requested user a protocol request when that requesting user is just trying to determine which friends are around them. In such a case, the requested user can then accept the request, commence a conversation, and see where it goes from there. Even though studies show that users are generally not fond of Snapmap, users still implement such a feature every day and commonly. Users enjoy the concept of connecting with their friends via a map so much, to the point where they are willing to sacrifice their comfort level to use the software and communicate; the disclosed system and method addresses any such issues in a novel platform. Users generally have no other options, since it is the only basic social media platform with tracking software included, but limited to stringent “on” and “off” modes with respect to location tracking.
However, an avid user of such social media platforms understands that tracking is one common variable of these known social media platforms that is preventing them from growing and reaping the rewards and benefits of engaging in social media communications via such known platforms. Essentially, many potential users will not location share since they do not want to be tracked. Hence, in accordance with the WDCP platform, the human mystery factor as specified previously is present, and a user's friends' identities are not revealed on the map, at least not at the outset of such calibrated social media communications. The user begins with one or even many identical-looking map markers. Such map markers, for example, can be in the form of an icon, such as a balloon or other contemplated symbolic icon. This WDCP platform permits a requesting user to transmit protocol requests with an almost impersonal perception. There is no room for any user to second-guess and permit negativity such as “Why did that person call me?”. Such a request is based on no particular reason besides pure fun and possible coincidence, since it is anonymous from the outset. With WDCP, each request received seems “lighter” and more “innocent”, creating a positive atmosphere where users can engage in positive communication. Hence, the WDCP fosters a fun and effective social media platform with a calibrated and veiled identity, which is revealed to the requesting user only if, and to the degree that, a user desires. In certain aspects and embodiments, the first user may send WDCP protocol requests to other users visible on the first user's map, locality (or other desired region), can determine who has accepted such a request, and then can implement the WDCP system and method to communicate with that second user for a dynamic period of time, as such communications are ephemeral based.
FIGS. 1A-1E show an exemplary embodiment of on-boarding features, including screenshots of the disclosed system. A user will, upon download of the application to his computing device (or mobile computing device), be presented with the on-boarding screens as shown in FIGS. 1A-1E. The user commences the session, using the dynamic wave communication protocol (DWCP) session, as anonymous. Once the user begins, it is noted in FIG. 1B that the user can send a dynamic “wave” or “wayve” signal request to one or more “anonymous friends” in order to seek to reveal their veiled identity. Once a friend accepts the wave request as shown in FIG. 1C, the user can then reveal their friend's identity, chat, and explore, in one or more active wave states as described in connection with FIGS. 21A-D. In FIG. 1D, an example onboarding screen indicates that the user does not or may not want to be shown on the map at all, and can choose to enter or has entered shade mode by selecting and enabling the “Shade Mode” button feature shown. In an exemplary embodiment shown in FIG. 1E, the system indicates that the system will only use the user's location while using the application, in accordance with an embodiment of the disclosed dynamic wave communication protocol (DWCP) system and method. FIGS. 1F-1M show an exemplary embodiment of account setup and login features of the disclosed subject matter, including various screenshots advancing to various features of the application. A user may skip the phone number process, in which case the user will be directly in the mapped layout. FIGS. 2A-2B show an exemplary identity request feature in a mapped layout shown in example screenshots. FIGS. 2A-B show how the initial screen appears for all new users, including user location on the map. The user icon is a small colored circle 3 (for example, a light blue circle) with a colored pointer 5 (for example, an orange pointer) that swivels around depending on which specific direction the user is in.
In this regard, the map marker associated with the user is used to represent the user's current location and/or whereabouts. The map marker will be faded or dimmed when the user enters shade mode. This map marker 3 displays to the user that their location, shown by direction arrow 5, is not being shown in their friend(s)' maps, along with the shade mode icon 25 as shown in FIG. 10B, which map marker 3 with directional arrow 5 will be faded or dimmed further than as shown in FIG. 10A during such shade mode. (However, in accordance with an embodiment, only the user can see this specific icon, in the shown example.) In certain embodiments, there may be one or, alternatively, two anonymous balloons 4, 7, shown in FIG. 2A, on the map when a user begins or commences the wave session. The user must send identity requests to these one or more balloon(s) 4, 7 in order to find out who they are (for example, to uncover their veiled identity). There is also shown a colored dot 1 (for example, a red dot) at the top of the screen in FIG. 2A, which indicates that the user is not involved in any wayves (and/or wave/dynamic protocol communications) and no one can actually see the user's location on their device or computing device, as further described hereinbelow in FIGS. 21A-D. For the purposes of the present disclosure, the term “wayve” or “wave” refers to an identity request (or, alternatively, a request to engage in wave communications with an anonymous friend or friend set in another wave state) in accordance with the wave dynamic protocol communication (WDCP) as described further hereinbelow. The home button (for example, balloon 8) is shown in FIG. 2A on the bottom of the screen, as well as a re-center button 10. The re-center button 10 operates such that when the user scrolls away off the map or anywhere on their map, the user can quickly press this button to re-center the map back to their current location map marker, shown as directional arrow 5.
As shown in FIG. 2B, in certain embodiments, a user clicks on or selects any colored (for example, orange) anonymous friend(s) balloon(s) 7, 4 on the map. The user is represented, for example, by a circular map marker icon 3 in FIGS. 2A-3B that includes a directional arrow 5. The selected balloon 9 enlarges to signify that the user has selected it. Next, the user swipes the small balloon located in the “swipe bar” 11 from left to right (for example, towards the lighter (for example, yellow) rightmost line portion) in order to send out to the selected balloon 9 a request (for example, a wayve, wave, wave-shake, wave-chat, wave-request, and/or identity request). When the user wayves to the friend, the friend is generally still anonymous at such point until they choose to accept the received wayve or wave request. The friend that received the wave will be able to see the name only of the person who waved at them, in certain embodiments. Once the friend accepts the wave, they are then engaged in a wave or wayve request or exchange in accordance with the WDCP protocol. In certain embodiments, the user will not see strangers that the user does not know and who are not friends on the map. In certain aspects or embodiments, the user will see strangers that the user does not know and/or is not friends or otherwise familiar with in the application. In the shown example, the request has a duration of 15 minutes, which time is indicated on the map, below the swipe bar 11. When the user 3 wayves to a friend, that friend cannot yet identify the user's location (perhaps just the user's identity/name), in an example embodiment. The “wayved at” user-friend 9 would first accept the user's wayve in order to determine or identify the user's location. The wayved-at user-friend (e.g.,
balloon 9) remains anonymous until that user so chooses to accept the received wave, and even so, there are available options, in certain embodiments, for calibrated user-states and, hence, revelation of that user-friend's (balloon 9) full identity and/or other private information based on the user's comfort level associated with a privacy-level state (as described further in connection with FIGS. 21A-D). The user-friend (balloon 9) that received the wave will be able to see the name only of the user 3 who actually waved to them via the WDCP platform, in example embodiments. Once the waved-at user 9 accepts, both user 3 and user balloon 9 can then engage in a wave communication associated with the WDCP-based platform. Accordingly, in the WDCP system and method, if the user 3 sees, for example, an “orange balloon” 4, 7 appear on the map display, this would indicate that the user 3 and the orange balloon(s) 4, 7 have been through the exchange of the friending process at some earlier point in time, in accordance with an embodiment. In certain embodiments, all balloons that are visible on the map are confirmed friends whose identities are not revealed until a wave request is sent and the waved-at user-friend (i.e., balloon 9) accepts the request, which would reveal their identity to user 3, placing both users into an active wave state as further shown and described in connection with FIGS. 21A-D hereinbelow. In other contemplated embodiments, the system can be configured so that a user may also see users who may not yet be confirmed friends. Also, the user's wayves page button 8, shown in FIGS. 2A-3B, provides a means for the user 3 to navigate to the wayves page as shown and described in connection with FIG. 5A.
As shown in FIG. 3A, once the wayve (for example, an identity request) is sent by user 3, located at directional arrow 5, the orange balloon 9 in FIG. 2B, representing the waved friend (i.e., waved at previously in FIG. 2B), will change to a different color, for example, a yellow balloon 13, to indicate that it is pending/waiting for an acceptance of the wave request. The user is represented in FIG. 3A as user 3, in a direction (shown as directional arrow 5). Another anonymous “friend” is represented in FIG. 3A as orange balloon 7. The dot 20 at the top of the screen in FIG. 3A will change color, for example, from the red dot 1 in FIGS. 2A-2B to a yellow dot 20, corresponding with the status of the presently pending wayve. A separate countdown timer begins once the wave request becomes pending and enters the status of the presently pending wayve (or wave) state. In the shown embodiment, the countdown timer is set at 15 minutes and begins to count down to zero (“0”) accordingly. However, other timers can be used, and the timer is shown below the user represented by yellow balloon 13. This shown time represents the “Pending Timer”. This timer represents the duration that the request will stay valid (in the friend's inbox/wayves page). Once the timer expires, the request also expires/disappears. The waved user represented by yellow balloon 13 generally will choose to accept or ignore before the timer expires, as shown in example FIG. 3B. Once the friend the user wayved at accepts the request, his or her balloon changes from yellow to green, as shown in example FIG. 3B, indicating that the wayve is accepted. The user 3 and the wayve friend, now represented by green balloon 17, can now see each other's locations and identities, if divulging such information is agreed to by the user shown as green balloon 17. More particularly, the user represented as green balloon 17 can also decide to divulge a calibrated set of information, dependent upon what level of privacy the user so chooses.
Once the yellow balloon 13 turns to green, the dot 21 located at the top of the screen will also turn from a yellow dot 20 to a green dot 21. Both the yellow 13 and green 17 balloons include respective time limits indicated thereunder, which symbolize how much time exists before the wayve expires. The time limit in the shown example is set at 15 minutes. It is noted that in certain embodiments, the countdown timer may be replaced or otherwise replenished with the new desired time of the active wayve. In the shown embodiment, such time limit is 15 minutes. The pending timer will generally differ from the active timer. Next, either user involved in the active wayve request may now select (e.g., click) the green balloon 17 to proceed to the direct messaging page to message each other (also referred to as the message thread), as further described below in FIGS. 6A-6C. The chat bubble 18 shown in FIG. 3B appears next to the waved-at friend's name (under the friend's active balloon, shown as green balloon 17). When the user's friend sends the user a message, that chat bubble 18 will light up and appear as another color, for example orange, with the number of new unread messages appearing next to it. This is further described in connection with FIGS. 13A-B. When the user enters a message thread via the possible ways or methods, and reads the messages, the bubble 18 will no longer have the number affixed next to it, or may be filled in a solid color, for example, orange. This is essentially because the user has now read the messages. The orange dot 15 in FIG. 3B represents that a pending message is awaiting the user. Turning next to FIGS. 4A-B, shown are embodiments of an exemplary friends section feature of the wayves page as it may appear when the application (i.e., app) is first downloaded or otherwise initiated. The scenario here is that a wayve has been sent to one of the “bots” as described below. The user 3 has not added any friends yet, so there are only two bots shown in the example, as placeholders.
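The color-coded wayve lifecycle walked through above (red: no wayve; yellow: pending, with a pending timer; green: active, with a replenished active timer; expiry returns to red) can be sketched as a small state machine. The class and method names are hypothetical, and the 15-minute defaults simply follow the example in the figures:

```python
from enum import Enum

class WayveState(Enum):
    NONE = "red"        # no wayves in progress
    PENDING = "yellow"  # request sent, awaiting acceptance
    ACTIVE = "green"    # accepted: locations/identities shared

class Wayve:
    """Tracks a single wayve request through the color-coded states,
    with separate pending and active countdowns."""
    def __init__(self, pending_s=900, active_s=900):
        self.state = WayveState.NONE
        self.pending_s, self.active_s = pending_s, active_s
        self.remaining = 0

    def send(self):
        self.state, self.remaining = WayveState.PENDING, self.pending_s

    def accept(self):
        if self.state is WayveState.PENDING:
            # On acceptance the countdown replenishes to the active timer.
            self.state, self.remaining = WayveState.ACTIVE, self.active_s

    def tick(self, seconds):
        self.remaining = max(0, self.remaining - seconds)
        if self.remaining == 0:
            self.state = WayveState.NONE   # request or wayve expired

w = Wayve()
w.send();   print(w.state.value)  # yellow
w.accept(); print(w.state.value)  # green
w.tick(900); print(w.state.value)  # red
```

The status dot at the top of the screen and the balloon color can both be driven directly from `state.value`, keeping the two indicators consistent.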
When there are no wayves, a wayve bubble, as shown for example in FIG. 4A, will appear at the top with a red dot, to tell the user that nothing is happening at that moment. In this example shown in FIG. 4A, when the wayve is sent to Bot Kevin, there is a wayve bubble that next appears (thereby replacing the previous “no wayves now” wayve bubble) with a time limit and a yellow dot, as shown in FIG. 4B. The bots are represented in example embodiments as colored balloons (e.g., orange “anonymous” balloons), shown as Bot Kevin and Bot Brett in FIGS. 4A-B. Such bots are anonymous, similar to the user's friends that also appear as anonymous balloons, at the outset at least. In order to further protect users' privacy, the system may include smart bots that appear around any stray friends' balloons, and this is in accordance with the WDCP's system and method of using calibrated privacy states (i.e., wave-states as further described in example FIGS. 21A-D) in accordance with a user's comfort level when implementing the system. The smart bots are designed to keep anonymity between all wayve users. Examples of such smart bots are shown in FIGS. 4A-4B. The user may wayve to these bots when the app is first downloaded to test and learn the app's functionality. Under normal use (when the user has added the user's friends while using the application), the two bots may just surround any stray, singled-out balloons in order to keep that person's identity private. Bots blend in with the real friends. For example, the user may have a friend named Mike who the user knows always goes to Starbucks every afternoon. If the user checks the user's map and notices a singled-out balloon at a Starbucks, the user will deduce that it probably has to be Mike. Although Mike's name/identity is not shown because Mike and the user are not in a wayve, the user can easily surmise that it is Mike because the user does know that Mike frequents a certain location.
In order to prevent Mike from being so easily identified, two bot balloons will surround or be proximate to Mike's balloon, which will camouflage Mike's representative balloon. FIGS. 5A-5B show an exemplary wayves page feature in accordance with certain aspects or embodiments of the present system and method. In the shown example of the wayves page in FIG. 5A, the user has friends and the app is in normal use. Joe Coore and the user are in an active wayve in bubble 30 and can see each other's location and/or chat with each other, which is also symbolized by the colored dot (e.g., green dot) located next to and to the left of his name. Underneath the Joe Coore wayve bubble 30, there is another bubble 31 that shows a wayve has been sent out to an anonymous person on the map, marked with a pending color dot (for example, a yellow dot). Underneath that "wayve sent" bubble 31, a third bubble 32 is shown indicating that a wayve has been sent to Anna, stating "Wayve to Anna . . . ". Her name, Anna, appears because the user directly wayved at her from the friends section (located on the wayves page, for example in FIGS. 5A-5B). In the shown embodiment, the friends section is located at the lower half of the screen and indicates to the user that the user can swipe the balloons to wayve to one or more of the friends listed below, or the user can select or click on the directory icon, which is the shown home button, in order to find other friends in the user's directory. In certain embodiments, such a pending wave state is also marked with a color dot (for example, yellow or another color) to represent a pending dot. Below that third bubble, a fourth bubble 33 is shown indicating that the user's friend, Eva Shay, is wayving at the user. The user can choose to accept or ignore this wayve. Each shown bubble includes a timer, and a bubble will move up in rank when in an active status state rather than pending.
Further, in the shown example of FIG. 5A, there is no time limit shown to accept a wayve request (although one could be implemented and shown on the user's screen in accordance with other embodiments). However, the bubble has a faint progress bar built into it. This progress bar represents the timer. There is shown in the example a yellow pending dot next to the name 31 because the "wayved at" user has not accepted the wave request as of yet. On the top right of Joe Coore's bubble 30, there is an empty message bubble 34 (shown in orange color), which signifies that the user can message him (and vice versa, in certain embodiments). The message bubble 34 (in the shown embodiment, the message bubble is the small bubble 34 located on the top-right of the wayve bubble 30, in orange color) can also be empty, which also symbolizes that there are no new messages from user Joe. User Joe's last message will still appear in the "wayve" bubble 30. In the shown embodiment, the wayve bubble includes the name, progress bar, the timer, and the last message sent/received. However, when user Joe sends a new message, that new message will be shown and the message bubble on the top right of the wayve bubble 34 will be filled in with a 1 next to it, as shown next to bubble 34 in FIG. 5A. The user has to select or click on this message, either on the wayve bubble 30 or message bubble 34, to go into the direct message thread, for example as shown in FIGS. 6A-6C. By selecting or clicking the settings cog wheel 37 icon/button (for example, the icon may be an arrow, settings cog wheel 37, or other wheel) located in the top right of FIGS. 5A-5B, the system will transition the user directly back to the settings page(s), for example as shown in FIGS. 9A-9B, in which wave privacy states and/or other privacy settings can be configured by the user depending on the calibrated wave-state the user prefers.
In other embodiments, clicking or selecting the map icon 38 will return the user to the map, for example FIGS. 2A-3B, depending on which wave-state the users are engaged in with other user(s). For example, if a user selects an active (e.g., green) balloon 17 on the map, the user will proceed to the message thread page shown, for example, in FIGS. 6A-6C. Then, if the user selects/clicks the map button 38 located on the bottom center of the screen, the system will transition the user back to the map, for example as shown in FIGS. 2A-3B. However, in certain embodiments, if a user enters the message thread through the wayves page (for example, by selecting/clicking a wayve bubble), the system will proceed back to the wayves page. In FIG. 5B, the message bubble at the top right of Joe Coore's wayve bubble 30 is filled in and has a "1" next to it, which signifies that there is one new message from Joe. If user Joe Coore had sent five (5) messages, then the number "5" would be indicated next to the message bubble. Furthermore, it is shown in the friends section (in the lower half of FIG. 5B) that the user can directly wayve to Luke Finley by swiping the balloon next to his name all the way into the yellow area of the line as shown on screen. When the user selects (or clicks) and starts to slide Luke's balloon, the wayve line appears as a red-to-yellow drag line 35. FIGS. 6A-6C show an exemplary embodiment of a direct messaging feature of the disclosed subject matter. FIG. 6A shows a direct message thread between the user and Joe Coore. In the shown example, the name and timer still appear at the top of the screen, so the user is aware of how much time is left to actually see Joe's location and, perhaps, message him. As shown in FIG. 6A, Joe has messaged the user and his message is indicated/noted with a colored line 40 (e.g., an orange line) beside it or, alternatively, beneath it.
The user's message is transmitted in a certain color (for example, yellow) to symbolize that it is pending/sent and has not been read yet. If the signal is low and the message is having difficulty transmitting during a send request, the colored line (for example, the yellow line) and text indicated for the sent message will appear dull/faded until it is sent. In an embodiment, these lines, which may appear in different colors, for example both orange and yellow, only appear beside (or alternatively beneath) the most recent messages. Selection of the arrow in the top right will navigate the user back to the wayves page in certain embodiments. As shown in FIG. 6B, continuing from FIG. 6A, the user's message now includes a line in color 41 (for example, green) beside it or, alternatively, beneath it, which means that in the shown example, Joe has seen/read the user's message. Timestamps are also included with each message when messages are sent or transmitted, as shown in FIG. 6B. Turning to FIG. 6C, when Joe is typing a message, his last message 43 (which would normally have a full colored (e.g., orange) line) will change and appear as, for example, three animated dots to the right of Joe's name 44 (or other suitable animation(s) or symbol(s) representing when the friend/user (i.e., Joe Coore in this embodiment) is typing an additional message). In certain embodiments, once the new message that Joe Coore has newly typed is sent, there will be a solid-colored line (for example, orange line 43) beside it or, alternatively, beneath it. In certain embodiments, no line appears beside (or beneath) the previously sent message, since it is no longer the most recent one. In some embodiments, the friend's messages appear in a certain color, for example orange, from the user's point of view. From Joe's point of view, his own messages are likewise, for example, yellow and green, and the user's messages are all orange to him.
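The pending/sent/read indicator scheme described above can be sketched as a small status model. This is an illustrative sketch only, not code from the disclosure; the names `MessageStatus`, `STATUS_COLOR`, and `indicator_color` are assumptions introduced here for clarity.

```python
from enum import Enum

class MessageStatus(Enum):
    """Delivery states for a message in a wayve thread (illustrative)."""
    PENDING = "pending"   # transmitting; low signal renders the line dull/faded
    SENT = "sent"         # delivered but not yet read
    READ = "read"         # recipient has seen/read the message

# Indicator-line colors for the sender's own messages, per the example colors above.
STATUS_COLOR = {
    MessageStatus.PENDING: "yellow-faded",
    MessageStatus.SENT: "yellow",
    MessageStatus.READ: "green",
}

def indicator_color(status: MessageStatus, is_own_message: bool) -> str:
    """Return the color of the line shown beside (or beneath) a message.

    A friend's incoming messages always show a fixed color (e.g., orange);
    the user's own messages reflect delivery status.
    """
    if not is_own_message:
        return "orange"
    return STATUS_COLOR[status]
```

Note that from each participant's point of view the mapping is symmetric: one's own messages cycle through the status colors, while the counterpart's messages appear in the fixed incoming color.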
With respect to the user's messages, the user's last read message has a green line beneath or beside it. The user's first message will have a green line beside it. Once the time limit/wayve duration expires, the wayve chat (direct messages and all messages sent within the timed session) will automatically be deleted. In some embodiments, the user is notified if any screenshots of the message chat have been taken by the other participant(s). Additionally, the system is configured to optionally prevent users from taking screenshots of the chat to ensure greater privacy. FIG. 6D shows how a user can enter a pending wayve's message thread for the pending duration and preemptively send messages to the friend. This way, the friend will see new messages coming in from a pending thread and may be inclined to accept the incoming wayve, as seen for example in FIG. 5B with "Eva Shay" (with "1 message" 33). Also note that in the pending wayve message thread of FIG. 6D (in which the user sent the shown wayve out), the user can cancel the outgoing wayve, as indicated by selection of the "cancel wayve" button in FIG. 6D. Should the user enter a pending chat for an incoming wayve (prior to accepting it), such as the scenario shown in FIG. 5A with the incoming wayve from Eva Shay with "1 message" 33, the user will then navigate to the screen shown in FIG. 6E. The user cannot see the specific contents of the message, but can see that there is "1 message" waiting upon acceptance of the wayve from Eva Shay. Once the user chooses to accept the wayve (either from the message thread screen shown in FIG. 6E or from the wayves page shown in FIG. 5A), the user can then view the message from Eva and can also send a message to Eva. In other embodiments, the pending message thread may be completely open for users to chat, similar to the behavior of an active wayve, although without any location sharing.
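The automatic deletion of the chat when the wayve duration expires can be sketched as a timed session that purges its messages on expiry. This is a minimal sketch under assumed names (`WayveSession`, `send`, `expired`), not the disclosed implementation; an injectable clock is used so the behavior can be exercised deterministically.

```python
import time

class WayveSession:
    """A timed wayve whose chat is purged when the duration expires (illustrative)."""

    def __init__(self, duration_seconds: float, now=time.monotonic):
        self._now = now
        self.expires_at = now() + duration_seconds
        self.messages: list = []

    def send(self, text: str) -> bool:
        """Append a message if the session is still live; refuse otherwise."""
        if self.expired():
            return False
        self.messages.append(text)
        return True

    def expired(self) -> bool:
        """Check the timer; on expiry, delete all messages in the timed session."""
        if self._now() >= self.expires_at:
            self.messages.clear()
            return True
        return False
```

In a real client the purge would also need to run server-side, since deleting only the local copy would not satisfy the ephemeral-chat behavior described.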
In this fashion, the users can carry out a full conversation before deciding whether they want to engage in an active wayve and share their identities and/or location. FIGS. 7A-7B show an exemplary embodiment of an add friends feature of the disclosed subject matter. In the example, "Luke Finley" and the username "mdivyy12" are used multiple times for illustration purposes. Embodiments of the system may have different friends' names and usernames. As shown in FIG. 7A, there is a list, or even a long list, of friends that are in the user's contacts. Once a user base is built up, there will be mutual friends to add. The app will have access to the user's contacts after the user enters the user's phone number in the sign-up process. If the user skipped that, a "please enter phone number to sync contacts" message may appear on this screen or, in certain embodiments, on the settings page as shown, for example, in FIG. 9A. In some embodiments, there is no need to enter a phone number in order to sync contacts. Instead, there is a "would you like to sync contacts" permissions message that pops up when the user enters the add friends page for the first time. The people in the user's contact list that have accounts will appear first, in alphabetical order, as shown with the names with the balloons 50 (for example, orange balloons 50 appearing to the left of their names) and the add friend icon 51 to the right. The names are kept the same for this example but would be random names in normal app use. The username of the friend will also appear under their name. The people that do not have an account will have "invite" messages beneath their names, as well as an invite button to the right of their name. If the user presses this, a text message will be sent out to that person in their contacts for them to get started with wayve. The user is also able to use a search bar to find specific usernames and more friends.
It is noted that the "added friends" feature refers to when a user "adds" a friend that may already have a wayve account, in certain embodiments. The "invite" feature refers to when a user enters or navigates to the "add friends" page and selects a friend from their contacts list to invite to exchange communications using wayve, for example via a text message invite. FIG. 7B shows an exemplary pending friend requests section 54 of the "add friends" page. This feature will display the other users that have sent the user incoming friend requests, as well as other users with whom the user has outgoing friend requests. The balloons 53 to the left of the names are indicated in a color, yellow for example, to signify that they are pending, because the user has not yet accepted the incoming requests and the user's one outgoing request has not been accepted either in the shown FIG. 7B. Once the user accepts the two incoming requests, the balloons may switch to a different color, for example, green. Once the user's outgoing request is accepted, the originally colored balloon 53 (for example, the yellow balloon) will change and appear as a different color (for example, green as well). A small circle in another color (for example, an orange circle) will appear near the top right of the word "pending" 54, in order to signify that there are incoming friend requests. Once a request is accepted, it may disappear from the pending section in certain embodiments, since it is no longer in a pending state. FIG. 8 shows an exemplary embodiment of the dot system feature of the disclosed system. As shown in this example, a single dot 21 appears on the 'Map' module in the top center, above the title of the town. The user may navigate into settings and remove or toggle colored dots. The color of this dot will signify the user's engagement with other identity requests/balloons/friends.
In some embodiments, the following color dot scheme is used: Red symbolizes that the user is not engaged in any identity requests (pending or active); Yellow symbolizes that there are one or more pending identity request(s); and Green symbolizes that the user is engaged in at least one active request (wayve) and the user's marked location and/or identity is viewable (among other potential scenarios in accordance with the wave states as described in example FIG. 21D). There will be a number next to the green dot to signify how many active requests (wayves) the user is currently engaged in (i.e., how many people the user is exchanging location information with). For example, if there is a number one "1" next to the green dot, this means that the user is currently engaged in one (1) active request (wayve). In the example shown in FIG. 8, the user is sharing location with, and can see the location and identity of, one user/friend. In some embodiments, if someone has an active wayve and a pending wayve, the dot on the map will be green, as the active wayve is dominant. Users may select or click the yellow and green dots, which act as another button to navigate to and view the wayves page. The various wave states of the user are shown as orange balloon 7, green balloon 17, bubble 18, and yellow balloon 13, as previously described in connection with FIGS. 2A-3B. In addition, the home button 8 with pending dot 15 is also shown in FIG. 8. This indicates that there is an incoming notification. Similarly, there is an orange dot 23 shown next to the add friend icon in FIG. 8, also indicating an incoming notification. FIGS. 9A-B show an exemplary embodiment of a settings page including the shade mode setting feature in accordance with the disclosed subject matter.
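The dot color scheme above amounts to a simple precedence rule: active dominates pending, and pending dominates idle. A hedged sketch (the function name and return shape are illustrative, not from the disclosure):

```python
def map_dot_color(active_count: int, pending_count: int):
    """Return (color, badge) for the single dot atop the map.

    An active wayve dominates a pending one; the numeric badge shows
    how many active wayves the user is engaged in. Returns badge=None
    when no number is displayed next to the dot.
    """
    if active_count > 0:
        return ("green", active_count)   # at least one active request
    if pending_count > 0:
        return ("yellow", None)          # one or more pending requests
    return ("red", None)                 # not engaged in any requests
```

For example, a user with one active and two pending wayves sees a green dot with a "1" badge, consistent with the dominance rule described above.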
In certain embodiments, if users do not want their anonymous balloons displayed on the map for friends to see and wayve at, they can simply activate the "shade mode" in the settings page, as shown in FIGS. 9A and 9B. FIG. 9A shows the settings page with shade mode deactivated ("Shade Mode off"), with the shade mode umbrella icon dimmed or somewhat opaque. FIG. 9B shows the settings page with shade mode activated ("Shade Mode on"), with the shade mode umbrella icon more visibly apparent. In certain aspects or embodiments, Shade Mode allows the user to remove their balloon from the map so their friends cannot see their balloon on the map screen. In Shade Mode, friends cannot wayve at the user. However, if the user is in "the shade," the user can still use the map and view other anonymous balloons/friends. In some embodiments, if a shaded user decides to wayve at one of those balloons (or directly wayve at a friend from the friends/wayves page) while in shade mode ("the shade"), shade mode will automatically be deselected or deactivated for the user once the wayve is transmitted. In other embodiments, shade mode will remain activated until the friend accepts, or be deactivated only for that friend. Other suitable arrangements or embodiments will be apparent. It is noted that in the settings page as shown in FIGS. 9A-9B, the user may configure the system to operate in accordance with calibrated privacy wave-states (or wave states), for example as shown and described in connection with the FIGS. 21C-D embodiments, in which the user may set which "wave state" that user prefers to operate, communicate, and interact in while exchanging information over the WDCP platform with certain one or more other user "friends."
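The shade mode behavior in the first described embodiment (auto-deactivation once a wayve is transmitted) can be sketched as a small state holder. The class and method names here are assumptions for illustration; the alternative embodiments (remaining shaded until acceptance, or per-friend deactivation) would need extra state not modeled here.

```python
class ShadeMode:
    """Tracks shade-mode state; sending a wayve auto-deactivates it (illustrative)."""

    def __init__(self):
        self.enabled = False  # defaulted "off", as in the settings page

    def toggle(self) -> None:
        """Flip the umbrella setting from the settings page."""
        self.enabled = not self.enabled

    def visible_to_friends(self) -> bool:
        # In the shade, the user's balloon is removed from friends' maps,
        # and friends cannot wayve at the user.
        return not self.enabled

    def send_wayve(self) -> None:
        # Per the first described embodiment, transmitting a wayve while
        # shaded deselects shade mode automatically.
        self.enabled = False
```

A shaded user can still browse the map and other balloons; only their own visibility (and wayve-ability) is suppressed until a wayve is sent or the mode is toggled off.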
The user's wave of communications in the WDCP platform will be set in accordance with such setting, in accordance with certain aspects or embodiments of the disclosed system. FIGS. 10A-B show an exemplary embodiment of a shade mode map feature in accordance with the disclosed subject matter. In embodiments, when a user 3 has activated shade mode, the map location marker 5 (for the user) will become shaded as shown. Specifically, FIG. 10A shows the normal map (with Shade Mode disabled) for user 3 at map marker location 5. FIG. 10B shows the map with Shade Mode enabled 25. It is noted that the map marker for user 3, in the location shown by directional arrow 5 (i.e., the user's map marker 3 location 5), will become dimmed when shade mode is activated in FIG. 10B. Once a user has activated shade mode, the shade mode icon 25 will appear on the map page. If the user selects (or clicks) this shade mode icon 25 (or button), represented by an umbrella, it will deactivate shade mode and the button will disappear from the map page. In some embodiments, the shade mode button appears on the map page only when shade mode is enabled, so when shade mode is disabled (for example, set by default) there is no shade mode button on the map (only in the settings page in its defaulted "off" state, for example). FIGS. 11A-B show an exemplary embodiment of an alternative mapped layout in accordance with the disclosed subject matter. As shown in FIG. 11A, in this embodiment, the bottom-center home balloon button is the home button 8 as shown in FIG. 2A. In this embodiment, when selecting a balloon, the swipe-to-wayve will be an alternative to the wayves page button, until the wayve is either sent or the user otherwise clicks/selects another portion of the screen, for example, that is not the swipe-to-wayve slider. Further, as shown in FIG. 11B, in this embodiment, similar to FIGS. 4A-5B, there is a map button at the bottom-center that returns the user to the map layout.
Other suitable arrangements and layouts will be apparent to those of skill in the art. FIGS. 12A-12F show exemplary screenshots of in-app drop-down notifications to inform a first user when "someone," another user, has sent a wave request or "waved" at the first user, in accordance with an embodiment of the disclosed system and method. Regarding in-app drop-down notifications: when a user is receiving a wayve, the user may not know the name or identity of the person who wayved at the user until the user navigates to the wayves page, and in certain embodiments, only if the "wayved at" user actually set his or her wave state to divulge his or her own identity. Hence, when "someone" wayves at a user, certain privacy settings may be in effect via the user's wave state privacy settings, as described for example in FIG. 21D. As shown in FIGS. 12A-F, the notification window may drop down from the top of the screen, similar to other apps. Such windows will appear there for a few seconds. Selecting (or clicking) the notification pop-up will navigate the user to the correct or desired destination. The user can also swipe it up to remove the pop-up. FIGS. 13A-13B show exemplary screenshots of indicators of incoming message(s) received by a second user when sent from a first user, Joe, in accordance with an embodiment of the disclosed system and method. When a user is in an active wayve, the message bubble from the wayves page also appears in the active balloons on the map page. Both of these message bubbles correlate with one another when in an active wayve with a friend. The user has no incoming messages from Joe, as shown in the bubble in FIG. 13A. On the right, in FIG. 13B, the user can see that Joe has sent the user one new message, shown in message bubble 58 with the number "1" indicated next to the bubble 58.
The user may either select or click the green balloon 17 to view the message in the message thread, or may select or click the wayves page button 8 located at the bottom center of the map page and select the active wayve/friend (in order to view the message thread) from that page. The orange dot 15 in FIG. 13B indicates that a pending message awaits the user, in an embodiment. FIG. 14A shows an exemplary screenshot of message thread indicators with a balloon icon functionality during an active, anonymous, and/or pending wave dynamic protocol communication state, in accordance with an embodiment of the disclosed system and method. In the top left of the message thread, there is shown a green balloon in FIG. 14A. This balloon can be clicked/selected when the wayve is active. When the user selects/clicks this balloon, the user navigates to the map, centered on the friend's location. The user can press and hold a balloon on the map in any wave state (for example, anonymous, pending, active, etc.) to generate a pop-up "notification" asking whether the user would like directions to this location. If the user selects "yes," the system navigates the user out of wayve, into the default map application on their iPhone or PDA, in which the directions are generated to the destination (according to that balloon's location). The user can also select "no" in wayve to not activate this additional map feature functionality. In FIG. 14B, shown is an example add friends page. When User A adds User B, User B will immediately enter into User A's friends list/wayves page. However, User B is only a "half" friend, or other wave state friend, until they formally accept the request with the full wave state reveal of their identity/location, for example as described in FIGS. 21C-D. A half friend is a user that has not yet accepted the friend request but has received it.
Therefore, the user can wayve at a half friend from their wayves page, but cannot see this user's anonymous balloon on the map until that user formally accepts the friend request (and in accordance with a certain wave-state). This feature permits a potential first contact to be made between users. This feature makes adding friends faster and more scalable, and in certain embodiments operates in accordance with calibrated wave-states. When selecting "find friends" and the keyboard appears, it will automatically begin with "no results," and it will also indicate that when adding friends, they must accept before they appear on the user's map, in order to reinforce this feature, as shown in FIG. 14B. FIGS. 15A-15B show exemplary screenshots of various message thread indicators during an active, anonymous, and/or pending wave dynamic protocol communication state, and examples showing options to accept, ignore, "idle state to add friends," and/or "remove friend," in accordance with an embodiment of the disclosed system and method. When receiving a wayve from a user that has just added the user (for example, the user has not yet accepted their friend request), the display would appear as shown in FIG. 15A (e.g., see Eva). If Eva just added the user, and the user has not taken action on the add friends page, then if the user accepts Eva's wayve, the user will also accept her friend request. If the user lets the wave request time out, or even selects "Ignore," the friend request will remain in the add friends "requests" area until accepted or declined. When adding a friend who already has a wayve account, the system will no longer say "sent"; it will indicate "added," as shown in the example embodiment in FIG. 15B. In accordance with an embodiment, if User A adds User B and User B has not accepted or declined yet, User A will have User B on their friends list/wayves page (but not their map). User B will appear on the map if User B accepts the friend request.
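The "half friend" behavior above is effectively a small state machine: a received-but-unaccepted request allows wayving but withholds map visibility until formal acceptance. The following sketch uses assumed names (`FriendState`, `FriendRequest`) for illustration; it is not the disclosed implementation.

```python
from enum import Enum, auto

class FriendState(Enum):
    HALF = auto()      # request received but not yet accepted
    FULL = auto()      # request accepted; balloon visible on the map
    REMOVED = auto()   # request declined; auto-removed from the friends list

class FriendRequest:
    """One-directional friend request from User A to User B (illustrative)."""

    def __init__(self):
        # B immediately appears on A's friends list/wayves page as a half friend.
        self.state = FriendState.HALF

    def accept(self) -> None:
        self.state = FriendState.FULL

    def decline(self) -> None:
        self.state = FriendState.REMOVED

    def on_map(self) -> bool:
        # The anonymous balloon appears only after formal acceptance.
        return self.state is FriendState.FULL

    def can_wayve(self) -> bool:
        # A can wayve at a half friend from the wayves page, but not at a
        # declined (removed) one.
        return self.state in (FriendState.HALF, FriendState.FULL)
```

Accepting a half friend's wayve would call `accept()` implicitly, matching the described behavior where accepting Eva's wayve also accepts her friend request.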
If User B "Declines" the friend request, then User A will no longer have User B on their friends list. User B will be auto-removed, similar to a removal of a friend. In certain aspects or embodiments, there is a wave party feature. A button located on the map under the Settings icon is available for selection by a user hosting a wayve party. Parties may last for 48 hours by default or otherwise, as predetermined by the system or user. The user can configure this in the parties optional setting under time (duration). The user will click-hold (i.e., select-hold) and drag the wave party button and place it on the map at the desired party location. The user will now have a window pop up on the screen where they are prompted to name the party. Once the party is named, the user can enter optional info: description, time/duration, and date, and the party host can choose a header photo for the party. The user can then "check off" or select which friends they want to invite to the party. Since the user can sync contacts, the user may also select friends that do not have the application downloaded. If they invite a friend via contacts that does NOT have wayve, the system will send that friend a text message indicating that the user has invited them to a party. The invitee may download the application with the download link below the message. (The user must select at least one person to confirm the party.) The user then selects "Confirm Party." Once the party host has selected a friend, that selected friend can now visibly see the party icon (on the map) in the host's chosen party location. The friend can select it and see all the information as well. The selected friends will also receive a notification indicating "User A has invited you to a party!" This notification will appear in the wayves page under "Wayves" just as a normal incoming wayve would. These party invites do not expire until the party has expired.
The friend can choose to accept the party in two different ways: via the map party icon ("Accept Invite"), or via the wayves page incoming party invite under "Wayves" ("Accept Invite"). The user will receive the same party page pop-up window when selecting the party invite in the wayves page as when selecting the party icon on the map, displaying all of the party info. Once the friend has accepted the party invite, the party will look similar to an active wayve in the active wayves area (wayves page). The host will be notified, and the friend will join a group chat with all of the people who have accepted the party invite (whether wayve friends or not, everyone who accepts the party invite gets placed in the party group chat). The friends may select the top of the group message thread to see all of the accepted party participants. Users may choose to silence notifications in the party group chat with the click (or selection) of a button. The party expires once the host either manually ends it or the party's duration comes to an end, with a default duration of, for example, 48 hours or another predetermined time segment. In FIG. 16, shown is a block diagram providing an overview of the components of the wave dynamic communication platform that implements novel wave dynamic communication protocol communications including, among other components, associated virtual micro-service application farms, in accordance with an embodiment of the disclosed system and method. In particular, the FIG. 16 diagram shows a wave dynamic communication platform system 100 that serves to summarize a client application interaction through a public, private, and/or hybrid network. The client application 102 may run on many different types of operating systems, hardware types, mobile phones, tablets, and PCs. Each client application will traverse the network 104 to connect to an application server farm 106. The application server farm 106 consists of several servers that may be either private or public cloud-based.
The servers share many virtual characteristics, enabling them to work together in a seamless fashion to support an unrestricted number of simultaneously connected client devices. Within the application server farm, several virtual micro-services manage the connectivity and content in the network 106. The messaging micro-service 108 is responsible for managing all text messages sent and received amongst the client applications. The graphic micro-service 110 works closely with the messaging micro-service and manages all fixed (i.e., non-video) graphics sent and received amongst the client applications. Likewise, the video micro-service 112 works closely with the messaging micro-service and manages all video graphics sent and received amongst the client applications. Also working closely with the messaging micro-service are the Location Prox 116 and Grouping 114 micro-services. The Location Prox micro-service manages all of the location parameters connected to each client application, which includes a map with various views, angles, and topographies associated with both friends and groups. The Grouping micro-service 114 is the group stream manager, overseeing and maintaining the parameters and functions associated with grouped communication stream functions, which usually involve three or more friends. The shown embodiment of the Machine Learning micro-service 118 in FIG. 16 is responsible for all aspects of machine learning and artificial intelligence in the application. Through this micro-service, the application will learn about characteristics and trends of the user community, allowing the application to adapt to provide a better user experience. An example of such a trend pertains to the length of each communication stream 403, as shown in FIGS. 19-20. Based on communication patterns between users over time, stored in the Machine Learning table 312 as shown in FIG. 18, the application can optimize or fine-tune the length of each individual stream appropriately in example implementations.
Managing all micro-services, and enabling a close and consistent method of interaction amongst them, is the Social micro-service 120. The Social micro-service 120 also converts all of the individual characteristics of each micro-service into a single, uniform, and consistent communication methodology between all clients and the application server farm 106. This allows for consistency in both current and future development processes, as well as improvements in application performance to support economies of scale. In addition, the Social micro-service 120 manages and oversees all functional security across all aspects of the application architecture. The application server farm also consists of a number of managed cloud-based database servers 122. The nature and functions of these example servers are further specified in connection with FIG. 18 hereinbelow. FIG. 17 is a block diagram providing an overview of the wave dynamic communication protocol micro-service modules, in accordance with an embodiment of the disclosed system and method. More particularly, FIG. 17 depicts the software service modules that comprise the client application and/or module(s) 202. This application may reside on any mobile device, tablet, or PC. In the example embodiment, the Stream Management Service module 204 in FIG. 17 manages all client-based streams, which consist of both individual (1:1) and group (1:many) connections. The Stream Management Service module 204, in certain embodiments, is responsible for establishing the client stream connections, managing the connection while in an active state, and also terminating the stream. This service 204 also maintains state for all stream-based parameters. In certain embodiments, the Data Collection Service 206 module works closely with the stream management module 204 to maintain all data traffic that traverses through the client application. Data traffic types include, but are not limited to, text messages, standalone graphics, and video clips.
The management of this data is ephemeral: upon the termination of a stream, all references to this data are removed from the client, client server and/or computing device of the client. In an embodiment, the Media Management Service 208 closely ties into the Data Collection Service 206 by managing all of the specific parameters and functions for video input and output during an active stream. The Media Management Service 208, in example implementations, will defer to the Data Collection Service 206 to remove all video-specific data upon stream termination. The Location Prox Management Service 210, as shown in FIG. 17, is responsible for all elements of the location interface on the client application in an exemplary implementation. The location interfaces associated with the Location Prox Management Service 210 include, but are not limited to, friend proximity, real-time location coordinates, background topographies, notifications, advertisements, and other aspects that comprise the location functionality in the exemplary client application. In accordance with disclosed embodiments, the Security Management Service 212 shown in FIG. 17 is responsible for many or all security functions within the client application. These functions include, but are not limited to, login, authentication, filtering, and user profiles. Each of the application client modules 202 works closely on an ongoing basis with its associated server modules. The division of function load between client and server varies depending on the specific functions being utilized at any point by the client. For example, some client functions may not require a server handshake or acknowledgement, whereas others will require it. Referring to FIG. 18, shown is a block diagram providing an overview of various database structures, including tables and sub-structures, as implemented by the wave dynamic communication protocol in a cloud computing environment, in accordance with an embodiment of the disclosed system and method.
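The ephemeral management of stream data can be illustrated with a minimal sketch. The class below is a hypothetical, non-limiting illustration (the class and method names are assumptions): it tracks data items per stream and, on termination, removes all references to that stream's data, as the Data Collection Service 206 is described as doing.

```python
class DataCollectionService:
    """Sketch of ephemeral per-stream data tracking: all references to a
    stream's data are dropped when the stream terminates."""

    def __init__(self):
        self._by_stream = {}

    def record(self, stream_id, item):
        """Record a data item (text, graphic, video clip) for a stream."""
        self._by_stream.setdefault(stream_id, []).append(item)

    def items(self, stream_id):
        """Return the data currently held for a stream (empty if none)."""
        return list(self._by_stream.get(stream_id, ()))

    def terminate(self, stream_id):
        # Ephemeral guarantee: remove every reference on termination.
        self._by_stream.pop(stream_id, None)
```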
In particular, FIG. 18 depicts an exemplary database structure for the application 302. In the shown example embodiment, the database comprises several tables and sub-structures that work together in a cloud-based environment. The Friends Table 304 stores information relevant to the friend community within the application. All friend profile information is also stored in this database. Requests transmitted from the client application 202 (as shown in FIG. 17), for example requests for friends data, are populated into the Friends Table 304. Furthermore, the Location Learning Table 306 in FIG. 18 stores information pertaining to the location specifics for each application client 202 of FIG. 17, in example embodiments. This information includes, but is not limited to, the real-time location of a particular friend or friends during an active communication stream 403, as shown in FIGS. 19-20. The Communication Story Table 308 in FIG. 18 stores information relevant to the historical data of all communication streams. This includes, but is not limited to, data and text messages that were shared amongst friends during a former active communication stream 403 as shown in FIGS. 19-20. The Communication Stream Table 310 stores information that pertains to data for active communication streams. This includes, but is not limited to, message, text, graphic and video data that is currently in an example active communication stream 403 shown in FIGS. 19-20. The Machine Learning Table 312 stores information pertinent to what the application has learned or is currently learning about the trends and characteristics of client application usage. This information includes, but is not limited to, messaging trends, common video types, graphics types and locations.
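The table layout described above can be sketched as simple record types. The dataclasses below are hypothetical illustrations only; the field names are assumptions chosen to mirror the Friends Table 304 and Communication Stream Table 310, not a disclosed schema.

```python
from dataclasses import dataclass, field

@dataclass
class FriendRecord:
    """Illustrative row for the Friends Table (304): a friend link plus
    profile information for the friend community."""
    user_id: str
    friend_id: str
    profile: dict = field(default_factory=dict)

@dataclass
class StreamRecord:
    """Illustrative row for the Communication Stream Table (310): data
    belonging to a currently active communication stream."""
    stream_id: str
    participants: list
    messages: list = field(default_factory=list)
    active: bool = True
```

On stream termination, a `StreamRecord` (minus ephemeral content) could be summarized into the Communication Story Table 308 as historical data, under the same illustrative assumptions.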
Shown in each of FIGS. 19-20 is an overview of the wave dynamic communication protocol with related processes, including associated access to ephemeral content subject to discrete epochs of time, in accordance with an embodiment of the disclosed system and method. More particularly, FIGS. 19-20 provide a schematic diagram illustrating a data and message flow process 400 that provides users access to ephemeral content that may be time limited and/or transitory in nature. In certain embodiments, such ephemeral content is subject to certain conditions gauging the scope or length of time, and even the reach, of the content in the dynamic wave platform system. In example embodiments, the User Access Plane 402 pertains to a user's enabled access to the application, through either a login process or an automatic entry process via an existing queued login state. From the User Access Plane 402, a user can access any of the processes and tools that the system application offers using the wave dynamic communication protocol. From the User Access Plane 402, the user will often move into the Friend Management Process 404 to perform specific functions that pertain to that user's friend list, including access to all elements of adding, deleting and other functions that relate to the maintenance of a user's friends. In addition, in certain aspects or embodiments, the Friend Management Process 404 allows the user to create, edit and delete communication streams between two or more friends. Through the Friend Management Process 404 there is no limit to the number of friends that a user can have, although there may be restrictions on which users can be added to a user's friend list in certain embodiments.
In certain aspects or disclosed embodiments, if a user has existing friends, the user via the User Access Plane 402 may choose to initially go through the Location Management Process 416 instead of the Friend Management Process 404 in order to reach the Message Connector Process 406 and to establish a communication stream directly between existing friends. The Location Management Process 416 is responsible for processing the actions, elements and configurations that pertain to the locations, often in real time, of the user's friends. This bypass method is primarily designed to allow the user to determine whether, based on a friend's location, he or she wishes to establish a communication stream. The Message Connector Process 406 allows a user to actively attempt to connect to one or more friends. This process involves two or more devices and database input/output. This connection requires that the receiving users acknowledge and accept the request to connect. On a successful message connection 406, the Timed Communication Process 408 controls and oversees the ephemeral communication stream between two or more users. There is no limit to the number of timed connections 408 that can occur between users in the network. In some embodiments, more particularly as shown in FIG. 19, during the timed communication, any user involved in this communication may opt to disconnect 410. In this situation, if only two users are engaged in a communication stream, the entire stream will abort. If there are more than two users, the stream will remain active for the remaining active users for the duration of that communication stream. Accordingly, throughout the duration of the Timed Communication Process 408, an unlimited amount of data and data types may be exchanged at step 412 between members of the communication stream, in accordance with an embodiment of the dynamic wave communication protocol shown in FIGS. 19-20.
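The acknowledge-and-accept requirement of the Message Connector Process 406 can be expressed compactly. The function below is a hypothetical, non-limiting sketch (its name, the `responses` mapping, and the returned dictionary shape are assumptions): a stream is established only with the invitees who accepted, and no stream is established if nobody accepted.

```python
def connect(inviter, invitees, responses):
    """Message Connector sketch: establish a stream containing the inviter
    and only those invitees who acknowledged and accepted the request.

    responses: mapping of invitee -> "accept" or "ignore"; an invitee
    absent from the mapping is treated as not having responded.
    """
    accepted = [u for u in invitees if responses.get(u) == "accept"]
    if not accepted:
        return None  # no receiving user accepted, so no stream is created
    return {"members": [inviter] + accepted, "state": "active"}
```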
The location of each user may also be shared amongst the members of the communication stream 403 transmitted via the User Access Plane 402. In some embodiments, users of that stream may have the option to hide their respective locations at any time. The Connector Termination Process 414 controls and enforces the termination of a user's message stream. A stream termination may occur when a user decides to terminate a connection during an active stream or when a stream times out. Upon termination of a message stream, all data exchanged during that message stream will be deleted and will no longer be accessible to users, in certain aspects or embodiments of the disclosed dynamic wave communication system and method. Accordingly, FIG. 20 is a block diagram providing an overview of the wave dynamic communication protocol with related processes, including associated access to ephemeral content subject to discrete epochs of time, all as described hereinabove with respect to FIG. 19, with the exception that FIG. 19 further provides that such ephemeral content may be subject to user cancellation at step 410, in accordance with an embodiment of the disclosed system and method. FIGS. 21A-C are a flowchart illustration providing an overview of the wave dynamic communication protocol with related processes and wayve states, including associated access to ephemeral content subject to discrete epochs of time, in accordance with an embodiment of the disclosed system and method. FIG. 21A provides an overview of the wave dynamic communication protocol (WDCP), which is a novel and more refined approach to social communications, in particular social communications transmitted over a computing environment, platform, engine or otherwise, a social media platform. In step 1, the novel communication flow is established. The system establishes novel Wave Dynamic Communication “Flows” between users and groups. “Flows” are established by both sender and receiver accepting and thus initiating the “Flow”.
Each “Flow” has its own attributes, such as location, security, confidentiality, messaging and/or communication, time-to-live, etc. Once a “Flow” terminates, all communications within that “Flow” generally will also terminate. This is the “phone call” of messaging, in which users can now securely maintain an active stream of communication for a dynamic period of time. A user must send out a request for permission to engage in a communication stream/thread for a dynamic period of time. Once the active time period has expired, the stream of messaging/communication terminates, deleting all of its contents. The dynamic period of time can be anywhere from one second to fifty hours, or even longer, as desired by the users; no specific time is required, but the “Flow” is dynamic and/or temporary. An example flow is described as follows: Step 1: A user selects a peer; Step 2: The user sends a dynamically timed invite/request to a friend (this may be configurable by the user); Step 3: The friend receives the request, and decides to either accept or ignore the invite/request; Step 4: If the friend decides to accept the invite/request, he/she will send an acknowledgement back to the user; Step 5: The user receives the acknowledgement, and the software will now place them in a “Flow” for the dynamically set period of time; Step 6: The user and friend now maintain a communication with each other for the dynamically set period of time. Once the dynamically set timed communication expires, all of its contents will also be deleted/terminated. Another exemplary “flow” proceeds as follows: Step 1: The user selects an anonymous map marker.
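The dynamically timed “Flow” with deletion on expiry can be sketched as follows. This is a minimal, hypothetical illustration (the class shape and the explicit `now` clock parameter are assumptions made so the sketch is deterministic): messages may be posted only while the flow is active, and expiry or termination deletes all contents.

```python
class Flow:
    """Sketch of a dynamically timed communication 'Flow': its contents
    are deleted once the time-to-live elapses or the flow is terminated."""

    def __init__(self, members, ttl_seconds, now):
        self.members = set(members)
        self.expires_at = now + ttl_seconds  # dynamically set period
        self.messages = []
        self.active = True

    def post(self, sender, text, now):
        """Accept a message only from a member of a still-active flow."""
        self._expire_if_due(now)
        if self.active and sender in self.members:
            self.messages.append((sender, text))

    def _expire_if_due(self, now):
        if now >= self.expires_at:
            self.terminate()

    def terminate(self):
        self.active = False
        self.messages.clear()  # all contents deleted with the flow
```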
The anonymous map marker represents one of the user's friends; however, the user does not know which friend it is; Step 2: The user sends an identity request/“Flow” invite to the anonymous map marker; Step 3: The friend associated with the anonymous map marker receives the identity request/“Flow” invite, and decides to either accept or ignore it; Step 4: If the friend decides to accept, he/she will send an acknowledgement to the user; Step 5: The user receives the acknowledgement, and the application will then reveal the user and friend identities and their locations to each other for a dynamic period of time; Step 6: The user and friend have entered a “Flow” and now maintain a communication stream in conjunction with each other's locations for the dynamic period of time. In yet another exemplary embodiment, a (Location-Request) Sub-flow 2 proceeds as follows: Step 1: The user selects a specific friend; Step 2: The user sends a request/“Flow” invite to the friend; Step 3: The friend receives the request/“Flow” invite, and decides to either accept or ignore it; Step 4: If the friend decides to accept the request/“Flow” invite, the friend will send an acknowledgement back to the user; Step 5: The user receives the acknowledgement, and the software will now reveal the user's and friend's identities in conjunction with their locations to each other for a dynamic period of time; Step 6: The user and friend have entered a “Flow” and now maintain a communication stream in conjunction with each other's locations for the dynamic period of time. It is noted that each “Flow” has its own set of parameters, with the parameters being configurable. Example parameter types include, but are not limited to: location; time-to-live (i.e., “Flow” duration); security and filtering; and the number of participants in a “Flow”. Upon termination of a “Flow”, all data in that “Flow” is immediately deleted. Specifically, as shown in FIG. 21A, the user initiates an invite to join a “flow” by sending a request notification to one or more users in step 600.
Next, the receiving user reviews the invite and then sends a notification back to the initiating user either accepting or rejecting the “flow” in step 601. If the invite is accepted, communications are enabled in step 602 between the initiating user and all users that have accepted the request in the “flow”. Otherwise, if the invite is rejected, the receiving user decides not to enter the “flow”, as shown in step 603. Proceeding to step 2 of the WDCP, as shown in FIG. 21B, all users in a “flow” communicate with each other for the duration of the “flow” in steps 610 and 611. In FIG. 21B, step 3 of the process of termination of the wayve exchange is described. Any user in an active “flow” can terminate participation in that “flow” in step 613. Whether the users have been in the flow for the set period of time, or other parameters have been met, is checked in step 614. If the time limit is met, the flow terminates in step 615. If not, all users other than the terminating user remain active in the flow in step 616. FIG. 21C provides a schematic showing steps and/or states in the “Management Process”, as shown and described in FIGS. 21A-B. The user elects to send a wayve request to a friend 910. Once the user has sent the request, the request enters a pending state 911. At this point, the friend has received the request on their device and may choose to either accept or ignore the wayve request 912. If the friend chooses to ignore the request, the wayve request is canceled at step 915. If the friend accepts the user's wayve request, the wayve then enters an active state between the user and the friend at step 913. The active wayve remains active until the wayve is either canceled or otherwise times out/terminates at step 914. When the wayve either cancels or times out, it is then terminated at step 916, deleting its contents in certain embodiments.
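The pending/active/canceled/terminated states in the Management Process of FIG. 21C can be modeled as a small state machine. The sketch below is a hypothetical illustration (the class and state names are assumptions): a pending request either becomes active (accept) or canceled (ignore), and an active wayve that cancels or times out is terminated, deleting its contents.

```python
PENDING, ACTIVE, CANCELED, TERMINATED = "pending", "active", "canceled", "terminated"

class WayveRequest:
    """Sketch of the wayve request states of FIG. 21C (steps 910-916)."""

    def __init__(self):
        self.state = PENDING   # request sent, awaiting the friend (911)
        self.contents = []     # data exchanged while the wayve is active

    def respond(self, accept):
        """Friend accepts (-> active, 913) or ignores (-> canceled, 915)."""
        if self.state != PENDING:
            raise ValueError("request already resolved")
        self.state = ACTIVE if accept else CANCELED

    def timeout_or_cancel(self):
        """An active wayve that cancels or times out is terminated (916),
        deleting its contents."""
        if self.state == ACTIVE:
            self.state = TERMINATED
            self.contents.clear()
```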
Generally, in known systems, the location is displayed and paired with the user's identity at all times (also referred to as a communication stream with location included). In such known systems, there is no flow involved as in the disclosed WDCP flow. In an alternative embodiment, the active wayve may not immediately and/or automatically involve displaying the user's location to the friend (and vice versa). In this embodiment, after the wayve is active between the user and a friend 913, the current active wayve begins with the users' locations turned off by default 917. This way, the process/flow/wayve allows users to chat and communicate without necessarily sharing their location 918. The user and/or friend may choose to send a location request out to the other person involved in the current wayve. At this point, the receiving user may choose to accept or ignore the incoming location request. If the request is ignored or “declined”, the active wayve continues on until it terminates, hence deleting its contents. If the user decides to reveal location by “enabling” or accepting the location request 919, the user's location will now be visible to the friend (and vice versa) at step 920. In some embodiments, when the location request is accepted or “enabled”, the wayve timer may restart, giving the users involved more time to communicate and location share. At this point, the active wayve remains active until the wayve is either canceled or times out at step 914. When the wayve either cancels or times out, it is then terminated 916. FIG. 21D provides an embodiment in accordance with the WDCP platform in which wave-active states are implemented. The user may remain in an active wave-state while still choosing to remain anonymous to others in the platform, as shown in step 950. Such settings may be set, for example, in FIGS. 9A-9B. Shade mode may be set, for example, in certain embodiments.
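The accepted location request with an optional timer restart (steps 919-920) can be sketched as below. This is a hypothetical illustration only; the dictionary representation of a wayve, the field names, and the explicit `now`/`extension` parameters are assumptions, not recited features.

```python
def accept_location_request(wayve, now, extension):
    """Sketch of accepting a location request within an active wayve:
    both sides become visible to each other, and in some embodiments the
    wayve timer restarts to give the users more time."""
    wayve["location_shared"] = True
    wayve["expires_at"] = now + extension  # optional timer restart
    return wayve
```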
A user may further select Active State 1 and reveal that user's “wave-name” only in step 951. A “wave-name” is another name, “nickname” or alias that the user(s) may prefer to use in the WDCP platform, other than their full identity, until the user(s) has reached their full comfort social level in revealing their identity to one or more other wave-friends or users. A user may further select Active State 2, in which they reveal their geographic wave proximity location only, with no specific map points being divulged, in step 952. Accordingly, a user may set the parameters for their geographic wave proximity location to a certain distance from their current location so that, again, no specific map points would be divulged in such Active Wave State 2. Furthermore, a user may set, for example, a zip code range, a town range, a city range, or a country range as example parameters for their geographic wave proximity location in step 952, in the event they prefer not to divulge specific geographic map points. In step 953, a user may select Active Wave State 3, in which the user does reveal their geographic wave proximity location in addition to their wave-name. In step 954, a user selects Active Wave State 4, in which the user will indeed reveal their actual wave geographic location, which can include, for example, longitude/latitude points, GPS points, or another precise map location, such as the exact address, including street address, city, and country with zip code included. In step 955, the user may select Active Wave State 5, in which the user reveals both the user's wave-name and actual wave geographic location. Finally, in step 957, the user selects Active Wave State 6, in which the user reveals their identity (i.e., first and last name, or first name only, last name only, nickname only, or other iterations of their actual name) and also reveals their actual wave geographic location, as described above in step 954.
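The graduated disclosure of the wave-states above lends itself to a simple mapping from state number to the profile fields revealed. The sketch below is a hypothetical, non-limiting illustration; the field names (`wave_name`, `proximity`, `exact_location`, `identity`) and the use of state 0 for the fully anonymous shade mode are assumptions made for illustration.

```python
# Fields revealed at each contemplated active wave-state (steps 950-957).
WAVE_STATES = {
    0: set(),                               # fully anonymous (shade mode)
    1: {"wave_name"},                       # alias only
    2: {"proximity"},                       # coarse region, no map points
    3: {"proximity", "wave_name"},
    4: {"exact_location"},                  # precise map location only
    5: {"exact_location", "wave_name"},
    6: {"exact_location", "identity"},      # real name + precise location
}

def visible_profile(profile, state):
    """Filter a user's profile down to the fields their wave-state reveals."""
    allowed = WAVE_STATES[state]
    return {k: v for k, v in profile.items() if k in allowed}
```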
Such actual geographic location can include, for example, longitude/latitude points, GPS points, or another precise map location, such as the exact address, including street address, city, and country with zip code included. These contemplated wave-states are in accordance with certain aspects or embodiments and are intended to provide flexibility. Such calibrated wave-states allow a user of the WDCP platform to experience a satisfying and comfortable social media exchange of communications, in which they can truly connect with other wave users in a fashion that mimics real human patterns of growing friendships, as possible over a computing platform environment or engine, and within their social boundary comfort levels. FIG. 22 is a flowchart illustration providing an overview of the wave dynamic communication protocol, in particular various methods of initiating a wave dynamic communication, in accordance with an embodiment of the disclosed system and method. Using Method 1, the user selects friends from the application friends list in step 620. A known friend invite is sent. In step 621, a request notification within the application is sent to the selected friends. Next, in step 622, the user receiving the request accepts or rejects the invite as described in FIGS. 21A-B. Method 2 begins in step 623: the user selects an anonymous marker from the location map. An anonymous friend invite is sent. Next, in step 624, a request notification within the application is sent to a friend associated with the anonymous marker. The user receiving the request accepts or rejects the invite as described above in FIGS. 21A-B. Using Method 3, a user selects one or more friend(s) from an external source (i.e., a third-party source) in step 626. A known friend invite is next sent. In step 627, a request notification outside of the application is sent to a friend. An example request notification is an SMS message or email.
Finally, a user receiving the request downloads the application and enters the application in order to accept or reject the invite in step 628. Turning now to FIG. 23, shown is a block diagram of an example wave dynamic communication system featuring peer-to-peer wave communication(s), in accordance with an embodiment of the disclosed system and method. The communication system includes a wireless network 820 and a plurality of devices 810, 830, 835, including a mobile device 810 and other communication devices 830, 835. There might be other devices, but they are not shown for simplicity. The mobile device 810 has a wireless access radio 811, a GPS receiver 812, a processor 813, a hidden-identity location sharer 814, and might have other components, but they are not shown for simplicity. Details of the other communication devices 830, 835 are omitted for simplicity. Such details are well known to those of skill in the art, and it should be understood that various alternatives may be used to implement the systems and methods in accordance with the present disclosure. There are a plurality of GPS satellites 840 (only one shown for simplicity) for those devices that are GPS-enabled, for example the mobile device 810. The operation of the communication system will now be described by way of example. Communication between the devices 810, 830, 835 is through the wireless network 820. The mobile device 810 uses its wireless access radio 811 for communicating wirelessly over a wireless connection 821, while the other communication devices 830, 835 communicate over respective connections 822, 823. The connections 822, 823 can be wireless or wired depending on whether the communication devices 830, 835 are mobile. For this example, it is assumed that the communication between the devices 810, 830, 835 is performed in a peer-to-peer manner. However, alternative implementations are possible.
The mobile device 810 generates location information using GPS technology, which involves receiving GPS signals 841 from the GPS satellites 840 using its GPS receiver 812. Location sharing involves the mobile device 810 sending the location information to another device, for example one of the other communication devices 830, 835. This can allow another device to track the geographic location of the mobile device 810. In embodiments, the foregoing user interface, including the map layout shown for example in FIGS. 2 and 3, is constructed from user and wayve data inputted by users, which is stored in the memory of the mobile phone and/or in a cloud database (e.g., Firestore). In these embodiments, the system parses these data and transforms them into User, Request (Wayve), and (BalloonAnnotation) objects for use in the system described above. The (BalloonAnnotation) objects are inputted into the (MapBox) Map API, along with the custom styling code, to create the customized map. Other suitable processes and arrangements will be apparent to those of skill in the art. For example, in some embodiments, revealing a user's identity requires an identity request (wayve); however, the user on the receiving end of the request need not accept it. Instead, the request (wayve) gets auto-accepted and the receiving user is immediately notified that they are in a location share with the requesting user. Thus, the user can immediately see the identity of the friend. This allows other users to immediately see one's location and identity. In other alternative methods, when a user has a friend added and is in the same proximity as that friend, the software can automatically initiate a wayve between the users that are close, notifying them of each other's identity and location. FIG. 24 illustrates a system block diagram including an example computer network infrastructure, in accordance with an embodiment of the disclosed veiled-identity wave dynamic communication system.
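The proximity check used to decide whether two friends are close enough for an automatically initiated wayve can be sketched with a standard great-circle distance computation. The sketch below is a non-limiting, hypothetical illustration: the function names and the one-kilometer default radius are assumptions, and only the haversine formula itself is a standard, well-known computation.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def should_auto_wayve(user_pos, friend_pos, radius_km=1.0):
    """True when an added friend is within the proximity radius, so the
    software could automatically initiate a wayve between the two users."""
    return haversine_km(user_pos, friend_pos) <= radius_km
```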
In particular, FIG. 24 is a block diagram of an illustrative embodiment of a general computing system 700. The computing system 700 can include a set of instructions that can be executed to cause the computing system 700 to perform any one or more of the methods or computer-based functions disclosed herein. The computing system 700, or any portion thereof, may operate as a standalone device or may be connected, e.g., using a network 722 or other connection, to other computing systems or peripheral devices. The computing system 700 may also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a digital smart watch, a heads-up display/device and/or other virtual/augmented reality display devices, a laptop computer, a desktop computer, a communications device, a control system, a web appliance, or any other machine capable of executing a set of instructions (sequentially or otherwise) that specify actions to be taken by that machine. Further, while a single computing system 700 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions. As illustrated in FIG. 24, the computing system 700 may include a processor 704, e.g., a central processing unit (CPU), a graphics-processing unit (GPU), or both. Moreover, the computing system 700 may include a main memory and/or program memory 706 and a static memory, neural network storage, and/or data memory 708 that can communicate with each other via a bus 710. As shown, the computing system 700 may further include a video display unit 712, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, or a cathode ray tube (CRT).
Additionally, the computing system 700 may include an input device 714, such as a keyboard, and a cursor control device 716, such as a mouse and/or any other device or method that implements or simulates touch input. The computing system 700 can also include a disk drive unit 718 (other storage devices are also contemplated), a signal generation device 719, such as a speaker or remote control, and/or a network interface device 724. In a particular embodiment or aspect, as depicted in FIG. 24, the disk drive unit 718 or neural network storage may include a machine-readable or computer-readable medium 720 in which one or more sets of instructions 702, e.g., software, can be embedded, encoded or stored. Further, the instructions 702 may embody one or more of the methods or logic as described herein. In a particular embodiment or aspect, the instructions 702 may reside completely, or at least partially, within the main memory 706, the static memory 708, and/or within the processor 704 during execution by the computing system 700. The main memory 706 and the processor 704 also may include computer-readable media. In an alternative embodiment or aspect, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments or aspects can broadly include a variety of electronic and computing systems. One or more embodiments or aspects described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
In accordance with various embodiments or aspects, the methods described herein may be implemented by software programs tangibly embodied in a processor-readable medium and may be executed by a processor. Further, in an exemplary, non-limited embodiment or aspect, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computing system processing can be constructed to implement one or more of the methods or functionality as described herein. It is also contemplated that a computer-readable medium includes instructions 702 or receives and executes instructions 702 responsive to a propagated signal, so that a device connected to a network 722 can communicate audio, voice, video or data over the network 722. Further, the instructions 702 may be transmitted or received over the network 722 via the network interface device 724. While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a processor or that causes a computing system to perform any one or more of the methods or operations disclosed herein. In a particular non-limiting, example embodiment or aspect, the computer-readable medium can include a solid-state memory, such as a flash memory, memory card or other package, which houses one or more non-volatile read-only memories and/or volatile memory. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory.
Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk, tape, or other storage device, to capture and store carrier wave signals, such as a signal communicated over a transmission medium. In certain embodiments, a digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is equivalent to a tangible storage medium. Accordingly, any one or more of a computer-readable medium or a distribution medium, and other equivalents and successor media in which data or instructions may be stored, are included herein. In accordance with various embodiments or aspects, the methods described herein may be implemented as one or more software programs running on a computer processor. Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, neural network, or virtual machine processing can also be constructed to implement the methods described herein. It should also be noted that software that implements the disclosed methods may optionally be stored on a tangible storage medium, such as: a magnetic medium, such as a disk or tape; a magneto-optical or optical medium, such as a disk; or a solid state medium, such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories. The software may also utilize a signal containing computer instructions.
A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. Accordingly, a tangible storage medium or distribution medium as listed herein, and other equivalents and successor media in which the software implementations herein may be stored, are included herein. In yet another example, FIG. 25 provides a block diagram representation of an exemplary machine in the form of a computer system within which an application or other program providing a set of instructions is executed and performed using, for example, the wave dynamic communication protocol-based platform, according to one or more embodiments of the disclosed wave system and method. More particularly, FIG. 25 is an overview representation of the machine 500 within which instructions 508 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 500 to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions 508 may cause the machine 500 to execute any one or more of the methods described herein. The instructions 508 transform the general, non-programmed machine 500 into a particular machine 500 programmed to carry out the described and illustrated functions in the manner described. The machine 500 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 500 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine500may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions508, sequentially or otherwise, that specify actions to be taken by the machine500. Further, while only a single machine500is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions508to perform any one or more of the methodologies discussed herein. The machine500may include one or more processors502, memory505, and I/O component(s)542, which may be configured to communicate with each other via a bus544, and may implement the novel wayve/wave dynamic communication protocol with ephemeral-type streams of communications or wavye request(s). In an example embodiment, the one or more processor(s)502(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor506and a second processor512that execute the instructions508. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. 
AlthoughFIG.25shows multiple processors502, the machine500may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory505includes a main memory514, a static memory516, and a storage unit518, all accessible to the one or more processor(s)502via the bus544, and communicating in certain embodiments in accordance with the disclosed dynamic wave communication protocol. The main memory514, the static memory516, and storage unit518store the instructions508embodying any one or more of the methodologies or functions described herein. The instructions508may also reside, completely or partially, within the main memory514, within the static memory516, within machine-readable medium520within the storage unit518, within at least one of the processors502(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine500. The I/O components542may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components542that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components542may include many other components that are not shown inFIG.25. In various example embodiments, the I/O components542may include output components510and input components530. 
The output components510may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components530may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components542may include biometric components532, motion components534, environmental components536, or position components538, among a wide array of other components. For example, the biometric components532include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components534include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. 
The environmental components536include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components538include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components542further include communication components540operable to couple the machine500to a network522or devices524via a coupling526and a coupling528, respectively. For example, the communication components540may include a network interface component or another suitable device to interface with the network522. In further examples, the communication components540may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), WiFi® components, and other communication components to provide communication via other modalities. The devices524may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). 
Moreover, the communication components540may detect identifiers or include components operable to detect identifiers. For example, the communication components540may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components540, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., memory505, main memory514, static memory516, and/or memory of the one or more processors502) and/or storage unit518may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions508), when executed by processors502, cause various operations to implement the disclosed embodiments. The instructions508may be transmitted or received over the network522, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components540) and using any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions508may be transmitted or received using a transmission medium via the coupling528(e.g., a peer-to-peer coupling) to the devices524. 
FIG.26is an overview of an embodiment of the dynamic wave communication protocol system including one or more users identified by mobile devices on a map, shown for example as one or more balloons as described in connection withFIGS.2A-2B, and in accordance with an embodiment of the disclosed system and method. More particularly,FIG.26provides an embodiment of the dynamic wave protocol system351for locating users within a proximity of another user in a location or anywhere geographically in any geographic region on the map, including city, state, country or otherwise, worldwide. Shown is user352in communication with the system using a portable electronic computing device344such as a smart phone, tablet, laptop, or notebook computer. The computing device344may include one or more processors347that may be configured to communicate with and are operatively coupled to a number of peripheral subsystems via bus subsystem349. These peripheral subsystems may include a storage subsystem/memory346, one or more user input devices, in this embodiment a touch screen as part of the display353, which also provides a user output interface, a network interface, also referred to as an adapter342, and one or more Global Positioning System transmitter(s)/receiver(s)340. The bus subsystem349may provide a mechanism for enabling the various components and subsystems of computing device344to communicate with each other. AlthoughFIG.26shows bus subsystem349as a single bus, alternative embodiments of the bus subsystem may utilize additional or multiple busses. The network interface342may provide an interface to other device systems and networks. The network interface342may serve as an interface for receiving data from and transmitting data to other systems from the computing device344. 
The network interface342may include a wireless interface that allows the device344to communicate across wireless networks through a wireless access point, and may also include a cellular connection to allow the device to communicate through a cellular network. The network interface will allow the computing device344to communicate with one or more servers such as servers358and359, and system data storage355. The device data store/memory346may include one or more separate memory devices. It may provide a computer-readable storage medium for storing the basic programming and data constructs that may provide the functionality of at least one embodiment of the disclosure here. The data store346may store the applications, which include programs, code modules, and instructions, that, when executed by one or more processors, may provide the functionality of one or more embodiments of the present disclosure. The data store346may comprise a memory and a file/disk storage subsystem. In addition, the computing device344may store data on another computer such as server358accessible through the network354via the network interface342, and in accordance with certain disclosed embodiments, using the dynamic wave communication protocol350. The device may also include separate control buttons, or may have control buttons integrated into the display if the display consists of a touch screen. The device in this embodiment has a touch screen and possibly one or more buttons, not shown, on the periphery of the touch screen/display353. Alternative user input devices may include buttons, a keyboard, pointing devices such as a mouse, trackball, touch pad, etc. In general, the use of the term ‘input device’ is intended to encompass all possible types of devices and mechanisms for inputting information into the device344. User output devices, in this embodiment the display/touch screen353, may include all display subsystems, audio output devices, etc. 
The output device may present additional user interfaces to facilitate user interaction with applications performing processes described here and variations thereof. The description herein refers to ‘servers’ or ‘server computing devices.’ In general, the server referred to herein inFIG.26is a server computing device with a processor and a memory that operates on the user information and provides information back to the users relating to other users in the community of users that subscribe to the same service. Typically, there will be an application server computing device and a global positioning service server that will communicate with the user device or the server. Many of the functions described here may be performed by the application server, the user device, or the GPS server. No limitation of any function being performed by one of the devices is intended nor should it be implied. In the environment of the system351, the computing device344may be a client device. The network may consist of any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network, or any other such network and/or combination of these. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed here in detail. Wired or wireless connections, or combinations of these, may enable communication over the network. In the embodiments here, the network includes the Internet and/or cellular communications. The application server358receives the requests and serves information in response to those requests via the network354, and in certain embodiments in accordance with the dynamic wave communication protocol350. 
The application server358, in combination with data store355or by itself, may consist of multiple servers, layers or other elements, processors, or components, which may be chained, clustered, or otherwise configured, and which perform tasks such as interacting with the client computing device and the data store355. In some embodiments, some or all of the data store355may be housed in or integrated with the wave application server359. Servers, as used here, may be implemented in various ways, such as hardware devices or virtual computer systems. A server may refer to a programming module executed on a computing system in certain disclosed embodiments. The term ‘data store’ refers to any device or combination of devices capable of storing, accessing, and retrieving data. This may include any combination and number of data servers, databases, data storage devices and media, in any standard, distributed, or clustered environment. The wave application server358may include any appropriate hardware, software, and firmware for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling some or all of the data access and logic for an application. The application server may provide access control services in cooperation with the data store and is able to generate content including, but not limited to, text, graphics, audio, video and/or other content usable to be provided to the user, which may be served to the user by the web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”), JavaScript, Cascading Style Sheets (“CSS”), or another appropriate client-side structured language. 
Content transferred to a client device such as344in the form of, for example, waves, wavyes, other wavye related requests and/or communications, may be processed by the client device to provide the content in one or more forms including, but not limited to, forms that are perceptible to the user audibly, visually and/or through other senses including touch, taste, and/or smell. The handling of all requests and responses, as well as the delivery of content between the client device344and the application server358, can be handled by the web server using PHP: Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Java, HTML, XML, or another appropriate server-side structured language in this example. It should be understood that the web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein. Further, operations described herein as being performed by a single device may, unless otherwise clear from context, be performed collectively by multiple devices, which may form a distributed and/or virtual system. The wave database data store355can include several separate data tables, databases, data documents, dynamic data storage schemes and/or other data storage mechanisms and media for storing data relating to an aspect of the present disclosure, including the database302structures and tables304,306,308,310,312, as described in connection withFIG.18hereinabove. For example, the data store355illustrated inFIG.26may include mechanisms for storing temporary data and general data storage. The data store also is shown to include a mechanism for storing alert data, which can be used to generate one or more alerts as described above. 
It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the wave database data store355. The wave database data store355is operable, through logic associated therewith, to receive instructions from the wave application server358and obtain, update, or otherwise process data in response thereto. The wave application server358may provide static, dynamic or a combination of static and dynamic data in response to the received instructions. Dynamic data, such as data used in web logs (blogs), data streams, and other such applications may be generated by server-side structured languages as described herein or may be provided by a content management system (“CMS”) operating on, or under the control of, the application server. Each wave server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable storage medium, such as a hard disk, random access memory, read-only memory, etc., storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure here. The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. 
In a set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen or keypad) and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random-access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc. The environment, in one embodiment, is a distributed and/or virtual computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well with fewer or a greater number of components than are illustrated inFIG.26. Thus, the depiction of the system344inFIG.26should be taken as being illustrative in nature and not limiting to the scope of the disclosure. When the user activates, or launches, program code associated with the dynamic wave protocol-based application described in these embodiments, the program code coordinates the device's location devices such as the GPS, Wi-Fi, Bluetooth, and a third-party Indoor Location API to determine the current location of the device and the user, which is transmitted to one or more application servers358. 
During the location coordination process, the program code interacts with the device's GPS component to acquire the GPS location from one or more (n) number of GPS satellite(s)356and use it to determine the best relative location of the device before sending it to the application server. An application server is the server that executes the code-based instructions and stores the data associated with the location application under discussion here. The application server358, upon receiving notification that a user has signed in or become active in the system, uses the location coordinates, such as latitude and longitude, of the user to locate other users who are within the proximity set by the app user. The wave system locates a first user in a community of users, such as subscribed users to a service. When a user activates the application or program code on a user device as described for example, in connection withFIGS.1A-1M, the wave system sends a signal to a server computing device. The wave server computing device contains user data that identifies the first user and the user-defined proximity radius in certain example embodiments. The system retrieves the position of the first user from the user device directly or from a global positioning system server. The server also includes locale information. Locale information is information about the area around the user, such as a building or map. This will more than likely involve accessing a location server computing device to retrieve geographic information relating to the locale, including a map, either at that point or having been previously accessed. The system then determines other users in the community of users that have positions within the user-defined proximity or, in alternative embodiments, in any geographic location. This information is then displayed on a user interface on the user device of the first user, with the other users superimposed on a map of the locale or geographic region. 
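As one concrete illustration of the proximity determination described above, a server can compute great-circle distances between the first user's coordinates and those of other active community members, keeping only those inside the user-defined radius. The following is a minimal sketch, not the disclosed wayve-based algorithm itself; the function names and user-record fields (`id`, `lat`, `lon`, `active`) are illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_within_radius(first_user, community, radius_km):
    """Return the other active users whose positions fall inside the
    first user's user-defined proximity radius."""
    lat, lon = first_user["lat"], first_user["lon"]
    nearby = []
    for user in community:
        # Skip the requesting user and anyone not currently active.
        if user["id"] == first_user["id"] or not user["active"]:
            continue
        if haversine_km(lat, lon, user["lat"], user["lon"]) <= radius_km:
            nearby.append(user)
    return nearby
```

A production system would typically pre-filter candidates with a spatial index (for example, a geohash or R-tree) before computing exact distances, rather than scanning every user as this sketch does.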
In certain embodiments, there are other wave parameters that dictate which anonymous users will appear on the user device that are not related to proximity to the user; alternatively, the request is set forth by the user to find other anonymous wayve users that are in a certain geographical region, city, country, or geographical radius of mileage from the user. Locating other users here means that the system performs a search in the system using a unique wayve-based algorithm to find other active users, map users, in a region and calculate the distance between these users and the app user using their location coordinates. The system then checks against the privacy settings of all the users in the proximity to make sure they want to be exposed in that proximity before sending them to the app user. The system also checks the app user's filters to ensure that only users that match the app user's filter criteria will be sent to the app user. When the app user receives location coordinates of the map users and their information from the server, the program code then overlays those coordinates and their associated information on a layout. A layout is more than just a map. For example, if one uses an application that locates users, such as Waze® or Google Maps®, the application shows the location of a user in a city grid. Even if the user were in a building, the location is shown with streets and geographical features such as rivers, mountains, etc. The application here identifies the user's location, and then associates that location with a building, a venue, or the streets mentioned above, all of which will be referred to as a venue, with the layout of the venue drawn as background to the user. For example, the user may be in an exhibition hall that has fixed elements, such as restaurants or coffee shops around the perimeter. The layout would include those fixed elements, allowing people to see the user's location relative to those elements. 
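The two server-side checks described above — first each map user's own privacy settings, then the app user's filter criteria — can be sketched as a simple pipeline. The field names (`visible_in_proximity`, `filters`) are illustrative assumptions for this sketch, not the system's actual schema:

```python
def visible_matches(app_user, nearby_users):
    """Apply the two server-side checks described above: each nearby
    user's privacy settings first, then the app user's filter criteria."""
    results = []
    for user in nearby_users:
        # Privacy check: skip users who chose not to be exposed in proximity.
        if not user.get("visible_in_proximity", False):
            continue
        # Filter check: every filter key set by the app user must match
        # the corresponding attribute of the candidate user.
        filters = app_user.get("filters", {})
        if all(user.get(key) == value for key, value in filters.items()):
            results.append(user)
    return results
```

Only users passing both checks are ever sent to the requesting device, which keeps the privacy decision on the server side rather than trusting the client to hide non-matching users.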
When users are trying to meet up in a large, crowded venue, the layout becomes invaluable. The information for the layout may come from many sources. The application here could contain a library of venues and their associated layouts, keyed to GPS coordinates of the elements of the layout. The application may access another application server for the layouts, for example the wave application server359, which has an application programming interface that provides building or other kinds of geographical layouts in digitized format and notifies the application when a user enters a venue, geographic region, building, structure or otherwise that has a provided layout in digitized format, or other contemplated electronic or magnetic format. One should note that except for the GPS signals used by the application server, most if not all information transmitted and received is generally communicated through the network354and the application server(s)358,359, and in certain embodiments subject to the dynamic wave communication protocol350. Hence, exposure of any phone numbers or email addresses or other personal identifying information is barred when users communicate through the wave application and related system351. None of this information used to identify users includes any private elements or personal identifying information, other than what the user chooses to reveal via, for example, toggling or modifying features such as anonymous mode, shade mode and other related wave parameters including wave-states that can tailor the user's visibility and privacy when using the application (for example, as described hereinabove in connection withFIGS.21A-21D). FIG.27provides an exemplary representation of the dynamic wave protocol system architecture within which the present disclosure may be implemented, in accordance with an embodiment of the disclosed system and method. 
More particularly,FIG.27includes block diagram1000illustrating an exemplary dynamic wave protocol system architecture1004, including related code, instructions and/or applications, which can be installed on any one or more of the devices described herein. The dynamic wave protocol system architecture1004is supported by hardware such as a machine1002that includes processors1020, memory1026, and I/O components1038. In this example, the software architecture1004can be conceptualized as a stack of layers, where each layer provides a particular functionality. The dynamic wave protocol system architecture1004includes layers such as an operating system1012, libraries1010, frameworks1008, and applications1006. Operationally, the applications1006invoke API calls1050through the software stack and receive messages, wave(s), other requests or wave DCP-based communications1052in response to the API calls1050. The operating system1012manages hardware resources and provides common services. The operating system1012includes, for example, a kernel1014, services1016, and drivers1022. The kernel1014acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1014provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1016can provide other common services for the other software layers. The drivers1022are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1022can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In certain aspects or embodiments, the libraries1010provide a low-level common infrastructure used by the applications1006. 
The libraries1010can include system libraries1018(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1010can include API libraries1024such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), High Efficiency Video Coding (HEVC), Theora, RealVideo RV40, VP8, VP9, AOMedia Video 1 (AV1), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render graphic content in two dimensions (2D) and three dimensions (3D) on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1010can also include a wide variety of other libraries1028to provide many other APIs to the applications1006. The frameworks1008provide a high-level common infrastructure that is used by the applications1006. For example, the frameworks1008provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks1008can provide a broad spectrum of other APIs that can be used by the applications1006, some of which may be specific to a particular operating system or platform, for example the disclosed dynamic wave communication protocol-based platform. 
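The layered flow described above — an application invoking an API call that passes down through the frameworks to the operating system and returns a message in response — can be sketched in miniature. All class and method names here are illustrative assumptions standing in for the layers of the architecture, not the actual wave DCP implementation:

```python
class Kernel:
    """Bottom layer: abstracts the hardware; here it simply echoes a
    payload back, standing in for a real delivery mechanism."""
    def send(self, payload):
        return f"delivered:{payload}"

class Framework:
    """Middle layer: high-level infrastructure that applications call
    into; it delegates the work downward to the operating system layer."""
    def __init__(self, kernel):
        self.kernel = kernel

    def api_call(self, payload):
        return self.kernel.send(payload)

class Application:
    """Top layer: invokes API calls through the software stack and
    receives a message in response, mirroring the flow described above."""
    def __init__(self, framework):
        self.framework = framework

    def send_wave(self, text):
        return self.framework.api_call(text)
```

The point of the layering is that the application never touches the kernel directly; each layer exposes only the interface of the layer beneath it, so any layer can be swapped out without changing the application code.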
In an example embodiment, the applications1006may include a home application1036, a contacts application1030, a browser application1032, a book reader application1034, a location application1042, a media application1044, a messaging application1046, a game application1048, and a broad assortment of other applications such as a third-party application1040. The applications1006are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1006, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1040(e.g., an application developed using the ANDROID® or IOS® software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS®, ANDROID®, WINDOWS®, iPhone®, or another mobile operating system. In this example, the third-party application1040can invoke the API calls1050provided by the operating system1012to facilitate functionality described herein. The foregoing description of exemplary embodiments is not intended and should not be construed to limit the scope of the present disclosure, and various modifications and improvements may be made by those skilled in the art without departing from the scope. Thus, the disclosed subject matter includes all modifications and improvements that are within the scope of the following claims and their equivalents. Such embodiments or aspects of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” or “embodiment” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. 
Thus, although specific embodiments or aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments or aspects shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments or aspects. Combinations of the above embodiments or aspects, and other embodiments or aspects not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. In the foregoing description of the embodiments or aspects, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments or aspects have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment or aspect. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example embodiment or aspect. It is contemplated that various embodiments or aspects described herein can be combined or grouped in different combinations that are not expressly noted in the Detailed Description. Moreover, it is further contemplated that claims covering such different combinations can similarly stand on their own as separate example embodiments or aspects, which can be incorporated into the Detailed Description. Although the present specification describes components and functions implemented in the embodiments with reference to particular standards and protocols, the disclosed embodiments are not limited to such standards and protocols. 
The illustrations of embodiments described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “embodiment” merely for convenience and without intending to voluntarily limit the scope of this application to any single embodiment or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), which requires an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. 
In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. Those skilled in the relevant art will appreciate that aspects of the invention can be practiced with other computer system configurations, including Internet appliances, cellular or mobile phones, smartphones, tablets, mobile communication devices, digital/smart watches, a heads up, holographic, virtual and/or augmented reality display/device, handheld devices and/or wearable devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, or client-server environments including thin clients, mini-computers, mainframe computers and the like. Aspects of the invention can be embodied in a special purpose computer or data processor that is specifically programmed, configured or constructed to perform one or more of the computer-executable instructions or modules explained in detail below. Indeed, the term “computer” or “computing device” as used herein refers to any data processing platform or device. Aspects of the invention can also be practiced in distributed computing environments, where tasks or modules are performed by remote processing devices, which are linked through a communications network. 
In a distributed computing environment, program modules or sub-routines may be located in both local and remote memory storage devices, such as with respect to a wearable, handheld and/or mobile computing device and/or a fixed-location computing device. Aspects of the invention described below may be stored and distributed on computer-readable media, including magnetic and optically readable and removable computer disks, as well as distributed electronically over the Internet or over other networks (including wireless networks). Those skilled in the relevant art will recognize that portions of the invention may reside on a server computer or server platform, while corresponding portions reside on a client computer. For example, such a client server architecture may be employed within a single mobile computing device, among several computers of several users, and between a mobile computer and a fixed-location computer. Data structures and transmission of data particular to aspects of the invention are also encompassed within the scope of the invention. Although specific example embodiments have been described, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the inventive subject matter described herein. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. 
This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “embodiment” merely for convenience and without intending to voluntarily limit the scope of this application to any single embodiment or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example embodiment. 
Although preferred embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the disclosure is not limited to those precise embodiments and that various other changes and modifications may be effected herein by one skilled in the art without departing from the scope or spirit of the embodiments, and that it is intended to claim all such changes and modifications that fall within the scope of this disclosure.
11943312 | DETAILED DESCRIPTION OF THE INVENTION In various embodiments, a user can request access to a resource, sometimes referred to herein as an asset, hosted at a server device. The resource or asset can be an audio, video, text, or multimedia file, a program file such as a Hypertext Markup Language (HTML) or JavaScript Object Notation (JSON) file, any of various downloadable or hosted tools, for example a source code tool, a compiler or other programming tool, an editing tool, a website development tool, a media broadcasting tool, or the like. The resource request can be received by a proxy, which determines whether or not to forward the request to the server hosting the resource. The determination made by the proxy can include performing a multi-stage authentication and authorization process that uses directory information obtained from one source, and management information obtained from another source. The directory information can be used by the proxy to determine whether the requestor is an authorized network user. If the requestor is an authorized network user, the proxy can use the management information to determine whether or not the requestor is authorized to access the requested resource. Assuming that both portions of the test are satisfied, the proxy can forward the request to the resource host for servicing. In some embodiments, once the request has been forwarded to the resource host, the proxy can be removed from the process, and the requestor and the host can communicate directly with each other without requiring proxy intervention. In other embodiments, however, the resource host provides the requested resource to the requestor via the proxy. In some such implementations, the proxy can transform the resource from a format or configuration used by the resource host into a different format or configuration used by the device that sent the request. 
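The multi-stage determination described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the disclosed implementation: the user names, resource names, and data structures are hypothetical stand-ins for the directory information (obtained from one source) and the management information (obtained from another source).

```python
# Hypothetical sketch of the proxy's two-stage check: directory information
# answers "is this an authorized network user?", and management information
# answers "may this user access this resource?". All names are illustrative.

DIRECTORY = {"alice", "bob"}  # stand-in for directory-service data
MANAGEMENT = {                # stand-in for per-user management information
    "alice": {"report.html", "promo.mp4"},
    "bob": {"report.html"},
}

def handle_request(user: str, resource: str) -> str:
    """Return 'forwarded' only if both stages of the test are satisfied."""
    if user not in DIRECTORY:                        # stage 1: network user?
        return "denied: not an authorized network user"
    if resource not in MANAGEMENT.get(user, set()):  # stage 2: resource access?
        return "denied: not authorized for resource"
    return "forwarded"   # the proxy would now route to the resource host
```

Only when both stages succeed would the proxy forward the request to the resource host for servicing, as described above.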
For example, the proxy can generate a transformed resource file by transforming a base resource file received from a resource host. The base resource file can be transformed by using pipeline language parameters included in a URL associated with the access request to create a set of functions to be applied in series. In some embodiments the URL to which the request is sent can include embedded pipeline language parameters or instructions. These pipeline language parameters can include information indicating a necessary format, or a format in which the requesting device would prefer to receive the resource. Requesting access to a resource can include requesting write access, so that the resources stored in one or more resource hosts can be modified or added by individual users, and shared by other users. The resources can be stored using a versioning system in some embodiments. Furthermore, the management information that indicates a user's authorization to access particular resources can be shared with other users in a “what you have you can grant” paradigm, without requiring the user granting access to have elevated access authority. For example, even if a user does not have administrator-level access, that user can grant to users designated as his subordinates some or all of the access assigned to the user. This can facilitate non-centralized granting and restricting of access authority. Referring toFIG.1, a system100will be discussed according to various embodiments of the present disclosure. System100includes proxy110, Relay120, Resource Host A130, Resource Host B140, Station A150, Station B160, and Global Directory Services170. Relay120can include Processing Module124, Management Information Data Store122, and Routing Information Database126. Resource Host A130can include processing module134and resource storage132. Resource Host B140can include Processing Module144and HTTP Database (DB)142. 
Station A150can include Local Directory Services A156, User1152, and User2154. Station B160can include Local Directory Services B166, User3162, and User4164. Station A150and Station B160can be affiliated entities of a larger organization, entity, corporation, company, etc. For example, Station A150and Station B160can be local broadcast or Internet radio stations affiliated with a regional, national, or international radio broadcasting company or another media entity. In some embodiments, Station A150and Station B160can be affiliated with each other in a broad, or general sense, even if not considered to be “affiliates” in the strictest sense. For example, Station A150and Station B160may be loosely connected by a formal or informal agreement or contract. In some embodiments, Station A150and Station B160can be considered to be affiliated if they share access to common resources hosted by Resource Host A130or Resource Host B140, or if they share common directory services, such as Global Directory Services170. In at least one embodiment, Station A150and Station B160each uses its own local directory services, Local Directory Services A156and Local Directory Services B166, to control access to internal network resources. For example, Station A150can use Local Directory Services A156, which can include a Lightweight Directory Access Protocol (LDAP) directory, for example a Microsoft® Active Directory®, to control the access of User1152and User2154to various network resources. Similarly, Station B160can use Local Directory Services B166, which can include a Lightweight Directory Access Protocol (LDAP) directory, for example Apache Directory®, to control the access of User3162and User4164to various network resources. As used herein, the term “directory services” refers to a set of one or more processes or services used to authenticate and authorize users in a network or subnetwork by assigning and enforcing security policies. 
For example, when a user attempts to log onto a computer domain, a directory service can be used to verify that the user is an authorized network user by verifying a password entered by the user. The directory service can also be used to determine a user type, for example “administrator” or “user.” In some embodiments, Global Directory Services170is controlled by Station A150, by Station B160, by proxy110, or by another entity (not illustrated), for example a parent organization, of which Station A150and Station B160are affiliates. Global Directory Services170can, in some embodiments, obtain and store directory information from either or both of Local Directory Services A156and Local Directory Services B166. In some implementations, Global Directory Services170also includes additional directory information not included in either Local Directory Services A156or Local Directory Services B166. For example, Global Directory Services170can include directory information associated with other affiliates or related entities, in addition to directory information associated with parent entity users. In various embodiments, the information in Global Directory Services170is unique to itself, and does not replicate information included from Local Directory Services A156and Local Directory Services B166. Other embodiments can use directory information in Global Directory Services170to populate the directory information in Local Directory Services A156and Local Directory Services B166. In some such embodiments, some or all of the directory information in Global Directory Services170can be transmitted to Local Directory Services A156for replication and storage, and some or all of the directory information in Global Directory Services170is transmitted to Local Directory Services B166for replication and storage. 
In some embodiments, Proxy110can obtain directory information from any or all of Global Directory Services170, Local Directory Services A156, and Local Directory Services B166for use in determining whether a user requesting access to a shared resource is an authorized network user. In some embodiments, only authorized network users are permitted to submit requests for access to resources, and if directory information does not indicate that a requestor is an authorized network user, any request submitted can be ignored, rejected, generate an error message delivered to the unauthorized requestor, or initiate an access-violation response. For example, a URL associated with a request by an unauthorized user can be reported to Global Directory Services170, Local Directory Services A156, Local Directory Services B166, or to another network monitoring device. In some embodiments, in response to receiving one or more unauthorized access notifications from Proxy110, any of the directory services can freeze the account of the requestor for a predetermined period of time, or initiate an elevated authentication protocol. Proxy110can, in some embodiments, maintain its own directory services database (not separately illustrated), and synchronize the proxy's directory services database with one or more other directory services databases, and use the information in the synchronized proxy directory services database for decision making purposes. For example, Proxy110can maintain Global Directory Services170by periodically obtaining updated directory information from Local Directory Services A156and Local Directory Services B166, and storing the combined directory information in a database used by Global Directory Services170. Rather than maintaining its own directory services database, Proxy110can, in some embodiments, query Global Directory Services170, Local Directory Services A156, or Local Directory Services B166each time a resource request is received at Proxy110. 
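The account-freeze response described above can be illustrated as follows. This is a sketch under stated assumptions: the threshold value, freeze duration, and function names are hypothetical, and a real directory service would persist this state rather than hold it in module-level dictionaries.

```python
# Illustrative sketch of the access-violation response: after a threshold
# number of unauthorized-access notifications for the same requestor, the
# directory service freezes that account for a predetermined period.
THRESHOLD = 3          # assumed value, not from the disclosure
FREEZE_SECONDS = 600   # assumed value, not from the disclosure

violations: dict = {}    # requestor -> count of unauthorized notifications
frozen_until: dict = {}  # requestor -> time at which the freeze expires

def report_unauthorized(user: str, now: float) -> bool:
    """Record a violation notification; return True if the account is now frozen."""
    violations[user] = violations.get(user, 0) + 1
    if violations[user] >= THRESHOLD:
        frozen_until[user] = now + FREEZE_SECONDS
        return True
    return False

def is_frozen(user: str, now: float) -> bool:
    """True while the predetermined freeze period is still in effect."""
    return now < frozen_until.get(user, 0.0)
```

In place of (or in addition to) freezing, the same trigger could initiate the elevated authentication protocol mentioned above.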
Proxy110can also obtain management information from Relay120. The management information can be used by Proxy110to grant or deny the resource request. In at least one embodiment, the user management information includes role information indicating roles assigned to one or more users, subordinate user information indicating subordinate users, and by extension supervisors to which the subordinate users are assigned, access authorization information indicating resources to which particular users have access, and source-of-rights information indicating whether a user's access authorizations were granted by another user, and if so which user granted that access authorization. In some embodiments that use multiple, non-duplicated directory services for different users or entities, management information can also include information indicating which directory service the proxy should contact for information about whether a requestor is an authorized network user. The management information obtained from Relay120can also be used to determine whether a user attempting to grant access to, or remove access from, another user is allowed to do so. As mentioned above, some embodiments operate on a “what you have you can grant” paradigm, referring to the ability of a first user to grant a second user access to resources the first user is authorized to access. In some such embodiments, the ability to grant access can be limited to supervisor-subordinate relationships. For example, a supervisor can be allowed to grant her subordinates authorization to access any or all of the resources the supervisor herself is authorized to access. In some such embodiments, Proxy110can use the role information and the subordinate user information to determine whether a particular user is allowed to grant another user access to a particular resource, or to assign another user to a particular role, or conversely whether the user is allowed to restrict another user. 
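The “what you have you can grant” paradigm can be sketched with two checks: the grantee must be a subordinate of the grantor, and the grantor must already hold the access being granted. The user names, resource names, and flat dictionaries below are hypothetical stand-ins for the role, subordinate-user, and access-authorization information described above.

```python
# Minimal sketch of "what you have you can grant": a supervisor may grant a
# subordinate access only to resources the supervisor is already authorized
# to access. No elevated (administrator-level) authority is required.
access = {"carol": {"tool-a", "tool-b"}, "dan": set()}  # user -> held resources
subordinates = {"carol": {"dan"}}                       # supervisor -> subordinates

def grant(grantor: str, grantee: str, resource: str) -> bool:
    """Apply the grant and return True only if both conditions hold."""
    if grantee not in subordinates.get(grantor, set()):
        return False  # grants limited to supervisor-subordinate relationships
    if resource not in access.get(grantor, set()):
        return False  # cannot grant what you do not have
    access.setdefault(grantee, set()).add(resource)
    return True
```

A full implementation would also record source-of-rights information (which user granted the authorization) so that grants can later be traced or revoked.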
In addition to obtaining management information from Relay120, and directory information from Global Directory Services170, Local Directory Services A156, or Local Directory Services B166, Proxy110can also obtain routing information from Routing Information Database126. Routing information can include a network address from which a particular resource is available, such as the network address of Resource Host A130or Resource Host B140. For example, if a message received at Proxy110requests access to a particular version of a particular media file, Proxy110can use the routing information to determine that the requested resource, or asset, is maintained in HTTP Database142, hosted by Resource Host B140. If Proxy110determines that the requested access is authorized, Proxy110can route the request to Resource Host B140. In some embodiments, Proxy110can temporarily store the request and forward the request to an appropriate address determined based on the routing information. In other embodiments, Proxy110can repackage, revise, or encapsulate and re-transmit the request so that the destination host, in this example Resource Host B140, delivers its response to Proxy110. Referring next toFIG.2, a system200including a Proxy210in communication with Requestor Device250and Asset Host260will be discussed in accordance with various embodiments of the present disclosure. Proxy210includes Communications Interface240, which can be a wired or wireless interface configured to allow Proxy210to communicate via communications network such as the Internet; Request Handling Module230, which in turn includes LDAP Module235, Routing Module233, and User Management Module231; and Resource Transformation Module220, which includes Extraction Module225, Transformation Module223, and Parameter/Function Storage221. 
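The routing step described above can be sketched as a lookup followed by re-addressing the request so the destination host replies to the proxy. The host addresses, asset names, and envelope fields below are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch of routing: the proxy consults routing information to
# find which host currently serves the requested asset, then forwards an
# envelope whose reply address points back at the proxy.
ROUTES = {
    "promo.mp4": "resource-host-b.example:8080",    # e.g., held in HTTP DB 142
    "report.html": "resource-host-a.example:8080",  # e.g., in resource storage 132
}

def route(asset: str, reply_to: str = "proxy.example:9000"):
    """Return a forwarded request envelope, or None if no host serves the asset."""
    host = ROUTES.get(asset)
    if host is None:
        return None
    return {"host": host, "asset": asset, "reply_to": reply_to}
```

Setting `reply_to` to the requestor's own address instead would model the alternate embodiment in which the host bypasses the proxy and responds directly.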
Requestor Device250can be a computer terminal, smart phone, laptop, tablet, or other device used by a requestor to send an Asset Request251to Proxy210, where the asset request can include a request for access to an asset available from Asset Host260, which can be a collection of one or more server devices that store assets, or resources, in a database or other storage construct within physical memory. The asset request can include a uniform resource locator (URL), or other network address associated with the requested asset. The URL can be part of the request, but does not necessarily designate the current network address at which the requested asset is located. In at least some embodiments, the URL can be used to designate a particular asset, and a particular version of the asset. In some embodiments, the URL can also include pipeline language parameters that specify a format, encoding, and/or encryption type in which Requestor Device250expects to receive the asset. Asset Request251also includes, in some embodiments, information identifying a requestor associated with Asset Request251, which can include information about the requestor's network credentials. Asset Request251is received at Communications Interface240, and can be routed to Request Handling Module230and Resource Transformation Module220. LDAP Module235can obtain directory information from a directory services database to determine if Asset Request251is from an authorized network user. LDAP Module235can obtain the directory information and compare the information about the requestor's credentials with the directory information to make a determination regarding whether the requestor is an authorized network user. Alternatively, LDAP Module235can query a directory services server and determine whether the user is an authorized network user based on a response from the directory services server. 
In some embodiments, a requestor is determined to be an authorized network user if the requestor is a member of a particular computer domain, workgroup, or other security group known to the directory services server, or if the directory information otherwise indicates that the requestor is authorized to communicate with Proxy210via a network to which Communications Interface240is connected. If the requestor is determined to be an authorized network user, User Management Module231can be used to check user management information associated with the requestor to verify that the requestor is authorized to access the requested asset, or resource. For example, the management information may indicate a requestor is assigned to a role that is authorized to download a current version of an HTML document, but not to a role authorized to store changes to that same HTML document. In this example, if Asset Request251simply requests downloading the HTML document, the request will be forwarded to Asset Host260, but if Asset Request251is a write request, Asset Request251will be denied. In some cases, a denied asset request can be simply discarded. However, in other embodiments an error notification, or a “Request Denied” response can be delivered from Proxy210to Requestor Device250. In yet other embodiments, denied requests are tracked, and when a threshold number of requests are denied, heightened authentication or other security measures can be implemented. For purposes of this example, assume that Asset Request251passes the multi-level authentication and authorization process implemented by LDAP Module235and User Management Module231. In that case, Routing Module233can be used to obtain information about the current location and status of the requested asset, and Routed Asset Request241can be transmitted to Asset Host260, via Communications Interface240. 
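The download-allowed/write-denied example above is an operation-level role check, which can be sketched as a mapping from roles to permitted operations. The role and operation names here are hypothetical; the disclosure does not name specific roles.

```python
# Sketch of the role check in the HTML-document example above: a role that may
# download (read) an asset is distinct from a role that may store changes
# (write). Role and operation names are illustrative assumptions.
ROLE_PERMISSIONS = {
    "reader": {"read"},           # may download the current version
    "editor": {"read", "write"},  # may also store changes
}

def authorize(role: str, operation: str) -> bool:
    """True if the requestor's assigned role permits the requested operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

Under this sketch a read request from a "reader" would be forwarded to the asset host, while a write request from the same requestor would be denied, discarded, or answered with a “Request Denied” response as described above.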
In response to receiving Routed Asset Request241, Asset Host260processes the request, and in some embodiments delivers Requested Asset261to Proxy210. In other embodiments, Asset Host260can transmit the requested asset directly to Requestor Device250via an Alternate Communications Link262, bypassing Proxy210. For embodiments in which Asset Host260delivers Requested Asset261to Proxy210, Requested Asset261can be transformed using Resource Transformation Module220. Resource Transformation Module220can use Extraction Module225to extract pipeline language parameters from the URL included in the original request. For example, if the request URL includes the following: “http://subdomain.hostdomain.com/fit(100,100)/fit(75,75)/bar(5,5)/ . . . ”, Extraction Module225can read the URL, extract the pipeline parameters “fit(100,100),” “fit(75,75),” and “bar(5,5),” and store the extracted pipeline parameters in Parameter/Function Storage221. Upon receiving Requested Asset261from Asset Host260, Transformation Module223can retrieve the stored pipeline parameters and apply the transform commands to Requested Asset261, in the same order indicated in the URL, to generate Transformed Asset242. In this example, Transformation Module223will fit the image into a 100-pixel square; then a 75-pixel square; then run the “bar” function with parameters 5 and 5. Proxy210can then transmit Transformed Asset242to Requestor Device250. Referring next toFIG.3, a method300will be discussed in accordance with various embodiments of the present disclosure. As illustrated by block303, a user request for access to a resource is received, for example at a proxy. As illustrated by block305, directory information for the requestor is obtained or otherwise accessed from a directory service. As illustrated by block307, a check is made to determine whether the user sending the request is an authorized network user who is allowed to communicate with the proxy or other device fielding the user request. 
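The extraction and in-order application of pipeline parameters described above can be sketched as follows. The parsing uses a simple regular expression; the “fit” behavior here is a stand-in that clamps image dimensions, and the “bar” operation is left undispatched since the disclosure does not define it.

```python
# Sketch of Extraction Module / Transformation Module behavior: pipeline
# parameters such as fit(100,100) are read from the URL path and applied to
# the asset in the same order they appear in the URL.
import re

def extract_pipeline(url: str):
    """Return [(name, args), ...] in URL order, e.g. ('fit', [100, 100])."""
    return [(m.group(1), [int(a) for a in m.group(2).split(",")])
            for m in re.finditer(r"/(\w+)\(([\d,]+)\)", url)]

def apply_pipeline(size, steps):
    """Apply each step to an image represented only by its (width, height)."""
    w, h = size
    for name, args in steps:
        if name == "fit":  # stand-in: shrink to fit inside args[0] x args[1]
            w, h = min(w, args[0]), min(h, args[1])
        # other operations (e.g., the undefined "bar") would be dispatched here
    return (w, h)
```

Applying the example URL's steps to a 640×480 image fits it into the 100-pixel square and then the 75-pixel square, mirroring the order described above.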
If the user is not an authorized network user, the request is denied, as illustrated at block309. If the determination at block307indicates that the user is an authorized network user, the proxy or other device handling the user request obtains management information, as illustrated by block311. As illustrated by block313, a check is made to determine whether the management information indicates that the user requesting the resource is authorized to access the requested resource. Authorization can be determined at a per-resource level in some embodiments, so for example, a requesting user can have access to product ID10, but not product ID11. Access can be further granularized to account for differing versions of a particular asset. If the check at block313indicates that access is authorized, the request can be routed to the resource host, as illustrated at block315. Routing the request, as illustrated by block315, can include obtaining routing information from a relay database. The routing information can include specific routes to a host, host resolution, security information indicating security requirements for particular addresses, or the like. In some embodiments, the determination made at block313can include determinations at both an application level and a resource level. For example, the management information may indicate that a requesting user is assigned a role that allows the user to access a particular hosted application that can be used to modify media files of a given type. For purposes of this example, assume further that the user has been assigned a role that permits modification of some, but not all media files. The check made at block313can check to see if the request requires use of the hosted application, and further check to see if the request requires modification of a modification-prohibited asset. 
If both the application-level and resource-level (or asset-level) checks are favorable, the request can be routed to the resource host, as illustrated by block315. In this example, if either the application-level or the resource-level determinations are unfavorable, the request is denied, as illustrated by block309. Referring next toFIG.4, a method400will be discussed in accordance with various embodiments of the present disclosure. As illustrated at block401, a request for authorization to use a resource or asset is received, for example at a proxy device. As illustrated by block403, pipeline language parameters can be extracted from a URL associated with the request for authorization. These pipeline language parameters can be included in the URL used to transmit the request to the proxy, or can be included in a URL that is part of the transmitted request. In at least one embodiment, the pipeline language parameters can be executed in order to transform a requested asset into a desired form. As illustrated at block405, the extracted parameters can be stored for later use in transforming the requested asset. In some embodiments, all or a portion of the URL itself can be stored, and extraction of the pipeline language parameters can be performed at the time the requested asset is received from the host providing the asset. As illustrated by block407, the request is processed and routed to the host of the requested asset, as discussed with reference toFIG.3. The host processes the request, and returns the requested asset to the proxy, as illustrated by block409. In at least one embodiment, the requested resource or asset is delivered to the proxy in a standardized format, having particular characteristics, selected based on asset type, or in a file specific format and resolution. The characteristics of the requested asset can be compared to the characteristics indicated by the pipeline language parameters, as illustrated by block411. 
A check can then be performed at block 413, based on the results of the comparison, to determine if the requested resource requires transformation. In some cases, no pipeline language parameters may be specified, in which case the result of the determination at block 413 could indicate that no transformation was required. Alternatively, the results of the comparison at block 411 may indicate that the characteristics of the requested resource do not match the characteristics specified by the pipeline language parameters, causing the determination at block 413 to indicate that transformation of the requested resource is required. For example, the pipeline language parameters extracted from the URL at block 403 might specify that a requested image file should fit within a particular number of pixels. The comparison at block 411 and the check at block 413 could indicate that the requested image already fits within the number of pixels specified by the pipeline language parameters, and that no transformation is necessary. If, however, the comparison at block 411 indicates that the requested image file cannot be fit within the number of pixels indicated by the pipeline language parameters, the check at block 413 would indicate that transformation is required. As illustrated by block 415, if it is determined at block 413 that transformation is required, the requested asset can be transformed in accordance with the pipeline language parameters included in the URL. Note that in some embodiments, pipeline language instructions are executed to transform the requested asset or resource using the pipeline language parameters included in the URL, regardless of whether the requested asset needs to be transformed or not. If no transformation is required, as indicated by the determination at block 413, the untransformed resource can be transmitted to the requestor at block 417. Similarly, once transformation is complete, the transformed resource can be transmitted to the requestor.
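The parameter extraction (block 403) and the transformation decision (blocks 411/413) can be sketched as below. The query-parameter names (`maxw`, `maxh`) and the fit-within-pixels rule are assumptions for illustration; the patent does not define the actual pipeline language syntax.

```python
# Illustrative sketch only: parameter names ("maxw", "maxh") and the
# fit-within-pixels rule are assumptions, not the actual pipeline language.
from urllib.parse import urlparse, parse_qs

def extract_pipeline_params(url):
    """Pull pipeline-language parameters out of the request URL (block 403)."""
    query = parse_qs(urlparse(url).query)
    params = {}
    if "maxw" in query:
        params["maxw"] = int(query["maxw"][0])
    if "maxh" in query:
        params["maxh"] = int(query["maxh"][0])
    return params

def needs_transform(asset_size, params):
    """Compare asset characteristics to the pipeline parameters (blocks 411/413)."""
    w, h = asset_size
    if not params:
        return False  # no parameters specified: no transformation required
    return w > params.get("maxw", w) or h > params.get("maxh", h)

params = extract_pipeline_params("https://proxy.example/asset/42?maxw=800&maxh=600")
print(needs_transform((1024, 768), params))  # True: image exceeds 800x600
print(needs_transform((640, 480), params))   # False: already fits, no transform
```

An image that already fits the requested bounds is passed through untransformed, matching the block 417 path described above.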
In various embodiments, transmission of the transformed resource occurs after transformation of part, or all, of the requested resource has been completed.

Referring next to FIGS. 5-7, decentralized delegation of access authority will be discussed according to various embodiments of the present disclosure. Beginning with FIG. 5, a graphical user interface (GUI) 500 presenting management information associated with a first user is discussed in accordance with various embodiments of the present disclosure. GUI 500 includes User Info Area 510, Manages Area 520, Roles Area 550, Tool Access Area 570, and Delegation Object 580. Various embodiments of the present disclosure can be implemented using any of various GUIs displaying various categories and sub-categories of management information that can be used by a proxy, in conjunction with directory information and routing information, to conditionally route resource requests to servers or other devices hosting shared resources and assets. Management information displayed in User Info Area 510 can include information about a particular authorized network user, as indicated by directory information obtained from global or local directory services. This information can include standard contact information, including a name, a picture, a job title, a job area, an email address, a work address, one or more phone numbers, and other similar information. Manages Area 520 can be used to display a portion of the subordinate user information identifying subordinate users, i.e., users that the currently displayed user supervises or otherwise manages. Generally, although not always, these users are also authorized network users as determined by directory information obtained from a directory information server. Although illustrated as a simple list, Manages Area 520 can be displayed in some embodiments as an organizational chart, showing not only subordinate users, but also supervising users.
In the illustrated embodiment, Manages Area 520 can display a list of all, or some subset of all, users that could potentially be assigned as subordinates of the currently displayed user, with actually assigned subordinates being indicated by checkmarks next to a user's name. Roles Area 550 can be used to display a listing of all possible roles to which the currently displayed user can be assigned. As illustrated, checkmarks are used to indicate roles to which the currently displayed user is actually assigned. Tool Access Area 570 can be used to display a list of all potential tools to which the currently displayed user has authority to access. Again, the illustrated embodiment uses checkmarks to indicate actually assigned tool access. Other techniques of highlighting or displaying information can be used consistent with various embodiments of the present disclosure. Delegation Object 580 can be used to configure GUI 500 to receive user input associated with assigning a listed subordinate a role or access to a tool. In some embodiments, Delegation Object 580 can be displayed on GUI 500 when the currently displayed user has at least one subordinate. Selecting Delegation Object 580 can, in some embodiments, allow a user displaying his own management information to select one or more of his assigned subordinates for delegation or assignment of one or more roles or tool access. It will be appreciated that various well-known display and input mechanisms can be used in place of, or in addition to, Delegation Object 580 without departing from the spirit and scope of the present disclosure. GUI 500, as illustrated, displays some, but not all, management information that can be used by a proxy to conditionally route requests to a resource or asset host. The significance and use of the different types of information displayed in GUI 500 will now be discussed in more detail, according to various embodiments.
In general, each known user is assigned zero or more roles; is seeded in the system with zero or more subordinate users; and is granted access to zero or more tools. Assume for purposes of this discussion that the currently displayed user, John Doe, has been seeded with the illustrated subordinates, roles, and tool access. Roles can be used to provide multiple different users with similar access to particular assets or groups of assets. For example, the role of Developer can be assigned to users who require access to multiple different versions of programs, websites, media items, or other assets in various stages of development. For example, a developer may require access to program code in each of the following environments: unit test, development, product test, regression, and production. By contrast, a Standard User might have no need at all for access to program code, but could require the ability to insert or remove standardized content into or from a website. Tool Access can refer to applications or components that various users may need to build websites, insert advertisements, create or modify contests, manage hosted assets, or the like. In general, roles can be used to refer to what a user does, and tool access can refer to the tools needed to perform certain actions. In various embodiments, access to a particular file can depend on both assigned roles and tool access.

Referring next to FIG. 6, a graphical user interface (GUI) 600 displaying management information associated with a second user, who is a subordinate of the first user, John Doe, associated with the information displayed in FIG. 5, is discussed in accordance with various embodiments of the present disclosure. GUI 600 includes User Info Area 610, Manages Area 620, Roles Area 650, and Tool Access Area 670, each of which is similar to corresponding portions of GUI 500, which have been previously described.
Assume for purposes of the following examples that James Kirk was initially seeded into the system with zero roles, zero subordinate users, and zero authorized tools. In at least one embodiment, a supervisor is able to confer or grant authority to one or more of his subordinate users to the extent the supervisor himself has been granted authority. For example, as shown in FIG. 5, John Doe has been assigned roles as follows: AB Tester Power Users, Developers, Power Users, SMT Power Users, Schedules Power Users, Standard Users, and System Administrators. Thus, John Doe can confer any of these roles to James Kirk, because James Kirk is a subordinate of John Doe. In FIG. 6, it can be seen that John Doe has conferred some, but not all, of his own roles to James Kirk, his subordinate: AB Tester Power Users, Developers, Power Users, and SMT Power Users. Similarly, John Doe has granted James Kirk access to the following tools: AB Tester, AMP Hosts Tool, AMP KT Dev, Ads Wizz Configurator, Buckets, and Catalog. Conferring, or granting, access to a role or tool can be permanent until deleted in some embodiments, and can remain active even if the supervisor's roles or tool access changes. Thus, if John Doe is removed from the AB Tester Power Users role, James Kirk can keep that role, even though John Doe no longer has the authority to grant that access. In other embodiments, however, access to tools or roles by a subordinate lasts only as long as the supervisor that granted access maintains access himself. Thus, in some embodiments, if John Doe's access to the Buckets tool is removed, James Kirk also loses access to the Buckets tool, and access to any resources that require use of the Buckets tool. It should be appreciated that, in some embodiments, no elevated access is required to grant roles or tool access. That is to say, it is not necessary in some implementations to have administrator or other elevated rights to assign a subordinate roles or tool access.
This “what you have you can grant” paradigm can help avoid the necessity of waiting for a limited group of users having “super-access” rights before a subordinate can be provided with the necessary access to perform his job. This paradigm can be valuable in loose hierarchical situations, or in situations where broadly scattered and remotely located entities each desire the ability to grant access to their own assets without seeking approval from a centralized authority.

Referring next to FIG. 7, a graphical user interface (GUI) 700 displaying management information associated with a third user, who is a subordinate of the second user, James Kirk, of FIG. 6, is discussed in accordance with various embodiments of the present disclosure. GUI 700 includes User Info Area 710, Manages Area 720, Roles Area 750, and Tool Access Area 770, each of which is similar to corresponding portions of GUI 500, which have been previously described. As illustrated by FIG. 7, Wesley Crusher, who is a subordinate of James Kirk (FIG. 6), has been granted the roles of SMT Power Users and Standard Users, and access to the AB Tester tool, by James Kirk. Note that in some embodiments, because Wesley Crusher is not assigned any subordinates, he cannot delegate roles or tool access to any other user. In some embodiments, access to particular assets can also be granted to particular users, in addition to roles and tool access, and assigned by those users to their subordinates. A proxy can be used to control a supervisor's delegation, or assignment, of roles, tools, or other assets. For example, GUIs 500, 600, and 700 can each be configured to accept user input by adding an “assign” or “delegate” button, dropdown menu, sub-screen, pop-up window, or the like. In response to a user attempting to assign rights to another user, an assignment request message can be sent to the proxy, requesting the proxy to send the management information changes to a relay or other device used to store management information.
The proxy can check the subordinate information included in the management information to determine if the target of the delegation or assignment of rights is assigned as a subordinate of the requesting user. If so, the proxy can store the changed management information. If the user receiving the assignment is not an assigned subordinate of the requesting user, the request can be denied.

Referring next to FIG. 8, an information flow 800 between a Client Device 810, a Proxy 820, and a Resource Host 830 will be discussed in accordance with various embodiments of the present disclosure. In various embodiments, some or all of the resources or assets are accessed via service components such as software containers or virtualized user spaces, such as those implemented by Docker®. In some such embodiments, Proxy 820 knows the state (e.g., running, available, offline, busy, etc.) of each instance of each container and container type, along with an alias to the primary instance of each container, which can be used for request routing. Thus, Proxy 820 can verify that resource requests comply with various security policies, verify the container and/or component to which the resource request is directed, locate the primary instance of that container and/or component, and dispatch the resource request to the appropriate instance of the appropriate component. Information flow 800 illustrates source and version control techniques that can be used in conjunction with various shared assets and resources. For example, a Client Device 810 can be used to send a Load Resource Request 803 to Proxy 820, which processes and forwards the request to verify that a requestor is authorized to perform the requested action in conjunction with the resource, for example, read a resource, modify a resource, delete a resource, or add a resource. For purposes of this discussion, it will be assumed that a requestor has the required level of resource access via Client Device 810.
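The delegation check performed by the proxy, described above, can be sketched as a simple rule: a grant succeeds only if the target is an assigned subordinate of the granter and the granter himself holds the authority being conferred. The in-memory model below is a minimal sketch under those assumptions, not the actual management-information store.

```python
# Minimal sketch of the "what you have you can grant" rule; the in-memory
# model and names are illustrative assumptions.

class ManagementInfo:
    def __init__(self):
        self.roles = {}         # user -> set of assigned roles
        self.subordinates = {}  # supervisor -> set of subordinate users

    def grant_role(self, granter, target, role):
        """Grant `role` to `target`, succeeding only if `target` is an
        assigned subordinate of `granter` and `granter` holds the role."""
        if target not in self.subordinates.get(granter, set()):
            return False  # request denied: target is not an assigned subordinate
        if role not in self.roles.get(granter, set()):
            return False  # granter can only confer authority he himself holds
        self.roles.setdefault(target, set()).add(role)
        return True


info = ManagementInfo()
info.roles["john.doe"] = {"Developers", "Power Users"}
info.subordinates["john.doe"] = {"james.kirk"}

print(info.grant_role("john.doe", "james.kirk", "Developers"))      # True
print(info.grant_role("john.doe", "wesley.crusher", "Developers"))  # False: not a subordinate
print(info.grant_role("john.doe", "james.kirk", "Admins"))          # False: granter lacks the role
```

Note that no elevated or administrator rights are consulted anywhere in the check, consistent with the paradigm described above.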
As illustrated in information flow 800, Client Device 810 transmits Load Resource Request 803 to Proxy 820, which in turn routes Load Resource Request 803 to Resource Host 830. In this example, the resource requested in Load Resource Request 803 is Asset 833, which can be identified in the system using a reference identifier derived using a hash function, for example "a3021f" in the case of Asset 833. Assets and resources can also be tagged, for example to maintain version control. In the illustrated embodiment, Asset 833 is associated with Tag 840, which identifies Asset 833 as a HEAD version, referring to the fact that Asset "a3021f" 833 is the newest, or most recently updated, version of the requested resource. Other possible tag types include "LIVE," "PENDING_APPROVAL," "BETA," or similar tags indicating a particular state of the asset. Tags can be used to provide platform-wide auditing of resources. In some embodiments, a developer or other user may choose not to keep a versioning history of various resources. In some such embodiments, a "WORKING COPY" tag can be associated with a resource, indicating to the Resource Host 830 or other users that the resource is not a "versioned resource." When a user submits a change to a resource associated with a WORKING COPY tag, the Resource Host 830 can simply overwrite the previous version, without saving and tracking a new version. Resource Host 830 can respond to Load Resource Request 803 by providing access to Asset 833 via a Payload Response 805 to Proxy 820, which can forward the Payload Response 805 to Client Device 810 with or without first transforming the requested resource. Client Device 810 can modify the requested resource, and transmit a Modify and Save Request 807 to Proxy 820. The Modify and Save Request 807 can be processed by Proxy 820, and routed to Resource Host 830.
A new identifier for the modified version of the resource can be calculated, for example using a hash function to arrive at "f32gde", and Resource Host 830 can store the modified resource as Asset 835. Note that because Asset 835 is now the most recent version, Asset 835 is associated with Tag 842, indicating that Asset 835 is now the HEAD. Although not illustrated, when Asset 835, which in this example is a modified version of the requested asset, is tagged as a HEAD version, Tag 840, which is associated with the unmodified version of the requested asset, can be modified to indicate, for example, "VERSION −1", indicating that Asset 833 is the version of the requested resource immediately preceding the current HEAD version of the requested asset. Various embodiments also allow a user, via Client Device 810, to manually add a reference tag to an asset. For example, Add Ref Instruction 809 can be sent from Client Device 810 to Resource Host 830. In response to Add Ref Instruction 809, Resource Host 830 can associate Ref Tag 844 with Asset 837. Ref Tag 844 can be a standard version identifier, or a custom reference identifier, for example a tag identifying Asset 837 as a "working copy."

It will be appreciated by those of ordinary skill in the art that although the present disclosure refers to radio stations in describing various embodiments, other embodiments, not limited to radio stations, can also be implemented consistent with the teachings set forth herein. As used herein, the terms "resource" and "asset" are used interchangeably, and unless otherwise explicitly indicated or required by context, refer to files, programs, applications, data, and other items that can be stored in a tangible computer readable storage medium. As may be used herein, the terms "substantially" and "approximately" provide an industry-accepted tolerance for their corresponding terms and/or relativity between items.
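The hash-derived reference identifiers and version tags described for information flow 800 can be sketched as below. The use of a truncated SHA-256 digest to mirror the six-character "a3021f"/"f32gde" style identifiers is an assumption; the patent does not specify which hash function is used.

```python
# Hedged sketch of hash-derived reference identifiers and version tags
# (HEAD, VERSION -1, WORKING COPY); the six-hex-digit truncation mirrors
# the "a3021f"/"f32gde" style identifiers and is an assumption.
import hashlib

class ResourceHost:
    def __init__(self):
        self.assets = {}  # reference identifier -> asset content
        self.tags = {}    # tag name -> reference identifier

    @staticmethod
    def ref_id(content):
        # Derive a short reference identifier from the asset content.
        return hashlib.sha256(content).hexdigest()[:6]

    def save(self, content, versioned=True):
        ref = self.ref_id(content)
        if versioned:
            # Demote the previous HEAD to VERSION -1 before tagging the new one.
            if "HEAD" in self.tags:
                self.tags["VERSION -1"] = self.tags["HEAD"]
            self.assets[ref] = content
            self.tags["HEAD"] = ref
        else:
            # WORKING COPY: overwrite in place, no version history kept.
            old = self.tags.get("WORKING COPY")
            if old is not None:
                self.assets.pop(old, None)
            self.assets[ref] = content
            self.tags["WORKING COPY"] = ref
        return ref

host = ResourceHost()
first = host.save(b"original asset")
second = host.save(b"modified asset")
print(host.tags["HEAD"] == second)       # True: newest version is the HEAD
print(host.tags["VERSION -1"] == first)  # True: prior version demoted
```

A non-versioned save simply replaces the prior working copy, matching the WORKING COPY behavior described above.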
Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) "configured to", "operably coupled to", "coupled to", and/or "coupling" includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as "coupled to." As may even further be used herein, the term "configured to," "operable to," "coupled to," or "operably coupled to" indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term "associated with" includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term "compares favorably" indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
The term "module" is used in the description of one or more of the embodiments. A "module," "processing module," "processing circuit," "processor," and/or "processing unit" may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or may further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network).
Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.

One or more embodiments of an invention have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality.
Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules, and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software, and the like, or any combination thereof. The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples of the invention. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones. Unless specifically stated to the contrary, signals to, from, and/or between elements in the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure of an invention is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
DETAILED DESCRIPTION (U.S. Pat. No. 11,943,314)

All examples and illustrative references are non-limiting and should not be used to limit the claims to specific implementations and embodiments described herein and their equivalents. For simplicity, reference numbers may be repeated between various examples. This repetition is for clarity only and does not dictate a relationship between the respective embodiments, unless noted otherwise. Finally, in view of this disclosure, features described in relation to one aspect or embodiment may be applied to other disclosed aspects or embodiments of the disclosure, even though not specifically shown in the drawings or described in the text.

Customers may want to store data from a small edge data center into the cloud and then access that data from one or more of their other data centers. As the pool of data in the cloud grows, interest in leveraging it for a variety of projects across a geographically distributed organization may grow. To allow customers to have easy and fast access to their data, caches may be provided for prioritizing retention of the working dataset (e.g., most recently used data). A cache may operate in conjunction with a data fabric technology that enables automated tiering of data to low-cost object storage tiers either on or off premises. If the cache receives a request for data and the data is not stored in the cache (e.g., a cache miss), then the cache may fetch the data from an origin source. The origin source may be part of the data fabric technology that enables data tiering out to the cloud. If a portion of the requested data is tiered out to the cloud, the origin source may read the data from the cloud and then return the data to the cache.
If a first subset of the requested data is stored at the origin source and a second subset of the requested data is stored at the cloud storage endpoint, then it may be desirable to allow the cache to retrieve the first subset from the origin source and the second subset from the cloud storage endpoint. Accordingly, the cache may access and retrieve the data from multiple locations (e.g., the cloud storage endpoint and the origin volume). The present application provides techniques for a cache to retrieve data from the origin volume and/or from the cloud storage endpoint to satisfy a single data request. An infrastructure that would enable such data retrieval may provide improved performance by providing load distribution, reduced latency by locating data closer to the point of client access, and/or enhanced availability by serving cached data in a network disconnection situation.

FIG. 1 is a block diagram illustrating a clustered network environment 100 in accordance with one or more aspects of the present disclosure. The clustered network environment 100 includes data storage systems 102 and 104 that are coupled over a cluster fabric 106, such as a computing network embodied as a private InfiniBand, Fibre Channel (FC), or Ethernet network facilitating communication between the data storage systems 102 and 104 (and one or more modules, components, etc. therein, such as nodes 116 and 118, for example). The data storage systems 102 and 104 may be computing devices that interact with other components via, for example, the cluster fabric 106. It will be appreciated that while two data storage systems 102 and 104 and nodes 116 and 118 are illustrated in FIG. 1, any suitable number of such components is contemplated.
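The split retrieval described above, where a single request is satisfied partly from the origin source and partly from the cloud storage endpoint, might be sketched as follows. The per-block tiering map and the dictionary-backed stores are assumptions made for illustration, not the actual data fabric interfaces.

```python
# Non-authoritative sketch of a cache satisfying one request from two
# backends: blocks resident at the origin are read there, while blocks
# tiered out to cloud object storage are fetched from the cloud endpoint.
# The tier map and dict-backed stores are illustrative assumptions.

def fetch(blocks, cache, origin, cloud, tier_map):
    """Return the requested blocks, filling cache misses from whichever
    location (origin volume or cloud storage endpoint) holds each block."""
    result = {}
    for block in blocks:
        if block in cache:  # cache hit: serve from the working set
            result[block] = cache[block]
            continue
        # Cache miss: consult the tiering map to locate the block.
        source = cloud if tier_map.get(block) == "cloud" else origin
        data = source[block]
        cache[block] = data  # retain in the working set for later requests
        result[block] = data
    return result

origin = {"b0": b"hot"}
cloud = {"b1": b"cold"}
tier_map = {"b0": "origin", "b1": "cloud"}
cache = {}

out = fetch(["b0", "b1"], cache, origin, cloud, tier_map)
print(out == {"b0": b"hot", "b1": b"cold"})  # True: one request, two sources
```

On a repeat request, both blocks are served from the cache without touching either backend, illustrating the latency and availability benefits noted above.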
In an example, nodes 116, 118 include storage controllers (e.g., node 116 may include a primary or local storage controller and node 118 may include a secondary or remote storage controller) that provide client devices, such as host devices 108 and 110, with access to data stored within data storage devices 128 and 130. Similarly, unless specifically provided otherwise herein, the same is true for other modules, elements, features, items, etc. referenced herein and/or illustrated in the accompanying drawings. That is, a particular number of components, modules, elements, features, items, etc. disclosed herein is not meant to be interpreted in a limiting manner. It will be further appreciated that clustered networks are not limited to any particular geographic areas and can be clustered locally and/or remotely. Thus, in an embodiment a clustered network can be distributed over a plurality of storage systems and/or nodes located in a plurality of geographic locations, while in an embodiment a clustered network can include data storage systems (e.g., 102, 104) residing in a same geographic location (e.g., in a single onsite rack of data storage devices).

In the example illustrated in FIG. 1, one or more host devices 108, 110, which may include, for example, client devices, personal computers (PCs), computing devices used for storage (e.g., storage servers), and other computers or peripheral devices (e.g., printers), are coupled to the respective data storage systems 102, 104 by storage network connections 112, 114. A network connection 112, 114 may include a local area network (LAN) or wide area network (WAN), for example, that utilizes Network Attached Storage (NAS) protocols, such as a Common Internet File System (CIFS) protocol or a Network File System (NFS) protocol to exchange data packets, a Storage Area Network (SAN) protocol, such as Small Computer System Interface (SCSI) or Fibre Channel Protocol (FCP), or an object protocol, such as AMAZON S3®, etc.
The host devices 108, 110 may be general-purpose computers running applications, and may interact with the data storage systems 102, 104 using a client/server model for exchange of information. For example, the host device 108 may request data from the data storage system 102, 104 (e.g., data on a storage device managed by a network storage controller configured to process I/O commands issued by the host device for the storage device), and the data storage system 102, 104 may return results of the request to the host device via the storage network connection 112, 114.

The nodes 116, 118 on clustered data storage systems 102, 104 may include network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, cloud storage (e.g., a cloud storage endpoint 160 may be stored within a data cloud), etc., for example. Such a node in the clustered network environment 100 may be a device attached to the network as a connection point, redistribution point, or communication endpoint, for example. A node may send, receive, and/or forward information over a network communications channel, and may include any device that meets any or all of these criteria. An example of a node may be a data storage and management server attached to a network, where the server may include a general purpose computer or a computing device particularly configured to operate as a server in a data storage and management system. In an example, a first cluster of nodes, such as the nodes 116, 118 (e.g., a first set of storage controllers configured to provide access to a first storage aggregate including a first logical grouping of one or more storage devices), may be located at a first storage site. A second cluster of nodes, not illustrated, may be located at a second storage site (e.g., a second set of storage controllers configured to provide access to a second storage aggregate including a second logical grouping of one or more storage devices).
The first cluster of nodes and the second cluster of nodes may be configured according to a disaster recovery configuration, where a surviving cluster of nodes provides switchover access to storage devices of a disaster cluster of nodes in the event a disaster occurs at a disaster storage site comprising the disaster cluster of nodes (e.g., the first cluster of nodes provides client devices with switchover data access to storage devices of the second storage aggregate in the event a disaster occurs at the second storage site). As illustrated in the clustered network environment 100, nodes 116, 118 may include various functional components that coordinate to provide a distributed storage architecture for the cluster. For example, the nodes may include network modules 120, 122 and disk modules 124, 126. The network modules 120, 122 may be configured to allow the nodes 116, 118 (e.g., network storage controllers) to connect with host devices 108, 110 over the storage network connections 112, 114, for example, allowing the host devices 108, 110 to access data stored in the distributed storage system. Further, the network modules 120, 122 may provide connections with one or more other components through the cluster fabric 106. For example, in FIG. 1, the network module 120 of the node 116 may access a second data storage device by sending a request through the disk module 126 of the node 118. Disk modules 124, 126 may be configured to connect one or more data storage devices 128, 130, such as disks or arrays of disks, flash memory, or some other form of data storage, to the nodes 116, 118. The nodes 116, 118 may be interconnected by the cluster fabric 106, for example, allowing respective nodes in the cluster to access data on data storage devices 128, 130 connected to different nodes in the cluster. Disk modules 124, 126 may communicate with the data storage devices 128, 130 according to a SAN protocol, such as SCSI or FCP, for example.
As seen from an operating system on nodes 116, 118, the data storage devices 128, 130 may appear as locally attached to the operating system. Accordingly, different nodes 116, 118, etc. may access data blocks through the operating system, rather than expressly requesting abstract files. It should be appreciated that, while the clustered network environment 100 illustrates an equal number of network and disk modules, other embodiments may include a differing number of these modules. For example, there may be a plurality of network and disk modules interconnected in a cluster that does not have a one-to-one correspondence between the network and disk modules. That is, different nodes may have a different number of network and disk modules, and the same node may have a different number of network modules than disk modules. Further, host devices 108, 110 may be networked with the nodes 116, 118 in the cluster, over the storage networking connections 112, 114. As an example, respective host devices 108, 110 that are networked to a cluster may request services (e.g., exchanging of information in the form of data packets) of nodes 116, 118 in the cluster, and the nodes 116, 118 may return results of the requested services to the host devices 108, 110. In an embodiment, the host devices 108, 110 may exchange information with the network modules 120, 122 residing in the nodes 116, 118 (e.g., network hosts) in the data storage systems 102, 104. In an embodiment, the data storage devices 128, 130 include volumes 132, which may include an implementation of storage of information onto disk drives or disk arrays or other storage (e.g., flash) as a file system for data, for example. In an example, a disk array may include all traditional hard drives, all flash drives, or a combination of traditional hard drives and flash drives.
Volumes may span a portion of a disk, a collection of disks, or portions of disks, for example, and typically define an overall logical arrangement of file storage on disk space in the storage system. In an embodiment, a volume may include stored data as one or more files that reside in a hierarchical directory structure within the volume. Volumes are typically configured in formats that may be associated with particular storage systems, and respective volume formats typically include features that provide functionality to the volumes, such as providing an ability for volumes to form clusters. For example, a first storage system may utilize a first format for its volumes, and a second storage system may utilize a second format for its volumes, where the first and second formats are different from each other. In the clustered network environment 100, the host devices 108, 110 may utilize the data storage systems 102, 104 to store and retrieve data from the volumes 132. For example, the host device 108 may send data packets to the network module 120 in the node 116 within data storage system 102. The node 116 may forward the data to the data storage device 128 using the disk module 124, where the data storage device 128 includes a volume 132A. In this example, the host device 108 may access the volume 132A, to store and/or retrieve data, using the data storage system 102 connected by the storage network connection 112. Further, the host device 110 may exchange data with the network module 122 in the node 118 within the data storage system 104 (e.g., which may be remote from the data storage system 102). The node 118 may forward the data to the data storage device 130 using the disk module 126, thereby accessing volume 132B associated with the data storage device 130.
While host device 108 is illustrated as communicating with data storage system 102, and similarly host device 110 with data storage system 104, the host devices 108, 110 may communicate via the network (e.g., via fabric 106) with other storage systems without requiring traversal through storage systems 102, 104, respectively (as just one example). Thus, if storage system 102 is down, then the host device 108 may still access data via storage system 104 or some other cluster at another site. The data storage system 102, 104 may further provide automated tiering of data to lower-cost object storage tiers, either on or off premises, to aid in lowering the cost of storage. For example, the data storage system 102, 104 may deliver the benefits of cloud economies by tiering to the cloud storage endpoint 160 (e.g., public clouds and/or private clouds). The data storage system 102, 104 may be associated with a cloud tier including a cloud storage endpoint 160. The cloud storage endpoint 160 may be an external object store that is associated with a local tier (e.g., the data storage device 128 including the volume 132A), creating a composite collection of disks. The external object store may store one or more objects. The term “object” may refer to a chunk of data (having one or more blocks of data and/or metadata) that is written together in an object storage tier. Additionally or alternatively, the term “object” may refer to content or a data object. The cloud storage endpoint 160 may store cloud objects of any size. In some examples, the cloud storage endpoint 160 may store data as 4 KB blocks, and each object stored in the cloud storage endpoint 160 may be composed of 1,024 4-kilobyte (KB) blocks. To illustrate an example, the node 116 may tier data stored in the volume 132A to the cloud storage endpoint 160.
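As a non-limiting illustration of the block-to-object arithmetic described above (4 KB blocks, 1,024 blocks per cloud object), the following Python sketch computes which cloud object, and which byte offset within that object, a given tiered block would occupy. The function name and layout are hypothetical and not part of the disclosure:

```python
BLOCK_SIZE = 4 * 1024      # 4 KB blocks, as in the example above
BLOCKS_PER_OBJECT = 1024   # each cloud object holds 1,024 blocks

def object_slot(tiered_block_index: int) -> tuple[int, int]:
    """Return (object number, byte offset within that object) for the
    N-th block tiered to the cloud storage endpoint."""
    obj = tiered_block_index // BLOCKS_PER_OBJECT
    offset = (tiered_block_index % BLOCKS_PER_OBJECT) * BLOCK_SIZE
    return obj, offset
```

Under this assumed layout, block 0 lands at the start of object 0 and block 1,024 at the start of object 1.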
For example, the node 116 may identify infrequently used data stored in the volume 132A and move that data to a lower-cost object storage in the cloud storage endpoint 160, leaving frequently used data on the higher-performing, data center storage system. For example, the frequently used data may remain in high-performance solid state drives (SSDs) or hard disk drives (HDDs) of the volume 132A, allowing the system to reclaim space on the primary storage (e.g., volume 132A). Although the examples provided may discuss a volume including one or more SSDs and may discuss SSD addresses, it should be understood that this discussion extends to a volume including one or more HDDs and HDD addresses. Volumes may take advantage of the tiering by keeping active (“hot”) data on the local tier and tiering inactive (“cold”) data to the cloud storage endpoint 160. The volume 132A may include one or more SSDs 168 (e.g., SSD 168A, . . . , SSD 168N). When a block 166 is written to an SSD 168A of the volume 132A, the node 116 may assign the block a temperature value indicating that it is hot. Over time, the node 116 may scan the blocks stored in the SSDs 168 (e.g., SSD 168A, . . . , SSD 168N) and, based on the tiering policies, may keep the scanned block as hot (indicating that the block is frequently accessed), may change the block from hot to cool (indicating that the block is infrequently accessed), may mark cold blocks for tiering to the cloud storage endpoint 160, and/or may tier marked blocks to the cloud storage endpoint 160. The node 116 may concatenate marked blocks stored on the volume 132A (e.g., SSD 168A, . . . , SSD 168N) into an object 170, and when the number of blocks in the object 170 reaches a threshold number (e.g., 1,024), the node 116 may write the object 170 to the cloud storage endpoint 160. After the block 166 is moved to the cloud storage endpoint 160, the block 166 may be removed from the SSD 168A.
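The temperature scan described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the temperature states, the single-pass cooling policy, and all names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Block:
    data: bytes
    temperature: str = "hot"   # assumed states: hot -> cool -> cold
    accessed: bool = False     # whether the block was read since the last scan

def scan(blocks):
    """One scan pass: re-warm accessed blocks, cool untouched ones, and
    return blocks that reached "cold" (candidates to mark for tiering)."""
    marked = []
    for b in blocks:
        if b.accessed:
            b.temperature = "hot"     # frequently accessed: keep hot
        elif b.temperature == "hot":
            b.temperature = "cool"    # infrequently accessed
        else:
            b.temperature = "cold"    # mark for tiering to the cloud tier
            marked.append(b)
        b.accessed = False            # reset for the next scan interval
    return marked
```

In this sketch, marked blocks would then be concatenated into an object and written out once 1,024 of them accumulate, as the paragraph above describes.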
As shown in FIG. 1, the object 170 is stored at a cloud address 172 and includes a plurality of blocks including the block 166. The node 116 may continue to scan blocks stored in the volume 132A to determine whether to tier data stored in the volume 132A out to the cloud storage endpoint 160. Although the following example describes the data storage system 102 tiering data from the volume 132A to the cloud storage endpoint 160, it should be understood that the data storage system 104 may perform similar actions as those discussed in the present disclosure in relation to the data storage system 102 to tier data from the volume 132B to the cloud storage endpoint 160 or to another cloud storage endpoint (or other cluster) not shown. FIG. 2 is an illustrative example of a data storage system 200 (e.g., data storage system 102, 104 in FIG. 1), in accordance with one or more aspects of the present disclosure. The data storage system 200 includes a node 202 (e.g., nodes 116, 118 in FIG. 1), and a data storage device 234 (e.g., data storage devices 128, 130 in FIG. 1). The node 202 may be a general purpose computer, for example, or some other computing device particularly configured to operate as a storage server. A host device 205 (e.g., host device 108, 110 in FIG. 1) may be connected to the node 202 over a network 216, for example, to provide access to files and/or other data stored on the data storage device 234. The node 202 may include a storage controller that provides client devices, such as the host device 205, with access to data stored within data storage device 234. The data storage device 234 can include mass storage devices, such as disks 224, 226, 228 of a disk array 218, 220, 222. It will be appreciated that the techniques and systems described herein are not limited by the example illustrated in FIG. 2.
For example, disks 224, 226, 228 may include any type of mass storage devices, including but not limited to magnetic disk drives, flash memory (e.g., SSDs), and any other similar media adapted to store information, including, for example, data (D) and/or parity (P) information. The node 202 includes one or more processors 204, a memory 206, a network adapter 210, a cluster access adapter 212, and a storage adapter 214 interconnected by a system bus 242. The network adapter 210 may correspond to and/or be an example of the network module 120 in FIG. 1. The storage adapter 214 may correspond to and/or be an example of the disk module 124 in FIG. 1. The data storage system 200 also includes an operating system 208 installed in the memory 206 of the node 202 that can, for example, implement a Redundant Array of Independent (or Inexpensive) Disks (RAID) optimization technique, or error correction coding (to name just a few examples), to optimize a reconstruction process of data of a failed disk in an array. The operating system 208 may manage communications for the data storage system 200, and communications between other data storage systems that may be in a clustered network, such as attached to a cluster fabric 215 (e.g., cluster fabric 106 in FIG. 1). Thus, the node 202, such as a network storage controller, can respond to host device requests to manage data on the data storage device 234 (e.g., or additional clustered devices) in accordance with these host device requests. The operating system 208 may include several modules or “layers” executed by one or both of the network module 120 or the disk module 124. These layers may include a file system 240 that keeps track of a hierarchical structure of the data stored in the storage devices and manages read/write operations (e.g., executes read/write operations on storage in response to client requests).
The operating system 208 may establish one or more file systems on the data storage system 200, where a file system can include software code and data structures that implement a persistent hierarchical namespace of files and directories, for example. The file system may logically organize stored information as a hierarchical structure for files/directories/objects at the storage devices. Each “on disk” file may be implemented as a set of blocks configured to store information, such as text, whereas a directory may be implemented as a specially formatted file in which other files and directories are stored. These data blocks may be organized within a volume block number (VBN) space that is maintained by a file system of the storage operating system 208. The file system may also assign each data block in the file a corresponding “file offset” or a file block number (FBN). The file system may assign sequences of FBNs on a per-file basis, whereas VBNs may be assigned over a larger volume address space. The file system may organize the data blocks within the VBN space as a logical volume. The file system may be composed of a contiguous range of VBNs from zero to n−1, for a file system of size n blocks, where n is a number greater than 1. In an example, when a new data storage device (not shown) is added to a clustered network system, the operating system 208 is informed where, in an existing directory tree, new files associated with the new data storage device are to be stored. This is often referred to as “mounting” a file system. In the example data storage system 200, memory 206 may include storage locations that are addressable by the processors 204 and network adapter 210, cluster access adapter 212, and/or storage adapter 214 for storing related software application code and data structures.
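The per-file FBN assignment described above can be illustrated with a short sketch. The 4 KB block size is assumed from the earlier examples, and the helper names are hypothetical:

```python
BLOCK_SIZE = 4096  # assumed block size, matching the 4 KB blocks above

def file_block_number(file_offset: int) -> int:
    """FBN of the block containing a given byte offset within a file."""
    return file_offset // BLOCK_SIZE

def fbn_range(file_offset: int, length: int) -> list[int]:
    """FBNs spanned by an (offset, length) request into a single file,
    reflecting the per-file FBN sequences described above."""
    first = file_block_number(file_offset)
    last = file_block_number(file_offset + length - 1)
    return list(range(first, last + 1))
```

A read of 200 bytes starting at offset 4,000 would straddle two blocks, FBNs 0 and 1, even though it is smaller than one block.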
The processors 204, the network adapter 210, the cluster access adapter 212, and/or the storage adapter 214 may, for example, include processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The operating system 208, portions of which are typically resident in the memory 206 and executed by the processing elements, functionally organizes the storage system by, among other things, invoking storage operations in support of a file service implemented by the storage system. It will be apparent that other processing and memory mechanisms, including various computer readable media, may be used for storing and/or executing application instructions pertaining to the techniques described herein. For example, the operating system may also utilize one or more control files (not shown) to aid in the provisioning of virtual machines. The network adapter 210 includes the mechanical, electrical, and signaling circuitry for connecting the data storage system 200 to the host device 205 over the network 216, which may include, among other things, a point-to-point connection or a shared medium, such as a LAN. The network adapter 210 may also connect the data storage system 200 to the cloud tier (e.g., cloud storage endpoint 160 in FIG. 1). The host device 205 may be a general-purpose computer configured to execute applications. As described above, the host device 205 may interact with the data storage system 200 in accordance with a client/host model of information delivery. The storage adapter 214 cooperates with the operating system 208 executing on the node 202 to access information requested by the host device 205 (e.g., access data on a storage device managed by a network storage controller). The information may be stored on any type of attached array of writeable media, such as magnetic disk drives, flash memory, and/or any other similar media adapted to store information.
In the example data storage system 200, the information may be stored in data blocks on the disks 224, 226, 228. The storage adapter 214 can include input/output (I/O) interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a storage area network (SAN) protocol (e.g., Small Computer System Interface (SCSI), iSCSI, hyperSCSI, Fibre Channel Protocol (FCP)). The information may be retrieved by the storage adapter 214 and, in some examples, processed by the one or more processors 204 (or the storage adapter 214 itself) prior to being forwarded over the system bus 242 to the network adapter 210 (and/or the cluster access adapter 212 if sending to another node in the cluster), where the information is formatted into a data packet and returned to the host device 205 over the network 216 (and/or returned to another node attached to the cluster over the cluster fabric 215). In some examples, the network adapter 210 may format the information into a data packet and forward the data packet to the cloud tier (e.g., cloud storage endpoint 160 in FIG. 1). In an embodiment, storage of information on disk arrays 218, 220, 222 can be implemented as one or more storage volumes 230, 232 that include a cluster of disks 224, 226, 228 defining an overall logical arrangement of disk space. The disks 224, 226, 228 that include one or more volumes may be organized as one or more RAID groups (while in other examples, error correction coding may be used). As an example, volume 230 includes an aggregate of disk arrays 218 and 220, which include the cluster of disks 224 and 226. In an example, to facilitate access to disks 224, 226, 228, the operating system 208 may implement a file system (e.g., a write anywhere file system) that logically organizes the information as a hierarchical structure of directories and files on the disks.
Accordingly, respective files may be implemented as a set of disk blocks configured to store information, whereas directories may be implemented as specially formatted files in which information about other files and directories are stored. Whatever the underlying physical configuration within this data storage system 200, data can be stored as files within physical and/or virtual volumes, which can be associated with respective volume identifiers, such as file system identifiers (FSIDs), which can be 32 bits in length in one example. A physical volume corresponds to at least a portion of physical storage devices whose address, addressable space, location, etc. does not change, such as at least some of one or more data storage devices 234 (e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID) system). In some examples, the location of the physical volume does not change in that the (range of) address(es) used to access it may generally remain constant. A virtual volume, in contrast, may be stored over an aggregate of disparate portions of different physical storage devices. The virtual volume may be a collection of different available portions of different physical storage device locations, such as some available space from each of the disks 224, 226, and/or 228, and is not “tied” to any one particular storage device. Accordingly, a virtual volume may be said to include a layer of abstraction or virtualization, which allows it to be resized and/or flexible in some regards. Further, a virtual volume may include one or more logical unit numbers (LUNs) 238, directories 236, and/or Qtrees 235. Among other things, these features may allow the disparate memory locations within which data is stored to be identified, for example, and grouped as a data storage unit. For example, the LUNs 238 may be characterized as constituting a virtual disk or drive upon which data within the virtual volume may be stored within the aggregate.
LUNs may be referred to as virtual drives, such that they emulate a hard drive from a general purpose computer, while they actually include data blocks stored in various parts of a volume. One or more data storage devices 234 may have one or more physical ports, where each physical port may be assigned a target address (e.g., a SCSI target address). To represent respective volumes stored on a data storage device, a target address on the data storage device 234 may be used to identify one or more LUNs 238. For example, when the node 202 connects to a volume 230, 232 through the storage adapter 214, a connection between the node 202 and the one or more LUNs 238 underlying the volume is created. Additionally or alternatively, respective target addresses may identify multiple LUNs, such that a target address may represent multiple volumes. The I/O interface, which may be implemented as circuitry and/or software in the storage adapter 214 or as executable code residing in memory 206 and executed by the processors 204, for example, may connect to volume 230 by using one or more addresses that identify the one or more LUNs 238. Data stored in a volume (e.g., volume 230, 232) may also be stored in a cache, which may store frequently accessed portions of a source of data in a way that allows the data to be served faster and/or more efficiently than it would be by fetching the data from the source. Referring back to FIG. 1, data stored on the volume 132 may be cached at a cache volume. A cache volume may provide a remote caching capability for an origin volume (e.g., volume 132A, volume 132B, etc.), simplifying file distribution, reducing WAN latency, and/or lowering WAN bandwidth costs. In some examples, the cache volume may be beneficial in read-intensive environments where data is accessed more than once and is shared by multiple hosts. The cache volume may be populated as the host device reads data from the origin volume. For example, the host device may request data from the cache volume.
On a first read of any data, the cache volume may fetch the requested data from the origin volume. The requested data may be returned to the cache volume, stored in the cache volume, and then passed back to the host device. As reads are passed through the cache volume, the cache volume may fill up by storing the requested data. In an example, the cache volume may write the data locally in the cache volume. If the host device requests data that is stored in the cache volume, the cache volume may serve the requested data back to the host device without spending time and resources accessing the original source of the data (e.g., the original volume). Accordingly, the cache volume may serve frequently accessed data directly to the host device without fetching the data from the origin volume again. The cache volume may serve data faster, if for example, the data storage device on which the cache volume resides is faster than the data storage device on which the origin volume resides. In an example, the cache volume may have faster storage (e.g., FC versus SATA), increased processing power, and/or increased (or faster) memory compared to the origin volume. In another example, the storage space for the cache volume may be physically closer to the host device, such that it does not take as long to reach the data. The cache volume may provide improved performance by providing load distribution, may provide reduced latency by locating data closer to the point of client access, and/or may provide enhanced availability by serving cached data in a network disconnection situation. In some examples, the cache volume may be aware of the cloud storage endpoint160(in the example ofFIG.1) and retrieve at least some data from the cloud storage endpoint. For example, the cache volume may receive a request for data and satisfy the data request by retrieving a first subset of the data from the origin volume and a second subset of the data from the cloud storage endpoint160. 
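The read-through behavior described above (first read fetched from the origin, later reads served locally) can be sketched in a few lines. This is a minimal illustration, not the disclosed cache volume: a dictionary stands in for each volume, and the class and counter names are invented for the example.

```python
class CacheVolume:
    """Toy read-through cache of an origin volume."""

    def __init__(self, origin: dict):
        self.origin = origin      # stands in for the origin volume's data
        self.local = {}           # blocks cached locally at the cache volume
        self.origin_reads = 0     # how many times the origin was contacted

    def read(self, key):
        if key not in self.local:            # first read: fetch from origin,
            self.origin_reads += 1           # store locally, then serve
            self.local[key] = self.origin[key]
        return self.local[key]               # later reads bypass the origin
```

Two reads of the same block reach the origin only once, which is the load-reduction property the paragraph describes.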
In this example, the cache volume may bypass requesting the second subset of data from the origin volume and request the second subset of data directly from the cloud storage endpoint 160, potentially reducing latency and the time it would take to satisfy the data request. For example, the origin volume may receive a large number of requests from host devices, and by requesting the second subset of data directly from the cloud storage endpoint 160 rather than through the origin volume, the cache volume may reduce the load on the origin volume and spread the load across portions of the network. FIG. 3 is a schematic diagram 300 of a cache volume that retrieves data from an origin volume and data from a cloud storage endpoint to satisfy a single data request according to one or more aspects of the present disclosure. In the example illustrated in FIG. 3, a data storage system 302 is coupled to a data storage system 304 over a network 305. The network 305 may include, for example, a LAN or WAN. Additionally, the nodes 306 and 308 may communicate over the network 305 and/or over the cluster fabric 106 (shown in FIG. 1). The data storage system 302 and the data storage system 304 may be examples of data storage systems 102, 104 in FIG. 1 and/or data storage system 200 in FIG. 2. The data storage system 302 includes a node 306, and the data storage system 304 includes a node 308. The node 306 may be an example of node 116 in FIG. 1 and/or node 202 in FIG. 2. The node 308 may be an example of node 118 in FIG. 1 and/or node 202 in FIG. 2. The nodes 306 and 308 may be in the same cluster or a different cluster from each other. Additionally, the nodes 306, 308 may allow other nodes to access data on data storage devices 310, 312. In the present disclosure, reference to a node 306 performing an action (e.g., receiving, transmitting, determining, storing, etc.) may refer to the data storage device 310 and/or the origin volume 320 performing such action.
Likewise, reference to the data storage device 310 and/or the origin volume 320 performing an action may refer to the node 306 performing such action. Similarly, reference to a node 308 performing an action (e.g., receiving, transmitting, determining, storing, etc.) may refer to the data storage device 312 and/or the cache volume 322 performing such an action. Likewise, reference to the data storage device 312 and/or the cache volume 322 performing an action may refer to the node 308 performing such action. The nodes 306, 308 may be coupled to data storage devices 310, 312, which may be examples of data storage devices 128, 130 in FIG. 1 and/or data storage device 234 in FIG. 2. The data storage device 310 includes an origin volume 320 that is mapped to a cache volume 322, with the origin volume 320 storing the original source of data. The data storage device 312 includes the cache volume 322, which may be a temporary storage location that resides between a host device 314 and the origin volume 320. The host device 314 may be an example of host devices 108, 110 in FIG. 1 and/or host device 205 in FIG. 2. The origin volume 320 stores one or more blocks of data, and the cache volume 322 may be a destination volume that provides a remote cache of the origin volume 320. The origin volume 320 may include one or more SSDs 334 (e.g., 334A, . . . , 334N), which may refer to the local tier. The SSDs 334 may be examples of the SSDs 168A, . . . , 168N in FIG. 1 and/or the disk arrays 218, 220, 222 in FIG. 2. Each of the SSDs 334 may store data in, for example, 4 KB blocks. The SSD 334A stores a first data subset 330 at an SSD address 336 in the origin volume 320. The first data subset 330 may include one or more 4 KB blocks included in the data 316. The cache volume 322 may be a writable, persistent cache of the origin volume 320 in a location remote from the data storage device 310 on which the origin volume 320 resides. The cache volume 322 may be a sparse copy of the origin volume 320 and may store a cached subset of the data stored in the origin volume 320.
Storage in the cache volume 322 may be used efficiently by prioritizing retention of the working dataset (e.g., most recently used data). The cache volume 322 may use a protocol to communicate with the origin volume 320, where the protocol links the cache volume 322 to the origin volume 320. In an example, the protocol may be a remote access layer (RAL), which may include a feature that enables the origin volume 320 to grant the cache volume 322 a revocable read/write or read-only cache on an inode. As illustrated in FIG. 3, the cloud storage endpoint 160 may include one or more cloud storage devices 338. The cloud storage device 338 may store an object 340 at a cloud address 342 of the cloud storage device 338 in the cloud storage endpoint 160. The object 340 may be an example of the object 170 in FIG. 1. The object 340 may include a plurality of blocks including a second data subset 344. The second data subset 344 may include one or more blocks (e.g., 4 KB blocks) included in the data 316 (e.g., “cold” data). As discussed above, the cache volume 322 may be populated as the host device 314 reads data from the origin volume 320. The host device 314 may desire to retrieve the data 316 and may transmit a data request 318 for the data 316 to the node 308. The node 308 (as a cache target) may receive the data request 318 and search the cache volume 322 for the data 316. It should be understood that the data 316 may include one or more data subsets stored at the origin volume 320 and/or the cloud storage endpoint 160. The node 308 may determine, based on an identifier of the requested data, whether the requested data is stored in the cache volume 322. For example, one or more blocks included in the data 316 may be stored at the cache volume 322, one or more blocks included in the data 316 may be stored at the origin volume 320, and/or one or more blocks included in the data 316 may be stored at the cloud storage endpoint 160.
In the example illustrated in FIG. 3, the cache volume 322 does not yet store any portions (or blocks) of the data 316, and the requested data 316 may include the first data subset 330 and the second data subset 344. In the illustrated example, the first data subset 330 is stored at the origin volume 320, and the second data subset 344 is stored at the cloud storage endpoint 160. FIGS. 3 and 4 may be discussed in relation to each other to better explain data retrieval by the cache volume 322 from the origin volume 320 and the cloud storage endpoint 160. FIG. 4 is a signaling diagram illustrating a method 400 of retrieving data from an origin volume and data from a cloud storage endpoint to satisfy a single data request according to one or more aspects of the present disclosure. The method 400 may be implemented between the host device 314, the nodes 306 and 308, and the cloud storage endpoint 160 (e.g., located in the network 100). The method 400 may employ similar data retrieval techniques as described with respect to aspects of FIGS. 1, 2, 3, and/or 5. As illustrated, the method 400 includes a number of enumerated actions, but embodiments of the method 400 may include additional actions before, after, and in between the enumerated actions. In some embodiments, one or more of the enumerated actions may be omitted or performed in a different order. At action 402, the host device 314 may transmit a data request for data. The data request transmitted at action 402 may be an example of the data request 318 for the data 316 in FIG. 3. A file stored in the cache volume 322 and/or the origin volume 320 may be implemented as a set of blocks configured to store information (e.g., text). For example, the node 306, 308 may assign each data block in the file a corresponding “file offset” or file block number (FBN). The data request 318 may include an identifier of the data 316. For example, the data request 318 may include a file handle (e.g., file identifier), a file offset, and a length value of the data 316 that together may identify the data 316.
The data request 318 may be a request for X blocks (or more generally bytes) of data, where X is a positive number. The nodes 306, 308 may understand the same file handle, offset, and length value of the data and understand the request as being for X blocks of data in the file identified by the file handle, starting at the offset of the file and spanning the indicated length value. Data may be addressed or organized in other ways while remaining within the scope of the present disclosure. As discussed above, the node 308 may determine whether the cache volume 322 stores a portion (e.g., one or more blocks) of the requested data 316. In response to determining that the cache volume 322 stores at least some blocks included in the requested data 316, the node 308 (e.g., a cache target) may return the applicable blocks to the host device 314. In response to determining that the cache volume 322 does not store all the blocks included in the requested data 316, the node 308 may attempt to retrieve such portions of the requested data 316 from the origin volume 320. It may be desirable to retrieve only those portions of the requested data 316 that are stored at the origin volume 320 (e.g., SSDs 334A, . . . , 334N) and request the remaining portion of the requested data 316 directly from the cloud storage endpoint 160. Referring back to FIG. 4, at action 404, the node 308 may transmit a local data request for the data 316 to the node 306. In response to determining that the cache volume 322 does not store all the blocks included in the requested data 316, the node 308 may transmit the local data request for the data to the node 306. The node 306 may receive the local data request from the node 308. The local data request may specify to the receiving node 306 to retrieve those portions of the requested data 316 that are stored locally at the data storage device 310 (e.g., stored in the SSDs 334A, . . .
,334N of the origin volume320) and provide an object name of the object and one or more cloud addresses at which the remaining portion(s) of the requested data316are stored at the cloud storage endpoint160. Referring back toFIG.3, the node306may determine that the first data subset330is stored at the SSD address336and that the second data subset344(the remaining portion of the requested data316that is not stored in the origin volume320) is stored at the cloud address342of the cloud storage endpoint160. In an example, the SSD address336may be an SSD physical volume block number (PVBN) in the origin volume320, and the cloud address342may be a cloud PVBN. In some examples, the file system240allocates and frees blocks to and from a virtual volume of an aggregate. The aggregate, as discussed above, may be a physical volume including one or more groups of storage devices, such as RAID groups, underlying one or more virtual volumes of the storage system. The aggregate may have its own PVBN space and maintain metadata, such as block allocation bitmap structures, within the PVBN space. Each virtual volume may have its own virtual volume block number (VVBN) space and may maintain metadata, such as block allocation bitmap structures, within that VVBN space. PVBNs may be used as block pointers within buffer trees of files stored in a virtual volume. Systems and methods for creating or using PVBNs are described in further detail in U.S. patent application Ser. No. 14/994,924 filed Jan. 13, 2016, entitled "METHODS AND SYSTEMS FOR EFFICIENTLY STORING DATA," which is incorporated herein by reference in its entirety. Referring toFIG.4, at action406, responsive to the local data request from the node308, the node306may transmit a response including the first data subset330, an SSD address of the first data subset330, a name of the object storing the second data subset344, and a cloud address of the second data subset344.
Referring back toFIG.3, an SSD address of the first data subset330may identify the location at which the first data subset330is stored at the SSD334A. The node306may use the SSD address336of the SSD334A to find the first data subset330. The object name may provide an indication to the node308regarding how to read the object. A cloud address of the second data subset344may be, for example, a cloud PVBN that identifies the location at which the second data subset344is stored at the cloud storage endpoint160. In some examples, the origin volume320may use the cloud address to retrieve the second data subset344from the cloud storage endpoint160. The node308may receive the response from the node306in relation to the action406. Responsive to receiving the response from the node306, the node308may determine that the first data subset330is stored at the SSD address of the SSD334A and that the second data subset344is stored at the cloud address at the cloud storage device338, which is in the cloud storage endpoint160. It should be understood that the node306may include one or more SSD addresses of the first data subset330, one or more names of objects storing the second data subset344, and one or more cloud addresses of the second data subset344. Additionally, it should be understood that the node306may include in a response at least some (but not necessarily all) of: the first data subset330, an SSD address of the first data subset330, a name of the object storing the second data subset344, and a cloud address of the second data subset344. In an example, the node306may transmit each of the data subsets of the requested data stored at the origin volume320and an SSD address of each of the respective data subsets. The data316may also, as already noted, include one or more data subsets of the data316stored at the cloud storage endpoint160.
In an example, the node306may transmit each of the cloud addresses at which a respective data subset of the data316is stored at the cloud storage endpoint160, and for each of these cloud addresses, the node308may transmit a request for the data stored at the respective cloud address to the cloud storage endpoint160(e.g., as noted at action410below). Referring back toFIG.4, at action408, the node308may store the first data subset330into the cache volume322and store the object name and cloud address of the second data subset344into the cache volume322. In some examples, the cache volume322may track the first data subset330by writing the data that the cache volume322receives at the correct offset for the inode it is attempting to fetch the data from. The cache volume322may determine the local address in the cache volume322and may disregard the origin address. The cache volume322may accordingly return the data to the host device314in response to a request from the host device314for the data. In some examples, a cloud PVBN may contain a bin number (e.g., three bits) indicating that the PVBN is for the cloud storage endpoint160, an object identifier providing a unique identifier (e.g., 34-bit value) of the object, and a slot number represented as a P-bit value (e.g., P=10). The slot number may indicate the location of a block within the object. In some examples, an object may contain 1,024 4K blocks such that 1,024 object PVBNs may have the same object identifier, but different slot numbers from each other. An object identifier used on the origin volume320may already be used by an aggregate on which the cache volume322resides. As a result, the cache volume322may assign the second data subset344a local object identifier that identifies the second data subset344in the aggregate on which the cache volume322resides.
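The cloud PVBN layout described above (a 3-bit bin number, a 34-bit object identifier, and a P-bit slot number with P=10) can be illustrated with a pack/unpack sketch. The field widths come from the passage; the bit ordering and function names are assumptions for illustration only.

```python
# Illustrative cloud PVBN packing: [bin (3 bits) | object id (34 bits) | slot (10 bits)].
# The slot locates one 4K block inside a 1,024-block object, so 1,024 PVBNs can
# share an object identifier while differing in slot number.

BIN_BITS, OID_BITS, SLOT_BITS = 3, 34, 10

def pack_cloud_pvbn(bin_no, object_id, slot):
    assert 0 <= bin_no < (1 << BIN_BITS)
    assert 0 <= object_id < (1 << OID_BITS)
    assert 0 <= slot < (1 << SLOT_BITS)
    return (bin_no << (OID_BITS + SLOT_BITS)) | (object_id << SLOT_BITS) | slot

def unpack_cloud_pvbn(pvbn):
    slot = pvbn & ((1 << SLOT_BITS) - 1)
    object_id = (pvbn >> SLOT_BITS) & ((1 << OID_BITS) - 1)
    bin_no = pvbn >> (OID_BITS + SLOT_BITS)
    return bin_no, object_id, slot

pvbn = pack_cloud_pvbn(bin_no=1, object_id=42, slot=7)
print(unpack_cloud_pvbn(pvbn))  # (1, 42, 7)
```

A node examining such an address can tell from the bin number alone that the block lives in the cloud tier, then use the object identifier and slot to locate the exact 4K block within the object.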
The cache volume322may create a mapping from the object identifier received from the origin volume320to the local object identifier and save the object name under the local object identifier. The mapping may allow the cache volume322to determine whether it has the object name stored for other cloud addresses (e.g., cloud PVBNs) using the same object identifier received from the origin volume320. The information about the object may then be stored under the local object identifier in the object information metafile of the cache volume322's aggregate. Based on receiving the cloud address, the node308knows where the second data subset344is stored at the cloud storage endpoint160and may directly request the second data subset344from the cloud storage endpoint160. At action410, the node308may transmit a request for the second data subset344stored at the cloud address of the cloud storage device338located at the cloud storage endpoint160. If the cache volume322is scheduled to read blocks directly from the cloud storage endpoint160but has lost connectivity to the object store, then the cache volume322may instead transmit to the origin volume320a request to fetch the block from the cloud storage endpoint160rather than returning an error message that the data was unavailable. In an example, if the aggregate of the mirror volume is cloud mirrored, the cache volume322may send the request only if connectivity was lost to both the primary and the mirror. The cloud storage endpoint160may receive the request and retrieve the second data subset344stored at the cloud address. At action412, responsive to receiving the request for the second data subset344stored at the cloud address, the cloud storage endpoint160may transmit a response including the second data subset344to the node308. The node308may receive the response including the second data subset344from the cloud storage endpoint160.
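The object-identifier remapping described above, where an origin-side object identifier may collide with one already used by the cache's aggregate and is therefore translated to a fresh local identifier, can be sketched as follows. The class and method names, and the dictionary-based metafile, are illustrative assumptions rather than the disclosure's data structures.

```python
# Hypothetical sketch of origin-to-local object identifier remapping: the cache
# assigns a fresh local id, records the mapping, and files the object name
# under the local id in its (here dict-based) object information metafile.

import itertools

class ObjectIdMap:
    def __init__(self, first_local_id=1000):
        self._next_local = itertools.count(first_local_id)
        self.origin_to_local = {}  # origin object id -> local object id
        self.local_info = {}       # local object id -> object name (metafile entry)

    def local_id_for(self, origin_id, object_name):
        # Reuse the mapping if this origin identifier was seen before, so other
        # cloud PVBNs carrying the same object id resolve to the same object name.
        if origin_id in self.origin_to_local:
            return self.origin_to_local[origin_id]
        local_id = next(self._next_local)
        self.origin_to_local[origin_id] = local_id
        self.local_info[local_id] = object_name
        return local_id

m = ObjectIdMap()
a = m.local_id_for(42, "obj-42-seq7")
b = m.local_id_for(42, "obj-42-seq7")
print(a == b)  # True: repeated lookups of one origin id yield one local id
```

Keeping one local identifier per origin identifier is what lets the cache recognize that several cloud PVBNs refer to blocks of the same stored object.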
The node308may determine, based on a set of cloud storage policies, whether to store the second data subset344received from the cloud storage endpoint160at the cache volume322. A user may establish the set of cloud storage policies for one or more cache volumes. For example, a cloud storage policy may specify that data associated with random reads, but not sequential reads, are to be moved back and stored in the cache volume322. In this example, a cloud storage policy may specify that data associated with sequential reads are read, but not to be moved back and stored in the cache volume322. If data is moved back and stored in the cache volume322, the data may reside in both the cloud tier and in the cache volume. When the data is requested at a later point, the cache volume322may retrieve the data locally rather than request the data from the cloud tier. In another example, a cloud storage policy may specify that data associated with sequential reads, but not random reads, are to be moved back and stored in the cache volume322. In this example, a cloud storage policy may specify that data associated with random reads are read, but not to be moved back and stored in the cache volume322. In another example, a cloud storage policy may specify that data associated with sequential reads and random reads are moved back and stored in the cache volume322. In another example, a cloud storage policy may specify that neither data associated with sequential reads nor random reads are moved back and stored in the cache volume322. If sequential reads of cloud data from the cache volume322only trigger reads from cloud and are not stored in the cache volume322, the I/O patterns may be the same and the cache volume322may consume less SSD space. If a cloud storage policy specifies that the node308is to store the second data subset344at the cache volume322, then the node308may store the second data subset344at the cache volume322.
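The four policy combinations described above (cache random reads only, sequential reads only, both, or neither) reduce to a small lookup. The policy names and the function below are assumptions for illustration; the disclosure does not name its policies.

```python
# Sketch of the cloud storage policy decision: whether a block fetched from the
# cloud tier is written back into the cache volume depends on the policy and on
# whether the read was random or sequential. Policy names are illustrative.

POLICIES = {
    "random-only":     {"random": True,  "sequential": False},
    "sequential-only": {"random": False, "sequential": True},
    "both":            {"random": True,  "sequential": True},
    "none":            {"random": False, "sequential": False},
}

def should_cache(policy_name, read_kind):
    """Return True if data fetched by a read of read_kind should be stored in the cache."""
    return POLICIES[policy_name][read_kind]

print(should_cache("random-only", "sequential"))  # False: served to the client but not cached
```

Under the "random-only" policy, sequential reads are still served to the client but are not written back, which keeps SSD consumption on the cache volume lower while leaving the I/O pattern unchanged.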
The node308may store the second data subset344to an appropriate inode offset in the cache volume322. The node308may thereafter serve a request for the second data subset344to the host device314without requesting the second data subset344from the origin volume320or the cloud storage endpoint160. Additionally, the node308may remove the cloud address of the second data subset344returned by the origin volume320(that had been received as part of action406ofFIG.4). At action414, responsive to the data request received in relation to action402, the node308may transmit a response including the first data subset330and the second data subset344to the host device314. In an example, the first data subset330and the second data subset344include all of the blocks included in the requested data. In another example, the first data subset330and the second data subset344do not include all of the blocks included in the requested data. In this example, the cache volume322may retrieve additional data subsets of the requested data (e.g., one or more data subsets stored on the origin volume320and/or the cloud storage endpoint160) using aspects discussed in the present disclosure. The node308may wait until it has received each of the requested blocks before transmitting the response to the host device314. It should be understood that the requested data316may include one or more data subsets stored at the origin volume320, the cache volume322(e.g., fetched by the node308based on an earlier request), and/or the cloud storage endpoint160. In an example, the data316may include the first data subset that is referred to in action406and may also include another data subset that is stored in the origin volume320. In this example, the response at action406may also include the other data subset that is stored in the origin volume320along with an SSD address of the other data subset.
In other words, for any given data subset that is requested by the client and stored in the origin volume320, the node306may transmit the given data subset and an SSD address of the given data subset in a response to the node308. At action408, the node308may accordingly store each of the given data subsets that were stored at the origin into the cache volume and track the SSD addresses of the given data subsets. In another example, the data316may include the second data subset that is referred to in actions406,408, and410and that is stored at a cloud address. The data316may also include another cloud data subset that is stored in the cloud tier. In this example, the response at action406may include a name of an object storing the other cloud data subset that is stored in the cloud tier along with a cloud address of the other cloud data subset. In other words, for any given cloud data subset that is requested by the client and stored in the cloud tier, the node306may transmit the name of the object storing the given cloud data subset and a cloud address of the given cloud data subset in a response to the node308. At action408, the node308may accordingly store each of the object names and the cloud addresses associated with the cloud data subsets that were provided in the response. At action410, the node308may accordingly transmit a request for each of the given cloud data subsets stored at their respective cloud addresses. An advantage of identifying the cloud addresses is that the node308may reuse these addresses to satisfy later requests from the host device314by retrieving data from the cloud storage endpoint160without requesting the data from the node306. As discussed above, the cache volume322may assign the second data subset344a local object identifier that identifies the second data subset344in the aggregate on which the cache volume322resides.
The cache volume322may create a mapping from the object identifier received from the origin volume320to the local object identifier and save the object name under the local object identifier. An object may be stored in a filesystem and store multiple blocks (e.g., 1,024 blocks). Accordingly, many references (e.g., more than 1,024) may point to the object from the filesystem. The cache volume322may realize that an object is invalid through various means and accordingly redirect reads to the origin volume320. In some examples, as the data stored in the origin volume320is tiered to the cloud storage endpoint160, blocks of an object stored on the origin volume320may be removed, and the object may become fragmented. Over time, references to the object may decrease and reach a threshold. If the number of references reaches the threshold, the node306may determine to free the object. To free the object, the node306may send a command to the cache volume322to invalidate its information about that object. The cache volume322may receive the command and, in response, mark the object as invalid in the cache volume322's aggregate object information metafile and remove the mapping entry of that object identifier received from the origin volume320to the local object identifier. On a subsequent read request for the object, the cache volume322may determine that the object is invalid, and the read may be redirected to the origin volume320. The command to the cache volume322to invalidate its information about an object may be performed to speed up the redirection process and may be unnecessary for correctness. If the origin volume320determines that the cache volume322is unavailable, the origin volume320may proceed with freeing the object. If the cache volume322tries to read the object, then the cache volume322will receive an object not found error, mark the object as invalid in its metadata, remove the object identifier mapping, and direct the read to the origin volume320.
In some examples, if an object is marked as invalid, the node308does not immediately free the object in the cache volume322or clean up the corresponding object information metafile because the container file may still contain object PVBNs referencing the object. If the node308were to free the object information, then the object identifier may be reused and those old PVBNs may direct the cache volume322to the wrong data. To avoid this, the cache volume322may wait until those stale PVBNs are freed before freeing the object information and the object identifier. To expedite this process, the node308may scan the cache volumes and free the PVBNs of invalid objects rather than performing tiering and defragmentation work. In some examples, if the cache volume322does not receive the command to invalidate an object, then the origin volume320may both free and reuse an object identifier before the cache volume322has realized that the original object is no longer valid. Although the object identifier is reused, the name of the new object may be different because an object name may contain a monotonically increasing sequence number. If the cache volume322transmits a request to the origin volume320to fetch a block and the origin volume320responds with a reused object identifier, then the cache volume322may determine that the object identifier mapping that it has is stale by comparing the sequence number received from the origin volume320with the sequence number that it stored in the object information metafile for the corresponding local object identifier. If they do not match, then the node308may determine that the object identifier mapping is stale. The node308may then mark the old object as invalid in its object information metafile, allocate a new local object identifier for the new object, and update the mapping. FIG.5is a flow diagram of a method500of retrieving data from an origin volume and from a cloud storage endpoint according to one or more aspects of the present disclosure.
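The staleness check just described, comparing the sequence number in a reused object identifier's name against the one saved in the object information metafile, can be sketched as below. The function names and dict-based metafile are illustrative assumptions; only the compare-and-remap logic follows the passage.

```python
# Sketch of stale object-identifier detection: object names carry a
# monotonically increasing sequence number, so a mismatch between the stored
# sequence number and the one the origin reports means the origin has freed
# and reused the identifier. The cache then invalidates the old object and
# allocates a new local identifier.

def mapping_is_stale(stored_seq, origin_seq):
    """The mapping is stale if the origin's sequence number differs from ours."""
    return stored_seq != origin_seq

def refresh_mapping(metafile, origin_id, local_ids, next_local_id, origin_seq, new_name):
    """Invalidate a stale mapping and remap the origin id to a new local id."""
    old_local = local_ids[origin_id]
    if mapping_is_stale(metafile[old_local]["seq"], origin_seq):
        metafile[old_local]["valid"] = False  # mark the old object invalid
        local_ids[origin_id] = next_local_id  # remap to a fresh local id
        metafile[next_local_id] = {"name": new_name, "seq": origin_seq, "valid": True}
    return local_ids[origin_id]

metafile = {1000: {"name": "obj-42-seq7", "seq": 7, "valid": True}}
local_ids = {42: 1000}
new_local = refresh_mapping(metafile, 42, local_ids, 1001, origin_seq=9,
                            new_name="obj-42-seq9")
print(new_local, metafile[1000]["valid"])  # 1001 False
```

Because the sequence number only ever increases, a mismatch is a reliable signal that the identifier was recycled, even when the explicit invalidation command was never received.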
Blocks of the method500can be executed by a computing device (e.g., a processor, processing circuit, and/or other suitable component). For example, a data storage system such as the data storage system304may utilize one or more components, such as the node308, data storage device312, and/or the cache volume322, to execute the blocks of method500(as also discussed above with respect toFIG.4). As illustrated, the method500includes a number of enumerated blocks, but embodiments of the method500may include additional blocks before, after, and in between the enumerated blocks. In some embodiments, one or more of the enumerated blocks may be omitted or performed in a different order. At block502, the method500includes receiving, by a cache from a client, a request for data. At block504, the method500includes determining, by the cache, that a first subset of the data is stored on a storage device and that a second subset of the data is stored at a cloud address located at a cloud storage endpoint. At block506, the method500includes receiving, by the cache from the storage device, the first subset of data in response to transmitting a local data request for the data stored on the storage device. At block508, the method500includes receiving, by the cache from the cloud storage endpoint, the second subset of data in response to transmitting a request for the second subset of data stored at the cloud address to the cloud storage endpoint. At block510, the method500includes transmitting, by the cache to the client, the first and second subsets of data in response to the request for data. The present embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. 
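The sequence of blocks 502 through 510 of the method500can be condensed into a short sketch: receive a request, determine which subset is local and which is in the cloud, fetch each from its source, and return the combined result. The callables standing in for the storage device and the cloud storage endpoint are assumptions for illustration only.

```python
# Condensed sketch of method 500. The locate/fetch callables are stand-ins for
# the storage device and cloud storage endpoint described in the disclosure.

def serve_request(request, locate, fetch_local, fetch_cloud):
    # Block 504: determine where each subset of the requested data lives.
    local_addr, cloud_addr = locate(request)
    # Block 506: retrieve the first subset from the storage device.
    first = fetch_local(local_addr)
    # Block 508: retrieve the second subset directly from the cloud endpoint.
    second = fetch_cloud(cloud_addr)
    # Block 510: respond to the client with both subsets combined.
    return first + second

result = serve_request(
    "file:0-8191",
    locate=lambda req: ("ssd:block0", "cloud:pvbn42"),
    fetch_local=lambda addr: [b"subset-1"],
    fetch_cloud=lambda addr: [b"subset-2"],
)
print(result)  # [b'subset-1', b'subset-2']
```

The client sees a single response even though the two subsets traveled different paths, one from local SSD and one from the cloud tier.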
Accordingly, it is understood that any operation of the computing systems of computing architecture100may be implemented by the respective computing system using corresponding instructions stored on or in a non-transitory computer readable medium accessible by the processing system. For the purposes of this description, a tangible computer-usable or computer-readable medium can be any apparatus that can store the program for use by or in connection with the instruction execution system, apparatus, or device. The medium may include non-volatile memory including magnetic storage, solid-state storage, optical storage, cache memory, and RAM. Thus, the present disclosure provides a system, method, and machine-readable storage medium for retrieving data in a clustered networking environment. In some embodiments, the method includes receiving, by a cache from a client, a request for data; determining, by the cache, that a first subset of the data is stored on a storage device and that a second subset of the data is stored at a cloud address located at a cloud storage endpoint; receiving, by the cache from the storage device, the first subset of data in response to transmitting a local data request for the data stored on the storage device; receiving, by the cache from the cloud storage endpoint, the second subset of data in response to transmitting a request for the second subset of data stored at the cloud address to the cloud storage endpoint; and transmitting, by the cache to the client, the first and second subsets of data in response to the request for data. In some examples, the method also includes storing, by the cache, the cloud address and an association between the cloud address and the second subset of data. The cloud address may include a cloud PVBN. In some examples, the method also includes storing, by the cache, the first subset of the data, the first subset being stored at an SSD address of an SSD of the storage device. 
In some examples, the method further includes transmitting, by the cache to the storage device, the local data request indicating to the storage device to return the first subset of data to the cache and to return a cloud address at which the second subset of data is stored at the cloud storage endpoint. In some examples, the method also includes transmitting, by the cache to the cloud storage endpoint, the request for the second subset of data stored at the cloud address. The second data subset may be stored on a cloud storage device located in the cloud storage endpoint. In some examples, receiving a request for data includes receiving, by the cache from the client, the request for one or more four kilobyte (KB) data blocks, where the first subset of data includes a first four KB data block that is stored in a first object on an SSD of the storage device, and the second subset of data includes a second four KB data block that is stored at the cloud address of the cloud storage endpoint. In some examples, the storage device includes a plurality of SSDs, and at least one SSD of the plurality stores the first subset of data. 
In yet further embodiments, the non-transitory machine-readable medium has instructions for performing the method of retrieving data, including machine executable code, which when executed by at least one machine, causes the machine to: receive, by a cache from a host device, a request for data; transmit, by the cache to a storage device, a local data request specifying the storage device to return a first portion of the data stored at the storage device and to return a cloud address at which a second portion of the data is stored on a cloud storage endpoint; transmit, by the cache to the cloud storage endpoint, a request for the second portion stored at the cloud address; receive, by the cache, the first and second portions of the data; and transmit, by the cache to the host device, the first and second portions of the data in response to the request for data. In some examples, the non-transitory machine-readable medium has instructions for performing the method of retrieving data, including machine executable code, which when executed by at least one machine, causes the machine to transmit, based on a storage policy specifying storage of data retrieved from sequential reads from the cloud storage endpoint, the second subset of data to the storage device based on whether the second data subset is based on a sequential read. In some examples, the non-transitory machine-readable medium has instructions for performing the method of retrieving data, including machine executable code, which when executed by at least one machine, causes the machine to transmit, based on a storage policy specifying storage of data retrieved from random reads from the cloud storage endpoint, the second subset of data to the storage device based on whether the second data subset is based on a random read. The cloud address may include a cloud PVBN. 
In some examples, the non-transitory machine-readable medium has instructions for performing the method of retrieving data, including machine executable code, which when executed by at least one machine, causes the machine to store, by the cache, the first subset of the data, the first subset being stored at an SSD address of an SSD of the storage device. The second data subset may be stored on a cloud storage device located in the cloud storage endpoint. The cloud storage endpoint may store objects including one or more 4 KB blocks. In yet further embodiments, the computing device includes a memory containing a machine-readable medium comprising machine executable code having stored thereon instructions for performing a method of retrieving data and a processor coupled to the memory. The processor is configured to execute the machine executable code to: receive, by a cache from a client, a request for data; determine, by the cache, that a first subset of the data is stored on a storage device and that a second subset of the data is stored at a cloud address located at a cloud storage endpoint; receive, by the cache, the first subset from the storage device and the second subset from the cloud storage endpoint; and transmit, by the cache to the client, the first and second subsets of data in response to the request for data. In some examples, the processor may be configured to execute the machine executable code to receive, by the cache, the cloud address from the storage device. In some examples, the processor may be configured to execute the machine executable code to transmit, by the cache to the storage device, a local data request indicating to the storage device to return the first subset of data and to return a cloud address at which the second subset of data is stored at the cloud storage endpoint. 
In some examples, the processor may be configured to execute the machine executable code to transmit, by the cache to the cloud storage endpoint, a request for the second subset of the data stored at the cloud address after receiving the cloud address from the storage device. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 68,064 |
11943315 | DETAILED DESCRIPTION The present disclosure relates, in one or more embodiments, to storage efficient delivery of content over communication networks. As discussed above, a CDN can transmit frequent revalidation requests, relating to an item of content, to a mid-tier server. These revalidation requests can result in cache misses, at the mid-tier cache, and the mid-tier server can respond by retrieving the content from an origin, storing the content in the mid-tier cache, and responding to the revalidation request using the retrieved content (e.g., determining whether the content stored at the CDN cache has been modified at the origin, using the content retrieved from the origin). If the content stored at the CDN cache has not changed, this results in significant network traffic, and burden on the origin, from excess requests, and inefficient storage at the mid-tier cache (e.g., storage of content items to service revalidation requests, rather than to service content requests). In an embodiment, this can be improved by a mid-tier server retrieving, in some circumstances, only object freshness metadata from the origin, rather than full content, to address revalidation requests. For example, on receipt of a revalidation request (e.g., a conditional HTTP GET), a mid-tier server can determine that the requested content is not in cache, and can then retrieve from the origin, and store in the cache, only object freshness metadata (e.g., Etag and Last-Modified information). Where the content has not changed at the origin, object freshness metadata is sufficient to satisfy the CDN's revalidation request. The mid-tier server can respond to further checks based on the cached metadata. If the content changes (e.g., the Etag checksum no longer matches the content or the last-modified information indicates the content has changed), then the mid-tier server requests a full copy (including the content body) of the content object from the origin. 
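The metadata-only revalidation path described above can be sketched as a small handler: on a conditional request that misses the mid-tier cache, fetch and cache only the freshness metadata (e.g., ETag and Last-Modified), and request the full body from the origin only when the validators no longer match. All function and field names below are illustrative assumptions, not the disclosure's API.

```python
# Sketch of metadata-only revalidation at a mid-tier cache. On a miss, only
# object freshness metadata is fetched from the origin; the body is fetched
# only if the client's validator no longer matches (i.e., content changed).

def handle_revalidation(cache, key, client_etag, fetch_metadata, fetch_full):
    entry = cache.get(key)
    if entry is None:
        # Cache miss: retrieve only object freshness metadata from the origin.
        entry = {"meta": fetch_metadata(key), "body": None}
        cache[key] = entry
    if entry["meta"]["etag"] == client_etag:
        return "304 Not Modified"  # the CDN's cached copy is still fresh
    # Content changed at the origin: now fetch the full object (metadata + body).
    entry["meta"], entry["body"] = fetch_full(key)
    cache[key] = entry
    return entry["body"]

cache = {}
resp = handle_revalidation(
    cache, "/title/1.mp4", client_etag="abc",
    fetch_metadata=lambda k: {"etag": "abc", "last_modified": "t0"},
    fetch_full=lambda k: ({"etag": "def", "last_modified": "t1"}, b"new body"),
)
print(resp, cache["/title/1.mp4"]["body"])  # 304 Not Modified None
```

Note that after the 304 response the mid-tier cache holds only a few bytes of metadata, not the content body, which is exactly the storage saving the passage describes.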
As described herein, a “content body” may also be referred to as “content itself”, as it may refer to the actual contents of media that may be requested by users130(e.g., viewers of the content), such as the media content for a particular media title. In an embodiment, a mid-tier server conditionally determines whether to respond to a revalidation request, and a cache miss, by retrieving full content and metadata or only object freshness metadata. As discussed above, in some circumstances it can be beneficial to retrieve, and cache, only object freshness metadata. In other circumstances, however, it can be beneficial for the mid-tier server to retrieve and cache full content, even where the content has not changed at the origin. For example, revalidation requests from CDNs can assist a mid-tier server in keeping popular content in the mid-tier cache, to service future content requests (e.g., from other CDNs). Revalidation requests from CDNs indicate to a mid-tier cache which content is popular among users and should therefore be maintained in cache for future requests. A mid-tier server that responds to revalidation requests by caching only object freshness metadata could, in an embodiment, fail to cache popular content and fail to protect the origin from requests for popular content. As discussed further below, one or more logical processes can be used to determine when the mid-tier server should cache the full content, and when it should cache just the object metadata. This both protects the origin from excessive requests for full content (e.g., ensuring that popular content is cached at the mid-tier) and reduces the load required to handle revalidation requests (e.g., by storing only the object metadata, for less popular content). FIG.1is a block diagram illustrating a system100for content delivery using a communication network, according to at least one embodiment. The system100includes an origin102and a content library104. 
In an embodiment, the content library104is accessible at the origin102(e.g., stored locally at the origin or accessible at the origin from a networked storage location). Further, in an embodiment the content library104includes a complete catalog of content assets to be provided to users130(e.g., a complete copy of all assets for a streaming media service). In an embodiment, the users130may be viewers of visual content, such as videos, images, visual renderings, and text. Alternatively, or in addition, the users130may also be consumers of other forms of content, and the content referred to herein may also be audio content with no visual content. In one embodiment, the system100can include a single origin102and a single content library104(e.g., storing a complete copy of the content assets). Alternatively, the system100can include multiple origins102and multiple content libraries104. In an embodiment, where the system100includes multiple origins102and multiple content libraries104, each content library includes a complete copy of the content assets. The system100further includes a mid-tier cluster110. The mid-tier cluster110includes a number of caches112A,112B, through112N. In an embodiment, each cache112A-N includes a corresponding cache storage114A-N. For example, the cache112A includes a cache storage114A, the cache112B includes a cache storage114B, the cache112N includes a cache storage114N, etc. Further, the mid-tier cluster110includes one or more load balancers116. In an embodiment, the mid-tier cluster110shields the origin102from high load and bursts of traffic. For example, the caches112A-N can have less storage than the origin102, and may not store a complete copy of all content assets. In an embodiment, the caches112A-N are optimized for streaming and can deliver more traffic than the origin102.
In one embodiment, each mid-tier cluster 110 corresponds with an origin 102 (e.g., if the system 100 includes multiple origins 102, it includes the same number of mid-tier clusters 110). Alternatively, the system 100 includes multiple mid-tier clusters 110 for each origin 102 (e.g., more mid-tier clusters 110 than origins 102). As another alternative, the system 100 can include fewer mid-tier clusters 110 than origins 102 (e.g., multiple origins 102 can correspond with the same mid-tier cluster). In an embodiment, the mid-tier cluster 110 services requests from a content delivery network (CDN) 120. If the mid-tier cluster 110 receives a request for content not currently in storage (e.g., not stored in any of the caches 112A-N), the mid-tier cluster 110 requests a cache fill from the origin 102. In an embodiment, the CDN 120 performs last-mile caching and content delivery to users 130 (e.g., subscribers to a streaming service). The CDN 120 provides last-mile caching by maintaining a CDN cache 122 for recently viewed material (e.g., by users 130). The CDN 120 uses the CDN cache 122 to quickly provide frequently requested content to the users 130, avoiding repeated requests for content from the mid-tier cluster 110 (or the origin 102). For example, the CDN 120 can be a public CDN operated by a third-party entity not associated with the entity, or entities, that maintain the mid-tier cluster 110 and origin 102. Alternatively, the CDN 120 can be operated by the same entity, or entities, that maintain the mid-tier cluster 110 and origin 102. In an embodiment, the CDN 120 receives a request for content from a user 130. If the content is maintained in the CDN cache 122 (including the associated CDN cache storage 124), the CDN 120 returns the content to the user 130. If the requested content is not maintained in the CDN cache 122, the CDN requests the content from the mid-tier cluster 110.
The CDN 120 includes a CDN ingest service 126, which ingests content received from the mid-tier cluster (e.g., stores the content, if appropriate, in the CDN cache 122 and provides the content to the requesting user 130). In an embodiment, content stored at the CDN 120 (e.g., stored in the CDN cache 122) has an associated expiration time (e.g., a time-to-live (TTL)). If the CDN 120 receives a request for content stored in the CDN cache 122, but which has expired, the CDN 120 performs a revalidation request on the content (e.g., a request to the mid-tier cluster 110) to determine whether the content has changed. This is discussed further below. If the content has not changed, the CDN 120 extends the expiration time and delivers the cached content to the user. If the content has changed, the CDN 120 receives the content (e.g., from the mid-tier cluster 110), stores the content (e.g., in the CDN cache 122, as appropriate), and delivers the content to the user 130. In the illustrated embodiment of FIG. 1, the mid-tier cluster 110 receives content requests from the CDN 120, which delivers content to the users 130. The system 100 includes two layers of caching: the mid-tier cluster 110 includes the caches 112A-N, and the CDN includes a cache 122. This is merely an example, and the system 100 could include additional layers of caching. For example, the mid-tier cluster 110 could receive content requests from another intermediate layer (e.g., another mid-tier cluster) with another layer of caching. One or more of the techniques described below could be used in this scenario. FIG. 2 is a block diagram illustrating a cache server 200 for a system for content delivery using a communication network, according to at least one embodiment. The cache server 200 includes a processor 202, a memory 210, network components 220, and a content cache 230. The processor 202 generally retrieves and executes programming instructions stored in the memory 210.
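The CDN-side expiration-and-revalidation behavior described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the class and callback names (CdnCache, revalidate, fetch) are assumptions introduced only for this sketch.

```python
import time

class CdnCache:
    """Toy CDN cache: serve fresh entries, revalidate expired ones,
    and fetch from the mid-tier cluster on a miss."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # url -> (content, expires_at)

    def put(self, url, content, now):
        self.entries[url] = (content, now + self.ttl)

    def lookup(self, url, now, revalidate, fetch):
        entry = self.entries.get(url)
        if entry is None:
            # Cache miss: request full content from the mid-tier cluster.
            content = fetch(url)
            self.put(url, content, now)
            return content
        content, expires_at = entry
        if now < expires_at:
            return content  # still fresh, serve from cache
        # Expired: send a revalidation request to the mid-tier cluster.
        changed, new_content = revalidate(url)
        if not changed:
            # Not modified: extend the expiration time, serve the cached copy.
            self.put(url, content, now)
            return content
        self.put(url, new_content, now)
        return new_content
```

A fresh hit is served without contacting the mid-tier at all; only an expired entry triggers the revalidation round-trip discussed below.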
The processor 202 is included to be representative of a single central processing unit (CPU), multiple CPUs, a single CPU having multiple processing cores, graphics processing units (GPUs) having multiple execution paths, and the like. The network components 220 include the components necessary for the cache server 200 to interface with other components over a network (e.g., as illustrated in FIG. 1). For example, the cache server 200 can be a part of the mid-tier cluster 110, and can use the network components 220 to interface with remote storage and compute nodes (e.g., the origin 102 and the CDN 120). Alternatively, or in addition, the cache server 200 can be located in a different part of the system 100 (e.g., the origin 102, the CDN 120, or another suitable location). The cache server 200 can interface with other elements in the system over a local area network (LAN), for example an enterprise network, a wide area network (WAN), the Internet, or any other suitable network. The network components 220 can include wired, WiFi, or cellular network interface components and associated software to facilitate communication between the cache server 200 and a communication network. Although the memory 210 is shown as a single entity, the memory 210 may include one or more memory devices having blocks of memory associated with physical addresses, such as random access memory (RAM), read only memory (ROM), flash memory, or other types of volatile and/or non-volatile memory. The memory 210 generally includes program code for performing various functions related to use of the cache server 200. The program code is generally described as various functional “applications” or “services” within the memory 210, although alternate implementations may have different functions and/or combinations of functions.
Within the memory 210, a cache service 212 facilitates retrieving content from a remote source (e.g., from the origin 102 illustrated in FIG. 1), storing content in, and retrieving content from, the content cache 230, and providing content to a remote destination (e.g., the CDN 120 illustrated in FIG. 1). This is discussed further below with regard to subsequent figures. In an embodiment, the cache service 212 stores to, and retrieves from, the content cache 230 using any suitable cache algorithm (e.g., a segmented least recently used (LRU) algorithm). FIG. 3 illustrates a content revalidation flow 300 in a system for content delivery using a communication network, according to at least one embodiment. As discussed above, in an embodiment a CDN 320 (e.g., the CDN 120 illustrated in FIG. 1) performs periodic revalidation requests on content stored in a CDN cache (e.g., the CDN cache 122 illustrated in FIG. 1). For example, content stored in the CDN cache can include an associated expiration time (e.g., a TTL). If the content is requested by a viewer (e.g., a user 130 as illustrated in FIG. 1) and is maintained in the CDN cache but has expired, the CDN 320 can transmit a revalidation request (e.g., to a mid-tier cluster) to determine whether the content has changed since it was cached at the CDN. Alternatively, or in addition, the CDN 320 can perform intermittent revalidation requests on content stored in the CDN cache (e.g., after a specified time period passes, when the CDN is otherwise idle or less busy than usual, at a particular time of day, etc.). In an embodiment, the CDN 320 transmits a revalidation request 322 to a mid-tier cluster 310 (e.g., the mid-tier cluster 110 illustrated in FIG. 1). For example, the CDN 320 can transmit a conditional HTTP GET request (e.g., an HTTP GET If-Modified-Since request) to the mid-tier cluster 310. As discussed below, the mid-tier cluster 310 determines whether the content has changed at the origin since it was provided to the CDN.
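The segmented LRU algorithm mentioned above in connection with the cache service 212 can be sketched as follows. This is one common two-segment variant (a probationary segment plus a protected segment for entries hit more than once), offered purely as an illustration; the segment capacities and class name are assumptions, not the patent's specific policy.

```python
from collections import OrderedDict

class SegmentedLru:
    """Two-segment LRU: new keys enter probation; a second access
    promotes them to a protected segment that is harder to evict."""

    def __init__(self, probation_cap, protected_cap):
        self.probation = OrderedDict()
        self.protected = OrderedDict()
        self.probation_cap = probation_cap
        self.protected_cap = protected_cap

    def get(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)  # refresh recency
            return self.protected[key]
        if key in self.probation:
            # Second access: promote into the protected segment.
            value = self.probation.pop(key)
            self.protected[key] = value
            if len(self.protected) > self.protected_cap:
                # Demote the protected LRU entry back to probation.
                old_key, old_val = self.protected.popitem(last=False)
                self._insert_probation(old_key, old_val)
            return value
        return None

    def put(self, key, value):
        if key in self.protected:
            self.protected[key] = value
            self.protected.move_to_end(key)
        else:
            self._insert_probation(key, value)

    def _insert_probation(self, key, value):
        self.probation[key] = value
        self.probation.move_to_end(key)
        if len(self.probation) > self.probation_cap:
            self.probation.popitem(last=False)  # evict probationary LRU
```

One-shot scans evict only probationary entries, so repeatedly revalidated objects stay resident longer than they would under plain LRU.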
In an embodiment, when doing so, the mid-tier cluster 310 evaluates the content itself to determine whether the content has changed, at the origin, since being provided to the CDN. If the requested content is maintained locally at the mid-tier cluster 310 (e.g., in a cache 112A-N), the mid-tier cluster 310 checks to see whether the content has changed since it was cached by the CDN 320. For example, the mid-tier cluster can compare a last-modified timestamp associated with the content maintained at the mid-tier cluster with a last-modified timestamp associated with the content maintained at the CDN. This is merely one example, and any suitable technique can be used to determine whether the content has changed (e.g., comparing a checksum value, a created timestamp, etc.). If the content has not changed, the mid-tier cluster 310 transmits a response to the CDN 320 indicating that the content has not been modified (e.g., an HTTP 304 “Not Modified” response). If the content has changed, the mid-tier cluster 310 transmits the content to the CDN 320. If the requested content is not maintained locally at the mid-tier cluster 310 (e.g., it is not maintained in the caches 112A-N), the mid-tier cluster cannot determine whether the content has changed. The mid-tier cluster must retrieve the content, or metadata associated with the content, from an origin 302 (e.g., the origin 102 illustrated in FIG. 1). In an embodiment, as discussed further below, the mid-tier cluster 310 conditionally determines whether to request full content from the origin 302, or just metadata. If the mid-tier cluster 310 determines to request full content, it transmits an HTTP GET request to the origin 302, and receives in response a message that includes both metadata associated with the requested content object (e.g., HTTP headers) and the full content object. If the content has changed, the mid-tier cluster 310 transmits the content retrieved from the origin to the CDN 320.
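The last-modified comparison described above can be illustrated with standard HTTP date headers. A minimal sketch, assuming the requester supplies its validation time in an If-Modified-Since header and the cached metadata carries a Last-Modified header:

```python
from email.utils import parsedate_to_datetime

def has_changed(if_modified_since: str, last_modified: str) -> bool:
    """Return True if the object was modified after the requester
    last validated it; both arguments use the HTTP date format."""
    validated_at = parsedate_to_datetime(if_modified_since)
    modified_at = parsedate_to_datetime(last_modified)
    return modified_at > validated_at
```

A False result corresponds to the 304 “Not Modified” path; as the text notes, a checksum or ETag comparison would serve equally well.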
Always retrieving full content objects from the origin 302, however, has some drawbacks. The mid-tier cluster 310 retrieves the full content object from the origin 302, but if the content has not changed, the mid-tier cluster transmits to the CDN 320 only a not-modified response 324, and not the full content object. Because the content has not changed, the content object is retrieved from the origin 302 (i.e., to allow the mid-tier to determine that the content has not changed) but is not transmitted to the CDN 320. In an embodiment, this can be improved by, in some circumstances, retrieving only metadata associated with the content object from the origin, rather than the full content object. The mid-tier cluster 310 transmits to the origin a request 326 for metadata associated with the content object (e.g., an HTTP HEAD request for HTTP headers). In an embodiment, the request 326 seeks from the origin 302 only metadata for the content, which the mid-tier cluster 310 can use to determine whether the content has changed. The mid-tier cluster 310 does not request the full content object from the origin 302. The origin 302 transmits metadata 328 (e.g., HTTP headers) in response to the request 326. Thus, in an embodiment, in response to a revalidation request the mid-tier cluster 310 conditionally retrieves only metadata 328 from the origin 302, rather than the full content object and the metadata. This can result in significant savings in bandwidth and compute resources (e.g., at the origin 302). For systems where content does not frequently change (e.g., streaming services with a relatively static library of content), this is a substantial improvement. Further, as discussed below in regard to FIG. 5, in an embodiment the mid-tier cluster 310 maintains the metadata for the content in cache, rather than the complete content.
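The metadata-only origin fetch described above can be sketched as follows. The OriginClient-style stub here is a stand-in, not a real HTTP client; the point it illustrates is that a HEAD request transfers only headers (e.g., Last-Modified, ETag), while a GET transfers headers plus the much larger content body.

```python
class Response:
    def __init__(self, headers, body=b""):
        self.headers = headers
        self.body = body

class StubOrigin:
    """Stand-in for the origin 302; returns canned header/body data."""
    def __init__(self, headers, body):
        self._headers = headers
        self._body = body

    def head(self, path):
        return Response(dict(self._headers))          # headers only, no body

    def get(self, path):
        return Response(dict(self._headers), self._body)

def fetch_freshness_metadata(origin, path):
    """Metadata-only fetch: the mid-tier caches just these headers
    and uses them to answer revalidation requests."""
    response = origin.head(path)
    return {k: response.headers.get(k) for k in ("Last-Modified", "ETag")}
```

With objects in the tens of megabytes and headers in the kilobytes, the HEAD path avoids transferring the body entirely when only freshness needs to be checked.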
This results in much more efficient storage at the mid-tier cluster 310, allowing for reduced storage capacity and a higher rate of cache hits in existing storage (e.g., because more entries can be stored, since each is significantly smaller). For example, the metadata for a content object may be on the order of 5-10 KB in size, while the content object itself may be 20-30 MB in size (or larger). As discussed above, however, in an embodiment the mid-tier cluster 310 does not always retrieve only metadata in response to a revalidation request, and does not always retrieve full content: the mid-tier cluster 310 conditionally selects between these options. The mid-tier cluster 310 can use one or more logical processes to conditionally determine whether to retrieve and cache the full content object, in addition to the metadata. This is discussed further in relation to FIG. 4, below. For example, the mid-tier cluster 310 can initially maintain only object metadata in its cache, and can track the number of accesses to each metadata object (e.g., HTTP header) in the cache. If a metadata object is accessed more than a set number of times (e.g., more than 3 times), then the content is deemed popular enough to retrieve and cache the full object in the mid-tier, rather than just the metadata. This protects the origin 302 from excessive load by maintaining popular content in the mid-tier cache. Because the CDN 320 typically maintains in its own cache content that is frequently requested (e.g., by users), a revalidation request from the CDN 320 to the mid-tier (e.g., as opposed to a full retrieval request) indicates that the content is popular enough to be maintained in cache at the CDN 320. The number of revalidation requests from a CDN 320 to the mid-tier cluster 310 can, therefore, indicate the popularity of the content.
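The access-count rule described above can be sketched as follows. The class name is illustrative, and the choice of a greater-than-or-equal comparison at a default threshold of 3 is one of the two variants the text allows (strictly greater-than is the other):

```python
class PromotionTracker:
    """Track metadata hit-counts and signal when an object is popular
    enough to fetch and cache in full at the mid-tier."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.hit_counts = {}  # object key -> metadata hit count

    def record_hit(self, key):
        """Count one revalidation hit; return True when the full object
        should now be fetched from the origin and cached."""
        self.hit_counts[key] = self.hit_counts.get(key, 0) + 1
        return self.hit_counts[key] >= self.threshold

    def evict(self, key):
        # Hit counts are cleared when the metadata leaves the cache.
        self.hit_counts.pop(key, None)
```

Clearing the count on eviction means popularity must be re-demonstrated after an object falls out of the cache, which keeps stale popularity signals from forcing full fetches.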
In an embodiment, the logical process used by the mid-tier server to determine whether to retrieve a full content object can be configured by a user (e.g., a system administrator) using a suitable user interface, can be set to a default value without user input, or both. This is merely one example of a logical process to determine whether to retrieve a content object. Further examples are discussed below, in relation to block 420 in FIG. 4. FIG. 4 is a flowchart 400 illustrating content delivery using a communication network. At block 402, a cache service (e.g., the cache service 212 illustrated in FIG. 2) receives a request for content (e.g., an HTTP Get request). As discussed above, in an embodiment the cache service operates in a mid-tier cluster (e.g., the mid-tier cluster 110 illustrated in FIG. 1). In this embodiment, the cache service receives an HTTP Get request from a CDN (e.g., the CDN 120 illustrated in FIG. 1) or another suitable remote requestor (e.g., another network edge location). This is merely one example. Alternatively, as discussed above, the cache service could receive a request for content from another layer (e.g., another mid-tier layer instead of a CDN) with its own cache. Alternatively, or in addition, the cache service could operate on an edge location (e.g., in the CDN 120 illustrated in FIG. 1) or an origin location (e.g., the origin 102 illustrated in FIG. 1). Further, an HTTP Get is merely one example of a suitable request for content. Any suitable request (e.g., another type of HTTP request, another network request, an Application Programming Interface (API) call, a remote procedure call (RPC), etc.) can be used. At block 404, the cache service determines whether the request is a revalidation request (e.g., a conditional HTTP Get request). If so, the flow proceeds to block 406. At block 406, the cache service determines whether object metadata (e.g., relating to the requested content) is located in a local cache (e.g., the content cache 230 illustrated in FIG. 2).
If not, the flow proceeds to block 408. At block 408, the cache service determines whether a predefined policy allows for metadata-only caching. In an embodiment, a user (e.g., a system administrator) can configure a policy to enable (or disable) metadata-only caching. This is merely one embodiment, however, and in other embodiments block 408 may be omitted (e.g., the cache service may always allow for metadata-only caching). If the cache service determines that metadata-only caching is allowed, the flow proceeds to block 410. Otherwise, the flow proceeds to block 416, described further below. At block 410, the cache service fetches the metadata associated with the requested content from an origin (e.g., from the origin 102 illustrated in FIG. 1) and stores the metadata in a local cache (e.g., the content cache 230 illustrated in FIG. 2). As discussed above in relation to FIG. 3, in an embodiment a cache service (e.g., in a mid-tier cluster) can cache metadata associated with content (e.g., HTTP header metadata) in place of the content itself. The cache service can use this metadata to respond to revalidation requests, instead of requiring the complete content object. At block 412, the cache service determines whether the object has changed. For example, in an embodiment the request received at block 402 can indicate when the requested content was last validated. In an embodiment, the revalidation request can include a field indicating when the content was last validated by the requestor (e.g., when the content was last validated by the CDN). The content metadata can include a last-modified timestamp, or another suitable indicator, and the cache service can analyze the content metadata to determine whether the content has changed since it was last validated by the requestor. If the content has not changed, the flow proceeds to block 414. At block 414, the cache service delivers a not-modified response.
For example, the cache service can respond to the request received at block 402 with an HTTP 304 “Not-Modified” response. This is merely one example, and any suitable response indicating that the requested content has not been modified can be used (e.g., an API response, an RPC response, another suitable network message response, etc.). Returning to block 412, if the cache service determines that the content object has changed, the flow proceeds to block 416 instead of block 414. At block 416, the cache service fetches the full content object (e.g., including the content itself in addition to the metadata fetched at block 410) from an origin (e.g., the origin 102 illustrated in FIG. 1, or another suitable source) and caches the full content object (e.g., in the content cache 230 illustrated in FIG. 2). The flow proceeds from block 416 to block 418. At block 418, the cache service delivers the content object to the requestor (e.g., to the CDN 120 illustrated in FIG. 1). In an embodiment, the cache service delivers the full requested content object to the requestor. Alternatively, or in addition, the cache service delivers a portion of the requested content object to the requestor (e.g., followed by one or more later transmissions with the remaining portions of the object). Returning to block 406, if the cache service determines that metadata for the requested content object is located in the cache (e.g., in the content cache 230 illustrated in FIG. 2), the flow proceeds to block 420. At block 420, the cache service determines whether the metadata hit-count is greater than or equal to a pre-determined threshold value. In an embodiment, the cache service tracks requests for the content metadata. For example, the cache service can track requests for the content metadata since the metadata entered the cache, over a specified duration, since a last retrieval of the full content object, etc. The number of requests is the hit-count. In an embodiment, the threshold relates to a total number of hits.
Alternatively, or in addition, the threshold relates to a rate of hits (e.g., a number of hits over a given period of time). In an embodiment, the hit-count is stored along with the metadata (e.g., in the content cache 230 illustrated in FIG. 2). The hit-count is cleared when the metadata is evicted from the cache (e.g., according to a cache algorithm). Alternatively, or in addition, the hit-count can be stored in a separate data structure (e.g., in the content cache 230 illustrated in FIG. 2, or in any other suitable location). Further, in an embodiment, the cache service maintains a pre-determined threshold at which a full content object (e.g., as opposed to only the content metadata) will be retrieved and stored in the local cache. As discussed above, this can ensure that commonly requested content is stored in full at the cache server's local cache (e.g., the content cache 230 illustrated in FIG. 2) and can be provided to requestors. Further, as discussed above, this threshold can be set by a user (e.g., a system administrator) using a suitable user interface. The threshold can also be set to a default value (e.g., 3). Further, in an embodiment the cache service can determine whether the hit-count is greater than the threshold (e.g., as opposed to greater than or equal to the threshold). If the metadata hit-count is greater than or equal to the threshold, the flow proceeds to block 416. As discussed above, at block 416 the cache service fetches the full object from an origin and stores the full object in a local cache. If the metadata hit-count is less than the threshold, the flow proceeds to block 412. As discussed above, at block 412 the cache service determines whether the object has changed. Use of a pre-determined hit-count threshold is merely one example of a logical process that the cache service could use to determine when to proceed to block 416 and fetch the full content object.
Alternatively, or in addition, the cache service could fetch the full content object based on the load on the cache service (e.g., the load on the mid-tier cluster 110 illustrated in FIG. 1), the load on the origin, network usage statistics, etc. Further, the cache service could fetch the full content object based on the remaining capacity in the local cache (e.g., retrieving content objects more frequently if the cache has more capacity remaining), the regional popularity of the requested content (e.g., the number of plays in a given region), feedback on user experience or CDN performance, etc. These are merely examples, and any suitable logical process can be used. Returning to block 404, if the cache service determines that the request received at block 402 is not a revalidation request, the flow proceeds to block 422. At block 422, the cache service determines whether the full content object is stored in the local cache (e.g., in the content cache 230 illustrated in FIG. 2). If so, the flow proceeds to block 418 and the cache service delivers the full object to the requestor (as discussed above). If not, the flow proceeds to block 416 and the cache service fetches the full content object from an origin and stores the full content object in the local cache (as discussed above). FIG. 5 illustrates a content cache 512 in a system for content delivery using a communication network, according to at least one embodiment. A traditional content cache 500 includes both metadata and content entries 502, for each content object. As discussed above in relation to FIGS. 3-4, in an embodiment this can be improved by storing a mix of metadata and content entries 502, for some content objects, and metadata-only entries 504, for other content objects. In an embodiment, this allows a cache service (e.g., the cache service 212 illustrated in FIG. 2) to respond to revalidation requests by retrieving metadata only from the cache 512, assuming the content has not changed.
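The decision flow of flowchart 400 (blocks 402 through 422) can be condensed into a single function, shown here under simplifying assumptions: the cache, origin, and policy objects are hypothetical stand-ins, freshness is decided by a bare timestamp comparison, and the hit-count threshold uses the greater-than-or-equal variant.

```python
def handle_request(request, cache, origin, policy, threshold=3):
    key = request["path"]
    if not request.get("revalidation"):                      # block 404
        obj = cache.get_full(key)                            # block 422
        if obj is None:
            obj = origin.get_full(key)                       # block 416
            cache.put_full(key, obj)
        return ("content", obj)                              # block 418

    meta = cache.get_metadata(key)                           # block 406
    if meta is None:
        if not policy.metadata_only_allowed():               # block 408
            obj = origin.get_full(key)                       # block 416
            cache.put_full(key, obj)
            return ("content", obj)                          # block 418
        meta = origin.get_metadata(key)                      # block 410
        cache.put_metadata(key, meta)
    elif cache.hit_count(key) >= threshold:                  # block 420
        obj = origin.get_full(key)                           # block 416
        cache.put_full(key, obj)
        return ("content", obj)                              # block 418

    if meta["last_modified"] > request["last_validated"]:    # block 412
        obj = origin.get_full(key)                           # block 416
        cache.put_full(key, obj)
        return ("content", obj)                              # block 418
    return ("not_modified", None)                            # block 414

# Minimal stand-ins so the sketch is self-contained.
class StubCache:
    def __init__(self):
        self.full, self.meta, self.hits = {}, {}, {}
    def get_full(self, k): return self.full.get(k)
    def put_full(self, k, v): self.full[k] = v
    def get_metadata(self, k):
        if k in self.meta:
            self.hits[k] = self.hits.get(k, 0) + 1  # count metadata hits
            return self.meta[k]
        return None
    def put_metadata(self, k, m): self.meta[k] = m
    def hit_count(self, k): return self.hits.get(k, 0)

class StubOrigin:
    def __init__(self, last_modified):
        self.last_modified = last_modified
        self.full_fetches = 0
    def get_full(self, k):
        self.full_fetches += 1
        return {"body": "content-of-" + k, "last_modified": self.last_modified}
    def get_metadata(self, k):
        return {"last_modified": self.last_modified}

class StubPolicy:
    def metadata_only_allowed(self): return True
```

Running repeated revalidation requests through this function shows the promotion behavior: early requests are answered from metadata alone, and only once the hit-count crosses the threshold is the full object pulled from the origin.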
Further, this provides for significantly more efficient storage in the cache 512 as compared with the cache 500, because the metadata-only entries 504 can be orders of magnitude smaller than the metadata and content entries 502 (e.g., which include the actual content to provide to a user). In the current disclosure, reference is made to various embodiments. However, it should be understood that the present disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the teachings provided herein. Additionally, when elements of the embodiments are described in the form of “at least one of A and B,” it will be understood that embodiments including element A exclusively, including element B exclusively, and including elements A and B are each contemplated. Furthermore, although some embodiments may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the present disclosure. Thus, the aspects, features, embodiments and advantages disclosed herein are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). As will be appreciated by one skilled in the art, embodiments described herein may be embodied as a system, method or computer program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.)
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments described herein may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Aspects of the present disclosure are described herein with reference to flowchart illustrations or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowchart illustrations or block diagrams, and combinations of blocks in the flowchart illustrations or block diagrams, can be implemented by computer program instructions. 
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block(s) of the flowchart illustrations or block diagrams. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other device to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the block(s) of the flowchart illustrations or block diagrams. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process such that the instructions which execute on the computer, other programmable data processing apparatus, or other device provide processes for implementing the functions/acts specified in the block(s) of the flowchart illustrations or block diagrams. The flowchart illustrations and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart illustrations or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). 
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order or out of order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustrations, and combinations of blocks in the block diagrams or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. | 35,340 |
DETAILED DESCRIPTION A database proxy can connect clients, such as computing resources and/or applications, to a database. For example, instead of connecting to the database directly, clients can connect to the database proxy. The database proxy can have one or more direct connections to the database, such that the database proxy can route data to and/or from the database in association with different clients that connect to the database proxy. Accordingly, the database proxy can have a set of client connections to different clients, and also have a set of database connections to the database. In some examples or situations, the database proxy may “pin” a client connection, associated with a client, to a database connection. In such examples or situations, there can be a 1 to 1 mapping between the client connection and the database connection, such that the database proxy always uses the database connection pinned to the client connection to send and/or receive data associated with the client. In other examples or situations, the database proxy can provide multiplexing services that allow the same database connection to be re-used in association with different clients and client connections. For example, when a first client attempts to perform a first database transaction via the database proxy, the database proxy may temporarily associate the first client with a particular database connection until the first database transaction is complete. After the first database transaction is complete, the database proxy can disassociate the first client from the particular database connection. Accordingly, if a second client later attempts to perform a second database transaction via the database proxy, the database proxy may re-use the particular database connection by temporarily associating the second client with the particular database connection until the second database transaction is complete.
Because the multiplexing provided by the database proxy can allow a set of database connections to be re-used in association with different clients and client connections, fewer computing resources associated with the database can be used to set up and maintain those database connections, relative to establishing direct database connections for each individual client that interacts with the database. Multiplexing can also be more efficient than having exclusive database connections for each individual client. For instance, while individual clients may only sporadically interact with the database, such that exclusive database connections associated with the individual clients may be left unused relatively often, multiplexing can allow the database proxy to re-use database connections with different clients when those clients do interact with the database. In some cases, a service provider that owns or operates the database proxy and/or the database may charge clients less for multiplexed access to the database because database connections can be reused, and may charge clients more for pinned connections that exclusively reserve database connections for those clients but that may increase usage of computing resources associated with the database. Although multiplexing database connections can be more efficient and cost-effective than pinning database connections in many situations, some existing database proxies are unable to multiplex database connections in association with certain types of database interactions. For example, many database proxies are unable to multiplex database connections when clients use prepared statements to interact with a database. A client can be configured to set up a prepared statement with a database. The prepared statement may be a database query or other database command, such as a Structured Query Language (SQL) command. 
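The borrow-per-transaction multiplexing described above can be sketched as follows. This is a minimal illustration with hypothetical names, not any real proxy's API: a connection is temporarily associated with a client for one transaction, then returned to the shared pool for re-use.

```python
class DatabaseProxy:
    """Toy multiplexing proxy: database connections are borrowed for
    the duration of one transaction, then returned for other clients."""

    def __init__(self, connections):
        self.idle = list(connections)   # shared pool of database connections
        self.assigned = {}              # client -> temporarily borrowed connection

    def begin_transaction(self, client):
        # Temporarily associate the client with an idle database connection.
        conn = self.idle.pop()
        self.assigned[client] = conn
        return conn

    def end_transaction(self, client):
        # Disassociate the client so the connection can be re-used.
        conn = self.assigned.pop(client)
        self.idle.append(conn)
```

Because connections return to the pool between transactions, a single database connection can serve many sporadically active clients, which is the efficiency argument made above.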
After the client sets up the prepared statement with the database, the client can then re-use the prepared statement multiple times to interact with the database, without setting up an entire new query or other command for each individual database interaction. For example, a prepared statement may be a SQL statement that includes one or more variables. After the prepared statement is set up with the database, the client may repeatedly re-use the prepared statement by providing the database with different values for the variables of the prepared statement, instead of generating and formatting entire new SQL statements based on different variable values for each individual database interaction. Re-using prepared statements can thus be more efficient for clients than repeatedly generating full database commands, reducing time and usage of processor cycles, memory, and/or other computing resources associated with database interactions. However, because prepared statements are set up with the database by clients, the database can associate prepared statements with the database connections that were used to set up the prepared statements. This can cause conflicts or errors if database connections are re-used through multiplexing via a database proxy as described above. For example, a database proxy may temporarily associate a client with a first database connection during a first database transaction. During the first database transaction, the client may set up a prepared statement that the database associates with the first database connection. At a later point in time, in association with a second database transaction, the database proxy may temporarily associate the same client with a different second database connection. 
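The re-use pattern described above can be illustrated with Python's built-in sqlite3 module, used here purely as a stand-in database (the source does not name sqlite3): one parameterized query text with a “?” placeholder is written once and then executed with different values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (some_var INTEGER, name TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(75, "a"), (300, "b"), (75, "c")])

# The query text is written once, with a "?" variable placeholder...
query = "SELECT name FROM my_table WHERE some_var = ?"

# ...and re-used with different values, instead of generating and
# formatting an entire new SQL statement for each interaction.
rows_75 = conn.execute(query, (75,)).fetchall()
rows_300 = conn.execute(query, (300,)).fetchall()
```

This is the client-side efficiency the passage above describes: the statement text is prepared once and only the values change between interactions.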
Accordingly, if the client attempts to re-use the prepared statement during the second database transaction, the database may return an error in some cases because the prepared statement had been set up in association with the first database connection instead of the second database connection the database proxy is using for the client during the second database transaction. In other cases, if the database associates the second database connection with a different prepared statement that had been set up by a different client, and the client attempts to invoke the prepared statement it had set up during the first database transaction in association with the first database connection, the database may return unexpected or undesired data to the client according to the different prepared statement that the database associates with the second database connection. Due to the possibility of such conflicts and errors, many existing database proxies do not allow multiplexing of connections when clients use prepared statements. For example, if a client attempts to set up a prepared statement with a database, such that the database associates the prepared statement with a particular database connection, some database proxies are configured to pin the client to that particular database connection. By pinning the client to the particular database connection in this situation, the database proxy can ensure that any attempts by the client to re-use the prepared statement are passed to the database via the same database connection that the database associates with the prepared statement. However, as discussed above, pinning the client to a particular database connection that becomes reserved exclusively for the client can be an inefficient use of database resources, and/or can be more costly for the client. 
Other database proxies handle multiplexing with prepared statements by refreshing database connections each time a database connection is used for a database transaction associated with a client. By refreshing the database connections, any previous prepared statements associated with those database connections that might otherwise lead to conflicts or errors can be cleared by the database. However, because such database proxies refresh the database connections when the database connections are re-used, the clients may have to set up prepared statements again for each database transaction. For example, the database proxies may inform clients that any prepared statements set up by the clients will not be preserved, and that the clients should be coded to set up prepared statements anew for each database transaction. However, having to set up prepared statements anew for each database transaction can be inefficient and defeat the purpose of prepared statements, which as discussed above are generally used by clients to efficiently re-use prepared statements over time after the prepared statements have been set up. In contrast, described herein is a database proxy that can multiplex database connections when clients use prepared statements to interact with a database. The database proxy can store state data associated with clients. When a client attempts to set up a prepared statement with the database, the database proxy can save corresponding prepared statement setup data in the state data. If the client later attempts to re-use the prepared statement, when the database proxy may be using a different database connection in association with the client, the database proxy can send the prepared statement setup data saved in the state data to the database. 
The prepared statement setup data provided by the database proxy can cause the database to set up the prepared statement in association with the current database connection being used by the database proxy in association with the client. The database proxy can also pass any additional information, such as variable values, provided by the client in association with the attempt to re-use the prepared statement to the database, such that the database can respond based on the prepared statement. Accordingly, although multiplexing can cause a client's attempt to re-use a prepared statement to be transmitted via a different database connection than the database proxy previously used for the client, the database proxy can use stored prepared statement setup data to set up the prepared statement again with the database in association with the current database connection being used for the client. This process may be transparent to the client itself, such that the client can send setup data associated with a prepared statement once, and then later re-use the prepared statement even if the database proxy is multiplexing database connections and data associated with the client may pass to the database via different database connections at different times. The systems and methods associated with the database proxy described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following figures. FIG.1shows an example100of a system in which a database proxy102can connect clients104to a database106and perform multiplexing in association with prepared statements. The clients104, such as clients104A-104E shown inFIG.1, can be configured to interact with the database106. For example, the clients104can be configured to engage in database transactions, such as transactions to store data in the database106, edit data in the database106, retrieve data from the database106, and/or otherwise interact with the database106. 
In some examples, the clients104can be servers, computers, virtual machines, hosts, nodes, and/or other computing resources. In other examples, the clients104can be software applications, such as web applications, mobile applications, or software applications executing on one or more computing resources. Different clients104may, in some examples, be different instances of the same application that are executing via different hosts or other computing resources. As discussed above, the clients104can be configured to interact with the database106. For example, during a database transaction, a client can send one or more messages108to the database106, and the database106can return one or more corresponding responses110to the client. In some examples the messages108can be, or include, SQL statements that request data from the database106, add data to the database106, edit data in the database106, delete data in the database106, and/or otherwise interact with the database106. The responses110from the database106can include data retrieved from the database106in response to corresponding messages108, confirmations of operations performed by the database106in response to corresponding messages108, error messages associated with requested operations that could not be performed by the database106, and/or any other types of responses from the database106. The database proxy102can be configured to connect individual clients104to the database106via client connections112between the clients104and the database proxy102, and via database connections114between the database proxy102and the database106. For example, the database proxy102can receive messages108from clients104via client connections112, and can forward the messages108to the database106via database connections114. The database proxy102can similarly receive responses110from the database106via database connections114, and can forward the responses110to the corresponding clients104via client connections112. 
In some situations, the database proxy102may also generate messages108that the database proxy102sends to the database106via database connections114, and may receive corresponding responses110from the database106via the database connections114. The database proxy102and/or the database106can execute via one or more servers, virtual machines, or other computing resources. In some examples, the database proxy102and/or the database106can execute via computing resources of a service provider network, a cloud computing environment, or other type of computing environment. In some examples, the database proxy102can expose one or more application programming interfaces (APIs) or other interfaces that clients104can use to establish corresponding client connections112with the database proxy102, such as client connections112A-112E shown inFIG.1. The database proxy102can also establish a set of database connections114between the database proxy102and the database106, such as database connections114A-114C shown inFIG.1. Accordingly, a client can interact with the database106via one of the client connections112that connects the client to the database proxy102, and via one of the database connections114that connects the database proxy102to the database106. The number of client connections112between clients104and the database proxy102may differ from the number of database connections114between the database proxy102and the database106. For example, while there may be N client connections112, there may be M database connections114. In some examples, there may be fewer database connections114than client connections112. In some situations, the database proxy102may pin a particular database connection with a particular client connection, in order to create a 1 to 1 mapping between the particular client connection and the particular database connection. 
A database connection that is pinned by the database proxy102to a client connection can become reserved for exclusive use with that client connection, and the database proxy102can serve as a passthrough between the client connection and the pinned database connection. Accordingly, in these situations, the database proxy102may be configured to always use a particular pinned database connection to communicate with the database106in association with a corresponding client connection, at least until the client connection is terminated or is unpinned from the particular database connection. As an example, the database proxy102may pin client connection112A, associated with client104A, with database connection114C. The database proxy102may thus serve as a passthrough for messages108and responses110that the client104A and the database106exchange via the client connection112A and the pinned database connection114C. However, the database proxy102can also multiplex database connections114, such that the database proxy102can use different database connections114at different times in association with different client connections112. For example, if a client attempts to engage in a database transaction with the database106via a client connection that is not pinned to a particular database connection, the database proxy102can select any of the database connections114that are not in current use and that are not already associated with other client connections112. The database proxy102can temporarily associate the client connection with the selected database connection during the database transaction, but may disassociate the client connection from the selected database connection upon completion of the database transaction. 
Accordingly, the database proxy102may later re-use the same database connection in association with a different client connection, for example by temporarily associating a different client connection with the same database connection during a different database transaction. Over time, the database proxy102can use multiplexing to associate the same database connection with different client connections112. As an example, if client104E uses client connection112E to send a database query to the database proxy102, the database proxy102may determine that database connection114A is currently available to be used with client connection112E. For instance, the database proxy102may determine that database connection114A is currently available because database connection114A is not currently pinned to any client connection, and because database connection114A is not currently temporarily associated with any client connection. Accordingly, the database proxy102may at least temporarily associate client connection112E with database connection114A, and can use database connection114A to send the database query on to the database106. If the database106sends a response to the database query to the database proxy102via database connection114A, the database proxy102can send the response to client104E via client connection112E. After the interaction between client104E and the database106is complete, the database proxy102can disassociate database connection114A from client connection112E, such that database connection114A again becomes available for use with any of the client connections112. For instance, if client104D later attempts to interact with the database106via client connection112D, the database proxy102may re-use database connection114A by temporarily associating database connection114A with client connection112D during interactions between client104D and the database106. 
The database proxy102can maintain state data116associated with client connections112and/or database connections114. The state data116can be used to track authentication data, session variables, temporary tables, metadata, and/or other attributes associated with the client connections112and/or database connections114. The state data116can also track information about prepared statements used by clients104, as discussed further below. The database proxy102can use the state data116during multiplexing of database connections114, for instance to determine which database connections114the database proxy102can use in connection with which client connections112. As an example, the state data116can indicate that a particular client connection is associated with UTF-8 character encoding. When the database proxy102receives a message for the database106via that particular client connection, the database proxy102can use the state data116to determine which available database connections114have also been set up to use UTF-8 character encoding. The database proxy102can select one of the database connections114associated with UTF-8 character encoding and use the selected database connection to forward the message to the database106, in order to avoid errors that may occur if the message were instead forwarded via a different database connection that uses a different type of character encoding. As another example, the state data116can indicate that one or more client connections112have been authenticated with a particular username, and can also indicate that one or more database connections114have been authenticated with that particular username. Accordingly, when a message for the database106arrives via one of the client connections112that the state data116indicates have been authenticated with a username, the database proxy102can use the state data116to select one of the database connections114that have also been authenticated with the same username. 
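The state-based selection described above (matching character encoding, authenticated username, and similar attributes) can be sketched as follows; the attribute keys "encoding" and "username" are hypothetical stand-ins for whatever the state data tracks:

```python
def select_compatible_connection(client_state, db_states, available):
    """Return an available database connection whose tracked attributes
    (e.g., character encoding and authenticated username) match the client
    connection's state data, or None if no compatible connection is free.
    All names are illustrative, not taken from the source."""
    for db_conn in available:
        attrs = db_states.get(db_conn, {})
        if (attrs.get("encoding") == client_state.get("encoding")
                and attrs.get("username") == client_state.get("username")):
            return db_conn
    return None


client_state = {"encoding": "UTF-8", "username": "alice"}
db_states = {
    "db-1": {"encoding": "latin-1", "username": "alice"},
    "db-2": {"encoding": "UTF-8", "username": "alice"},
}
choice = select_compatible_connection(client_state, db_states, ["db-1", "db-2"])
```

Forwarding only over a connection whose attributes match avoids the encoding and authentication errors the passage above describes.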
The database proxy102can use the selected database connection to forward the message to the database106, in order to avoid authentication errors that may occur if the message were instead forwarded via a different database connection that was not authenticated with the username. In some situations, clients104may use prepared statements during interactions with the database106. If clients104set up prepared statements, the database proxy102can store corresponding prepared statement (PS) setup data118in the state data116, in association with the corresponding client connections112and/or database connections114. The database proxy102can use the state data116associated with the prepared statements during multiplexing of database connections114, for instance to ensure that prepared statements associated with client connections112can be used via corresponding database connections114selected by the database proxy102. Prepared statements can be associated with database queries or other database commands, such as SQL commands, that clients104can use to interact with the database106during database transactions. A client can set up a prepared statement in order to re-use the prepared statement one or more times during similar database transactions, without generating and formatting a new database query or command for each of the individual database transactions. As an example, prepared statements can be set up and used via the PostgreSQL extended query protocol or similar protocols, in which execution of SQL commands can be divided into multiple steps. Such steps can be associated with various types of messages108, such as Parse messages, Bind messages, Describe messages, Execute messages, Sync messages, and/or other types of messages. A Parse message can define a prepared statement, and can be processed by the database106to set up the prepared statement. 
As described further below, the database106can set up the prepared statement in association with the database connection by which the database106receives the Parse message. The Parse message can include a textual string for an SQL statement. The textual string may, in some examples, include variable placeholders for values that can be provided as arguments in later separate messages. As an example, a Parse message can include a textual string for an SQL statement such as “SELECT * FROM my_table WHERE some_var = ?” in which “?” is a variable placeholder. In this example, the SQL statement in the Parse message can be configured to retrieve, from a table in the database106named “my_table,” all records that have a “some_var” field with a value that can be provided in a later message (as denoted by the “?” variable placeholder). In some examples, the Parse message can also provide a name for the prepared statement. After the database106has set up a prepared statement based on a Parse message as described above, a client can use a Describe message to retrieve information about the prepared statement. For example, a Describe message can specify a name of an existing prepared statement to retrieve information about variable placeholders associated with the prepared statement, types and/or formats of data that can be returned by the prepared statement, and/or other information about the prepared statement. Additionally, after the database106has set up a prepared statement based on a Parse message as described above, a client can send a Bind message that readies the prepared statement for execution. The Bind message can identify the name of the prepared statement, and can include arguments providing values for corresponding variable placeholders, if any, in the prepared statement. 
Accordingly, rather than setting up an entire new SQL statement, the client can use a Bind message to call an existing prepared statement and, in some cases, provide arguments with values for the variable placeholders of the prepared statement. The client can also send an Execute message after the Bind message, which can cause the database106to execute the prepared statement, for instance by filling in variable placeholders of the SQL statement with values provided in the Bind message and executing the filled-in SQL statement. For instance, if a client has already sent a Parse message to set up a prepared statement that includes the example “SELECT * FROM my_table WHERE some_var = ?” SQL statement described above, at a first time the client may send a first Bind message with an argument of “75,” followed by a first Execute message. The first Bind message and the first Execute message can thus cause the database106to execute, based on the prepared statement set up earlier via the Parse message, a first query of “SELECT * FROM my_table WHERE some_var = 75.” At a later second time, the client may send a second Bind message with an argument of “300,” followed by a second Execute message. The second Bind message and the second Execute message can thus cause the database106to execute, based on the prepared statement set up earlier via the Parse message, a second query of “SELECT * FROM my_table WHERE some_var = 300.” In this example, the client can avoid generating and sending distinct full SQL statements such as “SELECT * FROM my_table WHERE some_var = 75” and “SELECT * FROM my_table WHERE some_var = 300” at different times. The client can instead set up the prepared statement once, and then use later Bind and Execute messages that provide values for variable placeholders and that cause the database to execute corresponding queries. In some examples, a block of messages108can be used to set up or invoke a prepared statement. 
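The Parse/Bind/Execute flow described above can be modeled in miniature as follows. This is a toy stand-in for the protocol's behavior, not the actual PostgreSQL wire format, and all names are illustrative:

```python
class ToyDatabase:
    """Toy model of a database handling extended-query-style messages.
    Prepared statements are keyed by (database connection, statement name),
    mirroring the per-connection association described above."""

    def __init__(self):
        self.prepared = {}  # (db_conn, statement name) -> query text
        self.bound = {}     # db_conn -> query text with values filled in

    def parse(self, db_conn, name, query_text):
        # Set up the prepared statement in association with this connection
        self.prepared[(db_conn, name)] = query_text

    def bind(self, db_conn, name, args):
        if (db_conn, name) not in self.prepared:
            raise KeyError(f"prepared statement {name!r} unknown on {db_conn}")
        query = self.prepared[(db_conn, name)]
        for value in args:  # fill in each "?" placeholder with the next value
            query = query.replace("?", str(value), 1)
        self.bound[db_conn] = query

    def execute(self, db_conn):
        return self.bound.pop(db_conn)


db = ToyDatabase()
db.parse("conn-1", "s1", "SELECT * FROM my_table WHERE some_var = ?")
db.bind("conn-1", "s1", [75])
first_query = db.execute("conn-1")
db.bind("conn-1", "s1", [300])
second_query = db.execute("conn-1")
```

Note that a Bind referencing a statement that was never parsed on that same connection raises an error here, which is the per-connection conflict the surrounding passages describe.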
The block of messages may conclude with a Sync message, and the database106may be configured to wait until a Sync message is received to execute one or more messages108that precede the Sync message. For example, a prepared statement may initially be set up for later execution via a block of PS setup messages, such as a Parse message followed by a Sync message. As another example, a prepared statement can be set up and executed via a block of PS setup messages that includes a Parse message, a Bind message, an Execute message, and a Sync message. As yet another example, to use an existing prepared statement that was set up earlier, a client may send a block of PS invocation messages that reference the existing prepared statement. For example, a block of PS invocation messages can include a Bind message, an Execute message, and a Sync message. When the database proxy102is configured to multiplex database connections114, and a particular client connection is not pinned to a particular database connection, the database proxy102can select an available database connection to route messages108received from a client to the database106. The database proxy102may, at different times, use different database connections114to send messages108associated with the same prepared statement and/or the same client connection to the database106. However, as discussed above, the database106can associate particular prepared statements with particular database connections114. Accordingly, the database proxy102can use PS setup data118to avoid routing messages108from clients104, which reference previously-set-up prepared statements, over database connections114that the database106does not associate with those prepared statements. 
For example, during multiplexing of database connections114, the database proxy102can use the PS setup data118to select database connections114that the database106already associates with prepared statements that are referenced in messages108received from clients104, and/or to cause the database106to set up the prepared statements again in association with database connections114selected by the database proxy102. When a client sends PS setup messages to set up a prepared statement, such as a block of messages108that includes a Parse message, the database proxy102can store corresponding PS setup data118in state data116associated with the client connection by which the messages108were received from the client. In some examples, the PS setup data118can be a copy of one or more of the PS setup messages received from the client, such as a Parse message. In other examples, the database proxy102can derive one or more types of PS setup data118from the PS setup messages received from the client, such as a name for the prepared statement, a textual string of a query or command associated with the prepared statement, data types of arguments associated with variable placeholders in the textual string, and/or any other data about the setup of the prepared statement that is included in or derived from the PS setup messages sent by the client. The database proxy102can also select a database connection to temporarily associate with the client connection by which the PS setup messages were received from the client, and can use the selected database connection to forward the PS setup messages to the database106. In some examples, the database proxy102may store the corresponding PS setup data118in the state data116prior to forwarding the PS setup messages to the database106. 
In these examples, if the database106returns one or more error responses110indicating that the database106could not successfully process the PS setup messages or set up the prepared statement, the database proxy102can roll back the state data116to a prior state that existed before the PS setup messages were forwarded to the database106. For instance, the database proxy102can roll back the state data116by clearing the PS setup data118associated with the prepared statement from the state data116. In other examples, the database proxy102may store the PS setup data118in the state data116after forwarding the PS setup messages to the database106via the selected database connection, and after the database proxy102receives corresponding responses110from the database106confirming that the database106successfully set up the prepared statement in response to the PS setup messages. As described above, the database proxy102can store PS setup data118in state data116associated with a client connection through which PS setup messages setting up a prepared statement were received from a client. However, in some examples the database proxy102can also, or alternatively, store the same or similar PS setup data118in state data116associated with a selected database connection used to forward such PS setup messages to the database106, such that the database proxy102can use the PS setup data118in the state data116to track which prepared statements the database106has associated with which database connections114. In some examples, the database proxy102can select database connections114to associate with client connections112based at least in part on the PS setup data118, so that PS invocation messages referencing previously-set-up prepared statements are routed via database connections114that the database106already associates with those prepared statements or compatible prepared statements. 
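The store-then-roll-back behavior described above might be sketched as follows; the function signature, message fields, and state layout are assumptions made for the sketch:

```python
def handle_ps_setup(state_data, client_conn, parse_message, forward_to_database):
    """Save PS setup data in the state data before forwarding the Parse
    message; if the database reports an error, roll the state data back to
    the prior state that existed before forwarding. `forward_to_database`
    is a stand-in that returns True on success and False on error."""
    snapshot = dict(state_data.get(client_conn, {}))  # prior state, for rollback
    state_data.setdefault(client_conn, {})[parse_message["name"]] = parse_message
    if not forward_to_database(parse_message):
        state_data[client_conn] = snapshot  # clear the failed setup data
        return False
    return True
```

The alternative ordering the passage also mentions, storing the setup data only after the database confirms success, would simply move the store after the forwarding call and drop the snapshot.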
Accordingly, by using the PS setup data118to select database connections114, the PS invocation messages can be processed successfully by the database106without the clients104and/or the database106setting up the prepared statements again. As an example, if a block of PS invocation messages received by the database proxy102from client104E via client connection112E begins with a Bind message that references a prepared statement, the database proxy102may use PS setup data118associated with client connection112E to identify one or more attributes of the prepared statement. For instance, the database proxy102can use the PS setup data118to identify a name of the prepared statement, a textual query string associated with the prepared statement, and/or variable placeholders or corresponding data types associated with the prepared statement. The database proxy102can also use PS setup data118associated with the database connections114to determine that the database106already associates database connection114A with the prepared statement named in the Bind message, or a compatible prepared statement that may have a different name but that has the same textual query string and variable placeholders. In this example, the database106may associate database connection114A with the prepared statement due to one or more previous PS setup messages, such as a previous Parse message, sent by client104E or a different client to set up the prepared statement. Accordingly, in this example, the database proxy102can use the PS setup data118to select database connection114A for a current database transaction associated with client connection112E. The database proxy102can send the block of PS invocation messages, which references the prepared statement, to the database106via database connection114A. 
Because the database106already associates database connection114A with the prepared statement named in the block of messages108, or a compatible prepared statement, the database106can process the block of PS invocation messages from client104E even if the database proxy102has previously associated client connection112E with other database connections114during previous database transactions. In other examples or situations, the database proxy102can use PS setup data118to cause the database106to set up a prepared statement in association with a database connection that the database proxy102has selected to be at least temporarily associated with a client connection. For example, as discussed in more detail below with respect toFIG.2, if clients104use client connections112to send PS invocation messages that reference previously-set-up prepared statements, the database proxy102can use corresponding PS setup data118to generate “injected” PS setup messages that correspond with the previously-set-up prepared statements. The injected PS setup messages can, for example, include Parse messages that cause the database106to set up the prepared statements referenced by the PS invocation messages. Such Parse messages can be copies of Parse messages stored in the PS setup data118, or can be new Parse messages generated by the database proxy102based on the PS setup data118. The database proxy102can send the injected PS setup messages over selected database connections114that the database proxy102at least temporarily associates with the client connections112. The injected PS setup messages can cause the database106to set up the prepared statements in association with the selected database connections114currently being used by the database proxy102in association with the client connections112. Following the injected setup messages, the database proxy102can also forward the PS invocation messages that reference the prepared statements via the selected database connections114. 
Accordingly, by using the PS setup data118to send injected PS setup messages that cause the database106to set up the prepared statements in association with the database connections114selected by the database proxy102during multiplexing, the database106can process PS invocation messages sent via those database connections114without the clients104setting up the prepared statements again. As a first example, the database proxy102can track PS setup data118in association with database connections114as discussed above. However, if the database proxy102receives PS invocation messages from client104C via client connection112C that reference a prepared statement previously set up by client104C, the database proxy102may determine that zero database connections114are available that the database106already associates with that prepared statement. The database proxy102can instead select another available database connection, such as database connection114B, to associate with client connection112C. The database proxy102can use the PS setup data118associated with client connection112C to send injected PS setup messages that cause the database106to set up the prepared statement in association with database connection114B. Accordingly, although the database106did not previously associate database connection114B with the prepared statement, the injected PS setup messages can cause the database106to set up the prepared statement in association with database connection114B so that the database106can process the PS invocation messages sent by client104C that the database proxy102forwards via selected database connection114B. As a second example, the database proxy102may not be configured to track PS setup data118in association with database connections114, but may track PS setup data118in association with client connections112. 
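The injection technique described above, in which stored PS setup data is replayed on whichever database connection is currently selected, can be sketched as follows; the class names, method names, and stub database are hypothetical stand-ins, not the source's implementation:

```python
class StubDatabase:
    """Minimal stand-in that, like the database described above, associates
    prepared statements with the database connection that set them up."""

    def __init__(self):
        self.prepared = {}  # (db_conn, statement name) -> query text

    def parse(self, db_conn, name, query_text):
        self.prepared[(db_conn, name)] = query_text

    def bind_and_execute(self, db_conn, name, args):
        query = self.prepared[(db_conn, name)]  # KeyError if not set up here
        for value in args:
            query = query.replace("?", str(value), 1)
        return query


class MultiplexingProxySketch:
    def __init__(self, database):
        self.database = database
        self.ps_setup_data = {}  # client conn -> {statement name: query text}

    def handle_parse(self, client_conn, db_conn, name, query_text):
        # Save setup data so the statement can be re-created later
        self.ps_setup_data.setdefault(client_conn, {})[name] = query_text
        self.database.parse(db_conn, name, query_text)

    def handle_invocation(self, client_conn, db_conn, name, args):
        if (db_conn, name) not in self.database.prepared:
            # Injected PS setup message, transparent to the client: replay
            # the stored setup data on the currently selected connection.
            self.database.parse(db_conn, name,
                                self.ps_setup_data[client_conn][name])
        return self.database.bind_and_execute(db_conn, name, args)
```

Here a client that set up its statement while routed over one database connection can still invoke it after the proxy multiplexes it onto another, because the proxy replays the stored setup data first.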
In this example, if the database proxy102receives PS invocation messages from client104B via client connection112B that reference a prepared statement previously set up by client104B, the database proxy102may select an available database connection, such as database connection114C, to associate with client connection112B. The database proxy102can use the PS setup data118associated with client connection112B to send injected PS setup messages that cause the database106to set up the prepared statement in association with database connection114C. Accordingly, although the database106may not have previously associated database connection114C with the prepared statement, the injected PS setup messages can cause the database106to set up the prepared statement in association with database connection114C so that the database106can process the PS invocation messages sent by client104B that the database proxy102forwards via selected database connection114C. In some examples, injected PS setup messages sent by the database proxy102to cause the database106to set up a prepared statement in association with a particular database connection can include, or be sent after, “flush” instructions or other instructions that cause the database106to refresh the database connection and/or clear any previously-existing prepared statements associated with the database connection. By refreshing the database connection and/or clearing any previously-existing prepared statements, potential conflicts between the prepared statement set up via the injected PS setup messages and any such previously-existing prepared statements can be avoided. In other examples, flush instructions may be sent separately and/or at other times by the database proxy102, for instance when the database proxy102initially determines to use a particular database connection in association with a client connection.
The database proxy102can use PS setup data118to send injected PS setup messages, and cause the database106to set up corresponding prepared statements in association with currently-selected database connections114, without notifying the clients104. Setting up of prepared statements in association with different database connections114at different times can thus be transparent to the clients104. Accordingly, from the perspective of the clients104, the clients104can send initial PS setup messages to set up prepared statements, and can later send corresponding PS invocation messages that reference those prepared statements regardless of whether or not the database proxy102forwards those messages to the database via the same or different database connections114. As shown inFIG.1, the state data116can include or be associated with a pending message queue120. The pending message queue120can be associated with a pairing of a client connection and a database connection used in association with the setup and/or use of a prepared statement during a database transaction as described herein. The pending message queue120can store information associated with messages108that set up and/or reference prepared statements. For example, as described further below with respect toFIG.3, the database proxy102can add messages108to the pending message queue120when the messages108are sent or forwarded to the database106. The database proxy102can also remove messages108from the message queue120when corresponding responses110, such as confirmations or errors, are returned by the database106. Accordingly, the database proxy102can use the pending message queue120to track messages108that have been sent to the database106by the database proxy102in association with a database transaction, and for which the database proxy102has not yet received corresponding responses110from the database106. 
The messages108tracked in the pending message queue120can include messages108that have been received from a client via a client connection, and that have been forwarded by the database proxy102to the database106via a database connection that the database proxy102temporarily associates with the client connection. For instance, messages108added to the pending message queue120can be PS setup messages or PS invocation messages received from the client via the client connection, and that the database proxy102has forwarded to the database106via the database connection. The database proxy102can remove such messages108from the pending message queue120when the database106returns corresponding responses110to the messages108, and the database proxy102can forward the responses110to the client. The messages108tracked in the pending message queue120can also include injected messages108that were not received from the client via the client connection, but that were sent by the database proxy102to the database106. For instance, messages108added to the pending message queue120can be injected PS setup messages that have been generated by the database proxy102based on PS setup data118, and that have been sent by the database proxy102to the database106via the database connection. The database proxy102can remove such injected messages108from the pending message queue120when the database106returns corresponding responses110, and the database proxy102may be configured to refrain from forwarding, to the client via the client connection, such responses110to injected messages108. As an example, if the database proxy102sends a Parse message to the database106via a database connection, for example based on forwarded PS setup messages or as part of injected PS setup messages, the database proxy102can add the Parse message to the pending message queue120.
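A minimal sketch of the pending message queue described above: each entry records the message type and whether the proxy injected it, in which case the database's response is consumed rather than forwarded to the client. The class and field names are assumptions for illustration.

```python
from collections import deque

class PendingQueue:
    def __init__(self):
        self._q = deque()

    def sent(self, msg_type: str, injected: bool = False):
        """Record a message sent to the database and awaiting a response."""
        self._q.append({"type": msg_type, "injected": injected})

    def on_response(self) -> bool:
        """Pop the oldest pending entry; return True if the database's response
        should be forwarded to the client (i.e. the message was not injected)."""
        entry = self._q.popleft()
        return not entry["injected"]

queue = PendingQueue()
queue.sent("Parse", injected=True)   # injected setup: response is swallowed
queue.sent("Bind")                   # forwarded from the client
forward_parse = queue.on_response()  # False: ParseComplete not sent to client
forward_bind = queue.on_response()   # True: BindComplete forwarded
```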
At a later point in time, in some examples after one or more subsequent messages108have been sent to the database106, the database proxy102can receive a response to the Parse message from the database106. If the response is a confirmation or success message, such as a “ParseComplete” message indicating that the database106successfully processed the Parse message, the database proxy102can remove the Parse message from the pending message queue120to signify that the database proxy102is no longer waiting for a response to the Parse message. In some examples, if the database106returns an error indicating that a particular message could not be processed, the database proxy102can update the pending message queue120accordingly. For example, the pending message queue120may indicate that the database proxy102has sent a block of messages108including a Bind message and an Execute message to the database106. However, the database106may return an error response, such as an “ErrorResponse” message, indicating that the Bind message could not be processed. The database106may be configured to not process any subsequent messages, in a block of messages, that follow a message the database106is unable to process. The error response indicating that the Bind message could not be processed can therefore also indicate that the database106has not processed, and will not process, the subsequent Execute message. Accordingly, based on the error response to the Bind message, the database proxy102can remove both the Bind message and the subsequent Execute message from the pending message queue120. In other examples, if the database106returns an error indicating that a particular message could not be processed, the database proxy102can use the pending message queue120to roll back to an earlier state of the state data116. 
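The removal logic above can be traced in a short sketch, assuming (as described) that an error response refers to the oldest pending message, and that the database skips every message queued after a failure in the same block. The helper name and deque representation are illustrative.

```python
from collections import deque

def handle_error_response(pending: deque) -> list:
    """The message at the front of the queue is the one the error refers to;
    it and every message queued after it are dropped, since the database will
    not process the rest of the block after a failure."""
    dropped = list(pending)
    pending.clear()
    return dropped

pending = deque(["Bind", "Execute"])      # proxy sent Bind then Execute
dropped = handle_error_response(pending)  # ErrorResponse to the Bind arrives
```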
For example, the pending message queue120may indicate that the database proxy102has sent a Parse message to the database106, such as a Parse message in PS setup messages received from the client or in injected PS setup messages sent by the database proxy102based on PS setup data118. However, the database106may return an error response, such as an “ErrorResponse” message, indicating that the Parse message could not be processed. Based on the error response, the database proxy102can roll back the state data116and revert to a prior state that existed before the database proxy102sent or forwarded the Parse message to the database106. For instance, if the Parse message was a new PS setup message received from the client that defined a new prepared statement, the database proxy102may have stored new corresponding PS setup data118associated with the new prepared statement in the state data116. However, based on the error indicating that the database106was unable to process the Parse message or set up the new prepared statement, the database proxy102can delete or otherwise clear the added PS setup data118associated with the new prepared statement from the state data116. The database proxy102can also remove the Parse message, and any subsequent messages that the database106may not process due to the error associated with the Parse message, from the pending message queue120. In some examples, the database proxy102may be configured with a connection pin threshold122associated with a size of the state data116associated with prepared statements. As a non-limiting example, the connection pin threshold122can be set to 10 MB. If the size of the state data116associated with prepared statements exceeds the connection pin threshold122, the database proxy102can pin client connections112to database connections114, in order to reduce the size of the state data116or inhibit additional growth of the size of the state data116. 
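The roll-back of provisionally stored PS setup data described above might be sketched as follows; the dict-based state layout and helper names are assumptions, not the described implementation.

```python
def store_setup(state: dict, client_conn: str, stmt: str, setup: dict):
    """Provisionally record PS setup data for a statement on a client connection."""
    state.setdefault(client_conn, {})[stmt] = setup

def rollback_setup(state: dict, client_conn: str, stmt: str):
    """Revert to the state that existed before the Parse message was sent."""
    state.get(client_conn, {}).pop(stmt, None)

state = {}
store_setup(state, "112B", "stmt1", {"query": "SELECT 1"})
# ... the database returns an ErrorResponse for the Parse message ...
rollback_setup(state, "112B", "stmt1")
```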
For example, a client may send PS setup messages or PS invocation messages to the database proxy102via a client connection that is not already pinned to a database connection. However, if the current size of the state data116associated with prepared statements exceeds the connection pin threshold122, the database proxy102can pin the client connection to an available database connection, such that the database connection is taken out of a pool of database connections114that can be multiplexed. When the client connection is pinned to the database connection, the database proxy102can serve as a passthrough for corresponding messages108and responses110exchanged between a client and the database via the client connection and the database connection, and the database proxy102can avoid storing and maintaining state data116associated with the pairing of the client connection and the database connection that might otherwise be used during multiplexing as described herein. In some examples, when the database proxy102pins a client connection to a database connection, the database proxy102may use previously-stored PS setup data118to send injected PS setup messages that prepare the pinned database connection to be used with one or more prepared statements that the client previously set up. The database proxy102may then remove the previously-stored PS setup data118from the state data116. For example, a client may send PS invocation messages to use a previously-set-up prepared statement as part of a new database transaction while the size of the state data116associated with prepared statements exceeds the connection pin threshold122. In this example, the database proxy102may use PS setup data118already stored in the state data116to send injected PS setup messages that cause the database106to set up the prepared statement in association with a selected database connection that will be pinned to the client connection associated with the client. 
Accordingly, because the injected PS setup messages can cause the database106to set up the prepared statement in association with the pinned database connection, the database proxy102can pin the client connection to the selected database connection and can remove the PS setup data118associated with the client connection from the state data116. In some examples, if the size of the state data116associated with prepared statements reaches a lower threshold, such as 60% or any other percentage of the connection pin threshold122, the database proxy102may send warnings to clients104indicating that due to memory limits the database proxy102may begin pinning client connections112that use prepared statements to database connections114instead of multiplexing database connections114. Accordingly, if such clients104are configured with modes or alternate features that can avoid use of prepared statements, such clients104may respond to such warnings by activating such modes or alternate features in order to avoid sending the database proxy102PS setup messages or PS invocation messages that may result in pinning of the client connections112associated with the clients104to database connections114. Overall, although the database106can associate prepared statements with particular corresponding database connections114, and the database proxy102may cause client connections112to be associated with different database connections114at different times, the database proxy102can use PS setup data118to ensure that prepared statements can be used and re-used by clients104regardless of which database connections114the database proxy102selects to use in association with the client connections112. 
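The threshold behavior above reduces to a small policy check: pin once the prepared-statement state size exceeds the connection pin threshold122, and warn clients once it crosses a lower fraction. The 10 MB and 60% figures are the non-limiting example values from the text; the function and constant names are illustrative assumptions.

```python
PIN_THRESHOLD_BYTES = 10 * 1024 * 1024  # example value from the text: 10 MB
WARN_FRACTION = 0.6                      # example lower threshold: 60%

def ps_policy(state_size_bytes: int) -> str:
    """Decide how to handle new prepared-statement traffic given the current
    size of the prepared-statement state data."""
    if state_size_bytes > PIN_THRESHOLD_BYTES:
        return "pin"        # pin client connections instead of multiplexing
    if state_size_bytes >= WARN_FRACTION * PIN_THRESHOLD_BYTES:
        return "warn"       # warn clients that pinning may begin
    return "multiplex"      # keep multiplexing and tracking PS setup data
```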
The operations of the database proxy102associated with the storage and/or use of the PS setup data118can be transparent to the clients104, such that the clients104can set up, use, and re-use prepared statements without having information indicating which database connections114the database proxy102selects to use in association with the client connections112at different times. For instance, in some examples the database proxy102can use PS setup data118to select a database connection that is already associated with a prepared statement that a client is attempting to use via PS invocation messages. As another example, when the database proxy102selects a database connection to use to route PS invocation messages from a client to the database106, the database proxy102can use PS setup data118to send injected PS setup messages that cause the database106to set up the prepared statement referenced by the PS invocation messages in association with the selected database connection, such that the database106can successfully process the PS invocation messages as described further below with respect toFIG.2. FIG.2shows a sequence diagram200associated with an example in which the database proxy102can, based on stored PS setup data118, use different database connections114during different database transactions that are associated with the same client202and the same prepared statement. As shown inFIG.2, the client202can connect to the database proxy102via a client connection204. The client202can send one or more PS setup messages206associated with a new prepared statement to the database proxy102via the client connection204. For example, the PS setup messages206can include a Parse message with a textual string for a database query associated with the prepared statement. The textual query may, in some examples, include one or more variable placeholders. 
The Parse message can also provide a name for the prepared statement, and/or other attributes associated with setup of the prepared statement. In some examples, the PS setup messages206may also include other messages108associated with an initial setup and/or execution of the prepared statement, such as a Bind message, a Describe message, an Execute message, and/or a Sync message. The database proxy102may be configured to multiplex database connections114, and as such may not have pinned client connection204to any individual database connection. Accordingly, the database proxy102can select an available database connection to temporarily associate with client connection204during a database transaction initiated via the PS setup messages206. For example, as shown inFIG.2, the database proxy102can determine to temporarily associate a first database connection208with client connection204. Because the database proxy102is multiplexing database connections114, and may later at least temporarily associate client connection204with another database connection that is different from the first database connection208, the database proxy102can store PS setup data118based on the PS setup messages206. The database proxy102can store the PS setup data118in state data116associated with the client connection204. As described further below, the database proxy102can later use the PS setup data118to set up the corresponding prepared statement with the database106again in association with the same or a different database connection. In some examples, the PS setup data118can include copies of one or more PS setup messages206received from client202via the client connection204. 
In other examples, the database proxy102can derive one or more types of PS setup data118from the one or more PS setup messages206received from client202via client connection204, such as a name for the prepared statement, a textual string of a query or command associated with the prepared statement, data types of arguments associated with variable placeholders in the textual string, and/or any other data about the setup of the prepared statement that is included in or derived from the one or more PS setup messages206. In addition to storing the PS setup data118based on the PS setup messages206, the database proxy102can use the selected first database connection208to forward the PS setup messages206to the database106. The database proxy102can store the PS setup data118in the state data116associated with client connection204before or after forwarding the PS setup messages206to the database106via the first database connection208. In response to the PS setup messages206, the database106can also return one or more corresponding first PS setup responses210to the database proxy102via the first database connection208. The first PS setup responses210can, for example, include a “ParseComplete” message indicating that the database106successfully processed the Parse message in the PS setup messages206and set up the prepared statement. The first PS setup responses210can also, for example, include a “BindComplete” message indicating that the database106successfully processed a Bind message in the PS setup messages206, data retrieved from the database106via an execution of the prepared statement in response to a Bind message and an Execute message in the PS setup messages206, a “CommandComplete” message indicating that execution of a command associated with the prepared statement is complete, and/or a “ReadyForQuery” message indicating that the database106has completed processing of the PS setup messages206.
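The derivation described above can be illustrated by decoding the statement name, query text, and parameter type OIDs out of a Parse message, again assuming the general extended-query wire shape (a 'P' tag, a 4-byte length, NUL-terminated name and query, then a parameter-OID count). The simplified field handling here is an assumption for illustration.

```python
import struct

def derive_setup_data(parse_msg: bytes) -> dict:
    """Extract the statement name, query text, and parameter type OIDs from a
    Parse message."""
    body = parse_msg[5:]                        # skip 'P' tag and 4-byte length
    name, _, rest = body.partition(b"\x00")
    query, _, rest = rest.partition(b"\x00")
    (n_params,) = struct.unpack("!H", rest[:2])
    oids = list(struct.unpack(f"!{n_params}I", rest[2:2 + 4 * n_params]))
    return {"name": name.decode(), "query": query.decode(), "param_oids": oids}

body = b"stmt1\x00SELECT * FROM t WHERE id = $1\x00" + struct.pack("!HI", 1, 23)
parse_msg = b"P" + struct.pack("!I", len(body) + 4) + body
setup = derive_setup_data(parse_msg)
```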
The database proxy102can forward the first PS setup responses210to the client202via client connection204. Receipt of the first PS setup responses210, for example including a “ReadyForQuery” message, can indicate a conclusion of the database transaction initiated by the client202via the PS setup messages206. Because the database proxy102only temporarily associated the first database connection208with client connection204during the database transaction, the database proxy102can disassociate the first database connection208from the client connection204based on receipt of the first PS setup responses210from the database106. In some examples, the database proxy102may wait to store the PS setup data118in the state data116until first PS setup responses210confirming that the database106has set up the corresponding prepared statement have been received from the database106. However, in other examples, the database proxy102can add the PS setup data118to the state data116in association with forwarding the PS setup messages206as described above. If the first PS setup responses210include an error message indicating that the database106did not successfully set up the prepared statement, the database proxy102can roll back the state data116to an earlier state before the PS setup messages206were forwarded to the database106, for instance by deleting the PS setup data118from the state data116. If the first PS setup responses210indicated that the database106was able to set up the prepared statement in response to the PS setup messages206, the database proxy102can continue to store the PS setup data118in state data116associated with client connection204. The database proxy102may later receive one or more PS invocation messages212from client202via client connection204. 
Because the client202can have received the first PS setup responses210in response to the PS setup messages206, the information received by the client202can indicate, to the client202, that the database106has set up the prepared statement and the prepared statement can be used by the client202. Accordingly, the one or more PS invocation messages212can reference or invoke that prepared statement. As an example, the PS invocation messages212can include a Bind message that attempts to ready the prepared statement for execution, for instance by providing values for variable placeholders of the prepared statement. The PS invocation messages212can also include an Execute message and a Sync message that can cause the database106to execute the prepared statement based on the Bind message. As another example, the PS invocation messages212can include a Describe message that requests information about the prepared statement, followed by a Sync message. Because database proxy102is multiplexing database connections114, the database proxy102can select an available database connection to temporarily associate with the client connection204during the database transaction being initiated by the PS invocation messages212. For instance, if the first database connection208used earlier in association with client connection204is associated with a different client connection at the time the database proxy102receives the PS invocation messages212from client202via client connection204, the database proxy102may instead temporarily associate client connection204with a different database connection. Accordingly, as shown inFIG.2, the database proxy102may determine to temporarily associate client connection204with a second database connection214(instead of the first database connection208) during the database transaction being initiated by the PS invocation messages212.
Although database proxy102may temporarily associate client connection204with the second database connection214in response to receiving the PS invocation messages212, the prepared statement referenced by the PS invocation messages212can have been set up earlier by the database106in association with the first database connection208. Accordingly, because the database106may not yet associate the prepared statement referenced by the PS invocation messages212with the second database connection214, the database proxy102can send injected PS setup messages216based on the PS setup data118to cause the database106to set up the prepared statement referenced by the PS invocation messages212in association with the second database connection214. For example, when the database proxy102receives one or more PS invocation messages212from the client202via the client connection204, the database proxy102can identify stored PS setup data118that is associated with client connection204and that is associated with the prepared statement referenced by the PS invocation messages212. The database proxy102can use the stored PS setup data118to send the injected PS setup messages216to the database106via the second database connection214. The injected PS setup messages216can include at least some messages108that are similar to, or identical to, one or more messages108included in the PS setup messages206that caused the database106to set up the prepared statement for use with the first database connection208. For example, the injected PS setup messages216can include a Parse message that defines the name of the prepared statement, a textual string for a database query associated with the prepared statement, including one or more variable placeholders in some examples, and/or other attributes associated with setup of the prepared statement. 
In addition to sending the injected PS setup messages216based on the PS setup data118, the database proxy102can also forward the PS invocation messages212, received from client202via client connection204, to the database106via the second database connection214. In some examples, the database proxy102may send the injected PS setup messages216and the PS invocation messages212together, for instance in a block of messages that includes the injected PS setup messages216, followed by the PS invocation messages212, followed by a concluding Sync message. In some examples, the database proxy102can also send instructions to the database106to flush and/or refresh the second database connection214by removing any previous prepared statements that the database106associates with the second database connection214. The database proxy102can send such flush instructions to the database106upon selection of the second database connection214in response to receipt of the PS invocation messages212from the client202, and/or prior to the injected PS setup messages216sent to the database106. For example, the database proxy102may send flush instructions to the database106as an initial block of messages before a block of messages that includes injected PS setup messages216and the PS invocation messages212, or as part of a block of messages that includes the flush instructions, followed by the injected PS setup messages216, followed by the PS invocation messages212. When the database proxy102sends the injected PS setup messages216and forwards the PS invocation messages212to the database106via the second database connection214, the database proxy102can also receive one or more corresponding responses110from the database106via the second database connection214. For example, the database106can return second PS setup responses218associated with the injected PS setup messages216, and PS responses220associated with the forwarded PS invocation messages212.
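The ordering of an outgoing block described above, when routing a PS invocation over a newly selected database connection, might be sketched as follows: optional flush instructions, then the injected setup, then the client's forwarded messages, then a concluding Sync. The list-of-strings representation and function name are assumptions for illustration.

```python
def build_outgoing_block(ps_invocation: list, injected_setup: list,
                         flush: bool = True) -> list:
    """Assemble one block of messages in the order described above."""
    block = []
    if flush:
        block.append("Flush/Refresh")   # clear prior statements on the connection
    block.extend(injected_setup)        # e.g. ["Parse(stmt1)"]
    block.extend(ps_invocation)         # the client's forwarded messages
    block.append("Sync")                # concluding Sync message
    return block

block = build_outgoing_block(["Bind(stmt1)", "Execute"], ["Parse(stmt1)"])
# → ["Flush/Refresh", "Parse(stmt1)", "Bind(stmt1)", "Execute", "Sync"]
```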
The second PS setup responses218can, for example, include a “ParseComplete” message indicating that the database106successfully processed the Parse message in the injected PS setup messages216and set up the prepared statement. As another example, if the PS invocation messages212included a Bind message and an Execute message, the PS responses220can include a “BindComplete” message indicating that the database106successfully processed the Bind message, data retrieved from the database106via an execution of the prepared statement in response to the Bind message and the Execute message, and/or a “CommandComplete” message. The messages108returned by the database106can also include a “ReadyForQuery” message indicating that the database106has completed processing of the injected PS setup messages216and the PS invocation messages212. As shown inFIG.2, the database proxy102can forward the PS responses220, including a concluding “ReadyForQuery” message, to the client202via the client connection204. However, the database proxy102can refrain from forwarding the second PS setup responses218to the client202. As shown inFIG.2, the client202may have sent PS setup messages206via client connection204that caused the database106to initially set up a prepared statement in association with the first database connection208. However, although the database proxy102may later determine to use the second database connection214in association with client connection204, the injected PS setup messages216can nevertheless cause the database106to set up the prepared statement in association with the second database connection214, such that the database106can successfully process the later PS invocation messages212routed by the database proxy102via the second database connection214instead of the first database connection208.
The operations of the database proxy102associated with sending the injected PS setup messages216and receiving the corresponding second PS setup responses218can be transparent to the client202. For example, although the client202can receive the PS responses220provided by the database106in response to the PS invocation messages212sent by the client202, the database proxy102can avoid informing the client202about the injected PS setup messages216. The database proxy102can also avoid forwarding the second PS setup responses218, returned by the database106to the database proxy102in response to the injected PS setup messages216, to the client202. Accordingly, from the perspective of the client202, the client202can set up a prepared statement and re-use the prepared statement one or more times without concern as to whether the database proxy102may route corresponding messages108and/or responses110via different database connections114during each individual database transaction associated with the prepared statement. For example, the client202can use the initial PS setup messages206to set up the prepared statement with the database106. The client202can then use PS invocation messages212to later re-use the prepared statement, without sending a new Parse message or otherwise setting up the prepared statement again with the database106, even if the database proxy102determines to use a different database connection to route the PS invocation messages212to the database106. FIG.3shows an example300of changes to the pending message queue120over a period of time. As discussed above, the state data116can include a pending message queue120associated with messages108that set up and/or reference prepared statements. The pending message queue120can be associated with a pairing of a client connection and a database connection used in association with the setup and/or use of a prepared statement during a database transaction. 
The database proxy102can use the pending message queue120to track messages108that have been sent to the database106by the database proxy102in association with the database transaction, and for which the database proxy102has not yet received corresponding responses110from the database106. At state302, the pending message queue120can indicate that the database proxy102has sent a Parse message and a Bind message to the database106. After the database proxy102sends a subsequent Execute message and a Sync message to the database106, the database proxy102can add the Execute message to the end of the pending message queue120as shown inFIG.3in association with state304. In some examples after state304, the database proxy102may receive a successful response to the Parse message, such as “ParseComplete,” indicating that the database106successfully processed the Parse message and set up the prepared statement in association with the database connection. Accordingly, based on receiving the response to the Parse message, the database proxy102can update the pending message queue120to state306by removing the Parse message from the front of the pending message queue120. In some situations, after a successful response to the Parse message, the database proxy102may receive subsequent responses110to the Bind and/or Execute messages that cause the database proxy102to also remove the Bind and Execute messages from the pending message queue120. As a first example, the database proxy102may receive subsequent responses110indicating that the database106successfully processed the Bind and Execute messages, which can cause the database proxy102to remove the corresponding Bind and Execute messages from the front of the pending message queue120. As a second example, the database proxy102may receive an error response, such as an “ErrorResponse” message, indicating that the database106could not process the Bind message. 
The database proxy102can respond to such an error message by removing the Bind message and the Execute message from the pending message queue120. As discussed above, the database106may be configured to not process any subsequent messages, in a block of messages, that follow a message the database106is unable to process. Accordingly, the error response associated with the Bind message can also indicate that the database106has not processed, and will not process, the Execute message that was already sent by the database proxy102. Accordingly, based on the error response associated with the Bind message, the database proxy102can remove the Bind message as well as the subsequent Execute message from the pending message queue120. In other examples after state304, the database proxy102may receive an error response to the Parse message, such as “ErrorResponse,” indicating that the database106did not successfully process the Parse message and has not set up the prepared statement in association with the database connection. In these examples, the error response to the Parse message can indicate that the database106has not processed, and will not process, the Parse message, the Bind message, and the Execute message that were already sent by the database proxy102. Accordingly, the database proxy102can remove the Parse message, the Bind message, and the Execute message from the pending message queue120at state308. In addition to, or in association with, removing the Parse message, the Bind message, and the Execute message from the pending message queue120, the database proxy102can also roll back the state data116associated with the prepared statement to an earlier state prior to when the Parse message was sent to the database106and when the Parse message was added to the pending message queue120.
For instance, if the Parse message was a message from the client that defined a new prepared statement, and that was forwarded by the database proxy102to the database106, the database proxy102may have stored new corresponding PS setup data118associated with the new prepared statement in the state data116. However, based on the error indicating that the database106was unable to process the Parse message or set up the new prepared statement, the database proxy102can delete or otherwise clear the added PS setup data118associated with the new prepared statement from the state data116, and thus roll back the state data116along with removing the Parse message and subsequent Bind message and Execute message from the pending message queue120.

FIG.4is a flow diagram of an illustrative process400by which the database proxy102can perform multiplexing of database connections114in association with prepared statements. Process400is illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes.

At block402, the database proxy102can receive a message from a client via a client connection. The message can reference a prepared statement associated with a database transaction.
In some examples, the message received at block402can be a PS setup message that sets up the prepared statement. In other examples, the message received at block402can be a PS invocation message that references a previously-set-up prepared statement. At block404, the database proxy102can determine whether the client connection is pinned to a particular database connection. If the client connection is pinned to a particular database connection (Block404-Yes), the database proxy102can serve as a passthrough and the database proxy102can forward the message to the database106via the pinned database connection at block406. In some examples, if the client connection is pinned to a particular database connection, but the database proxy102has not yet used PS setup data118associated with a prepared statement to send injected PS setup messages that cause the database106to set up the prepared statement in association with the particular database connection, the database proxy102can send such injected PS setup messages ahead of the message forwarded at block406, such that the database106can set up the prepared statement and thus be able to successfully process the message forwarded at block406. However, if the client connection is not pinned to a particular database connection (Block404-No), at block408the database proxy102can select a database connection to use to send the message to the database106during the database transaction. If the client connection is not yet associated by the database proxy102with a database connection for the database transaction, for instance if the message is at the beginning of a block of messages108associated with the database transaction, at block408the database proxy102can select an available database connection to at least temporarily associate with the client connection during the database transaction. 
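The pinned-versus-selected routing at blocks 404 through 408 can be sketched with a simple dictionary-based model of connections; the names and pool representation are illustrative, not from this disclosure:

```python
def route_message(client_conn: dict, message: str, pool: list, sent: list) -> str:
    # Block 404: a pinned client connection bypasses multiplexing entirely.
    db_conn = client_conn.get("pinned")
    if db_conn is None:
        # Block 408: reuse the connection already chosen for this transaction,
        # or select an available connection from the pool for a new transaction.
        db_conn = client_conn.get("txn_conn") or pool.pop()
        client_conn["txn_conn"] = db_conn
    sent.append((db_conn, message))   # blocks 406/418: forward to the database
    return db_conn

sent = []
client = {}                           # not pinned, no transaction in progress
conn = route_message(client, "Parse", ["db_conn_1"], sent)
print(conn, sent)
```

In this sketch a second message in the same transaction would reuse `txn_conn` rather than drawing another connection from the pool.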
The database proxy102can also, in some examples, send instructions to the database106to flush and/or refresh the selected database connection by removing any previous prepared statements that the database106associates with the selected database connection. If the client connection is already associated by the database proxy102with a database connection for the database transaction, for instance if the message is not at the beginning of a block of messages108associated with the database transaction and the database proxy102has already selected a database connection based on an earlier message in the block of messages108, at block408the database proxy102can select that database connection. At block410, the database proxy102can determine whether the message is a PS setup message, such as a Parse message, that sets up a new prepared statement. In some examples, if the message is a PS setup message that sets up a new prepared statement (Block410-Yes), the database proxy102may use previously-stored PS setup data118to send injected PS setup messages that cause the database106to set up a corresponding prepared statement in association with the selected database connection. For instance, if the message is a Parse message that sets up a prepared statement in association with a particular name, but the client previously sent an earlier Parse message to set up a prepared statement with that same name, the injected PS setup messages can cause the database106to set up the earlier prepared statement with the name so that the new Parse message returns an error or other response as may be expected based on a second Parse message that references the same name. Additionally, if the message is a PS setup message that sets up a new prepared statement (Block410-Yes), the database proxy102can determine at block412whether the size of the state data116associated with prepared statements maintained by the database proxy102is currently exceeding the connection pin threshold122. 
If the size of the state data116associated with prepared statements is exceeding the connection pin threshold122(Block412-Yes), at block414the database proxy102can pin the client connection to the database connection selected at block408, and at block406the database proxy102can forward the message to the database106via the pinned database connection. By pinning the client connection to the database connection at block414, the database proxy102can avoid storing state data116associated with the client connection. However, if the size of the state data116associated with prepared statements is not currently exceeding the connection pin threshold122(Block412-No), the database proxy102can instead store PS setup data118in the state data116in association with the client connection at block416, based on the message. For example, if the message is a Parse message, the database proxy102can save the Parse message, and/or elements in or derived from the Parse message, in the PS setup data118. At block418, the database proxy102can also forward the message to the database106via the database connection selected at block408. At block420, the database proxy102can add the message to the pending message queue120in the state data116in association with the client connection. As discussed below with respect toFIG.5, the database proxy102can later remove the message from the pending message queue120based on a corresponding response received from the database106. Returning to block410, the database proxy102may determine that the message is not a PS setup message, such as a Parse message, that sets up a new prepared statement. If the message is not a PS setup message that sets up a new prepared statement (Block410-No), the database proxy102can determine whether the message references a prepared statement that has already been set up by the database106in association with the selected database connection. 
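The threshold decision at blocks 412 through 420 can be sketched as follows; the size metric, threshold value, and data layout are illustrative assumptions:

```python
CONNECTION_PIN_THRESHOLD = 64         # illustrative limit on state data size

def handle_ps_setup(client_conn: dict, db_conn: str, message: str,
                    state_data: dict, sent: list, pending: list) -> None:
    # Block 412: measure the state data held for prepared statements.
    size = sum(len(m) for m in state_data.get("ps_setup", []))
    if size > CONNECTION_PIN_THRESHOLD:
        client_conn["pinned"] = db_conn               # block 414: pin instead
        sent.append((db_conn, message))               # block 406: passthrough
    else:
        state_data.setdefault("ps_setup", []).append(message)  # block 416
        sent.append((db_conn, message))               # block 418: forward
        pending.append(message)                       # block 420: track response

sent, pending = [], []
state_data, client = {}, {}
handle_ps_setup(client, "db_conn_1", "Parse stmt_1 AS SELECT 1",
                state_data, sent, pending)
print(client, state_data["ps_setup"], pending)
```

Under the threshold, the setup data is stored and the message is tracked; over it, the connection is pinned and no further state accumulates for that client connection.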
As an example, if the message is a Bind message that references a prepared statement set up by a preceding Parse message in a block of messages108associated with the same database transaction, the database proxy102can determine that the preceding Parse message caused, or will cause, the database106to set up the prepared statement referenced by the current Bind message in association with the selected database connection. As another example, the database proxy102may have previously used injected PS setup messages during the current database transaction to cause the database106to set up the prepared statement referenced by the current message in association with the selected database connection. Accordingly, if the message references a prepared statement that has already been set up by the database106in association with the selected database connection (Block422-Yes), the database proxy102can forward the message to the database106via the database connection at block418. The database proxy102can also add the message to the pending message queue120in the state data116in association with the client connection at block420. However, if the message references a prepared statement that has not already been set up by the database106in association with the selected database connection (Block422-No), the database proxy102can use PS setup data118associated with the prepared statement and the client connection to cause the database106to set up the prepared statement in association with the selected database connection. For example, if the message references a prepared statement that was previously set up by the database106in association with a different database connection, the database106may not currently associate the database connection selected at block408with that prepared statement. Accordingly, at block424, the database proxy102can use the PS setup data118to send injected PS setup messages to the database106via the database connection selected at block408. 
The injected PS setup messages can cause the database106to set up the prepared statement in association with the selected database connection. The database proxy102can also add the injected PS setup messages to the pending message queue120in the state data116in association with the client connection at block426. After sending the injected PS setup messages at block424and adding the injected PS setup messages to the pending message queue120at block426, at block418the database proxy102can forward the message received via the client connection at block402to the database106via the database connection. The database proxy102can also add the message to the pending message queue120in the state data116in association with the client connection at block420. Accordingly, although the message referenced a prepared statement that was not already set up by the database106in association with the database connection, the injected PS setup messages sent at block424can cause the database106to set up the prepared statement in association with the database connection, such that the database106can process the message sent later at block418.

The database proxy102can repeat the operations shown inFIG.4, and/or take different paths through the flow diagram, for subsequent messages108received from the client via the client connection. For example, for a Parse message that sets up a new prepared statement, the database proxy102may store corresponding PS setup data118at block416and send the Parse message to the database106at block418. However, for a Bind message that follows the Parse message in the same block of messages108, the database proxy102may determine at block422that the Bind message references the prepared statement that was set up by the Parse message, and can respond by forwarding the Bind message on to the database106via the database connection at block418.
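The inject-then-forward behavior at blocks 422 through 426 can be sketched as follows, using a set of statement names to stand in for what the database associates with the selected connection; all names here are assumptions:

```python
def forward_invocation(db_prepared: set, message: dict, ps_setup: dict,
                       sent: list, pending: list) -> None:
    name = message["statement"]
    if name not in db_prepared:               # block 422-No: not set up yet
        for setup in ps_setup[name]:          # block 424: inject setup messages
            sent.append(setup)
            pending.append(setup)             # block 426: track injected setup
        db_prepared.add(name)
    sent.append(message)                      # block 418: forward the message
    pending.append(message)                   # block 420: track the response

sent, pending = [], []
db_prepared = set()                           # freshly selected connection
ps_setup = {"stmt_1": [{"type": "Parse", "statement": "stmt_1"}]}
forward_invocation(db_prepared, {"type": "Bind", "statement": "stmt_1"},
                   ps_setup, sent, pending)
print([m["type"] for m in sent])              # injected Parse, then the Bind
```

A second invocation of the same statement on the same connection would skip the injection and forward the message directly.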
If a subsequent message is instead a Bind message at the beginning of a separate block of messages, one that attempts to use a prepared statement set up by the client via an earlier PS setup message, the database proxy102may instead determine at block422that the Bind message references a prepared statement that the database106does not associate with the database connection that is now temporarily associated with the client connection. In that case, the database proxy102can send injected PS setup messages at block424, before the Bind message forwarded at block418, to cause the database106to set up the prepared statement in association with the database connection so that the database106can successfully process the Bind message. As discussed above with respect to blocks420and426, the database proxy102can add messages108, such as messages received from the client and/or injected PS setup messages, to the pending message queue120when the messages108are sent or forwarded to the database106. The database proxy102can receive responses110, such as success confirmations or errors, to such messages108, and can update the pending message queue120accordingly as discussed below with respect toFIG.5. In some examples, the database proxy102can execute the operations ofFIG.4andFIG.5concurrently or in parallel as part of a two-phase tracking system to add messages108to the pending message queue120when the messages108are sent and to remove messages108from the pending message queue120when corresponding responses110are received.

FIG.5is a flow diagram of an illustrative process500by which the database proxy102can process responses110received from the database106in association with prepared statements. Process500is illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof.
In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. At block502, the database proxy102can receive a response from the database106via a database connection. The response can be associated with a message, previously sent to the database106by the database proxy102via the database connection, associated with setup or use of a prepared statement during a database transaction associated with a pairing of the database connection and a client connection. As described above, in some examples the message may have been a PS setup message or a PS invocation message received by the database proxy102from a client via the client connection. In other examples, the message may have been an injected PS setup message sent by the database proxy102to cause the database106to set up the prepared statement in association with the database connection. At block504, the database proxy102can determine whether the response received at block502is an error returned by the database106. If the response is not an error (Block504-No), at block506the database proxy102can remove the corresponding message from the pending message queue120in the state data116associated with the client connection. 
For example, if the message was a Parse message in PS setup messages or injected PS setup messages, and the response received at block502is a “ParseComplete” message indicating that the database106successfully processed the Parse message and set up the prepared statement in association with the database connection, the database proxy102can remove the Parse message from the pending message queue120. Similarly, if the message was a Bind message in PS invocation messages, and the response received at block502is a “BindComplete” message indicating that the database106successfully processed the Bind message, the database proxy102can remove the Bind message from the pending message queue120. At block508, the database proxy102can determine whether the response received at block502is a response to an injected PS setup message. If the response is not to an injected PS setup message (Block508-No), the database proxy102can forward the response to the client via the client connection at block510. For example, if the response is to a Parse message that was sent by the client, or to a Bind message, Execute message, or any other message sent by the client, the database proxy102can forward the response to the client via the client connection at block510. However, if the response is to an injected PS setup message (Block508-Yes), at block512the database proxy102can refrain from forwarding the response and thus not send the response to the client. For example, if the response is to a Parse message that was sent by the database proxy102in injected PS setup messages, instead of to a Parse message that was sent by the client, the database proxy102can determine not to forward the response to the client. Returning to block504, the database proxy102may instead determine that the response received at block502is an error returned by the database106. 
If the response is an error (Block504-Yes), at block514the database proxy102can determine whether the error is in response to an injected PS setup message sent by the database proxy102based on PS setup data118. In some examples, the database proxy102may not maintain PS setup data118in the state data116unless a Parse message and/or other PS setup messages from the client have been successfully processed by the database106, such that the database106can be expected to successfully process similar injected PS setup messages sent by the database proxy102based on such PS setup data118. Accordingly, an error to an injected PS setup message can be unexpected, and if the database proxy102determines that the error is in response to an injected PS setup message (Block514-Yes), the database proxy102can drop the database connection at block516and the process can end. In some examples, the database proxy102may also roll back the state data116in response to the error to the injected PS setup message, for instance by deleting the corresponding PS setup data118from the state data116. However, if the database proxy102determines that the error is not in response to an injected PS setup message (Block514-No), at block518the database proxy102can determine whether the error was in response to a PS setup message originally sent by the client. If the error is in response to a PS setup message originally sent by the client (Block518-Yes), the database proxy102can roll back the state data116in response to the error to the PS setup message at block520. For example, while the database proxy102may have added PS setup data118to the state data116when the database proxy102received the PS setup message from the client and/or when the database proxy102forwarded the PS setup message to the database106, the database proxy102can delete the PS setup data118from the state data116in response to the error at block520. 
After or in addition to rolling back the state data116at block520, or if the error was not in response to a PS setup message originally sent by the client (Block518-No), at block522the database proxy102can remove one or more messages from the pending message queue120based on the error. If the pending message queue120includes additional messages108, and the error indicates that responses to those messages108will not be coming from the database106, the database proxy102can also remove those additional messages108from the pending message queue120at block522. For example, if the error is to a Bind message sent by the client as part of a block of messages that included the Bind message and an Execute message, and the Bind message and the Execute message are both in the pending message queue120, the error to the Bind message can indicate that the database106will not be processing the Execute message that followed the Bind message. Accordingly, the database proxy102can remove both the Bind message and the Execute message from the pending message queue120in response to the error. Additionally, because the error indicated in the response is not associated with an injected PS setup message, the database proxy102can also forward the response including the error to the client via the client connection at block524. The database proxy102can repeat the operations shown inFIG.5, and/or take different paths through the flow diagram, for subsequent responses110received from the database106via the database connection. As discussed above, in some examples the database proxy102can execute the operations ofFIG.4andFIG.5concurrently or in parallel as part of a two-phase tracking system to add messages108to the pending message queue120when the messages108are sent, and to remove messages108from the pending message queue120when corresponding responses110are received. 
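The response-handling branches of process 500 can be condensed into a sketch like the following; the "injected" and "kind" flags and all other names are illustrative assumptions, not part of this disclosure:

```python
def handle_response(response: dict, pending: list, state_data: dict,
                    to_client: list, db_conn: dict) -> None:
    msg = pending.pop(0)                      # the message this response answers
    if not response.get("error"):             # block 504-No: success
        if not msg.get("injected"):           # block 508-No
            to_client.append(response)        # block 510: forward to the client
        return                                # block 512: suppress if injected
    if msg.get("injected"):                   # block 514-Yes: unexpected error
        db_conn["dropped"] = True             # block 516: drop the connection
        return
    if msg.get("kind") == "ps_setup":         # block 518-Yes
        state_data.pop("ps_setup", None)      # block 520: roll back state data
    pending.clear()                           # block 522: drain skipped messages
    to_client.append(response)                # block 524: forward the error

# Error to a client-sent Parse at the front of the queue:
pending = [{"kind": "ps_setup"}, {"kind": "bind"}, {"kind": "execute"}]
state_data, to_client, db_conn = {"ps_setup": {}}, [], {}
handle_response({"error": True}, pending, state_data, to_client, db_conn)
print(pending, state_data, len(to_client))
```

In this sketch the error path both rolls back the stored setup data and drains the messages the database will never answer, mirroring blocks 520 through 524.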
FIG.6is a system and network diagram that shows an illustrative operating environment600for the configurations disclosed herein, which includes a service provider network602that can be configured to perform techniques disclosed herein. In some examples, the service provider network602can be an example of a cloud computing environment. Elements of the service provider network602can execute various types of computing and network services, such as data storage and data processing, and/or can provide computing resources for various types of systems on a permanent or an as-needed basis. For example, among other types of functionality, the computing resources provided by the service provider network602may be utilized to implement various services described above such as, for example, services provided and/or used by the database proxy102, clients104, the database106, and/or other elements described herein. Additionally, the operating environment can provide computing resources that include, without limitation, data storage resources, data processing resources, such as virtual machine (VM) instances, networking resources, data communication resources, network services, and other types of resources. Each type of computing resource provided by the service provider network602can be general-purpose or can be available in a number of specific configurations. For example, data processing resources can be available as physical computers or VM instances in a number of different configurations. The VM instances can be configured to execute applications, including web servers, application servers, media servers, database servers, some or all of the network services described above, and/or other types of programs. Data storage resources can include file storage devices, block storage devices, and the like. The service provider network602can also be configured to provide other types of computing resources not mentioned specifically herein. 
The computing resources provided by the service provider network602may be enabled in one embodiment by one or more data centers604A-604N (which might be referred to herein singularly as “a data center604” or in the plural as “the data centers604”). The data centers604are facilities utilized to house and operate computer systems and associated components. The data centers604typically include redundant and backup power, communications, cooling, and security systems. The data centers604can also be located in geographically disparate locations. One illustrative embodiment for a data center604that can be utilized to implement the technologies disclosed herein will be described below with regard toFIG.7. The data centers604may be configured in different arrangements depending on the service provider network602. For example, one or more data centers604may be included in, or otherwise make up, an availability zone. Further, one or more availability zones may make up or be included in a region. Thus, the service provider network602may comprise one or more availability zones, one or more regions, and so forth. The regions may be based on geographic areas, such as being located within a predetermined geographic perimeter. Users and/or owners of the service provider network602may access the computing resources provided by the service provider network602over any wired and/or wireless network(s)606, which can be a wide area communication network (“WAN”), such as the Internet, an intranet or an Internet service provider (“ISP”) network or a combination of such networks. For example, and without limitation, a computing device, e.g., a computing device associated with one or more clients104or the database proxy102, can be utilized to access the service provider network602by way of the network(s)606. Other elements described herein can also interact via the network(s)606.
For example, clients104can interact with the database proxy102via client connections112that extend through the network(s)606, and the database proxy102can interact with the database106via database connections114that extend through the network(s)606. It should be appreciated that a local-area network (“LAN”), the Internet, or any other networking topology known in the art that connects the data centers604to remote customers and other users can be utilized. It should also be appreciated that combinations of such networks can also be utilized. Each of the data centers604may include computing devices that include software, such as applications that receive and transmit data. The data centers604can also include databases, data stores, or other data repositories that store and/or provide data. For example, data centers604can store and/or execute one or more instances of the database106and/or the database proxy102. In some examples, the data centers604can also execute one or more clients104, which can interact with instances of the database proxy102and/or the database106that may or may not also execute at data centers604of the service provider network602.

FIG.7is a computing system diagram that illustrates one configuration for a data center604(N) that can be utilized to implement the database proxy102as described above with respect toFIGS.1-6. The example data center604(N) shown inFIG.7includes several server computers700A-700E (collectively700) for providing computing resources702A-702E (collectively702), respectively. The server computers700can be standard tower, rack-mount, or blade server computers configured appropriately for providing the various computing resources (illustrated inFIG.7as the computing resources702A-702E). The computing resources702can include, without limitation, analytics applications, data storage resources, data processing resources such as VM instances or hardware computing systems, database resources, networking resources, and others.
Some of the servers700can also be configured to execute access services704A-704E (collectively704) capable of instantiating, providing and/or managing the computing resources702, some of which are described in detail herein. The data center604(N) shown inFIG.7also includes a server computer700F that can execute any or all of the software components described above. For example, and without limitation, the server computer700F can be configured to execute the database proxy102and/or the database106. The server computer700F can also be configured to execute other components and/or to store data for providing some or all of the functionality described herein. In this regard, it should be appreciated that components of the systems described herein can execute on many other physical or virtual servers in the data centers604in various configurations. For example, the database proxy102and the database106may execute via different servers700of the same data center604or different data centers604. In the example data center604(N) shown inFIG.7, an appropriate LAN706is also utilized to interconnect the server computers700A-700F. The LAN706is also connected to the network606illustrated inFIG.6. It should be appreciated that the configuration of the network topology described herein has been greatly simplified and that many more computing systems, software components, networks, and networking devices can be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described above. Appropriate load balancing devices or other types of network infrastructure components can also be utilized for balancing a load between each of the data centers604(1)-(N), between each of the server computers700A-700F in each data center604, and, potentially, between computing resources702in each of the data centers604. 
It should be appreciated that the configuration of the data center604described with reference toFIG.7is merely illustrative and that other implementations can be utilized.

FIG.8is a system services diagram that shows aspects of several services that can be provided by and utilized within the service provider network602, which can be configured to implement the various technologies disclosed herein. The service provider network602can provide a variety of services to users including, but not limited to, a storage service800A, an on-demand computing service800B, a serverless compute service800C, a cryptography service800D, an authentication service800E, a policy management service800F, and a deployment service800G. The service provider network602can also provide other types of computing services, some of which are described below. It is also noted that not all configurations described include the services shown inFIG.8and that additional services can be provided in addition to, or as an alternative to, the services explicitly described herein. Each of the systems and services shown inFIG.8can also expose web service interfaces that enable a caller to submit appropriately configured API calls to the various services through web service requests. The various web services can also expose GUIs, command line interfaces (“CLIs”), and/or other types of interfaces for accessing the functionality that they provide. In addition, each of the services can include service interfaces that enable the services to access each other. Additional details regarding some of the services shown inFIG.8will now be provided. The storage service800A can be a network-based storage service that stores data obtained from users of the service provider network602and/or from computing resources in the service provider network602. The data stored by the storage service800A can be obtained from clients104and/or computing devices of users.
The data stored by the storage service800A may also include data associated with the database proxy102, the database106, and/or other elements described herein. For example, the storage service800A may store data in one or more instances of the database106, and/or can execute one or more instances of the database proxy102that clients104can use to access data stored in the database106. The on-demand computing service800B can be a collection of computing resources configured to instantiate VM instances and to provide other types of computing resources on demand. For example, a user of the service provider network602can interact with the on-demand computing service800B (via appropriately configured and authenticated API calls, for example) to provision and operate VM instances that are instantiated on physical computing devices hosted and operated by the service provider network602. The VM instances can be used for various purposes, such as to operate as servers supporting the network services described herein, a web site, to operate business applications or, generally, to serve as computing resources for the user. In some examples, one or more clients104can execute via computing resources provided by the on-demand computing service800B. Other applications for the VM instances can be to support database applications, electronic commerce applications, business applications and/or other applications. Although the on-demand computing service800B is shown inFIG.8, any other computer system or computer system service can be utilized in the service provider network602to implement the functionality disclosed herein, such as a computer system or computer system service that does not employ virtualization and instead provisions computing resources on dedicated or shared computers/servers and/or other physical devices. 
The serverless compute service800C is a network service that allows users to execute code (which might be referred to herein as a “function”) without provisioning or managing server computers in the service provider network602. Rather, the serverless compute service800C can automatically run code in response to the occurrence of events. The code that is executed can be stored by the storage service800A or in another network accessible location. In this regard, it is to be appreciated that the term “serverless compute service” as used herein is not intended to imply that servers are not utilized to execute the program code, but rather that the serverless compute service800C enables code to be executed without requiring a user to provision or manage server computers. The serverless compute service800C executes program code only when needed, and only utilizes the resources necessary to execute the code. In some configurations, the user or entity requesting execution of the code might be charged only for the amount of time required for each execution of their program code. The service provider network602can also include a cryptography service800D. The cryptography service800D can utilize storage services of the service provider network602, such as the storage service800A, to store encryption keys in encrypted form, whereby the keys usable to decrypt user keys are accessible only to particular devices of the cryptography service800D. The cryptography service800D can also provide other types of functionality not specifically mentioned herein. The service provider network602, in various configurations, also includes an authentication service800E and a policy management service800F. The authentication service800E, in one example, is a computer system (i.e., a collection of computing resources) configured to perform operations involved in the authentication of users or customers.
For instance, one of the services shown inFIG.8can provide information from a user or customer to the authentication service800E to receive information in return that indicates whether or not the requests submitted by the user or the customer are authentic. The policy management service800F, in one example, is a network service configured to manage policies on behalf of users or customers of the service provider network602. The policy management service800F can include an interface (e.g. API or GUI) that enables customers to submit requests related to the management of a policy, such as a security policy. Such requests can, for instance, be requests to add, delete, change, or otherwise modify policy for a customer, service, or system, or for other administrative actions, such as providing an inventory of existing policies and the like. The service provider network602can additionally maintain other network services based, at least in part, on the needs of its customers. For instance, the service provider network602can maintain a deployment service800G for deploying program code in some configurations. The deployment service800G provides functionality for deploying program code, such as to virtual or physical hosts provided by the on-demand computing service800B. Other services include, but are not limited to, database services, object-level archival data storage services, and services that manage, monitor, interact with, or support other services. The service provider network602can also be configured with other network services not specifically mentioned herein in other configurations. FIG.9shows an example computer architecture for a computer900capable of executing program components for implementing functionality described above. 
The computer architecture shown inFIG.9illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. For instance, in some examples, the computer900may be associated with the database proxy102, one or more clients104, or the database106. The computer900includes a baseboard902, or “motherboard,” which may be one or more printed circuit boards to which a multitude of components and/or devices may be connected by way of a system bus and/or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)904operate in conjunction with a chipset906. The CPUs904can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer900. The CPUs904perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements can generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset906provides an interface between the CPUs904and the remainder of the components and devices on the baseboard902. The chipset906can provide an interface to a RAM908, used as the main memory in the computer900. 
The chipset906can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)910or non-volatile RAM (“NVRAM”) for storing basic routines that help to start up the computer900and to transfer information between the various components and devices. The ROM910or NVRAM can also store other software components necessary for the operation of the computer900in accordance with the configurations described herein. The computer900can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network912. The chipset906can include functionality for providing network connectivity through a NIC914, such as a gigabit Ethernet adapter. The NIC914is capable of connecting the computer900to other computing devices over the network912. It should be appreciated that multiple NICs914can be present in the computer900, connecting the computer to other types of networks and remote computer systems. The computer900can be connected to a mass storage device916that provides non-volatile storage for the computer. The mass storage device916can store an operating system918, programs920, and data, which have been described in greater detail herein. The mass storage device916can be connected to the computer900through a storage controller922connected to the chipset906. The mass storage device916can consist of one or more physical storage units. The storage controller922can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a Fibre Channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer900can store data on the mass storage device916by transforming the physical state of the physical storage units to reflect the information being stored.
The specific transformation of physical state can depend on various factors, in different implementations of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the mass storage device916is characterized as primary or secondary storage, and the like. For example, the computer900can store information to the mass storage device916by issuing instructions through the storage controller922to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer900can further read information from the mass storage device916by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device916described above, the computer900can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer900. By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. 
Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned above, the mass storage device916can store an operating system918utilized to control the operation of the computer900. According to one configuration, the operating system comprises the LINUX operating system or one of its variants such as, but not limited to, UBUNTU, DEBIAN, and CENTOS. According to another configuration, the operating system comprises the WINDOWS SERVER operating system from MICROSOFT Corporation. According to further configurations, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The mass storage device916can store other system or application programs and data utilized by the computer900. In one configuration, the mass storage device916or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer900, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the configurations described herein. These computer-executable instructions transform the computer900by specifying how the CPUs904transition between states, as described above. According to one configuration, the computer900has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer900, perform the various processes described above. 
The computer900can also include computer-readable storage media for performing any of the other computer-implemented operations described herein. The computer900can also include one or more input/output controllers924for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller924can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computer900might not include all of the components shown inFIG.9, can include other components that are not explicitly shown inFIG.9, or can utilize an architecture completely different than that shown inFIG.9. Based on the foregoing, it should be appreciated that technologies for multiplexing database connections114by the database proxy102in association with prepared statements have been disclosed herein. Moreover, although the subject matter presented herein has been described in language specific to computer structural features, methodological acts, and computer readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. 
Various modifications and changes can be made to the subject matter described herein without following the example configurations and applications illustrated and described, and without departing from the true spirit and scope of the present invention, which is set forth in the following claims.
11943317

DESCRIPTION OF IMPLEMENTATIONS To help a person skilled in the art better understand the technical solutions in the present application, the following clearly and completely describes the technical solutions in the implementations of the present application with reference to the accompanying drawings in the implementations of the present application. Obviously, the described implementations are merely some but not all of the implementations of the present application. All other implementations obtained by a person of ordinary skill in the art based on the implementations of the present application without creative efforts shall fall within the protection scope of the present application. FIG.1is a schematic diagram illustrating a service consensus process, according to an implementation of the present application, specifically including the following steps. S101. A first blockchain node receives, by using a server in a plurality of servers included in the first blockchain node, a service request sent by a client. In this implementation of the present application, in a service processing process, a user can send a service request to a first blockchain node by using a client. The client mentioned here can be a client installed on an end-user device held by the user. The user can start the client on the end-user device, and enter service information on an interface displayed by the client to the user. After receiving the service information entered by the user, the client can generate a corresponding service request based on service logic pre-stored in the client, and send the service request to the first blockchain node by using the end-user device.
Certainly, in this implementation of the present application, the user can directly enter corresponding service information to the end-user device, and the end-user device can generate a corresponding service request based on the service information entered by the user, and send the service request to the first blockchain node. In this implementation of the present application, the first blockchain node includes a plurality of servers (in other words, the first blockchain node includes a server cluster, and the server cluster is equivalent to the first blockchain node), and the servers share node configuration information such as point-to-point routing table, asymmetric public/private key of a node, and node identity (ID). Therefore, for other blockchain nodes in a consensus network and the client, operations performed by the servers in the first blockchain node are all considered to be performed by the first blockchain node. Therefore, when sending the service request to the first blockchain node, the client first needs to determine a server in the first blockchain node that the service request should be sent to. Therefore, this implementation of the present application provides a registration center. The registration center is configured to manage addresses of servers in a blockchain node, and push the addresses of the servers to the client. The client can randomly select an address from the addresses pushed by the registration center, and send the service request to a server corresponding to the address, which is shown inFIG.2. FIG.2is a schematic diagram of pushing an address to a client by a registration center, according to an implementation of the present application. 
InFIG.2, the first blockchain node includes the plurality of servers, and each server can register with the registration center when it comes online (“online” mentioned here means that the server starts to normally perform service processing), in other words, notify the registration center that the server is currently in an available state and can receive the service request sent by the client. After determining that the server is online, the registration center can obtain an address of the server, and then push the address to the client, so that the client stores the address after receiving the address. In this implementation of the present application, the registration center can proactively obtain the address of the server from the server, or the server can provide the address for the registration center. For example, after the server registers with the registration center, the registration center can return a Registration Successful message to the server. The server can proactively send the address of the server to the registration center after receiving the message, so that the registration center manages the address. In addition to having the registration center proactively push obtained addresses, the client can also proactively obtain the addresses managed by the registration center. For example, in addition to sending the service request to the first blockchain node, the client can further send an address acquisition query message to the registration center. After receiving the query message, the registration center can send an address of a currently available server (namely, a server currently registered with the registration center) in the first blockchain node to the client, so that after receiving addresses sent by the registration center, the client selects an address from the addresses, and sends the service request to a server corresponding to the selected address.
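As a rough sketch of the client-side behavior described above (store pushed addresses, drop addresses of offline servers, pick one at random per request), consider the following; the class and method names are illustrative and not taken from the application:

```python
import random

class Client:
    """Client-side view of the server addresses pushed by the registration center."""

    def __init__(self):
        self.addresses = []          # addresses of currently available servers

    def on_address_push(self, addrs):
        # Store addresses pushed (or returned for a query) by the registration center
        for addr in addrs:
            if addr not in self.addresses:
                self.addresses.append(addr)

    def on_server_offline(self, addr):
        # The registration center notified us that this server went offline
        if addr in self.addresses:
            self.addresses.remove(addr)

    def choose_server(self):
        # Randomly select one address to receive the next service request
        return random.choice(self.addresses)
```

A real client would also re-add an address once the registration center reports the corresponding server as back online, as described below.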
Certainly, the client can also obtain the addresses of the servers included in the first blockchain node from the registration center by using other methods. Details are omitted here. It is worthwhile to note that in practice, a server in the first blockchain node is possibly offline (in other words, the server cannot perform service processing) because of a running fault, a program restart, etc. If the registration center sends an address of the offline server to the client, and the client exactly selects the address of the offline server during server selection, the client possibly cannot send the service request to the first blockchain node, and the first blockchain node cannot process the service request. To avoid the problems, in this implementation of the present application, the registration center can regularly send a heartbeat detection message to each server that has registered with the registration center. The server can return a response message to the registration center based on the received heartbeat detection message when running normally online. After receiving the response message, the registration center can determine that the server is normally running online, and continues to manage an address of the server. After sending the heartbeat detection message to the server, if detecting that no response message returned by the server based on the heartbeat detection message is received after a specified time elapses, the registration center can determine that the server is possibly offline currently because of a running fault, a program restart, etc., and does not push the address of the server to the client. In addition, if the registration center has pushed the address of the server to the client, the registration center can send a notification that the server is offline to the client, so that the client locally deletes the address of the server based on the notification. 
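The registration center's heartbeat bookkeeping described above can be sketched as follows; the timeout value and all names are assumptions for illustration only:

```python
import time

class RegistrationCenter:
    """Tracks which registered servers have recently answered a heartbeat."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout       # seconds without a response => treat as offline
        self.last_seen = {}          # server address -> time of last response

    def register(self, addr, now=None):
        # A server registers when it comes online
        self.last_seen[addr] = time.monotonic() if now is None else now

    def on_heartbeat_response(self, addr, now=None):
        # Called when a server answers the periodic heartbeat detection message
        self.last_seen[addr] = time.monotonic() if now is None else now

    def online_addresses(self, now=None):
        # Only these addresses are pushed to clients; stale servers are excluded
        now = time.monotonic() if now is None else now
        return [a for a, t in self.last_seen.items() if now - t <= self.timeout]
```

A fuller implementation would also emit the offline notifications to clients (and, as described later, to the servers of other blockchain nodes) when an address drops out of the online set.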
After deleting the address of the offline server, the client does not send the service request to the server. When the server is back online, the client re-obtains the address of the server from the registration center, and sends the service request to the server by using the address. It is worthwhile to note thatFIG.2merely describes an example in which the client obtains the addresses of the servers included in the first blockchain node from the registration center. InFIG.2, because the first blockchain node can receive the service request directly from the client, the servers in the first blockchain node need to register with the registration center, so that the registration center can push the addresses of the servers in the first blockchain node to the client, and the client can send the service request to the servers in the first blockchain node by using the obtained addresses. However, in practice, second blockchain nodes in the consensus network can also receive the service request sent by the client, and process the service request. Therefore, in this implementation of the present application, servers included in each second blockchain node in the consensus network can also register with the registration center, so that the registration center can push addresses of the servers included in the second blockchain node to the client. As such, the client can also send the service request to the servers in the second blockchain node by using the obtained addresses. S102. Store, by using the server, the service request in a service memory included in the first blockchain node, and send the service request to each second blockchain node in a consensus network, so that each second blockchain node stores the service request in a service memory included in the second blockchain node after receiving the service request.
After receiving the service request sent by the client, the server included in the first blockchain node can store the service request in the service memory included in the first blockchain node. In addition, the server can send the service request sent by the client to each second blockchain node in the consensus network, so that the second blockchain node stores the service request in the service memory included in the second blockchain node after receiving the service request. The server in the first blockchain node can first perform valid verification on the service request after receiving the service request. The valid verification can be performed by using an asymmetric signature algorithm such as RSA, or can take other forms. When determining that the service request succeeds in the valid verification, the server can store the service request in the service memory included in the first blockchain node, and send the service request to each second blockchain node in the consensus network. When determining that the service request does not succeed in the valid verification, the server does not store the service request, but can return a message indicating that the service request fails to be processed to the client, so that the user performs certain operations after reading the message by using the client. For example, the user can re-edit the service request in the client and send an edited service request to the server included in the first blockchain node by using the client after reading the message. In this implementation of the present application, each blockchain node in the consensus network can include a plurality of servers.
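The application names RSA only as one possible asymmetric scheme for the valid verification step and does not fix key sizes or padding, so the following is purely a textbook-RSA toy (tiny key, no padding, not secure) illustrating the check a receiving server might run before storing a request:

```python
import hashlib

# Toy RSA key for illustration only; real systems use >= 2048-bit moduli and padding.
P, Q = 61, 53
N = P * Q                              # public modulus
E = 17                                 # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))      # private exponent (held by the client)

def digest(request: bytes) -> int:
    # Reduce the request hash into the toy modulus
    return int.from_bytes(hashlib.sha256(request).digest(), "big") % N

def sign(request: bytes) -> int:
    # Performed by the client before sending the service request
    return pow(digest(request), D, N)

def verify(request: bytes, signature: int) -> bool:
    # Performed by the receiving server; only verified requests are stored
    return pow(signature, E, N) == digest(request)
```

A request whose signature fails this check would not be stored, and the server would return the failure message to the client as described above.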
Therefore, when sending the service request to the second blockchain node, the server in the first blockchain node also needs to obtain addresses of the servers in the second blockchain node, and then sends the service request to the servers included in the second blockchain node by using the obtained addresses. Therefore, in this implementation of the present application, the registration center also needs to manage the addresses of the servers in the second blockchain node, and sends the addresses of the servers in the second blockchain node to each server in the first blockchain node, which is shown inFIG.3. FIG.3is a schematic diagram of pushing an address of each server in a second blockchain node to each server in a first blockchain node by a registration center, according to an implementation of the present application. InFIG.3, the servers in the second blockchain node can also register with the registration center after being online, so that the registration center obtains the addresses of the servers in the second blockchain node. The registration center can proactively obtain the addresses from the servers in the second blockchain node, or the servers in the second blockchain node can proactively send their addresses to the registration center. A specific method is the same as the previously described method in step S101of obtaining the addresses of the servers in the first blockchain node by the registration center, and details are omitted here. After obtaining the addresses of the servers in the second blockchain node by using the registration center, each server in the first blockchain node can store the obtained addresses. 
When sending the service request to the second blockchain node, the server in the first blockchain node can select an address from the stored addresses of the servers included in the second blockchain node, and then send the service request to a server corresponding to the address, so that the server corresponding to the address stores the service request in the service memory corresponding to the second blockchain node after receiving the service request. The server in the second blockchain node can also perform valid verification on the service request after receiving the service request. The server can store the service request in the service memory included in the second blockchain node when determining that the service request succeeds in the valid verification. The server does not store the service request when determining that the service request does not succeed in the valid verification. It is worthwhile to note that the servers included in the second blockchain node are also possibly offline as mentioned in step S101. Therefore, after obtaining the addresses of the servers in the second blockchain node, the registration center can regularly send a heartbeat detection message to the servers corresponding to these addresses. When receiving, within a specified time, a response message returned by a server based on the heartbeat detection message, the registration center can determine that the server is still in an online state, and continues to manage the address of the server. When not receiving the response message returned by the server based on the heartbeat detection message after the specified time elapses, the registration center can determine that the server is possibly offline because of a running fault, network instability, etc., and does not continue to manage the address of the server until the server is back online.
In addition, when determining, by using the previously described method, that a certain server in the second blockchain node is offline, the registration center can send a notification that the server is offline to the servers in the first blockchain node and the client, so that the servers in the first blockchain node and the client delete an address of the server after receiving the notification, and subsequently do not send the service request to the server corresponding to the address until the server is back online. After re-obtaining the address of the server from the registration center, the servers in the first blockchain node and the client can send the service request to the server corresponding to the address by using the obtained address. FIG.3merely shows a case that servers included in one second blockchain node register with a registration center. In practice, there are a plurality of second blockchain nodes in a consensus network. Therefore, servers in each second blockchain node in the consensus network can register with the registration center after being online, so that the registration center obtains addresses of the servers in the second blockchain node, and pushes the obtained addresses to the servers in the first blockchain node. In other words, each server in the first blockchain node stores the addresses of the servers included in each second blockchain node in the consensus network. It is worthwhile to note that in practice, the entire consensus network includes a plurality of blockchain nodes. The first blockchain node mentioned in this implementation of the present application is a blockchain node that receives the service request sent by the client, and other blockchain nodes than the first blockchain node can be referred to as second blockchain nodes in this implementation of the present application. The first blockchain node and the second blockchain node are relative terms. 
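The fan-out just described, where a server of the first blockchain node forwards the request to one server of every second blockchain node, might be sketched as below; the `send` callable stands in for the actual network transport, which the application does not specify:

```python
import random

def forward_to_consensus_network(request, node_addresses, send):
    """Send `request` to one server of every second blockchain node.

    node_addresses maps a second-node identifier to the list of addresses
    that the registration center pushed for that node's servers.
    """
    chosen = {}
    for node_id, addrs in node_addresses.items():
        addr = random.choice(addrs)    # any available server represents the node
        send(addr, request)
        chosen[node_id] = addr
    return chosen
```

Because the servers of a node share the node's configuration, delivering the request to any one of a node's online servers counts as delivering it to that blockchain node.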
To be specific, a blockchain node that receives the service request from the client can be referred to as the first blockchain node, and a blockchain node that receives the service request sent by the first blockchain node can be referred to as the second blockchain node. Because the blockchain nodes in the consensus network can all receive the service request sent by the client, the blockchain nodes can be essentially first blockchain nodes, or can be second blockchain nodes. Division between the first blockchain node and the second blockchain node depends on where the service request is received. Certainly, in a consensus check process, division between the first blockchain node and the second blockchain node can also be determined based on which node initiates the consensus check. To be specific, a consensus check initiator that sends a preprocessing block that includes at least one service request to the consensus network can be the first blockchain node, and a blockchain node that receives the preprocessing block can be referred to as the second blockchain node. S103. Select a server from the plurality of servers included in the first blockchain node, and obtain at least one service request from a service memory included in the first blockchain node by using the selected server, in a service consensus phase. In this implementation of the present application, the server in the first blockchain node needs to perform service consensus on the service request in the service memory included in the first blockchain node. Therefore, in the service consensus phase, the server in the first blockchain node can obtain the at least one service request from the service memory included in the first blockchain node, and subsequently package the obtained service request into a preprocessing block and send the preprocessing block to each second blockchain node in the consensus network for service consensus. 
In this implementation of the present application, in addition to the plurality of servers and the service memory, the first blockchain node further includes a scheduled task trigger, and the scheduled task trigger is used to periodically initiate service consensus with each blockchain node in the consensus network by using the server in the first blockchain node. However, because the first blockchain node includes the plurality of servers, the scheduled task trigger can select a server from the plurality of servers included in the first blockchain node in the service consensus phase, and then the server obtains the at least one service request from the service memory included in the first blockchain node. In this implementation of the present application, the scheduled task trigger can be a hardware device, or can be a form of software. If implemented as software, the scheduled task trigger can be set in a certain server in the first blockchain node. When running in the server, the scheduled task trigger can select a server from the servers included in the first blockchain node in the service consensus phase, and send a notification to the selected server by using the server that the scheduled task trigger is located in, so that the selected server obtains the at least one service request from the service memory included in the first blockchain node after receiving the notification. In the process of obtaining each service request from the service memory (namely, the service memory included in the first blockchain node), the server can obtain each service request based on the order in which the service requests were stored in the service memory, or can obtain each service request based on the service type of each service request, or can obtain each service request based on the service level of each service request. There are many acquisition methods, and details are omitted here.
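As an illustration of the acquisition options just mentioned (arrival order, service type, or service level), the selected server's batch pick might look like the following sketch; the field names and the "lower level means higher priority" convention are assumptions, not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class StoredRequest:
    payload: bytes
    arrival_seq: int     # position in which the request entered the service memory
    service_type: str
    service_level: int   # assumed convention: lower value = higher priority

def take_batch(memory, n, order="arrival"):
    """Return up to n requests for the next preprocessing block."""
    keys = {
        "arrival": lambda r: r.arrival_seq,
        "type":    lambda r: (r.service_type, r.arrival_seq),
        "level":   lambda r: (r.service_level, r.arrival_seq),
    }
    return sorted(memory, key=keys[order])[:n]
```

Ties within a type or level fall back to arrival order, so any of the three policies yields a deterministic sorting result for the preprocessing block.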
S104: Package the at least one service request into a preprocessing block by using the selected server, and send the preprocessing block to each second blockchain node in the consensus network by using the selected server, so that each second blockchain node performs service consensus on the preprocessing block. After obtaining the at least one service request from the service memory included in the first blockchain node, the server in the first blockchain node can process the obtained service requests, and package the service requests into a preprocessing block. The server can sort the obtained service requests based on a predetermined sorting rule to obtain a sorting result of the service requests, and determine, by using a predetermined identifier determining rule and the sorting result, an identifier to be verified that uniquely corresponds to the service requests. Then, the server can package the obtained service requests, the sorting result of the service requests, and the determined identifier to be verified into one preprocessing block, and then send the preprocessing block to the servers included in the second blockchain node. A specific method of determining the identifier to be verified by the server is shown in FIG. 4. FIG. 4 is a schematic diagram of determining an identifier to be verified by a server, according to an implementation of the present application. In FIG. 4, a server in the first blockchain node (namely, a server determined by using the scheduled task trigger included in the first blockchain node) obtains the four service requests shown in FIG. 4 from the service memory included in the first blockchain node. The server can sort the four service requests based on a predetermined sorting rule, to obtain the sorting result shown in FIG. 4.
Then, the server can separately determine child identifiers Hash1 to Hash4 corresponding to the four service requests based on a predetermined identifier determining rule (here, a hash algorithm), and place the determined four child identifiers in leaf nodes of a Merkle tree from left to right based on the obtained sorting result, to determine a value Hash7 at the root node of the Merkle tree. The server can use the determined value Hash7 at the root node of the Merkle tree as the identifier to be verified that uniquely corresponds to the four service requests, and then package the determined identifier to be verified, the sorting result, and the four service requests into one preprocessing block. It is worthwhile to note that the method of determining the identifier to be verified shown in FIG. 4 is not unique. For example, in addition to determining the identifier to be verified that uniquely corresponds to the service requests by using the hash algorithm as the predetermined identifier determining rule, the server in the first blockchain node can further determine the identifier to be verified that uniquely corresponds to the service requests by using an algorithm such as the message digest algorithm 5 (MD5), provided that the determined identifier to be verified uniquely corresponds to the service requests. In addition to the form shown in FIG. 4, the Merkle tree can further have other forms. Details are omitted here. Certainly, in this implementation of the present application, in addition to the Merkle tree, the server in the first blockchain node can further determine the identifier to be verified that uniquely corresponds to the service requests by using other methods.
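The FIG. 4 construction can be sketched as follows; SHA-256 stands in for the unspecified hash algorithm, and the request contents are made up for illustration:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_identifier(sorted_requests):
    """Hash each service request into a child identifier (Hash1..Hash4
    in FIG. 4), place the children in the leaves from left to right in
    the sorted order, and combine pairwise up to the root (Hash7)."""
    level = [_h(r.encode()) for r in sorted_requests]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

requests = ["request-1", "request-2", "request-3", "request-4"]
identifier_to_verify = merkle_identifier(requests)
```

Because the leaves are placed in the sorted order, changing any request or the sorting result yields a different root, which is what lets the identifier uniquely correspond to the batch.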
For example, after determining the child identifiers corresponding to the service requests, the server can sort the determined child identifiers in a certain sequence, encrypt the sorted result, and use the encrypted result as the identifier to be verified that uniquely corresponds to the service requests. Alternatively, after determining the child identifiers corresponding to the service requests, the server can generate a universally unique ID by using a snowflake algorithm, and use the ID as the identifier to be verified that uniquely corresponds to the service requests. Alternatively, the server can determine a universally unique identifier (UUID) of the determined child identifiers corresponding to the service requests, and use the UUID as the identifier to be verified that uniquely corresponds to the service requests. Certainly, there are other determining methods, and details are omitted here; any method can be used, provided that the determined identifier to be verified uniquely corresponds to the service requests. After determining the preprocessing block, the server in the first blockchain node (namely, the server selected by using the scheduled task trigger in the first blockchain node in the service consensus phase) can send the preprocessing block to each second blockchain node in the consensus network. However, each second blockchain node in the consensus network includes a plurality of servers. Therefore, when sending the preprocessing block, the server in the first blockchain node needs to determine the server in each second blockchain node that the preprocessing block is sent to. In this implementation of the present application, each server in the first blockchain node can obtain the addresses of the servers included in each second blockchain node in the consensus network from the registration center.
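Using the addresses obtained from the registration center, the per-node selection and sending step might be sketched as follows; the address-book layout and the `send` callback are assumptions for illustration, not the described implementation:

```python
import random

def broadcast_preprocessing_block(address_book, block, send):
    """For every second blockchain node, pick one online server address
    and send the preprocessing block to it. `send(addr, block)` is an
    assumed transport callback; a load-based choice could replace the
    random one."""
    for node_id, online_addresses in address_book.items():
        receiver = random.choice(online_addresses)
        send(receiver, block)

# Illustrative use: record which address of each node was chosen.
address_book = {
    "node-A": ["10.0.0.1:80", "10.0.0.2:80"],
    "node-B": ["10.0.1.1:80"],
}
sent = []
broadcast_preprocessing_block(address_book, {"id": "blk-1"},
                              lambda addr, blk: sent.append(addr))
```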
Therefore, when the server in the first blockchain node needs to send the preprocessing block to a certain second blockchain node in the consensus network, the server can select an address from the stored addresses of the servers in the second blockchain node (the stored addresses are addresses of servers in the second blockchain node that are in an online state), and send the preprocessing block to the server corresponding to the address, so that the server corresponding to the address performs consensus check on the preprocessing block after receiving the preprocessing block. There are a plurality of second blockchain nodes in the consensus network. Therefore, when sending the preprocessing block to each second blockchain node, the server in the first blockchain node can separately determine, from the stored addresses by using the previously described method, the server in each second blockchain node that is to receive the preprocessing block, and then separately send the preprocessing block to those servers by using the determined addresses. For the second blockchain node, after receiving the preprocessing block sent by the server in the first blockchain node, the server included in the second blockchain node can parse the preprocessing block, to determine the service requests included in the preprocessing block, the sorting result of the service requests, and the identifier to be verified. Then, the server in the second blockchain node can find, in the service memory included in the second blockchain node, service requests that match the service requests included in the preprocessing block, and determine, by using a predetermined identifier determining rule and the determined sorting result of the service requests, an identifier that uniquely corresponds to the service requests found in the service memory included in the second blockchain node.
The predetermined identifier determining rule mentioned here is the same as the identifier determining rule used by the server in the first blockchain node. After determining the identifier, the server in the second blockchain node can compare the determined identifier with the identifier to be verified that is included in the preprocessing block, and can determine that the preprocessing block succeeds in local consensus check (in other words, the check performed by the server in the second blockchain node) when determining that the two are consistent, and then store the check result in the service memory included in the second blockchain node, and send the check result to the other blockchain nodes in the consensus network (the other blockchain nodes mentioned here include each second blockchain node and the first blockchain node). The method of sending the check result by the server in the second blockchain node is the same as the method of sending the service request or the preprocessing block to each second blockchain node in the consensus network by the server in the first blockchain node. To be specific, when the server in the second blockchain node needs to send the check result to a certain blockchain node in the consensus network (which can be a second blockchain node, or can be the first blockchain node), the server can select an address from the locally stored addresses of servers in that blockchain node, and send the check result to the server corresponding to the address. After receiving the check result, the server corresponding to the address can store the check result in a service memory included in the blockchain node that the server belongs to.
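The local consensus check just described — recompute the identifier from the locally stored copies of the requests, in the carried sorting order, and compare — can be sketched as follows; the block layout and the concatenate-and-hash identifier rule are simplified assumptions:

```python
import hashlib

def identifier_rule(payloads):
    """Simplified stand-in for the shared identifier determining rule."""
    return hashlib.sha256("".join(payloads).encode()).hexdigest()

def local_consensus_check(block, local_memory):
    """Find the matching requests in the local service memory, recompute
    the identifier in the block's sorting order, and compare it with the
    carried identifier to be verified."""
    by_id = {r["id"]: r["payload"] for r in local_memory}
    try:
        payloads = [by_id[i] for i in block["sorting_result"]]
    except KeyError:
        return False                  # some request is missing locally
    return identifier_rule(payloads) == block["identifier"]

memory = [{"id": "r1", "payload": "pay-1"}, {"id": "r2", "payload": "pay-2"}]
block = {"sorting_result": ["r1", "r2"],
         "identifier": identifier_rule(["pay-1", "pay-2"])}
print(local_consensus_check(block, memory))  # True
```

The boolean returned here plays the role of the check result that the server then stores and broadcasts.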
When the server in the second blockchain node sends the check result to each blockchain node in the consensus network, that server or other servers in the second blockchain node can also receive check results about the preprocessing block that are sent by other blockchain nodes in the consensus network, and store all the received check results in the service memory included in the second blockchain node. Then, the server (which can be the server that receives the preprocessing block) in the second blockchain node can determine, from the service memory included in the second blockchain node, the check result (including the check result obtained by the server itself) about the preprocessing block that is obtained by each blockchain node in the consensus network, and determine a comprehensive check result about the preprocessing block that is obtained by each blockchain node in the consensus network. Then, the server can send the determined comprehensive check result to each blockchain node in the consensus network by using a method that is the same as that of sending the check result, and store the comprehensive check result in the service memory included in the second blockchain node. After the server in the second blockchain node sends the comprehensive check result, that server (namely, the server that sends the comprehensive check result) or other servers in the second blockchain node can also receive a comprehensive check result about the preprocessing block that is sent by each blockchain node (including each second blockchain node and the first blockchain node) in the consensus network, and store the comprehensive check result in the service memory included in the second blockchain node.
The server (namely, the server that sends the comprehensive check result) in the second blockchain node can obtain the comprehensive check results sent by the other blockchain nodes in the consensus network from the service memory included in the second blockchain node, and determine, by using the received comprehensive check results and the comprehensive check result determined by the server itself, whether the preprocessing block succeeds in the service consensus in the consensus network. The server writes each service request included in the preprocessing block into the blockchain stored on the second blockchain node if determining, based on the comprehensive check results (including the comprehensive check result determined by the server itself) stored in the service memory, that each service request included in the preprocessing block succeeds in the service consensus in the consensus network, or otherwise does not write the service requests into the blockchain. The server in the second blockchain node can write the complete content of each service request into the blockchain, or can write only an information digest of each service request into the blockchain. The previously described service consensus process is relatively complex. For ease of understanding, the following uses a simple example to clearly describe the process of performing service consensus on the preprocessing block by the server in the second blockchain node, which is shown in FIG. 5. FIG. 5 is a schematic diagram illustrating a process of performing service consensus on a preprocessing block by a server in a second blockchain node, according to an implementation of the present application. Assume that there are three blockchain nodes in a consensus network: a first blockchain node, a second blockchain node A, and a second blockchain node B. Each server in the three blockchain nodes respectively stores the addresses of the servers included in the other two blockchain nodes.
A server #3 in the first blockchain node obtains at least one service request from a service memory included in the first blockchain node, packages the at least one service request into a preprocessing block, and sends the preprocessing block to the other two blockchain nodes. The server #3 determines to separately send the preprocessing block to a server A1 and a server B1 by using the addresses of the servers included in the other two blockchain nodes that are stored in the server #3. After receiving the preprocessing block, the server A1 and the server B1 can perform consensus check on the preprocessing block, and respectively store the obtained check results for the preprocessing block in the service memories included in the blockchain nodes that the server A1 and the server B1 belong to. In addition, the server A1 and the server B1 can respectively send the determined check results to the other two blockchain nodes in the consensus network. The server A1 determines to send the check result obtained by the server A1 to a server #2 in the first blockchain node and a server B2 in the second blockchain node B based on the addresses of the servers in the other two blockchain nodes that are stored in the server A1, and the server B1 determines to send the check result obtained by the server B1 to the server #3 in the first blockchain node and a server A3 in the second blockchain node A. After separately receiving the check results sent by the servers in the other two blockchain nodes, the servers in the three blockchain nodes in the consensus network can store the received check results in the service memories included in the blockchain nodes. The server A1 (namely, a server that receives the preprocessing block) can obtain the check results (including the check result obtained by the server A1) from the service memory included in the second blockchain node A, and obtain a comprehensive check result of the blockchain nodes in the consensus network for the preprocessing block based on these check results.
The server A1 can store the obtained comprehensive check result in the service memory included in the second blockchain node A, and send the comprehensive check result to the other two blockchain nodes. The sending method is the same as the method of sending the check result, and details are omitted here. The server #3 (namely, the server that initiates the service consensus) and the server B1 (a server that receives the preprocessing block) can also determine a comprehensive check result for the preprocessing block by using such a method, and send the obtained comprehensive check result to the other two blockchain nodes in the consensus network. After receiving the comprehensive check results sent by the other two blockchain nodes, the servers in the blockchain nodes in the consensus network can store the received comprehensive check results in the service memories included in the blockchain nodes. The server A1 can obtain the comprehensive check results (including the comprehensive check result obtained by the server A1) for the preprocessing block, which are sent by the blockchain nodes, from the service memory included in the second blockchain node A. Then, the server A1 can determine whether the preprocessing block succeeds in the service consensus in the consensus network based on the comprehensive check results. If yes, the server A1 writes each service request included in the preprocessing block into a blockchain of the second blockchain node A, and if no, the server A1 does not write the service requests into the blockchain. Likewise, the server #3 and the server B1 can also obtain, by using such a method, the comprehensive check results from the service memories included in the blockchain nodes that the server #3 and the server B1 belong to, and determine, based on the obtained comprehensive check results, whether to write each service request included in the preprocessing block into the blockchains of the blockchain nodes that the server #3 and the server B1 belong to.
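One way to sketch the two-round tallying in the FIG. 5 example — per-node check results combined into a comprehensive result, then comprehensive results combined into the final write decision — is the following. The more-than-two-thirds pass threshold is an assumed Byzantine-fault-tolerant style rule; the description above does not fix a particular threshold:

```python
def combine(results):
    """Combine boolean results from the consensus-network nodes; pass
    only if more than two thirds of the nodes agree (assumed rule)."""
    passed = sum(1 for ok in results.values() if ok)
    return passed * 3 > len(results) * 2

# Round 1: each node's own check result for the preprocessing block.
check_results = {"first": True, "A": True, "B": True}
comprehensive = combine(check_results)

# Round 2: the comprehensive results gathered from all nodes decide
# whether the service requests are written into the blockchain.
comprehensive_results = {"first": comprehensive, "A": True, "B": True}
write_to_chain = combine(comprehensive_results)
print(write_to_chain)  # True
```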
It can be learned from the previously described method that each blockchain node in the consensus network includes a plurality of servers. Therefore, as long as one of the servers in a blockchain node is in an online state, in other words, is available, the blockchain node is an available blockchain node in the consensus network, which greatly improves the stability of the blockchain nodes in the consensus network. In addition, each blockchain node includes a plurality of servers, and the servers have the same functions and status within the blockchain node. In other words, compared with the existing technology, equivalent servers are added to the blockchain node. This greatly improves the performance of the blockchain node, and thus the service processing efficiency of the blockchain node is greatly improved. It is worthwhile to note that in a service consensus process, each blockchain node in the consensus network can determine a check result obtained by the blockchain node for the preprocessing block, send the obtained check result to other blockchain nodes in the consensus network, and store the check result in a service memory corresponding to the blockchain node. The blockchain node can perform consensus check on the preprocessing block by using a first server included in the blockchain node, and the first server can be a specified server in the blockchain node, or can be a server selected from the servers included in the blockchain node. In addition, the blockchain node also receives check results sent by other blockchain nodes in the consensus network for the preprocessing block. The blockchain node can receive, by using a server included in the blockchain node, the check results sent by the other blockchain nodes, and store the received check results in the service memory corresponding to the blockchain node. Here, a server that receives the check results sent by the other blockchain nodes can be referred to as a second server.
The second server can be any server in the blockchain node, and certainly, can be the previously described first server. Which server acts as the second server depends on which server a sending server in the other blockchain nodes selects to receive the check result. In step S101, in addition to randomly selecting an address from the stored addresses of the servers in the first blockchain node, the client can also select an address based on the load statuses of the servers. Therefore, when pushing the addresses of the servers in the first blockchain node to the client, the registration center can also push the load statuses of the servers to the client, so that the client selects the address of a lightly loaded server from the addresses by using a predetermined load balancing algorithm, and sends the service request to the server corresponding to the address. Likewise, when sending the service request to each second blockchain node in the consensus network, the server in the first blockchain node can also select a server from the stored addresses based on a load balancing method. Certainly, the server in the first blockchain node can also send the preprocessing block based on a load balancing method, and each blockchain node in the consensus network can also send the check result and the comprehensive check result based on a load balancing method. The specific process is the same as the method of sending the service request to the first blockchain node by the client based on a load balancing method, and details are omitted here. In this implementation of the present application, in addition to selecting a server that initiates service consensus by using the scheduled task trigger, consensus periods can be respectively set in the servers in the blockchain nodes (including the first blockchain node and the second blockchain nodes), and different servers have different consensus periods.
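The per-server consensus periods just mentioned might be sketched as follows; the staggering scheme (a shared base period plus a per-server offset) is an illustrative assumption used to keep different servers from initiating consensus at the same moment:

```python
def reached_consensus_period(server_index, now, base_period, offset):
    """Each server gets its own period boundary: server i fires when
    the current time crosses a multiple of base_period shifted by
    i * offset (an illustrative staggering scheme, not specified in
    the description)."""
    return (now - server_index * offset) % base_period == 0

# Server 0 fires at t = 0, 60, 120, ...; server 1 at t = 5, 65, 125, ...
print(reached_consensus_period(1, 65, base_period=60, offset=5))  # True
print(reached_consensus_period(0, 65, base_period=60, offset=5))  # False
```

When its period is reached, a server would then fetch pending requests from the node's service memory, as described next.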
When detecting that a current time reaches the consensus period of the server, the server can obtain at least one service request from the service memory in the blockchain node that the server belongs to. In this implementation of the present application, the server in the blockchain node (including the first blockchain node and the second blockchain node) can also forward the service request to other servers in the blockchain node after receiving the service request, and the other servers store the service request in the service memory included in the blockchain node. After receiving the preprocessing block sent by the first blockchain node, the server in each second blockchain node in the consensus network can also forward the preprocessing block to other servers in the second blockchain node for consensus check, and store the obtained check results in the service memory included in the second blockchain node. The service consensus method according to the implementations of the present application is described above. Based on the same idea, an implementation of the present application further provides the following service processing devices and service consensus devices, which are shown in FIG. 6 to FIG. 12. FIG. 6 is a schematic structural diagram illustrating a service processing device, according to an implementation of the present application, specifically including the following: a receiving module 601, configured to receive a service request sent by a client; a storage module 602, configured to store the service request in a service memory corresponding to the device; and a sending module 603, configured to send the service request to each second blockchain node in a consensus network, so that each second blockchain node stores the service request in a service memory included in the second blockchain node after receiving the service request, where the second blockchain node includes a plurality of servers and at least one service memory.
The device further includes the following: a registration module 604, configured to send an address of the device to a registration center when it is determined that the device is online, so that the registration center sends the address to the client and each second blockchain node in the consensus network. The device further includes the following: an acquisition module 605, configured to obtain addresses of the plurality of servers included in each second blockchain node from a registration center. The sending module 603 is specifically configured to select an address from the obtained addresses of the plurality of servers included in each second blockchain node; and send the service request to a server corresponding to the selected address. The storage module 602 is specifically configured to perform validity verification on the service request; and store the service request in the service memory when it is determined that the service request succeeds in the validity verification. The storage module 602 is further configured to skip storing the service request when it is determined that the service request does not succeed in the validity verification. A blockchain node includes a plurality of such devices. FIG. 7 is a schematic structural diagram illustrating a service processing device, according to an implementation of the present application, specifically including the following: a request receiving module 701, configured to receive a service request sent by a first blockchain node, where the first blockchain node includes a plurality of servers and at least one service memory; and a request storage module 702, configured to store the service request in a service memory corresponding to the device.
The device further includes the following: a registration module 703, configured to send an address of the device to a registration center when it is determined that the device is online, so that the registration center sends the address to the first blockchain node, a client, and other second blockchain nodes in a consensus network. The request storage module 702 is specifically configured to perform validity verification on the service request; and store the service request in the service memory when it is determined that the service request succeeds in the validity verification. The request storage module 702 is further configured to skip storing the service request when it is determined that the service request does not succeed in the validity verification. FIG. 8 is a schematic structural diagram illustrating a service processing device, according to an implementation of the present application, specifically including the following: an information receiving module 801, configured to receive service information entered by a user; a request generation module 802, configured to generate a corresponding service request based on the service information; and a sending module 803, configured to send the service request to a server included in a first blockchain node, so that the first blockchain node stores the received service request in a service memory included in the first blockchain node, and sends the service request to each second blockchain node in a consensus network, where the first blockchain node includes a plurality of servers and at least one service memory, and the second blockchain node includes a plurality of servers and at least one service memory.
The sending module 803 is specifically configured to obtain addresses of the plurality of servers included in the first blockchain node from a registration center; and select an address from the obtained addresses of the plurality of servers included in the first blockchain node, and send the service request to a server corresponding to the selected address. The device further includes the following: a deletion module 804, configured to delete the address of a certain server when an offline notification sent by the registration center for the server is received. FIG. 9 is a schematic structural diagram illustrating a service consensus device, according to an implementation of the present application, specifically including the following: a request acquisition module 901, configured to obtain at least one service request from a service memory corresponding to the device; and a sending module 902, configured to package the at least one service request into a preprocessing block, and send the preprocessing block to each second blockchain node in a consensus network, so that each second blockchain node performs service consensus on the preprocessing block, where the second blockchain node includes a plurality of servers and at least one service memory. The device further includes the following: an address acquisition module 903, configured to obtain addresses of the plurality of servers included in each second blockchain node from a registration center. The sending module 902 is specifically configured to select an address from the obtained addresses of the plurality of servers included in each second blockchain node; and send the preprocessing block to a server corresponding to the selected address, so that the server corresponding to the selected address performs service consensus on the received preprocessing block.
FIG. 10 is a schematic structural diagram illustrating a service consensus device, according to an implementation of the present application, specifically including the following: a selection module 1001, configured to select a server from a plurality of servers included in a first blockchain node, where the first blockchain node includes the plurality of servers and at least one service memory. The selection module 1001 is specifically configured to detect whether a current moment satisfies a task trigger condition; and select the server from the plurality of servers included in the first blockchain node when detecting that the task trigger condition is satisfied. FIG. 11 is a schematic structural diagram illustrating a service consensus device, according to an implementation of the present application, specifically including the following: an acquisition module 1101, configured to obtain a preprocessing block; and a consensus module 1102, configured to perform service consensus on the preprocessing block based on each service request stored in a service memory corresponding to the device. The consensus module 1102 is specifically configured to perform consensus check on the preprocessing block, to obtain a check result; receive each check result sent by other blockchain nodes in a consensus network, and store each received check result in the service memory corresponding to the device; and obtain each check result from the service memory, and perform service consensus on the preprocessing block by using each obtained check result.
FIG. 12 is a schematic structural diagram illustrating a service consensus device, according to an implementation of the present application, specifically including the following: an acquisition module 1201, configured to obtain addresses of a plurality of servers included in each blockchain node in a consensus network, where each blockchain node includes the plurality of servers and at least one service memory; and a sending module 1202, configured to send the obtained addresses of the plurality of servers included in the blockchain node to other blockchain nodes in the consensus network and a client for storage. The device further includes the following: a notification module 1203, configured to send a heartbeat detection message to the plurality of servers included in each blockchain node in the consensus network based on the obtained addresses of the plurality of servers included in the blockchain node; and when no response message returned by a server included in the blockchain node based on the heartbeat detection message is received after a specified time elapses, determine that the server is offline, and instruct the client and the other blockchain nodes in the consensus network to delete the stored address of the server. The implementations of the present application provide a service processing and consensus method and device. In the method, a first blockchain node includes a plurality of servers. The first blockchain node can receive a service request sent by a client and store the service request by using the plurality of included servers, obtain at least one service request from a service memory included in the first blockchain node by using one of the plurality of servers to obtain a preprocessing block, and send the preprocessing block to each second blockchain node in a consensus network by using that server, to perform service consensus on the preprocessing block by using each second blockchain node.
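The heartbeat-based offline detection performed by the notification module 1203 above can be sketched as follows; the timeout value and the last-response data layout are illustrative assumptions:

```python
def detect_offline(last_response, now, timeout):
    """Return the addresses of servers whose last response to a
    heartbeat detection message is older than the specified time, so
    that the client and the other blockchain nodes can be instructed
    to delete those addresses."""
    return [addr for addr, t in sorted(last_response.items())
            if now - t > timeout]

last_response = {"10.0.0.1:80": 100, "10.0.0.2:80": 40, "10.0.0.3:80": 95}
print(detect_offline(last_response, now=110, timeout=30))  # ['10.0.0.2:80']
```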
It can be ensured that the first blockchain node is available, provided that one server in the plurality of servers included in the first blockchain node is available. Therefore, the stability of the first blockchain node in the consensus network is improved. In addition, each server included in the first blockchain node can receive the service request sent by a user by using the client, and each server can initiate service consensus with each second blockchain node in the consensus network. Therefore, the service processing efficiency of a blockchain service is greatly improved. In the 1990s, whether a technical improvement was a hardware improvement (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method procedure) could be clearly distinguished. However, with the development of technologies, improvements to many method procedures can be considered as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method procedure into a hardware circuit. Therefore, it cannot be said that an improvement to a method procedure cannot be implemented by using a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit. A logical function of the programmable logic device is determined by programming of the device performed by a user. Designers program on their own to “integrate” a digital system into a single PLD, without requiring a chip manufacturer to design and produce a dedicated integrated circuit chip. In addition, instead of manually producing an integrated circuit chip, the programming is mostly implemented by “logic compiler” software, which is similar to a software compiler used during program development.
The original code before compiling is also written in a specific programming language, which is referred to as a hardware description language (HDL), and there is more than one type of HDL, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language). Currently, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are most commonly used. A person skilled in the art should also understand that a method procedure only needs to be logically programmed, and then programmed into an integrated circuit by using the previous hardware description languages, so that a hardware circuit that implements the logical method procedure can be easily obtained. A controller can be implemented by using any appropriate method. For example, the controller can be a microprocessor or a processor, or a computer-readable medium, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microprocessor that stores computer-readable program code (such as software or firmware) that can be executed by the microprocessor or the processor. Examples of the controller include but are not limited to the following microprocessors: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. The memory controller can also be implemented as a part of the control logic of the memory. A person skilled in the art also knows that a controller can be implemented by using pure computer-readable program code, and that the steps in the method can be logically programmed to enable the controller to implement the same functions in the form of a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, an embedded microcontroller, etc.
Therefore, the controller can be considered as a hardware component, and a device that is included in the controller and that is configured to implement various functions can also be considered as a structure in the hardware component. Alternatively, a device configured to implement various functions can be considered as both a software module for implementing the method and a structure in the hardware component. The system, device, module, or unit described in the described implementations can be implemented by a computer chip or an entity, or implemented by a product with a certain function. A typical implementation device is a computer. The computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, or a wearable device, or a combination of any of these devices. For ease of description, the described device is described by dividing functions into various units. Certainly, when the present application is implemented, the functions of the units can be implemented in one or more pieces of software and/or hardware. A person skilled in the art should understand that the implementations of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure can use a form of hardware only implementations, software only implementations, or implementations with a combination of software and hardware. In addition, the present disclosure can use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, and an optical memory) that include computer-usable program code. 
The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the implementations of the present disclosure. It should be understood that computer program instructions can be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions can be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine so that the instructions executed by a computer or a processor of any other programmable data processing device generate a device for implementing a specific function in one or more processes in the flowcharts or in one or more blocks in the block diagrams. These computer program instructions can be stored in a computer readable memory that can instruct the computer or any other programmable data processing device to work in a specific method, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction device. The instruction device implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams. These computer program instructions can be loaded to a computer or other programmable data processing devices, so that a series of operations and steps are performed on the computer or the other programmable devices, generating computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable devices provide steps for implementing a specific function in one or more processes in the flowcharts or in one or more blocks in the block diagrams. 
In a typical configuration, the computing device includes one or more processors (CPU), one or more input/output interfaces, one or more network interfaces, and one or more memories. The memory can include a non-persistent memory, a random access memory (RAM), and/or a nonvolatile memory in a computer readable medium, for example, a read-only memory (ROM) or a flash memory (flash RAM). The computer readable medium includes persistent, non-persistent, movable, and unmovable media that can implement information storage by using any method or technology. Information can be a computer readable instruction, a data structure, a program module, or other data. A computer storage medium includes but is not limited to a phase-change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a random access memory (RAM) of other types, a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storages, a magnetic tape, a magnetic disk storage, other magnetic storage devices, or any other non-transmission media that can be used to store information that can be accessed by the computing device. Based on the definition in the present specification, the computer readable medium does not include transitory computer readable media (transitory media), for example, a modulated data signal and a carrier. It is worthwhile to further note that the terms “include” and “contain”, or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, a method, merchandise, or a device that includes a list of elements not only includes those elements but also includes other elements which are not expressly listed, or further includes elements inherent to such a process, method, merchandise, or device. An element preceded by “includes a . . .
” does not, without more constraints, preclude the existence of additional identical elements in the process, method, merchandise, or device that includes the element. A person skilled in the art should understand that the implementations of the present application can be provided as a method, a system, or a computer program product. Therefore, the present application can use a form of hardware only implementations, software only implementations, or implementations with a combination of software and hardware. In addition, the present application can use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, and an optical memory) that include computer-usable program code. The present application can be described in the general context of computer-executable instructions executed by a computer, for example, a program module. Generally, the program module includes a routine, a program, an object, a component, a data structure, etc. for executing a specific task or implementing a specific abstract data type. The present application can also be practiced in distributed computing environments. In the distributed computing environments, tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, the program module can be located in both local and remote computer storage media including storage devices. The implementations in the present specification are all described in a progressive way. For the same or similar parts of the implementations, references can be made to each other. Each implementation focuses on a difference from other implementations. Particularly, a system implementation is basically similar to a method implementation, and therefore is described briefly. For related parts, refer to the partial descriptions of the method implementation.
The previous descriptions are merely implementations of the present application, and are not intended to limit the present application. A person skilled in the art can make various modifications and changes to the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall fall within the scope of the claims of the present application. This specification describes techniques for improving the stability and service processing efficiency of a blockchain node. For example, each blockchain node can include a corresponding plurality of servers. A registration center can be configured to manage addresses of servers in each blockchain node, and push the addresses of the servers to the client. The client can randomly select an address from the addresses pushed by the registration center, and send the service request to a server corresponding to the address. The registration center can also send the addresses of the servers in a blockchain node to each server in another blockchain node. The blockchain node can randomly select an address from the addresses of the other blockchain node, and send the service request to the other blockchain node based on the address. It can be ensured that the blockchain node is available, provided that one server in the plurality of servers included in the blockchain node is available. Therefore, stability of the blockchain node in the consensus network is greatly improved. In addition, each server included in the blockchain node can receive the service request sent by a client, and each server can initiate service consensus in the consensus network. Therefore, service processing efficiency of a blockchain service is greatly improved. 
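The registration-center workflow summarized above (the center manages the server addresses of each blockchain node, pushes them to the client, and the client randomly selects one address per service request) might look like the following sketch. All class and method names here are illustrative assumptions, not the patent's own API.

```python
import random

# Hypothetical sketch of the registration center and client behavior
# described above. The node stays reachable as long as at least one of
# its servers is available, because the client holds every address.

class RegistrationCenter:
    def __init__(self):
        self.addresses = {}  # node id -> list of server addresses

    def register(self, node_id, server_addresses):
        # Store the addresses of the plurality of servers in one node.
        self.addresses[node_id] = list(server_addresses)

    def push_to_client(self, node_id):
        # Push every stored address for the node to the client.
        return list(self.addresses.get(node_id, []))

class Client:
    def __init__(self, pushed_addresses, rng=random):
        self.pushed_addresses = pushed_addresses
        self.rng = rng

    def send_service_request(self, request):
        # Randomly select one address and direct the request to it.
        address = self.rng.choice(self.pushed_addresses)
        return (address, request)

center = RegistrationCenter()
center.register("node-1", ["10.0.0.1:8545", "10.0.0.2:8545"])
client = Client(center.push_to_client("node-1"))
address, request = client.send_service_request({"op": "transfer"})
print(address, request)
```

The same push/random-select pattern would apply between blockchain nodes, with each node selecting a random server address of a peer node before forwarding a request.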
Embodiments and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification or in combinations of one or more of them. The operations can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. A data processing apparatus, computer, or computing device may encompass apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, for example, a central processing unit (CPU), a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The apparatus can also include code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system (for example an operating system or a combination of operating systems), a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known, for example, as a program, software, software application, software module, software unit, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. 
A program can be stored in a portion of a file that holds other programs or data (for example, one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, sub-programs, or portions of code). A computer program can be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. Processors for execution of a computer program include, by way of example, both general- and special-purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data. A computer can be embedded in another device, for example, a mobile device, a personal digital assistant (PDA), a game console, a Global Positioning System (GPS) receiver, or a portable storage device. Devices suitable for storing computer program instructions and data include non-volatile memory, media and memory devices, including, by way of example, semiconductor memory devices, magnetic disks, and magneto-optical disks. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry. 
Mobile devices can include handsets, user equipment (UE), mobile telephones (for example, smartphones), tablets, wearable devices (for example, smart watches and smart eyeglasses), implanted devices within the human body (for example, biosensors, cochlear implants), or other types of mobile devices. The mobile devices can communicate wirelessly (for example, using radio frequency (RF) signals) to various communication networks (described below). The mobile devices can include sensors for determining characteristics of the mobile device's current environment. The sensors can include cameras, microphones, proximity sensors, GPS sensors, motion sensors, accelerometers, ambient light sensors, moisture sensors, gyroscopes, compasses, barometers, fingerprint sensors, facial recognition systems, RF sensors (for example, Wi-Fi and cellular radios), thermal sensors, or other types of sensors. For example, the cameras can include a forward- or rear-facing camera with movable or fixed lenses, a flash, an image sensor, and an image processor. The camera can be a megapixel camera capable of capturing details for facial and/or iris recognition. The camera along with a data processor and authentication information stored in memory or accessed remotely can form a facial recognition system. The facial recognition system or one-or-more sensors, for example, microphones, motion sensors, accelerometers, GPS sensors, or RF sensors, can be used for user authentication. To provide for interaction with a user, embodiments can be implemented on a computer having a display device and an input device, for example, a liquid crystal display (LCD) or organic light-emitting diode (OLED)/virtual-reality (VR)/augmented-reality (AR) display for displaying information to the user and a touchscreen, keyboard, and a pointing device by which the user can provide input to the computer. 
Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, for example, visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. Embodiments can be implemented using computing devices interconnected by any form or medium of wireline or wireless digital data communication (or combination thereof), for example, a communication network. Examples of interconnected devices are a client and a server generally remote from each other that typically interact through a communication network. A client, for example, a mobile device, can carry out transactions itself, with a server, or through a server, for example, performing buy, sell, pay, give, send, or loan transactions, or authorizing the same. Such transactions may be in real time such that an action and a response are temporally proximate; for example an individual perceives the action and the response occurring substantially simultaneously, the time difference for a response following the individual's action is less than 1 millisecond (ms) or less than 1 second (s), or the response is without intentional delay taking into account processing limitations of the system. Examples of communication networks include a local area network (LAN), a radio access network (RAN), a metropolitan area network (MAN), and a wide area network (WAN). The communication network can include all or a portion of the Internet, another communication network, or a combination of communication networks. 
Information can be transmitted on the communication network according to various protocols and standards, including Long Term Evolution (LTE), 5G, IEEE 802, Internet Protocol (IP), or other protocols or combinations of protocols. The communication network can transmit voice, video, biometric, or authentication data, or other information between the connected computing devices. Features described as separate implementations may be implemented, in combination, in a single implementation, while features described as a single implementation may be implemented in multiple implementations, separately, or in any suitable sub-combination. Operations described and claimed in a particular order should not be understood as requiring that particular order, or as requiring that all illustrated operations be performed (some operations can be optional). As appropriate, multitasking or parallel-processing (or a combination of multitasking and parallel-processing) can be performed.
11943318
DETAILED DESCRIPTION
The embodiments disclosed herein provide a survey system for distributing a survey created for a first distribution channel across multiple distribution channels. In particular, the survey system provides a user with the ability to create a survey for one distribution channel and automatically provide the survey across additional distribution channels. Additionally, the survey system enables improved survey administration for a user by collecting responses to a survey from across various distribution channels, compiling the responses, identifying results in each response, and providing survey results to the user. To illustrate, in one or more embodiments, the survey system receives an indication that a user desires to distribute a survey across a distribution channel that is different from the distribution channel for which the survey was originally created. For example, while creating an online survey, a user may desire that one or more respondents complete the survey via an additional distribution channel. As such, the user may select an option to distribute the survey via the additional distribution channel. In this manner, the survey system may allow the user to create a survey using familiar methods, such as creating an online survey, while still allowing the user to administer the survey over additional or alternative distribution channels, such as email, text, chat, messages, etc. In some example embodiments, when the user desires to administer the survey via an alternative distribution channel, the survey system identifies one or more questions in the survey to recompose for the alternative distribution channel. In particular, the survey system may identify the question type of each question in the survey. Based on the identified question type for each question in the survey, the survey system may determine whether to recompose the questions to be presented on the alternative distribution channel.
For example, the survey system may identify a question as a multiple choice question, and further determine to recompose the question into a format more suitable to be sent via a messaging distribution channel. Alternatively, the survey system may determine that a question is an open-ended question and, therefore, does not need to be recomposed before being sent via the messaging distribution channel. In addition to recomposing a survey for presentation on an alternative distribution channel, in some embodiments, the survey system may facilitate sending the survey, with one or more recomposed questions, to various respondents via one or more alternative distribution channels. For example, the survey system may directly administer (e.g., send and receive) the survey via a messaging distribution channel. Alternatively, the survey system may communicate with a third-party to administer the survey to the various respondents. For instance, the survey system may communicate with a third-party service that sends and receives messages via one or more alternative distribution channels, such as via a messaging distribution channel. When a survey response to a survey question is received back, the survey system may validate the response. For example, the survey system may validate that the response corresponds to an active survey. Additionally, the survey system may verify that the response contains a potential answer (e.g., the response is not blank and does not contain incoherent data). Further, in the case that the response is in reply to a recomposed survey question, the survey system may validate that the response correlates to one of the available recomposed answer choices for the recomposed survey question. After validating a response, the survey system may determine an answer to the survey question. 
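The recomposition step described above (a multiple choice question reformatted for a messaging channel, an open-ended question passed through unchanged) could be sketched as follows. The question structure, function name, and numbered-reply format are assumptions for illustration; the patent does not prescribe a specific format.

```python
# Hypothetical sketch: recompose a multiple-choice question into a numbered
# text prompt suitable for an SMS/chat channel. Open-ended questions need
# no recomposition and pass through unchanged.

def recompose_for_messaging(question):
    if question["type"] == "open_ended":
        # No answer choices to reformat; send the question text as-is.
        return question["text"], {}
    if question["type"] == "multiple_choice":
        # Number the answer choices so a respondent can reply "1", "2", ...
        # and keep a mapping from each number back to the actual choice.
        lines = [question["text"]]
        mapping = {}
        for index, choice in enumerate(question["choices"], start=1):
            lines.append(f"{index}. {choice}")
            mapping[str(index)] = choice
        lines.append("Reply with the number of your choice.")
        return "\n".join(lines), mapping
    raise ValueError(f"unsupported question type: {question['type']}")

question = {
    "type": "multiple_choice",
    "text": "How satisfied are you with the product?",
    "choices": ["Very satisfied", "Neutral", "Very unsatisfied"],
}
prompt, answer_map = recompose_for_messaging(question)
print(prompt)
```

The returned mapping is what later allows a short reply such as "2" to be resolved back to the available answer choice it indicates.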
For instance, if the response is to a survey question that has not been recomposed, the survey system may determine that the response contains the answer to the survey question. In another instance, however, when the response corresponds to a recomposed survey question, the survey system may determine that the response provides an indication of the actual answer (e.g., the response includes an indication that maps to an available answer choice for the survey question). In this instance, the survey system may use the indication of the answer to identify the actual answer to the survey question. Additional detail regarding the process of determining an answer to a survey question using a response to a recomposed survey question will be provided below. Once the survey system receives and/or determines an answer, the survey system may update the results of the survey with the answer. In some example embodiments, the survey system may send out the next question in a survey after receiving a valid answer for a previous question. In addition, the survey system may compile answers from various respondents and update the survey results. Further, the survey system may collect results from across different distribution channels and present the overall results to the user that created the survey. As mentioned above, the survey system described herein can provide a number of advantages. To illustrate, one or more embodiments of the survey system allow a user to create a survey for a single distribution channel, and then distribute the survey over multiple distribution channels without the user needing to manually customize the survey format or content for the other distribution channels. As another benefit, the survey system also collects responses from across multiple distribution channels and compiles the results together as a whole, without the user needing to manually intervene to combine responses and answers from the different distribution channels.
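The validation and answer-determination flow described in the preceding paragraphs (confirm the survey is active, confirm the response is not blank, and, for a recomposed question, map the respondent's indication back to an actual answer choice) might look like this sketch. The function name and the dictionary-based answer map are illustrative assumptions.

```python
# Illustrative sketch of response validation and answer mapping.
# answer_map is the indication -> answer-choice mapping produced when a
# question was recomposed; None means the question was not recomposed.

def determine_answer(response, survey_active, answer_map=None):
    if not survey_active:
        return None  # response does not correspond to an active survey
    text = (response or "").strip()
    if not text:
        return None  # a blank response cannot contain a potential answer
    if answer_map is None:
        # Question was not recomposed: the response itself is the answer.
        return text
    # Recomposed question: the response is only an indication ("1", "2", ...)
    # that must correlate to one of the recomposed answer choices.
    return answer_map.get(text)

answer_map = {"1": "Very satisfied", "2": "Neutral", "3": "Very unsatisfied"}
print(determine_answer("2", True, answer_map))    # indication -> actual answer
print(determine_answer("  ", True, answer_map))   # invalid: blank response
print(determine_answer("Loved it!", True, None))  # open-ended passthrough
```

Only after a non-None answer is produced would the system update the survey results and send out the next question in the survey.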
As used herein, the term “survey” refers to an electronic communication used to collect information. For example, a survey may include an electronic communication in the form of a poll, questionnaire, census, or other type of sampling. In some example embodiments, the term survey may also refer to a method of requesting and collecting information from respondents via an electronic communication distribution channel. As used herein, the term “respondent” refers to a person who participates in, and responds to, a survey. A survey may include survey questions. As used herein, the term “survey question” refers to prompts included in the survey that invoke a response from a respondent. Example types of questions include, but are not limited to, multiple choice, open-ended, ranking, scoring, summation, demographic, dichotomous, differential, cumulative, dropdown, matrix, net promoter score (NPS), single textbox, heat map, and any other type of prompt that can invoke a response from a respondent. In one or more embodiments, when one or more answer choices are available for a survey question, the term survey question may comprise a question portion as well as an available answer choice portion that corresponds to the survey question. For example, when describing a multiple choice survey question, the term survey question may include both the question itself as well as the multiple choice answers associated with the multiple choice question. As used herein, the term “response” refers to any type of electronic data provided by a respondent. The electronic data may include feedback from the respondent in response to a survey question. Depending on the question type, the response may include, but is not limited to, a selection, a text input, an indication of an answer, an actual answer, and/or an attachment. For example, a response to a multiple choice question may include a selection of one of the available answer choices associated with the multiple choice question.
As another example, a response may include a numerical value, letter, or symbol that corresponds to an available answer choice. In some cases, a response may include a numerical value that is the actual answer to a corresponding survey question. The term “distribution channel,” as used herein, refers generally to an electronic communication channel. Examples of distribution channels may include wired or wireless channels, such as online connections, electronic mail, and electronic messages (e.g., instant messages, text messages, multi-media messages, chat, etc.). In some embodiments, a distribution channel requires using a specific protocol when sending electronic data via the distribution channel. As a result, electronic data may need to be converted to a specific type of protocol before being sent over a corresponding distribution channel. For example, electronic data being sent to a mobile device via an SMS distribution channel must be based on SMS protocol before the electronic data can be sent via the SMS distribution channel. FIG. 1 illustrates a schematic diagram of a communication system 100 in accordance with one or more embodiments. As illustrated, the communication system 100 includes a survey system 102 and a client device 104. The survey system 102 may connect to the client device 104 via a network 106. Although FIG. 1 illustrates a particular arrangement of the client device 104, the survey system 102, and the network 106, various additional arrangements are possible. For example, the survey system 102 may directly communicate with the client device 104, bypassing the network 106. As mentioned, the survey system 102 and the client device 104 may communicate via the network 106. The network 106 may include one or more networks, such as the Internet, and may use one or more communications platforms or technologies suitable for transmitting data and/or communication signals. Additional details relating to the network 106 are explained below with reference to FIGS. 11 and 12.
As illustrated in FIG. 1, a respondent 110 may interface with the client device 104, for example, to access the survey system 102. The respondent 110 may be an individual (i.e., a human user), a business, a group, or another entity. Although FIG. 1 illustrates only one respondent 110, one will understand that the communication system 100 can include a plurality of respondents, with each of the plurality of respondents interacting with the communication system 100 using a corresponding client device. The client device 104 may represent various types of computing devices. For example, the client device 104 may be a mobile device (e.g., a cell phone, a smartphone, a PDA, a tablet, a laptop, a watch, a wearable device, etc.). In some embodiments, however, the client device 104 may be a non-mobile device (e.g., a desktop or server, or another type of client device). Additional details with respect to the client device 104 are discussed below with respect to FIG. 11. In one or more embodiments, the survey system 102 may communicate with the respondent 110. In particular, the survey system 102 may send a survey (e.g., questions or prompts associated with a survey) to the respondent 110 via the network 106. More specifically, the survey system 102 may send the survey to the respondent 110 via the network 106 using a variety of distribution channels. For instance, the survey system 102 may send a survey via an online distribution channel (e.g., through a website). In another instance, the survey system 102 may send a survey via a messaging distribution channel (e.g., in a chat, text, instant message, etc.). In response, the respondent 110 may interact with the survey system 102 to complete the survey. In one or more example embodiments, the respondent 110 may respond to the survey using a mobile device or tablet client device. In alternative embodiments, the respondent 110 may use a laptop or desktop client device.
In some example embodiments, the respondent 110 may use a combination of client devices to respond to a survey. FIG. 2 illustrates a schematic diagram of a survey system 102 in accordance with one or more embodiments. The survey system 102 may be an example embodiment of the survey system 102 described in connection with FIG. 1. The survey system 102 can include various components for performing the processes and features described herein. For example, in the illustrated embodiment, the survey system 102 includes a survey manager 204, a distribution channel manager 206, a composition manager 208, and a results database 210. In addition, the survey system 102 may include additional components not illustrated, such as those described below. The various components of the survey system 102 may be in communication with each other using any suitable communication protocols, such as described with respect to FIG. 12 below. Each component of the survey system 102 may be implemented using one or more computing devices (e.g., server devices) including at least one processor executing instructions that cause the survey system 102 to perform the processes described herein. The components of the survey system 102 can be implemented by a single server device or across multiple server devices, as described above. Although a particular number of components are shown in FIG. 2, the survey system 102 can include more components or can combine the components into fewer components (such as a single component), as may be desirable for a particular implementation. As illustrated, the survey system 102 includes a survey manager 204. The survey manager 204 can manage the creation of a survey and the composition of one or more survey questions. Additionally, the survey manager 204 can manage the collection of responses provided by respondents in response to one or more survey questions provided by the survey system 102.
In particular, the survey manager204can assist a user in generating and/or creating surveys, which enable the user to obtain feedback from respondents. For example, the user may interact with the survey manager204to create and/or organize a survey that includes one or more survey questions. As part of assisting a user in creating a survey, the survey manager204may suggest additional survey questions to include in the survey. For example, if a user selects a question that prompts a respondent to select an answer from a range of available answer choices, the survey manager204may recommend that the user also add an open-ended question to ask the respondent depending on the respondent's answer to the question. To illustrate, a survey administrator or product manager (e.g., the user) may add a question to a survey that asks a respondent to rank a satisfaction level with a product from one (1) to ten (10), where one (1) is very unsatisfied and ten (10) is very satisfied. If a respondent marks a low score (e.g., a 1, 2, or 3), the survey manager204may suggest adding an open-ended question that asks the respondent to explain his or her dissatisfaction with the product and/or what could be done to improve the respondent's satisfaction level. If the respondent marks a high score (e.g., an 8, 9, or 10), the survey manager204may suggest adding an open-ended question that asks the respondent the reason behind the respondent's satisfaction with the product and/or what the respondent likes about the product. The survey manager204may provide other features to assist a user in the creation and composition of survey questions to present to respondents. For instance, the survey manager204may provide alternative wording for questions provided by the user. Further, the survey manager204may allow the user to review the set of survey questions as if a respondent was viewing the survey, such as in a preview mode.
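The score-based follow-up suggestion described above can be sketched as follows. This is a minimal illustration, not the survey system's actual implementation; the function name and the score thresholds (low: 1-3, high: 8-10) are assumptions taken from the example in the text.

```python
def suggest_follow_up(score):
    """Suggest an open-ended follow-up question for a 1-10 satisfaction score."""
    if score <= 3:
        # Low scores prompt an explanation of the dissatisfaction.
        return "What could we do to improve your satisfaction with the product?"
    if score >= 8:
        # High scores prompt the reason behind the satisfaction.
        return "What do you like most about the product?"
    return None  # mid-range scores trigger no suggested follow-up here

# Example: a low score triggers the dissatisfaction follow-up.
print(suggest_follow_up(2))
```

A real survey manager would let the user edit or reject the suggested question rather than inserting it automatically.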
In addition to creating a survey, the survey manager204may assist a user in editing a survey that the user is creating or has created. For example, the survey manager204may provide tools that allow a user to add, remove, edit, or otherwise modify survey questions. For instance, the survey manager204may enable a user to change the available answer choices for a survey question. In another instance, the survey manager204may allow the user to remove one or more survey questions, even after the survey has begun to be administered to respondents. Further, the survey manager204may allow a user to specify preferences and/or parameters for one or more surveys. For example, the user may use the survey manager204to specify the beginning date of a survey, a survey's duration, and/or when a survey expires. The survey manager204may also enable a user to specify how long a respondent has to complete a survey, or the time (e.g., either a minimum time or a maximum time) a respondent has to respond to a survey question. In some example embodiments, the survey manager204may assist a user in specifying customizations to apply to a survey. For instance, a user may use the survey manager204to apply specific branding to a survey, such as applying a particular color scheme and/or adding a company's logo. Further, the user may use the survey manager204to specify when questions on a survey should import piped text (e.g., respondent-customized text) into a survey based on contact and demographic information the survey system has on file for respondents. For example, when a user selects an option to add piped text, the survey manager204may input the name, age, and/or gender of a respondent in survey questions sent to the specific respondent. In a similar manner, the survey manager204may aid a user in selecting the respondents to whom to send a survey. In some cases, the survey manager204may provide a listing of respondents to whom to send a survey.
In addition, the survey manager204may organize respondents in groups and allow a user to select one or more groups. In some instances, the survey manager204may allow the user to import contact information for respondents. For instance, the user may upload a list of mobile numbers to which the survey system102should send a survey. In addition to creating a survey, the survey manager204may enable the user to select which distribution channel(s) the survey system102should use when administering the survey. As one example, the user may select an option to have the survey system102administer the survey on a website available via the Internet. Additionally or alternatively, the user may select an option for the survey system102to administer the survey via instant messages, text messages, or within a chat session. When selecting multiple distribution channels on which to administer a survey, the survey manager204may prioritize one distribution channel over another distribution channel. For instance, the survey manager204may instruct the survey system102to attempt to administer a survey via the Internet (i.e., using hyperlinks and websites), and if unsuccessful, to administer the survey via text message. In some example embodiments, the survey manager204may be located outside of the survey system102. In other words, the survey manager204may be part of a system that is separate from the survey system102, such as belonging to a third-party system. When the survey manager204is located outside the survey system102, the survey manager204may, apart from the survey system102, create and distribute surveys as well as gather and store responses from respondents for the surveys. Regardless of whether the survey manager204operates as part of the survey system102or another system, the survey manager204can collect responses to survey questions provided by respondents. The survey manager204may collect responses in a variety of ways.
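The channel-prioritization behavior just described (attempt the preferred channel, fall back on failure) can be sketched as below. The channel names and the send functions are hypothetical placeholders, not part of the described system.

```python
def administer(survey, respondent, channels):
    """Attempt each (name, send_fn) pair in priority order; return the channel used.

    `channels` is an ordered list of (channel_name, send_function) pairs, where
    each send function returns True on successful delivery.
    """
    for name, send_fn in channels:
        if send_fn(survey, respondent):
            return name
    return None  # every channel failed

# Example: web delivery fails, so the survey falls back to text message.
send_web = lambda survey, respondent: False  # stand-in for a failed web delivery
send_sms = lambda survey, respondent: True   # stand-in for a successful SMS delivery
used = administer("survey-1", "+15550100", [("web", send_web), ("sms", send_sms)])
print(used)
```

Ordering the list by user preference reproduces the "Internet first, then text message" policy from the text.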
To illustrate, the survey manager204may extract responses to a survey question in bulk. For example, the survey manager204may collect a list of multiple responses to a survey question. In addition, or in the alternative, the survey manager204may collect responses to a survey question as respondents provide their responses to the survey question. Once the survey manager204collects a response to a survey question, the survey manager204can verify the answer to the survey question provided in the response. In particular, if the respondent is responding to a survey question that includes available answer choices, the survey manager204can determine that the response includes an answer that corresponds to one of the available answer choices for the survey question. In this manner, the survey manager204may ensure that only valid answers are being included in the results and stored in the survey system102, as described below. If the survey manager204determines that an answer is invalid, the survey manager204may enable the respondent to re-answer the survey question. In some cases, if the respondent is unavailable, the survey manager204may disregard the invalid answer in the response. In some example embodiments, upon collecting and verifying responses, the survey manager204may store the responses. More specifically, the survey manager204can store the responses for a survey in a results database210. In some instances, the survey manager204may separately store responses for each survey question. To illustrate, if a survey includes two survey questions, then the survey manager204can store responses for the first survey question together and responses for the second survey question together. Additionally or alternatively, the survey manager204may store the responses outside of the survey system or on a system belonging to a third party.
Further, after verifying that an answer for a survey question is valid, the survey manager204may compile answers for survey questions into a set of results. In some cases, compiling the results may include adding a newly obtained answer to a set of previously compiled results. For example, each time a respondent answers a particular survey question for a survey, the survey manager204may add the newly received answer to answers previously received from other respondents for the same survey question. Additionally, the survey manager204may compile a set of survey results based on the results for each survey question. The survey manager204may also provide the results for one or more survey questions in a survey to the user that created the survey, a survey administrator, and/or a survey result reviewer. The survey manager204may present the results using charts, graphs, and/or other graphics. For example, for a multiple choice question, the survey manager204may provide a bar graph comparing the responses for each answer choice. Further, the survey manager204may update the results as additional answers are received from respondents, as described above. In some example embodiments, the survey manager204may present the results to the user via a website. The website may be the same website used by the user to create the survey. The website may provide results of the survey to the user regardless of the distribution channel the survey system102employed to administer the survey. For example, the website may display a single set of results of a survey even when respondents of the survey completed the survey via multiple distribution channels, such as online or via text messages, chat, instant messaging, email, etc. As briefly mentioned above, the survey system102includes a distribution channel manager206. When the survey system102administers a survey, the distribution channel manager206may send and receive the survey to and from designated respondents.
More specifically, the distribution channel manager206may send and receive surveys to and from respondents via the distribution channel(s) selected by the user. In particular, when a user selects a particular distribution channel on which to administer a survey, the distribution channel manager206may identify the protocols and communication requirements for the particular distribution channel. For example, when the user selects the option to administer a survey via a website, the distribution channel manager206may identify relevant protocols, such as TCP/IP, HTTP, etc., along with the requirements for each protocol. As another example, when the user selects the option to administer a survey to mobile devices via text message, the distribution channel manager206may identify the protocols for sending and receiving messages via SMS, short message peer-to-peer (SMPP), multimedia messaging service (MMS), enhanced messaging service (EMS), and/or simple mail transport protocol (SMTP). Additionally, the distribution channel manager206may specify outgoing address information associated with a survey. Depending on the distribution channel, the distribution channel manager206may send a survey from one of multiple addresses (e.g., websites, email addresses, phone numbers, etc.). In the case of multiple distribution channels, the distribution channel manager206may specify the outgoing address from which to send a particular survey. In this manner, when a respondent receives a survey, or a request to participate in a survey, the incoming address seen by the respondent is the outgoing address specified by the distribution channel manager206. To illustrate, a user can select the option or designate a survey to be sent via text message to a respondent's mobile device. When sending a text message to a respondent's mobile device, the distribution channel manager206may choose from a number of outgoing addresses from which to send the survey.
In some example embodiments, the distribution channel manager206may select between short numbers (e.g., 5-digit or short code numbers) and/or long numbers (e.g., 10-digit or long code numbers). Further, in one or more embodiments, the distribution channel manager206may allow a user to specify which outgoing address(es) the distribution channel manager206should associate with a survey and/or distribution channel. In one or more embodiments, the survey system102may be administering multiple surveys to the same respondent. In some of these embodiments, because the distribution channel manager206is able to send a survey to a respondent via multiple outgoing addresses, the distribution channel manager206may associate a different outgoing address with each survey being sent to the respondent. To illustrate, a first survey may be sent to the mobile device of a respondent via a first outgoing address, and a second survey may be sent to the mobile device of the respondent via a second outgoing address. In some example embodiments, the distribution channel manager206may use one or more third-party services to distribute a survey to respondents. For instance, when a user selects the option to administer a survey via a particular distribution channel, the distribution channel manager206may use a third-party service that is specialized in distributing information via the particular distribution channel. For example, if a user specifies that a survey should be administered via text message, the survey system102may employ a third-party text messaging service to send and receive the survey. The distribution channel manager206may provide navigational tools and options to the respondent based on the distribution channel that the distribution channel manager206uses to send a survey to a respondent. 
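The per-survey outgoing-address behavior described above (two surveys to the same respondent arrive from different numbers) can be sketched as a small address pool. The class name, the pool of short codes, and the fallback when the pool is exhausted are all illustrative assumptions.

```python
class AddressPool:
    """Assign a stable outgoing address per (respondent, survey) pair."""

    def __init__(self, addresses):
        self.addresses = list(addresses)
        self.assigned = {}  # (respondent, survey) -> outgoing address

    def address_for(self, respondent, survey):
        key = (respondent, survey)
        if key not in self.assigned:
            # Prefer an address not already used for this respondent, so
            # concurrent surveys to the same person use distinct numbers.
            in_use = {a for (r, _), a in self.assigned.items() if r == respondent}
            free = [a for a in self.addresses if a not in in_use]
            self.assigned[key] = free[0] if free else self.addresses[0]
        return self.assigned[key]

pool = AddressPool(["55501", "55502"])  # e.g., two hypothetical short codes
print(pool.address_for("alice", "survey-A"))  # first survey gets the first code
print(pool.address_for("alice", "survey-B"))  # second survey gets a different code
```

Keeping the assignment stable also lets a reply be matched back to the right survey by the address it was sent to.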
For example, when administering a survey via a website, the distribution channel manager206may provide navigational tools, such as a progress indicator, and navigational options, such as forward and back, to the respondent. As another example, when administering a survey via text message, the distribution channel manager206may provide navigational tools and options that allow a respondent to skip a question, return to a previous question, stop the survey, get a progress update, etc. Due to the nature of text messages, however, the distribution channel manager206may provide these tools and options to a respondent upon a respondent sending particular key words in a response, such as “skip,” “back,” “stop,” “status,” etc. In one or more embodiments, the distribution channel manager206may provide an option for a respondent to pause a survey and resume the survey using a different distribution channel. For example, when a respondent is completing a survey via a text message on a mobile device, the respondent may respond with “web access,” “online version,” or some other type of response indicating a desire to continue the survey online. In response, the distribution channel manager206may provide the respondent with a link (e.g., URL), which when selected, allows the respondent to continue the survey online rather than by text message. In some embodiments, the user designs the survey to automatically include the link in the survey (e.g., at the beginning of the survey), such that the respondent can use the link at any time to continue the survey via a different distribution channel. Likewise, a respondent completing the survey online may select an option, such as a link built into or presented by the survey, to continue the survey via an alternative distribution channel, such as email, text message, instant message, etc. Further, when available, the distribution channel manager206may provide an option for a respondent to select a language preference.
The preference may be applied on a per survey basis or applied to future surveys for the respondent (e.g., such as a global preference). Depending on the distribution channel on which the survey is administered, the option to set a language preference may be displayed as an option within the survey. Alternatively, the respondent may need to specify a language preference in response to a language preference message sent by the survey system102(e.g., text “Spanish” back to complete the survey in Spanish). In some example embodiments, depending on the distribution channel used, the distribution channel manager206may receive multiple responses corresponding to a single communication. For example, when sending and receiving a survey via some text message distribution channels, such as SMS, the distribution channel may limit the number of characters that can be included in an electronic communication (e.g., up to 160 characters). As such, the distribution channel may break up text messages into multiple messages. If the distribution channel manager206receives multiple text messages within a predefined time of one another (e.g., 10 seconds), the distribution channel manager206may need to concatenate the multiple text messages into a single response. When concatenating responses, the distribution channel manager206may determine the number of text messages that correspond to a response. If the distribution channel manager206determines that two text messages correspond to a response, the distribution channel manager206may identify the text message with 160 characters (or other character limit) as the first part of the response and the text message with less than 160 characters as the second part of the response. If the distribution channel manager206determines that more than two text messages correspond to a response, the distribution channel manager206may identify the text message with less than 160 characters as the last part of the response.
Further, the distribution channel manager206may use capitalizations at the beginning of the response, timestamps, and other factors to determine the order of the remaining text messages. In some cases, the text messages themselves will provide an indication of the order that the text messages should be concatenated (e.g., “Message 1 of 3,” “Message 2 of 3,” and “Message 3 of 3”), which the distribution channel manager206can use to concatenate the multiple related text messages. While the distribution channel manager206may send a survey to a respondent upon the request of a user, as described above, in some cases, a respondent may initiate the survey. For example, a respondent may contact the survey system102to take a survey. Depending on the distribution channel the respondent uses to take the survey, the respondent may need to include an access code when contacting the survey system102to initiate a survey. For instance, when sending a text message to the survey system102(e.g., to an address associated with the survey system102), the access code included in the text message indicates to the survey system102the particular survey that the respondent desires to take. Accordingly, the distribution channel manager206may detect the incoming text message, identify the access code in the text message, and indicate to the survey system102that the respondent would like to take the corresponding survey. As described above, a user may specify that the survey system102administer a survey via a particular distribution channel. In some example embodiments, the user may specify that the survey manager204administer a survey via text message, such as via an instant message, a SMS, a chat, or another text-based distribution channel. The survey manager204, however, may be unable to administer a survey over the specified distribution channel because one or more survey questions in the survey were not composed to be administered over the specified distribution channel.
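The concatenation logic described over the preceding paragraphs can be sketched as follows: explicit "Message i of n" markers decide the order when present; otherwise a full-length (160-character) part is placed before the shorter final part. This is an illustrative sketch of those two rules only, not the described system's code; the marker format is the one quoted in the text.

```python
import re

def concatenate(parts):
    """Join multi-part text-message responses into a single response string."""
    marker = re.compile(r"Message (\d+) of \d+:?\s*")
    if all(marker.match(p) for p in parts):
        # Explicit markers: sort by the message index, then strip the markers.
        ordered = sorted(parts, key=lambda p: int(marker.match(p).group(1)))
        return "".join(marker.sub("", p, count=1) for p in ordered)
    # Fallback heuristic from the text: any full 160-character message
    # precedes the shorter (final) message. Python's sort is stable, so
    # equal-length classes keep their arrival order.
    return "".join(sorted(parts, key=lambda p: len(p) < 160))

# Example: parts arrive out of order but carry explicit markers.
print(concatenate(["Message 2 of 2: world", "Message 1 of 2: hello "]))
```

A production implementation would also use the timestamps and capitalization cues mentioned above to break ties.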
To illustrate, the user may create a survey online. The user may create the survey online using applications and tools provided by the survey manager204, as described above. As part of creating the survey, the user may select the option to distribute the survey via text message (in addition to or in place of administering the survey online). If the user created the survey to be distributed as an online survey, rather than requiring that the user manually recreate the survey for distribution via text message, the survey system102may automatically recompose the survey to be distributed via text message. To illustrate, the survey system102includes the composition manager208, which includes a question-type detector212, a question recomposer214, a response validator216, and a response translator218. When the survey system102needs to distribute a survey on a distribution channel other than the distribution channel for which the survey was created, the composition manager208may, without user intervention, recompose the survey to enable the survey to be presented on the selected distribution channel. To illustrate, when recomposing a survey, the composition manager208may determine whether a survey question in the survey can be sent via the selected distribution channel or if the survey question needs to be recomposed before the survey question can be sent via the selected distribution channel. In some example embodiments, the composition manager208may determine whether a survey question needs to be recomposed based on question type. Accordingly, the composition manager208may include a question-type detector212that identifies and detects the question-type of survey questions, and determines if the composition manager208needs to recompose one or more survey questions. A survey question can be one of many question types.
Examples of question types include, but are not limited to, net promoter score (NPS), multiple choice, multiple selection, open-ended, ranking, scoring, summation, demographic, dichotomous, differential, cumulative, dropdown, matrix, short response, essay response, heat map, etc. The question-type detector212may determine the question type of a survey question using a variety of methods. In one or more embodiments, the question-type detector212may identify that a survey question has been tagged as a specific question type. For example, as part of creating a survey, a user may select a particular type of question to add to the survey. For instance, the user may select a multiple choice or open-ended survey question to add to the survey. When the user selects to add a survey question having a particular question type, the survey system102may tag the survey question as having the selected question type. Additionally or alternatively, the question-type detector212may analyze the survey question to determine the question type. For instance, the question-type detector212may identify key words in a question that may indicate the question type. For example, upon detecting that the survey question includes the words “additional comments,” the question-type detector212may determine that the survey question is an open-ended question. As another example, upon identifying the words, “mark all that apply,” the question-type detector212may identify the question type as a multiple selection question. Further, the question-type detector212may use the presence of available answer choices in a survey question to determine the question type of the survey question. For instance, if no available answer choices are associated with an answer, the question-type detector212may rule out the possibility of the survey question being a multiple choice or multiple selection question.
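The keyword- and structure-based detection just described can be sketched as below. The keyword table and the rule ordering are assumptions for illustration; the text notes a real detector would combine several such signals.

```python
# Hypothetical keyword table mapping tell-tale phrases to question types.
KEYWORDS = {
    "additional comments": "open-ended",
    "mark all that apply": "multiple selection",
}

def detect_question_type(question_text, answer_choices):
    """Guess a question type from its wording and available answer choices."""
    text = question_text.lower()
    for phrase, qtype in KEYWORDS.items():
        if phrase in text:
            return qtype
    if not answer_choices:
        # No choices rules out choice-based types (multiple choice/selection).
        return "open-ended"
    if {c.lower() for c in answer_choices} == {"yes", "no"}:
        return "dichotomous"
    return "multiple choice"

print(detect_question_type("Any additional comments?", []))
print(detect_question_type("Do you own a car?", ["Yes", "No"]))
```

Tag-based detection (the first method in the text) would short-circuit this analysis whenever the question already carries a question-type tag.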
Further, the question-type detector212may determine that the question type is open-ended, short response, or essay response. In another instance, if the question-type detector212determines that a respondent must select one of two answers, or if the only available answer choices are “yes” and “no,” the question-type detector212may determine the question type to be dichotomous. Similarly, if the question-type detector212determines that a respondent must select one of multiple available answer choices, the question-type detector212may determine that the survey question is a multiple choice question. Depending on the selected distribution channel and the question type, the composition manager208may determine to recompose a question to be better suited for the selected distribution channel. For example, if a survey is to be administered via text message, the composition manager208may determine that recomposing open-ended questions would minimally benefit respondents, while recomposing survey questions with multiple available answer choices (e.g., multiple selection and matrix survey questions) would benefit respondents by allowing respondents to conveniently answer these questions via text message. When the composition manager208determines that a survey question should be recomposed, the question recomposer214may recompose the survey question to better suit the selected distribution channel. For example, the question recomposer214may recompose (e.g., reformat, restructure, or otherwise modify) a survey question before the survey question is provided to a respondent over the selected distribution channel. Features and functionalities of the question recomposer214will now be described. In addition,FIGS.5A-8Bprovide various examples and embodiments of recomposing survey questions composed originally for use with one distribution channel, and then recomposed to be provided via another distribution channel.
For purposes of explanation, the question recomposer214will be described in terms of recomposing a survey question originally composed for an online survey to a recomposed survey question to be presented via text message. One will appreciate, however, that the principles described herein with respect to the question recomposer214apply to recomposing a survey question to be provided via other distribution channels. Further, when a user composes a survey question, the user may intend for the survey question to be distributed via text message; however, it may be more intuitive and familiar for the user, given the tools provided by the survey system102, to create survey questions for online distribution rather than for distribution via text message. In general, the question recomposer214recomposes survey questions such that the survey question can be more easily answered via the selected distribution channel. For example, when a respondent is answering a multiple choice or multiple selection answer on a webpage, the respondent may use a cursor to select an answer from the available answer choices. If the same question was presented to the respondent via text message, however, the respondent does not have the option to use a cursor to select an answer. Accordingly, the question recomposer214may recompose the question to allow the respondent to answer the survey question with minimal effort, such as answering a multiple choice survey question by texting back a single digit or single letter. In some example embodiments, the question recomposer214may recompose a survey question based on protocol limitations of the distribution channel over which the survey question is sent. For example, if the survey is administered via text message such as SMS, each text message sent may be limited to 160 characters. Other distribution channel protocols may have similar character limitations.
Accordingly, the question recomposer214may recompose a survey question by reducing words and/or characters in a survey question to fit within a single message. Fitting a survey question in a single message may allow a respondent to see the entire survey question as a whole, rather than the survey question being divided into multiple parts. Alternatively, the question recomposer214may determine to send a survey question as multiple messages. In this case, the question recomposer214may determine where to split a survey question in order to reduce confusion to a respondent. For example, the question recomposer214may avoid splitting a survey question at a confusing point (e.g., splitting the available answer choices off into a separate text message). Further, the question recomposer214may ensure that when multiple messages are sent for a survey question, the messages do not arrive out of order, as often is the case when a text message system splits a single text message into multiple text messages. In some example embodiments, and based on question type, the question recomposer214may divide a survey question into multiple recomposed survey questions when presenting the survey question via text message. For example, and as shown and explained below inFIGS.6A-6B, a survey may include a matrix survey question. Presenting a matrix question composed for an online survey via text message is not practical and, in some cases, not possible for a respondent to answer. Accordingly, the question recomposer214may recompose a matrix question to be presented via text message by dividing the matrix question into multiple recomposed questions. In one or more embodiments, the question recomposer214may recompose a survey question by assigning or mapping available answer choices of the survey question to corresponding letters or numbers.
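The message-splitting concern above (fit within the 160-character SMS limit without cutting a question or answer-choice line in half) can be sketched as a simple line-packing routine. This is an assumed greedy strategy for illustration, not the recomposer's actual algorithm.

```python
def pack_messages(lines, limit=160):
    """Pack whole lines into messages of at most `limit` characters each.

    Lines are never split across messages, so an answer choice always
    stays intact within a single text message.
    """
    messages, current = [], ""
    for line in lines:
        candidate = (current + "\n" + line) if current else line
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                messages.append(current)
            current = line  # start the next message with this line
    if current:
        messages.append(current)
    return messages

msgs = pack_messages([
    "How satisfied are you with our service?",
    "1 - Very unsatisfied", "2 - Unsatisfied", "3 - Neutral",
    "4 - Satisfied", "5 - Very satisfied",
])
print(len(msgs))  # these six short lines all fit in a single message
```

A real recomposer might also shorten wording first, per the text, so that the whole question fits in one message.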
To illustrate, a survey question may ask “How often do you visit our restaurant?” and provide available answer choices of “Daily,” “Weekly,” “Monthly,” “Yearly,” and “Never.” The question recomposer214may recompose the available answer choices into “1—Daily,” “2—Weekly,” “3—Monthly,” “4—Yearly,” and “5—Never,” where a respondent need only respond with the corresponding number (e.g., “1,” “2,” “3,” “4,” or “5”). In some cases, the question recomposer214can associate numbers to available answer choices that are words (e.g., “1: Dog,” “2: Cat,” “3: Bird,” etc.) while associating letters with available answer choices that include numbers (e.g., “A: 0,” “B: 1-10,” “C: 11-50,” etc.). Further, when the question recomposer214recomposes available answer choices of a survey question by assigning letters or numbers to each available answer choice, the question recomposer214may accept multiple answers for each available answer choice. For example, if a recomposed survey question includes the recomposed answers of “1: Dog,” “2: Cat,” “3: Bird,” a response including either “2” or “Cat” may be valid because the survey system102may use either value to determine the respondent's answer to the survey question. As mentioned above, the question recomposer214may recompose a survey question based on the question type of the survey question. For example, the question recomposer214may recompose a multiple choice or multiple selection survey question. Before recomposing a multiple choice question, however, the question recomposer214may determine if available answer choices associated with the multiple choice question are already suitable for distribution via text message.
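The restaurant example above can be sketched directly: each answer choice is mapped to a number for display, and the lookup accepts either the number or the choice text as a valid reply. The function name and display format are illustrative only.

```python
def recompose_choices(choices):
    """Return ('1 - Daily' style display lines, lookup accepting both reply forms)."""
    display, lookup = [], {}
    for i, choice in enumerate(choices, start=1):
        display.append(f"{i} - {choice}")
        lookup[str(i)] = choice          # texting back the number is valid
        lookup[choice.lower()] = choice  # texting back the word is also valid
    return display, lookup

display, lookup = recompose_choices(["Daily", "Weekly", "Monthly", "Yearly", "Never"])
print(display[0])       # the first recomposed answer line
print(lookup["2"])      # a reply of "2" resolves to "Weekly"
print(lookup["weekly"]) # a reply of "weekly" resolves the same way
```

Per the text, a real recomposer would switch to letter prefixes ("A:", "B:", ...) when the answer choices themselves contain numbers, to avoid ambiguity.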
For instance, a multiple choice question may include numerical answers such as “1,” “2,” “3,” and “4” or “1,” “10,” “100,” and “1000.” In this case, the question recomposer214may determine that the available answer choices in the multiple choice question do not need to be recomposed before being presented via text message. In addition to recomposing survey questions of a survey, the question recomposer214may allow a user creating a survey to view recomposed survey questions as the user is composing a survey question. For example, if the user is adding a survey question via an online interface, the survey system102may display the survey question as composed by the user. In addition, the survey system102may display the recomposed survey question as it would appear on a mobile device or within a text message. Additionally or alternatively, the question recomposer214may provide the recomposed survey question in a text message to the user and allow the user to respond to the recomposed survey question. The question recomposer214may provide the recomposed survey question to the user as part of a test mode where answers by the user are not included in the results of the survey. Along similar lines, when the question recomposer214provides a preview of a recomposed survey question to the user creating a survey, the question recomposer214may determine whether a recomposed question is compatible with the selected distribution channel. For example, the question recomposer214may determine that a recomposed question will not display via text message (or via another distribution channel) or that the recomposed survey question will be displayed in a manner that may be confusing or unpleasant when presented via text message.
Upon making the determination, the question recomposer214may notify or warn the user that a survey question is not able to be recomposed for a selected distribution channel (e.g., specifically notify the user that the survey question is too long, or the survey question type does not lend itself to the new format of the selected distribution channel). In such a case, the survey system102may allow the user to manually recompose the survey question. For instance, the question recomposer214may provide the user with suggestions or alternative approaches to rephrase or reformat the survey questions, as described below. Additionally, or alternatively, the question recomposer214may automatically recompose the survey question specifically for the selected distribution channel, delete the survey question, or skip the survey question when administering the survey via the selected distribution channel. Similarly, if the question recomposer214is recomposing a survey question after a user has created a survey, the question recomposer214may skip a survey question that is not presentable via the selected distribution channel. For example, the question recomposer214may determine that a recomposed survey question is not presentable via text message. As such, the question recomposer214may notify the survey system102and the survey system102may skip to the next survey question that is presentable via text message. Further, the question recomposer214may notify the creator of the survey or a survey administrator of the incompatibility. In some example embodiments, after the question recomposer214has recomposed a survey question to be sent via a selected distribution channel, the distribution channel manager206(described above) can send and receive the recomposed survey question. Depending on the properties of the selected distribution channel, the distribution channel manager206may send and receive individual recomposed survey questions, in a serial manner, as described below.
Alternatively, in some embodiments, the distribution channel manager206may send and receive multiple survey questions at one time. In one or more embodiments, the distribution channel manager206may not receive a response to a sent survey question, such as a survey question or recomposed survey question sent via text message or instant message. After a threshold period of time has passed without receiving a response (e.g., an hour, a day, two days, a week, etc.), the distribution channel manager206may send another message asking the respondent if the respondent desires to continue the survey, or if the respondent prefers to stop the survey. If the respondent desires to continue the survey, the distribution channel manager206may resend the last unanswered survey question. Alternatively, if the respondent does not respond or responds to stop the survey, the distribution channel manager206may send a confirmation that the survey is terminated for the respondent. Once the distribution channel manager206receives a response to a recomposed survey question, the composition manager208may validate the response. In particular, the response validator216may validate the response received from a respondent within a text message. As described below in additional detail, the response validator216may determine whether a text message response to a survey question or a recomposed survey question is valid. Additional detail of verifying responses is provided with respect toFIG.4. The response validator216may validate a response based on a number of factors. As one example, the response validator216may determine if a response is empty, blank, or contains bad data. If the response validator216determines that a response does not include a plausible answer (e.g., the response is empty, blank, or nonsensical), the response validator216may identify the response as invalid.
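The timeout behavior described above, where a reminder is sent after a threshold period with no response, might be sketched as follows. The threshold value and function name are illustrative assumptions, not details from the disclosure:

```python
from datetime import datetime, timedelta

REMINDER_THRESHOLD = timedelta(days=1)  # illustrative threshold (could be an hour, a week, etc.)

def next_action(last_sent, last_response, now):
    """Decide what the distribution channel manager does for one respondent:
    validate a received response, wait, or ask whether to continue the survey."""
    if last_response is not None:
        return "validate_response"
    if now - last_sent >= REMINDER_THRESHOLD:
        return "ask_to_continue"   # e.g. "Would you like to continue the survey?"
    return "wait"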
When the response validator216determines that a response is invalid, the response validator216may send a notification to the respondent indicating the invalid response. The response validator216may also include options in the notification that enable the respondent to re-respond to the survey question, skip the survey question, stop the survey, etc. Additionally or alternatively, the response validator216may cause the survey system102to resend the survey question, or recomposed survey question, to the respondent and allow the respondent to provide a valid response. If the response validator216determines that a response includes a plausible answer, the response validator216may further validate the response. For example, the response validator216can verify that the response corresponds to an active or open survey. For instance, the response validator216may use information from the response to identify the survey to which the response corresponds.FIG.4, described below, provides additional detail regarding the survey system102identifying the survey to which the response corresponds. In some cases, the response validator216may determine that the survey to which the respondent is attempting to respond has timed out or expired. In other cases, the response validator216may determine that the response does not correspond to an active survey. For instance, the response validator216may receive a text message from a respondent that does not correspond to a survey being administered by the survey system102. In this case, the survey system102may indicate, for example, via a message, to the respondent that the response is invalid. Further, the survey system102may provide information, such as an activation code, to the respondent to allow the respondent to start a new survey session.
Further, when the survey system102receives a response to a recomposed survey question, the response validator216may determine whether the response answers a recomposed survey question by including a selection of one of the available recomposed answer choices. For example, the response validator216may compare the response to the available recomposed answer choices to identify a match. If the response matches one of the available recomposed answer choices, the response validator216can determine that the response is a valid response. In addition, the response validator216may determine if the response includes only one answer, or if the response includes multiple answers. The response validator216may match the response to the recomposed survey question to determine if the recomposed survey question allows for the selection of multiple answers, and if so, whether the multiple answers are plausible answers. For example, if the recomposed survey question is a multiple selection question, then a response can validly include multiple answers. In another instance, the response validator216may detect that the response includes multiple answers, but that the multiple answers correspond to the same answer choice. To illustrate, a valid response to a recomposed survey question may be “1” where “1” refers to an answer to the recomposed question (mapped to the available answer choice of “Cat” in the survey question). In addition, the answer of “Cat” may also be a valid answer choice of the same survey question. As shown inFIG.2, the survey system102may include a results database210. The results database210may be made up of a single database or multiple databases. In addition, the results database210may be located within the survey system102. Alternatively, the results database210may be external to the survey system102, such as in cloud storage. Further, the results database210may store and provide data and information to the survey system102, as further described below.
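The multi-answer validation described above, including the case where two tokens in one response name the same answer choice, may be sketched along these lines. The function name and comma/space delimiting are illustrative assumptions:

```python
def validate_multi_response(response, recomposed_choices, allow_multiple):
    """Validate a text-message response that may contain several answers.
    recomposed_choices maps keys like '1' to answer choices like 'Cat'."""
    text = (response or "").strip()
    if not text:
        return None                                # empty or blank response
    tokens = [t for t in text.replace(",", " ").split()]
    matched = []
    for tok in tokens:
        choice = recomposed_choices.get(tok)       # keyed answer, e.g. '2'
        if choice is None:                         # or the choice text itself, e.g. 'Cat'
            for c in recomposed_choices.values():
                if tok.lower() == c.lower():
                    choice = c
                    break
        if choice is None:
            return None                            # implausible answer
        if choice not in matched:                  # '1' and 'Cat' may name the same choice
            matched.append(choice)
    if len(matched) > 1 and not allow_multiple:
        return None                                # multiple answers to a single-choice question
    return matched
```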
The results database210may include surveys220, such as surveys created via the survey manager204. Further, the results database210may also include surveys imported from third-party sources. In addition, the results database210may store information about each survey, such as parameters and preferences that correspond to each survey. For example, when a user creates a survey and specifies that the survey be administered via a selected distribution channel, the results database210may record the user's specified selection. Each survey may have a survey identifier (or simply “survey ID”) to provide unique identification. In some cases, the surveys may be organized according to survey ID. Alternatively, surveys220in the results database210may be organized according to other criteria, such as creation date, last modified date, closing time, most recent results, etc. Further, the results database210may associate access codes with a survey ID, such that the survey system102can identify to which survey a response corresponds when the response includes an access code. As described below, in one or more embodiments, the survey system102may associate a survey ID with a survey token. A survey token may identify that a particular respondent is completing a particular survey. In some cases, survey tokens may also correspond to the distribution channel a respondent is using to respond to a survey. For example, when a respondent is completing a survey via text message, the survey system102may create and store a token that includes the originating address (e.g., the outgoing number the survey system102is using to send the survey) and the destination address (e.g., the respondent's number to which the survey system102is sending the survey). In this manner, when a respondent is completing a survey, the survey system102may associate a survey token with the survey, and thus link the respondent to the survey.
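The survey-token mechanism described above, where a token built from the originating and destination addresses links a respondent to a survey, may be sketched as follows. The class and method names are hypothetical and stand in for the results database lookup:

```python
def make_token(originating, destination):
    """A survey token built from the outgoing number and the respondent's number."""
    return (originating, destination)

class TokenRegistry:
    """Maps survey tokens to survey IDs, as the results database might."""
    def __init__(self):
        self._tokens = {}

    def start_survey(self, originating, destination, survey_id):
        self._tokens[make_token(originating, destination)] = survey_id

    def lookup(self, originating, destination):
        """Return the survey ID for an incoming response, or None if unknown."""
        return self._tokens.get(make_token(originating, destination))
```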
In the instance that the survey system102has multiple outgoing numbers, the survey system102can maintain a record of each survey in which the respondent is participating. In a similar manner, the survey system102can also use the results database210to maintain a record of a respondent's progress within a survey. In particular, the results database210may store the survey question that a respondent is currently answering. For example, if a respondent has completed two out of five questions on a survey, the survey system102may include a record in the results database210of the respondent's current progress. In particular, the survey system102may note that the respondent has answered the first two questions, been provided the third question, and has not yet answered the third question or subsequent questions. As shown inFIG.2, the surveys220may include questions222and results224. More specifically, each survey may include a set of questions222. The survey system102may store the questions grouped by survey. Further, each question may have a unique question identifier (or simply “question ID”). In some cases, the question ID may also identify the survey to which the question belongs. For example, all questions from a particular survey may include the survey ID within the question ID. Further, each question may be associated with a set of results, or a compilation of answers associated with the question. Accordingly, along with questions222, the surveys220may include results224. When a respondent provides an answer to a survey question, the survey system102may add the answer as part of the results224. As such, the results224may include a cumulative set of answers for a survey question. Further, each result may have a unique results identifier (or simply “result ID”). In some instances, the result ID may identify the survey and/or the question to which the result corresponds.
For instance, based on the result ID, the survey system102is able to identify the corresponding question and/or the survey. FIG.3illustrates a sequence-flow method300showing the survey system102administering a survey to a client device104. The survey system102and the client device104shown inFIG.3may each be example embodiments of the survey system102and the client device104described with regard toFIG.1. Further, as shown inFIG.3, the survey system102may include a survey manager204and a composition manager208. The survey manager204and the composition manager208shown inFIG.3may be example embodiments of the survey manager204and the composition manager208described in connection withFIG.2. As shown inFIG.3, the survey system102may receive a request to administer a survey. In particular, as shown in step302, the survey manager204may receive a request to administer a survey created for a first distribution channel on a second distribution channel. For example, a user may request that the survey system102administer a survey created as an online survey via text message. In step304, the survey manager204may send the survey created for the first distribution channel to the composition manager208. In particular, the survey manager204may detect that the user wants to administer the survey on a distribution channel other than the distribution channel for which the survey was created, and as a result, may send the survey over to the composition manager208to recompose one or more survey questions. For instance, the survey manager204may send the survey over to the composition manager208to recompose the survey questions for presentation on the second distribution channel via text message, such as SMS or instant messaging. Upon receiving the survey, the composition manager208may recompose one or more survey questions to be presented on the second distribution channel, as step306illustrates.
More specifically, the composition manager208may determine whether a survey question in the survey is currently suitable for the second distribution channel. Additionally, the composition manager208may determine, based on the survey question type, that a recomposed survey question may enable a respondent to better answer the survey question. As described above, the composition manager208may determine whether to recompose a survey question based on the question type of the survey question and/or based on the type of available answer choice (e.g., numerical answers) for the survey question. Additionally, as described above, in some example embodiments, the composition manager208may recompose a single survey question into multiple recomposed survey questions. Based on the determination to recompose a survey question within the survey, the composition manager208may recompose the survey question to be presented on the second distribution channel. For instance, the composition manager208may recompose a survey question to be presented via text message, as described above. Further examples of recomposing survey questions are provided below in connection withFIGS.5A-8B. In some example embodiments, a survey question needs to be converted from a first protocol format to a second protocol format before the survey question can be sent via the second distribution channel. In particular, if the survey system102is sending a survey question (including a recomposed survey question) via text message, the survey system102may need to convert the survey question to another protocol, such as SMS. As step308illustrates, the survey system102may convert one or more survey questions for the second distribution channel. Further, as described above, in one or more embodiments, the survey system102may employ a third-party service to convert and distribute the survey questions via text message.
After recomposing and/or converting the protocol of the survey question, the survey system102may administer the survey to one or more respondents. In particular, as shown in step310, the composition manager208may administer one or more survey questions via the second distribution channel to the client device104. For instance, the composition manager208may send a text message including a survey question or a recomposed survey question to a respondent associated with the client device104. WhileFIG.3illustrates one client device104, the composition manager208may send, via text message, one or more survey questions to multiple respondents associated with multiple client devices. As part of administering the survey via the second distribution channel in step310, the survey system102may receive responses from the client device104. More specifically, the composition manager208may receive a response from the client device104via text message. The response may include an answer to the recomposed survey question previously sent to the client device104. As illustrated in step312, the composition manager208may validate the received responses to the one or more survey questions. As briefly described above, the composition manager208may confirm that a response is not blank or empty, that the response corresponds to an active survey, and that the response includes an available answer. Further,FIG.4, which is described below, provides a more detailed example of validating responses received from respondents. After validating the responses received from the client device104, the composition manager208may determine survey answers from the valid responses, shown in step314. For example, as described above, the composition manager208may determine if a response includes a direct answer to a survey question, or an indication of an answer to a recomposed survey question. 
To illustrate, if the survey system102sent a non-recomposed survey question to a respondent, then the response can contain a direct answer to the survey question. In contrast, however, if the survey system102sent a recomposed survey question to a respondent, the response may contain an indication of an answer to the recomposed survey question, which serves as an indication of the actual answer to the survey question. For example, if the recomposed survey question includes available recomposed answer choices, which append numbers or letters to the available answer choices of the survey question, the respondent may include a number or letter in their response. As such, the number or letter in the response serves as an indication of one of the available answer choices to the survey question. The composition manager208may use the indication of the recomposed survey question to determine the answer to the survey question. In some example embodiments, a response to a recomposed survey question may include an actual answer to the survey question, as described above. In any case, the composition manager208may analyze a valid response, and determine a survey answer from the response. In step316, the composition manager208may send survey answers to the survey manager204. In some cases, the composition manager208may store the answers in a database, such as a results database. When this occurs, the composition manager208may send an indication to the survey manager204that the answers to one or more survey questions are stored in the results database. In other cases, the composition manager208may send answers individually to the survey manager204as the answers are received. Alternatively, the composition manager208may collect a plurality of answers to various survey questions before sending the answers to the survey manager204. The survey manager204may validate answers corresponding to one or more survey questions, as step318illustrates.
For example, the survey manager204can verify that, depending on the survey question, the correct number of answers is given. Further, the survey manager204may verify that the answers provided by the composition manager208satisfy the one or more survey questions to which they correspond. As illustrated in step320, after validating the one or more answers, the survey manager204may determine survey results from the valid answers. As described above, the survey manager204may compile numerous answers to a survey question into results for the survey question. Using the results for each survey question, the survey manager204can present the results to a user that created the survey or that is reviewing the survey results. Further, as described above, the survey manager204may present results for each survey question, or for the survey as a whole, regardless of the distribution channel used to administer the one or more survey questions, or if the one or more survey questions were provided to different respondents via different distribution channels. FIG.4illustrates a sequence-flow method400showing the survey system102validating responses to a survey question received from a client device104. The survey system102and the client device104shown inFIG.4may each be example embodiments of the survey system102and the client device104described with regard toFIG.1. Further, as shown inFIG.4, the survey system102may include a composition manager208. The composition manager208shown inFIG.4may be an example embodiment of the composition manager208described in connection withFIG.2. Additionally, the steps inFIG.4may provide additional explanation and detail to the steps described in connection withFIG.3. In particular, the steps inFIG.4may, in some example embodiments, correspond to step310(i.e., administering one or more survey questions via the second distribution channel), step312(i.e., validating received responses), and step314(i.e., determining survey answers from the valid responses).
To illustrate, step402inFIG.4illustrates the composition manager208of the survey system102sending a recomposed survey question to the client device104via the second distribution channel. For instance, the composition manager208may send the recomposed survey question via text message to a respondent associated with the client device104. The respondent may receive the recomposed survey question and provide a response. Accordingly, the client device104may send a response to the recomposed survey question back to the composition manager208. As shown in step404, the composition manager208may receive a response to the recomposed survey question from the client device104. Upon receiving the response to the recomposed survey question, the composition manager208may identify a survey associated with the response, illustrated in step406. As such, the survey system102, in particular, the composition manager208, can identify with which survey a response is associated. Further, because each response is received independently, the survey system102can identify to which survey a response corresponds each time the survey system102receives a response. To identify the survey to which a text message response corresponds, the composition manager208may use information gathered from the response, such as the respondent's address (e.g., the address or phone number of the client device104) and the survey system's address (e.g., the address or number of the survey system102). Using the address information, the composition manager208may generate or identify a survey token. The composition manager208may provide the survey token to the survey system102and the survey system102may return the survey ID of the survey associated with the response. In particular, the survey system102, upon receiving the survey token, may look up the survey token in a database, such as the results database described above, and may identify the survey ID and, in some cases, the respondent associated with the survey token.
In one or more embodiments, the survey system102may provide an indication to the composition manager208that the survey associated with the survey token is active. Alternatively, the survey system102may indicate that the survey token is associated with an expired, inactive, or closed survey. Additionally, the survey system102may indicate to the composition manager208that the survey token does not correspond to a known survey on the survey system102. Based on the information provided by the survey system102, the composition manager208may continue validating the response, or send a message to the client device104indicating that the response does not correspond to an active or valid survey. As briefly described above, the survey system102may use multiple addresses in connection with a distribution channel. For example, the survey system102may use a plurality of short code numbers and/or long code numbers to send out survey questions and receive responses. If the survey system102uses a different outgoing address each time it sends out a different survey, each survey token can correspond to a single survey and survey tokens will not overlap. If, however, the survey system102is limited in the number of addresses through which it can administer multiple surveys (e.g., the survey system102only has a single address), the survey system102can send different surveys to a respondent using the same outgoing and destination numbers. As a result, the survey token consisting of the survey system's address and the respondent's address may refer to multiple surveys and the survey token could no longer be used to identify a particular survey. As one solution to this issue, the survey system102may limit the number of surveys in which a respondent can simultaneously participate to the number of outgoing addresses the survey system102has per distribution channel.
For example, if the survey system102has five text messaging numbers, the survey system102may limit a respondent to participating in only five surveys at one time. If the survey system102has one text messaging number, the survey system102may limit a respondent's participation to only one survey at one time. In this manner, if the respondent desires to participate in an additional survey, the respondent must finish or quit an existing survey. As another solution, the survey system102may request that the respondent include a unique identifier in each response. For example, a respondent often sends an access code to the survey system102to initiate a survey via text message, such as the access code “survey” or “demo” (e.g., “Text ‘survey’ to 55555 to take a short survey”). In response, the survey system102initiates a survey with the respondent via text message. In one or more embodiments, the survey system102may also request that a respondent provide the access code, or another identifier code, with each response. For instance, the survey system102may, for example, request that a respondent include “S1” in each response to indicate to the survey system102that the response corresponds to Survey1. In this manner, the survey system102may use the identifier code along with the sender's address and the respondent's address to create a survey token and identify the survey to which a response corresponds. Further, the survey system102may use the same outgoing address to maintain multiple surveys with a respondent at the same time. In the event that the survey system102requests that a respondent include the access code or another identifier code in a text message response, the composition manager208can include the identifier code in each question as a reminder to the respondent to include the code. Further, the composition manager208may send a text message back to a respondent when a response does not include the identifier code.
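The identifier-code approach described above, where the code (e.g., “S1”) disambiguates multiple surveys running over the same outgoing number, may be sketched as follows. The class and method names are hypothetical:

```python
class SessionRegistry:
    """Identify which survey a response belongs to when one outgoing number
    carries several surveys, using an identifier code such as 'S1'."""
    def __init__(self):
        self._sessions = {}

    def register(self, outgoing, respondent, code, survey_id):
        self._sessions[(outgoing, respondent, code.upper())] = survey_id

    def identify(self, outgoing, respondent, response_text):
        """Return the survey ID if the response carries a known identifier
        code, else None (prompting a reminder to resend with the code)."""
        for (out, resp, code), survey_id in self._sessions.items():
            if out == outgoing and resp == respondent and code in response_text.upper():
                return survey_id
        return None
```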
For instance, the composition manager208may send a message saying, “I'm not sure what question you are trying to answer. Please resend your answer along with the survey code shown in the question.” In yet another alternative embodiment, the composition manager208may recompose questions such that each recomposed answer choice is unique among all other possible answers in the survey system102. For example, if a survey question has three available answer choices, the composition manager208may recompose each of the available answer choices into available recomposed answer choices that are coded based on the survey ID, question ID, and available answer choice. In this manner, the composition manager208may use the response to identify the survey to which the response corresponds, the current question the respondent is answering, and the answer the respondent selected. While this approach may not be ideal for surveys administered via text message, this approach may be beneficial for surveys administered via alternative distribution channels. Returning toFIG.4, once the composition manager208has identified the survey to which the response corresponds, the composition manager208may then identify the question associated with the identified survey, as step408illustrates. For example, the composition manager208may provide the survey ID and the survey token to the survey system102, and the survey system102may return the survey question to which the respondent is currently responding. To illustrate, the survey system102may use the survey ID and the survey token to look up in a database, such as the results database, the question the respondent is currently answering, and return the question to the composition manager208. In some additional or alternative cases, the survey system102may return the recomposed survey question to the composition manager208.
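The globally unique answer coding described above, based on the survey ID, question ID, and answer choice, might look like the following sketch; the delimiter and function names are illustrative assumptions:

```python
def encode_choice(survey_id, question_id, choice_index):
    """Code an answer choice so that the response alone identifies the
    survey, the current question, and the selected answer."""
    return f"{survey_id}-{question_id}-{choice_index}"

def decode_choice(code):
    """Recover the survey, question, and answer from a coded response."""
    survey_id, question_id, choice_index = code.split("-")
    return survey_id, question_id, int(choice_index)
```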
For example, upon identifying the survey question associated with the response, the survey system102may also identify that the response was received via the second distribution channel. Further, the survey system102may identify a corresponding recomposed survey question associated with the survey question that has been recomposed for the second distribution channel. Accordingly, the survey system102may send the recomposed survey question to the composition manager208upon identifying the recomposed survey question. Alternatively, rather than receiving the recomposed survey question from the survey system102, the composition manager208may use the survey question received from the survey system102and recompose the survey question, as described above. Using the identified question, the composition manager208can determine if the received response is valid for the identified survey question, as shown in step410. More specifically, the composition manager208may determine if the response contains an answer to the recomposed survey question. In some example embodiments, the composition manager208may compare the response to the available recomposed answer choices to determine whether the response matches one of the available recomposed answer choices. If the composition manager208does not identify a match, the composition manager208may determine that the response is not valid for the identified question. Based on the composition manager208detecting an invalid response, the composition manager208may send a message to the client device104indicating an invalid response, as step412illustrates. The message may be a text message and may provide the respondent an opportunity to re-respond to the recomposed survey question. In some cases, the composition manager208may resend the recomposed survey question to the client device104. When the respondent again replies, the composition manager208may receive an updated response to the recomposed survey question, as shown in step414.
The composition manager208may again validate the response (e.g., repeat steps406-410). In some additional embodiments, the composition manager208may interact with the respondent to arrive at a valid answer. To illustrate, the recomposed survey question may prompt the respondent to enter a date. The recomposed survey question may allow the respondent to enter “Today,” “Yesterday,” or manually input a date. If the composition manager208receives a response that says “March 15,” the composition manager208may send a follow up message asking the respondent to enter a year or to confirm that the full date is “Mar. 15, 2015.” Once the composition manager208confirms the date with the respondent, the composition manager208may determine that the response is valid. As step416illustrates, based on the composition manager208determining that the response is valid, the composition manager208may determine an answer to the survey question. As described above, the composition manager208may use the answer to the recomposed survey question to determine the answer to the survey question. For example, the composition manager208may use the mapping between the available answer choices of the survey question and the available recomposed answer choices of the recomposed survey question to identify the answer to the survey question selected by the respondent. FIG.5Aillustrates an example multiple choice survey question502composed for presentation on a first distribution channel. In particular, the multiple choice survey question502inFIG.5Aincludes a question502aand available answer choices502b. For purposes of explanation, the multiple choice survey question502has the answer “neutral” selected. A user may compose the multiple choice survey question502using tools provided by a survey system. For example, the survey system may provide online-based tools that help the user create a survey and compose survey questions.
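The interactive date confirmation described above, where a month and day without a year triggers a follow-up message rather than an outright rejection, may be sketched as follows. The assumed year, date formats, and function name are illustrative assumptions:

```python
import re

def confirm_date(response, assumed_year=2015):
    """Return ('valid', date), ('follow_up', message), or ('invalid', None)
    for a date-entry response, asking a follow-up when the year is missing."""
    text = response.strip()
    if text.lower() in ("today", "yesterday"):
        return ("valid", text)
    if re.fullmatch(r"[A-Za-z]+\.? \d{1,2}, \d{4}", text):   # e.g. 'Mar. 15, 2015'
        return ("valid", text)
    if re.fullmatch(r"[A-Za-z]+\.? \d{1,2}", text):          # year missing
        return ("follow_up", f"Is the full date {text}, {assumed_year}?")
    return ("invalid", None)
```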
The survey system may allow a user to compose survey questions for a first distribution channel (e.g., online distribution via a website), such as the survey question shown inFIG.5A, even when the user selects an option to have the survey system administer the survey via a second distribution channel (e.g., via text message). If the user selects the option to administer the survey via a second distribution channel, the survey system may automatically recompose the survey question502to be presented via the second distribution channel. FIGS.5B-5Fillustrate examples of recomposed survey questions504-512for presentation on a second distribution channel. In particular,FIG.5Billustrates a recomposed survey question504where the question504ais the same as the question502ain the survey question502shown inFIG.5A. The available recomposed answer choices504binFIG.5Bmay also correspond to the available answer choices502bshown inFIG.5A. The available recomposed answer choices504b, however, may also include numbers associated with each of the available answer choices502bof the survey question502. For example, the first available recomposed answer choice is “1—very satisfied.” To answer the recomposed survey question504, a respondent need only respond with the number “1” to indicate the answer of “very satisfied.” In some example embodiments, in addition to associating numbers or letters with available answer choices, the survey system may modify the question and/or available answer choices of a recomposed survey question. For example,FIG.5Cillustrates a recomposed survey question506where the available recomposed answer choices506bhave been truncated. In particular, the survey system removes qualifier words like “somewhat” or “very.” In some instances, the survey system may additionally, or in the alternative, modify the question506aas well. For instance, the survey system may modify a recomposed survey question to fit within a text message.
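The numbering of answer choices described above (as in FIG. 5B) amounts to prefixing each available answer choice with a sequential number. A minimal sketch, with an illustrative function name:

```python
def number_choices(choices):
    """Prefix each available answer choice with a sequential number,
    producing available recomposed answer choices such as
    '1 - very satisfied' that a respondent can answer by number."""
    return ["%d - %s" % (i, c) for i, c in enumerate(choices, start=1)]

print(number_choices(["very satisfied", "neutral", "very dissatisfied"]))
```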
For example, some text message protocols, such as SMS, limit text messages to 160 characters. Other protocols may limit messages to more or fewer characters. Further, the survey system may truncate a question or available recomposed answer choices within a recomposed survey question to better ensure that the recomposed survey question fits within the display of a client device. Displays on some client devices may be smaller in size, and thus, a shorter recomposed survey question may better suit these client devices. As shown in the recomposed survey question508ofFIG.5D, the survey system may omit one or more available recomposed answer choices508b. For example, the survey system may omit the intermediary available answer choices of “somewhat satisfied” and “somewhat dissatisfied.” In some example embodiments, the survey system may allow a respondent to provide a response of “2” or “4” and the survey system will map those responses to “somewhat satisfied” and “somewhat dissatisfied” respectively. In other embodiments, the survey system may only allow the responses “1,” “3,” and “5.” In still other embodiments, when recomposing the survey question, the survey system may remove the available answer choices of “somewhat satisfied” and “somewhat dissatisfied,” and then provide available recomposed answer choices of “1—satisfied,” “2—neither,” and “3—dissatisfied.” In one or more embodiments, the survey system may recompose a survey question by folding the available answer choices into the survey question. To illustrate, the recomposed survey question510inFIG.5Edisplays the question510a, which has incorporated the available answer choices into the question510a.
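The length check and qualifier-word truncation described above (the 160-character SMS limit and FIG. 5C) can be sketched as follows; the function names and qualifier list are assumptions for illustration:

```python
SMS_LIMIT = 160  # single-message character limit used by SMS, per the text above

def fits_sms(question, choices):
    """Check whether a recomposed question plus its answer choices,
    one per line, fits within a single SMS message."""
    body = question + "\n" + "\n".join(choices)
    return len(body) <= SMS_LIMIT

def strip_qualifiers(choice, qualifiers=("somewhat ", "very ")):
    """Truncate an answer choice by removing qualifier words,
    e.g. 'somewhat satisfied' becomes 'satisfied'."""
    for q in qualifiers:
        choice = choice.replace(q, "")
    return choice

print(strip_qualifiers("somewhat satisfied"))  # -> "satisfied"
print(fits_sms("How satisfied are you?", ["1 - satisfied", "2 - dissatisfied"]))
```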
While folding the available answers into the question510amay appear to change the question type, the survey system may properly associate the response to the recomposed survey question510with an available answer choice502bof the survey question502even though the survey question502and the recomposed survey question510are different question types. In some cases, incorporating the available answer choices502bfrom the survey question502into the question may reduce the overall length of the question. In other cases, however, shortening the question510atoo much may lead to confusion for a user or may change the outcome of the answer. For example, depending on the available answer choices in the survey question, incorporating the available answer choices into the question may change the nature of the question such that the respondent is prompted to provide a response that does not map to one of the available answer choices. In some example embodiments, the survey system may remove or change available answers from a survey question altogether when recomposing the survey question. For example, if a survey question is “What type of pets do you own?” and the available answer choices are “dog,” “cat,” “bird,” and “fish,” the survey system may recompose the survey question as the open-ended question “What type of pets do you own?” without listing the available answer choices. If the respondent includes one of the available answer choices in their response, the survey system may use the provided answer to update the results of the survey question. Otherwise, the survey system may dismiss or otherwise store the answers provided by the respondent (e.g., as an “other” option or as the actual answer provided). In some example embodiments, the survey system may first provide the recomposed survey question510shown inFIG.5Eto a respondent.
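The open-ended matching described above, where a free-text reply is scanned for any of the original available answer choices, can be sketched as below. The naive substring match is an illustrative simplification (it would, for example, match “cat” inside “category”), not the survey system's actual matching logic:

```python
def extract_known_answers(response, known_choices):
    """Scan a free-text reply for any of the original available answer
    choices; anything else could be stored as an 'other' answer, as
    described above. Naive substring matching, for illustration only."""
    text = response.lower()
    return [c for c in known_choices if c in text]

found = extract_known_answers("We have a dog and a fish",
                              ["dog", "cat", "bird", "fish"])
print(found)  # -> ["dog", "fish"]
```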
If the respondent does not provide a valid response, or requests help from the survey system in responding to the recomposed survey question510, the survey system may provide another recomposed survey question for the same survey question, such as the recomposed survey question504shown inFIG.5Bor the recomposed survey question506shown inFIG.5C. In one or more additional embodiments, the survey system may recompose a survey question by replacing available answer choices with substitute available recomposed answer choices. As shown inFIG.5F, the survey system may replace the available answer choices502bof the survey question502in a recomposed survey question512with numbers and/or graphics. Respondents commonly use graphics or symbols, such as emojis, smileys, and ideograms in text messages. In many cases, a respondent's client device will automatically display graphics in place of certain strings of text (e.g., converting “:)” into a smiley face graphic). Accordingly, the survey system may recompose a survey question and allow a respondent to answer the recomposed survey question using graphics. FIG.6Aillustrates an example matrix survey question602composed for presentation on a first distribution channel. The matrix survey question602includes a question602aand available answer choices602b. Further, the matrix survey question602is composed for a respondent to answer via a first distribution channel, such as part of an online survey. When a respondent answers the matrix survey question602via the first distribution channel, the respondent may rate, using available answer choices602b, multiple aspects of the question602aat one time. As described above, if a user desires to provide a matrix question via a second distribution channel, such as text message, the survey system may be unable to provide the matrix survey question to a respondent. FIG.6Billustrates an example of recomposed survey questions604-608for presentation on a second distribution channel.
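Accepting graphic or emoticon replies as answers, as described above for FIG. 5F, can be sketched as an alias table that normalizes a symbol onto an answer choice. The particular symbols and labels are assumptions for illustration:

```python
# Hypothetical substitution table: the survey system may accept a
# number, the full label, or an emoticon as the same answer choice.
GRAPHIC_ALIASES = {
    ":)": "satisfied",
    ":|": "neutral",
    ":(": "dissatisfied",
}

def resolve_graphic(response):
    """Map an emoticon reply onto an available answer choice,
    falling back to the raw (stripped) text when no graphic matches."""
    cleaned = response.strip()
    return GRAPHIC_ALIASES.get(cleaned, cleaned)

print(resolve_graphic(":)"))      # -> "satisfied"
print(resolve_graphic("neutral")) # non-graphic replies pass through
```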
As shown inFIG.6B, the survey system may recompose the matrix survey question602ofFIG.6Aby separating the matrix survey question602into multiple recomposed survey questions604-608. By separating the survey question602into three recomposed survey questions604-608, the survey system may now distribute the recomposed survey questions604-608via the second distribution channel (e.g., via text message). As shown inFIG.6B, the multiple recomposed survey questions604-608may break down the question602ain the survey question602ofFIG.6Ainto separate recomposed questions604a,606a, and608a. In addition, the multiple recomposed survey questions604-608may also recompose the available answer choices602binto available recomposed answer choices604b,606b,608b, as described above, to include numbers to allow a respondent to simply and easily respond to each recomposed survey question604-608. For example, in recomposed survey question604, the available recomposed answer choices604binclude “1—satisfied,” “2—neutral,” and “3—dissatisfied.” When the survey system recomposes a survey question into multiple recomposed survey questions for presentation on a second distribution channel, the survey system may send each recomposed survey question individually. For example, the survey system may send the first recomposed survey question604via text message to a respondent. Once the survey system receives a reply, the survey system may send the second recomposed survey question606to the respondent, and so forth. When responses to all the recomposed survey questions for a survey question are received and validated, the survey system may determine the answer(s) to the survey question and update the results for the survey question. FIG.7Aillustrates an example multiple selection survey question702composed for presentation on a first distribution channel. As shown inFIG.7A, the multiple selection survey question702may include a question702aand available answer choices702b.
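The matrix separation described above, one recomposed survey question per row of the matrix, each carrying the same numbered answer choices, can be sketched as follows. The question wording and function name are illustrative assumptions:

```python
def split_matrix(question_stem, aspects, choices):
    """Separate a matrix survey question into one recomposed question
    per aspect (matrix row), each with the same numbered choices,
    suitable for sending one text message at a time."""
    numbered = ["%d - %s" % (i, c) for i, c in enumerate(choices, start=1)]
    return [("%s: %s?" % (question_stem, aspect), numbered)
            for aspect in aspects]

parts = split_matrix(
    "How satisfied are you with each of the following",
    ["price", "quality", "support"],
    ["satisfied", "neutral", "dissatisfied"],
)
print(len(parts))  # one recomposed question per matrix row
```

Each element of `parts` would then be sent individually, waiting for a validated reply before sending the next, as the text describes.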
Further, the multiple selection survey question702is composed for a respondent to answer via a first distribution channel, such as part of an online survey. When responding to the multiple selection survey question702, a respondent may select multiple answers from the available answer choices702b. In some cases, a respondent may select all of the available answer choices702b. As briefly described above, the survey system may recompose the multiple selection survey question702for distribution on a second distribution channel. For example,FIGS.7B-7Cillustrate examples of recomposed survey questions704-710for presentation on a second distribution channel (e.g., via text message). More specifically, as shown in the recomposed survey question704ofFIG.7B, the survey system may recompose the multiple selection survey question702by rewriting the question702ainto a recomposed question704a, and assigning numbers to each of the available answer choices702bas part of the available recomposed answer choices704b. As an alternative, the survey system may separate the multiple selection survey question702into separate recomposed survey questions706-710for each available answer choice702b, as shown inFIG.7C. In particular, the survey system may recompose the question702ainto separate recomposed questions706a,708a, and710a. Further, the survey system can recompose the available answer choices702binto simple yes or no available recomposed answer choices, where a respondent can respond with either the number “1” or “2,” or the words “yes” or “no” (shown as available recomposed answer choices706b,708b,710b). In some instances, the survey system may also accept a response of “y” or “n” to the recomposed survey questions. As with the matrix survey question, the survey system may separate a multiple selection survey question into separate recomposed survey questions.
The survey system may send each recomposed survey question and receive a corresponding response before sending the next recomposed survey question. Further, the survey system may wait for valid responses to each recomposed survey question before combining the responses and updating the results for the survey question. In some example embodiments, whether the survey system separates a multiple selection survey question into separate recomposed survey questions may be based on the number of available answer choices702b. For example, when the survey question includes fewer than a threshold number of available answer choices (e.g., fewer than five), the survey system may recompose the survey question into a single recomposed survey question. If the survey question includes at least the threshold number of available answer choices (e.g., five or more), the survey system may recompose the survey question into multiple recomposed survey questions. Additionally, the survey system may determine whether to separate a multiple selection survey question into separate recomposed survey questions based on the length of the multiple selection survey question and the protocols of the selected distribution channel. In some example embodiments, the survey system may separate a multiple selection survey question into multiple recomposed survey questions, such as multiple recomposed survey questions that are designed like the recomposed survey question704illustrated inFIG.7B. FIG.8Aillustrates an example heat map survey question802composed for presentation on a first distribution channel. The heat map survey question802includes a question802aand an answer area802b. Further, the heat map survey question802is composed for a respondent to answer via a first distribution channel, such as part of an online survey. As described above, the heat map survey question802may provide an image802cthat allows a respondent to select a location within the image802c.
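The threshold decision described above, keeping a single numbered question for a small choice set and splitting into per-choice yes/no questions for a larger one, can be sketched as below. The threshold value and question phrasing are illustrative assumptions:

```python
CHOICE_THRESHOLD = 5  # example threshold from the text: five or more choices

def recompose_multi_select(question, choices):
    """Decide, based on the number of available answer choices, whether
    to produce a single numbered recomposed question (as in FIG. 7B) or
    one yes/no recomposed question per choice (as in FIG. 7C)."""
    if len(choices) < CHOICE_THRESHOLD:
        numbered = ["%d - %s" % (i, c) for i, c in enumerate(choices, 1)]
        return [(question, numbered)]
    return [("%s: %s? (1 - yes, 2 - no)" % (question, c), ["yes", "no"])
            for c in choices]

print(len(recompose_multi_select("Which pets do you own", ["dog", "cat"])))
print(len(recompose_multi_select("Which pets do you own",
                                 ["dog", "cat", "bird", "fish", "snake"])))
```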
To illustrate, as shown inFIG.8A, the heat map survey question802displays an image802cof a bicycle with the question802a“Where would you put our company's logo on the product shown below?” and the instruction for the respondent to select an area of the image802c. To respond to the heat map survey question802, the respondent provides a selection within the answer area802bat the position where he or she would place the logo. While the survey system may provide a heat map survey question802via certain distribution channels, such as when administering a survey online, the survey system may be unable to present the heat map survey question802via another distribution channel, such as via text message. Accordingly, the survey system may recompose the heat map survey question802for presentation on a second distribution channel. To illustrate,FIG.8Bshows an example of a recomposed heat map survey question804for presentation on a second distribution channel. For example, the recomposed heat map survey question804inFIG.8Bincludes a recomposed question804aand an answer grid804b. In particular, as shown inFIG.8B, the recomposed heat map survey question804displays an image804cof the bicycle. Within the recomposed survey question804, the recomposed question804aincludes a prompt, such as “Where would you put our company's logo on the product shown below?” along with the instruction for the respondent to provide coordinates where the respondent would place the logo. The answer grid804bmay display numbers and letters in a grid pattern around the image804cto allow a respondent to provide an indication of where he or she would place the logo. Accordingly, the survey system may allow a respondent to provide an answer to a heat map survey question802via text message, where the heat map survey question802was originally composed for online surveys.
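Parsing a coordinate reply such as “B3” against the answer grid described above can be sketched as follows; the coordinate format (one column letter followed by a row number) and grid dimensions are assumptions for illustration:

```python
import string

def parse_grid_answer(reply, cols, rows):
    """Parse a coordinate reply such as 'B3' against an answer grid with
    lettered columns and numbered rows, returning zero-based
    (column, row) indices, or None when the reply falls outside
    the grid or cannot be parsed."""
    reply = reply.strip().upper()
    if len(reply) < 2 or reply[0] not in string.ascii_uppercase:
        return None
    col = string.ascii_uppercase.index(reply[0])
    try:
        row = int(reply[1:]) - 1
    except ValueError:
        return None
    if col >= cols or not (0 <= row < rows):
        return None
    return (col, row)

print(parse_grid_answer("B3", cols=8, rows=6))  # -> (1, 2)
print(parse_grid_answer("Z9", cols=8, rows=6))  # outside the grid -> None
```

A None result would trigger the invalid-response handling described earlier, prompting the respondent to reply with coordinates inside the grid.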
Further, the survey system may obtain an answer to the heat map survey question802via text message without requiring the user that created the survey to intervene (e.g., rewrite or delete the survey question, or have the survey system skip the survey question when the survey question is provided via the second distribution channel). FIGS.1-8, the corresponding text, and the examples provide a number of different systems, devices, and graphical user interfaces for administering survey questions to respondents via a distribution channel other than the distribution channel for which the survey question was created. In addition to the foregoing, embodiments disclosed herein also can be described in terms of flowcharts comprising acts and steps in a method for accomplishing a particular result. For example,FIGS.9-10illustrate flowcharts of exemplary methods in accordance with one or more embodiments disclosed herein. The methods described in relation toFIGS.9-10can be performed with fewer or more steps/acts or the steps/acts can be performed in differing orders. Additionally, the steps/acts described herein can be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts. FIG.9illustrates a flowchart of an exemplary method900for distributing a survey via an additional distribution channel. The method900can be implemented by the survey system102described above. The method900involves an act902of identifying a survey question composed to be presented via a first distribution channel. In particular, the act902may involve identifying a question type of a survey question composed to be presented via a first distribution channel, where the survey question is associated with a survey. In some example embodiments, the first distribution channel may be an online distribution channel.
Further, in one or more embodiments, the act902may involve identifying whether the survey question is a multiple choice question, a rating scale question, a drop down selection question, a matrix selection question, or an open-ended/open response question. In addition, the method900involves an act904of recomposing the survey question to be presented on a second distribution channel. In particular, the act904may involve recomposing the survey question to be presented on a second distribution channel based on the identified question type of the survey question. In some instances, the second distribution channel may include text messaging, such as SMS. Further, in one or more embodiments, the act904may involve recomposing the survey question into a plurality of recomposed survey questions. In some example embodiments, the act904may involve determining whether the survey question is compatible with (e.g., able to be recomposed for) presentation on the second distribution channel. Further, the method900involves an act906of providing the recomposed survey question via the second distribution channel. In particular, the act906may involve providing, to a client device104associated with a respondent, the recomposed survey question via the second distribution channel. For example, the act906may involve the survey system102sending the recomposed survey question to a client device104via text message. The method900also involves an act908of receiving a response to the recomposed survey question. In particular, the act908may involve receiving, from the client device104and via the second distribution channel, a response to the recomposed survey question. In some example embodiments, receiving the response to the recomposed survey question from the client device104and via the second distribution channel may involve receiving a plurality of responses corresponding to the plurality of recomposed survey questions. 
The method900involves an act910of determining an answer to the survey question based on the response. In particular, the act910may involve determining an answer to the survey question based on the received response to the recomposed survey question. In one or more embodiments, the act910may involve determining that the response to the recomposed survey question includes a selected recomposed answer from the plurality of available recomposed answer choices, and further involve identifying the answer to the survey question that corresponds to the selected recomposed answer of the recomposed survey question. The method900involves an act912of updating results with the answer to the survey question. In particular, the act912may involve updating results corresponding to the survey with the answer to the survey question. For example, the act912may include a survey system102updating the results for the survey question and presenting the results to one or more survey reviewers. In addition, the method900may involve an act of validating a received response, and based on the received response not satisfying the validation, sending a message to the client device104associated with the respondent to respond to the recomposed survey question with a valid response. In some example embodiments, the method900may involve an act of determining that the survey question does not need to be recomposed before being sent to the respondent via the second distribution channel based on the identified question type of the survey question. In one or more embodiments, the method900may include the act of determining that the survey question comprises a plurality of available answer choices. 
In these embodiments, the method900may also involve mapping the plurality of available answer choices in the survey question to a plurality of available recomposed answer choices in the recomposed survey question, where the plurality of available recomposed answer choices are each unique and/or sequential numbers. In some example embodiments, the method900may involve the act of validating the response to the recomposed survey question. Upon validating the response to the recomposed survey question and determining the answer to the survey question, the method900may also involve identifying an additional survey question associated with the survey. Further, based on the identified question type of the additional survey question, the method900may involve recomposing the additional survey question to be presented on the second distribution channel, and providing, to the client device associated with the respondent, the additional recomposed survey question via the second distribution channel. FIG.10illustrates a flowchart of an exemplary method1000for recomposing a survey question based on question type. The method1000can be implemented by the survey system102described above. The method1000involves an act1002of identifying a first question in a survey having a first question type composed to be presented on a first distribution channel. In particular, the act1002may involve identifying a first question in a survey having a first question type, the first question being composed to be presented on a first distribution channel. In some example embodiments, the first distribution channel may be an online distribution channel. The method1000also involves an act1004of recomposing the first survey question to be presented on a second distribution channel. In particular, the act1004may involve recomposing the first survey question to be presented on a second distribution channel based on the first question being identified as the first question type. 
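The end-to-end flow of method 900 — recompose, deliver, validate with re-prompts, then record an answer to the original question — can be sketched as a single driver function. The callables passed in (send, receive, and so on) are stand-ins for the survey system's channel and validation components, which are not specified here:

```python
def administer_question(question, recompose, send, receive, validate, to_answer):
    """End-to-end sketch of method 900: recompose the question for the
    second channel, deliver it, re-prompt until the reply validates
    (steps 410-414), then translate the validated reply into an answer
    to the original survey question (step 416)."""
    recomposed = recompose(question)
    send(recomposed)
    reply = receive()
    while not validate(reply, recomposed):
        send("Invalid response. " + recomposed)  # step 412: re-prompt
        reply = receive()
    return to_answer(reply)

# Illustrative run: the first reply is invalid, the second validates.
outbox = []
replies = iter(["maybe", "2"])

answer = administer_question(
    "How satisfied are you?",
    recompose=lambda q: q + " 1 - satisfied, 2 - neutral, 3 - dissatisfied",
    send=outbox.append,
    receive=lambda: next(replies),
    validate=lambda r, _q: r in {"1", "2", "3"},
    to_answer=lambda r: {"1": "satisfied", "2": "neutral",
                         "3": "dissatisfied"}[r],
)
print(answer)       # -> "neutral"
print(len(outbox))  # two messages sent: the question, then one re-prompt
```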
The act1004may involve recomposing the first survey question to be presented on a second distribution channel in any suitable manner as described herein. In some example embodiments, the second distribution channel may include text messaging, such as SMS and instant messaging. Further, the method1000involves an act1006of identifying a second question in the survey having a second question type composed to be presented on the first distribution channel. In particular, the act1006may involve identifying a second question in the survey having a second question type, the second question being composed to be presented on the first distribution channel. Additionally, the method1000involves an act1008of allowing the second question to be presented on the second distribution channel without being recomposed. In particular, the act1008may involve allowing the second question to be presented on the second distribution channel without being recomposed based on the second question being identified as the second question type. For instance, the act1008may involve the survey system sending a survey question via text message without recomposing the survey question. The method1000also involves an act1010of providing the recomposed first survey question and the second survey question via the second distribution channel. In particular, the act1010may involve providing, to a client device104associated with a respondent, the recomposed first survey question in a first communication via the second distribution channel and the second survey question in a second communication via the second distribution channel. For example, the first communication and the second communication may each include a text message. In addition, the method1000involves an act1012of receiving a response to the first recomposed survey question and the second survey question.
In particular, the act1012may involve receiving, from the client device and via the second distribution channel, a response to the first recomposed survey question and a response to the second survey question. For example, the act1012may involve receiving text messages from the client device104that respond to the first recomposed survey question and the second survey question. Further, the method1000involves an act1014of determining a first answer to the first survey question and a second answer to the second survey question. In particular, the act1014may involve, based on the received response to the first recomposed survey question and the received response to the second survey question, determining a first answer to the first survey question and a second answer to the second survey question. In some example embodiments, the act1014may involve identifying an indication of the first answer in the response to the first recomposed survey question, determining that the indication of the first answer corresponds to the first answer to the first survey question, and identifying the first answer to the first survey question. Further, the act1014may involve identifying the response to the second survey question as the second answer. The method1000involves an act1016of updating results corresponding to the survey with the first answer and the second answer. In particular, the act1016may involve updating results corresponding to the survey with the first answer to the first survey question and the second answer to the second survey question. In one or more embodiments, the method1000may also involve providing, to the client device104associated with the respondent and via the second distribution channel, an option to respond to the first question via the first distribution channel.
For example, the method1000may involve providing a link in a recomposed survey question sent to a respondent via text message to complete the survey or the recomposed survey question via a webpage. Similarly, the method1000may involve providing a link in a survey question provided to a respondent via an online survey to complete the survey or the survey question via text message. Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in additional detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. In particular, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer-readable medium and executable by one or more computing devices (e.g., any of the media content access devices described herein). In general, a processor (e.g., a microprocessor) receives instructions, from a non-transitory computer-readable medium, (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein. Computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are non-transitory computer-readable storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: non-transitory computer-readable storage media (devices) and transmission media. 
Non-transitory computer-readable storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to non-transitory computer-readable storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system.
Thus, it should be understood that non-transitory computer-readable storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media. Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. In some embodiments, computer-executable instructions are executed on a general-purpose computer to turn the general-purpose computer into a special purpose computer implementing elements of the disclosure. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. 
In a distributed system environment, program modules may be located in both local and remote memory storage devices. Embodiments of the present disclosure can also be implemented in cloud computing environments. In this description, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly. A cloud-computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud-computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the claims, a “cloud-computing environment” is an environment in which cloud computing is employed. FIG.11illustrates a block diagram of exemplary computing device1100that may be configured to perform one or more of the processes described above. One will appreciate that one or more computing devices such as the computing device1100may implement the survey system102and/or client device104described above. As shown byFIG.11, the computing device1100can comprise a processor1102, a memory1104, a storage device1106, an I/O interface1108, and a communication interface1110, which may be communicatively coupled by way of a communication infrastructure1112. 
While an exemplary computing device 1100 is shown in FIG. 11, the components illustrated in FIG. 11 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Furthermore, in certain embodiments, the computing device 1100 can include fewer components than those shown in FIG. 11. Components of the computing device 1100 shown in FIG. 11 will now be described in additional detail.

In one or more embodiments, the processor 1102 includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions, the processor 1102 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 1104, or the storage device 1106 and decode and execute them. In one or more embodiments, the processor 1102 may include one or more internal caches for data, instructions, or addresses. As an example and not by way of limitation, the processor 1102 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in the memory 1104 or the storage 1106.

The memory 1104 may be used for storing data, metadata, and programs for execution by the processor(s). The memory 1104 may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage. The memory 1104 may be internal or distributed memory.

The storage device 1106 includes storage for storing data or instructions. As an example and not by way of limitation, the storage device 1106 can comprise a non-transitory storage medium described above.
The storage device 1106 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The storage device 1106 may include removable or non-removable (or fixed) media, where appropriate. The storage device 1106 may be internal or external to the computing device 1100. In one or more embodiments, the storage device 1106 is non-volatile, solid-state memory. In other embodiments, the storage device 1106 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.

The I/O interface 1108 allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from the computing device 1100. The I/O interface 1108 may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, a network interface, a modem, other known I/O devices, or a combination of such I/O interfaces. The I/O interface 1108 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O interface 1108 is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.

The communication interface 1110 can include hardware, software, or both.
In any event, the communication interface 1110 can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device 1100 and one or more other computing devices or networks. As an example and not by way of limitation, the communication interface 1110 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. Additionally or alternatively, the communication interface 1110 may facilitate communications with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet, or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the communication interface 1110 may facilitate communications with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network, or a combination thereof. Additionally, the communication interface 1110 may facilitate communications using various communication protocols.
Examples of communication protocols that may be used include, but are not limited to, data transmission media, communications devices, Transmission Control Protocol (“TCP”), Internet Protocol (“IP”), File Transfer Protocol (“FTP”), Telnet, Hypertext Transfer Protocol (“HTTP”), Hypertext Transfer Protocol Secure (“HTTPS”), Session Initiation Protocol (“SIP”), Simple Object Access Protocol (“SOAP”), Extensible Mark-up Language (“XML”) and variations thereof, Simple Mail Transfer Protocol (“SMTP”), Real-Time Transport Protocol (“RTP”), User Datagram Protocol (“UDP”), Global System for Mobile Communications (“GSM”) technologies, Code Division Multiple Access (“CDMA”) technologies, Time Division Multiple Access (“TDMA”) technologies, Short Message Service (“SMS”), Multimedia Message Service (“MMS”), radio frequency (“RF”) signaling technologies, Long Term Evolution (“LTE”) technologies, wireless communication technologies, in-band and out-of-band signaling technologies, and other suitable communications networks and technologies.

The communication infrastructure 1112 may include hardware, software, or both that couples components of the computing device 1100 to each other. As an example and not by way of limitation, the communication infrastructure 1112 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus, or a combination thereof.

FIG. 12 illustrates an example network environment 1200 of a survey system.
Network environment 1200 includes a client system 1206 and a survey system 1222 connected to each other by a network 1204. Although FIG. 12 illustrates a particular arrangement of client system 1206, survey system 1222, and network 1204, this disclosure contemplates any suitable arrangement of client system 1206, survey system 1222, and network 1204. As an example and not by way of limitation, two or more of client system 1206 and survey system 1222 may be connected to each other directly, bypassing network 1204. As another example, two or more of client system 1206 and survey system 1222 may be physically or logically co-located with each other in whole or in part. Moreover, although FIG. 12 illustrates a particular number of client systems 1206, survey systems 1222, and networks 1204, this disclosure contemplates any suitable number of client systems 1206, survey systems 1222, and networks 1204. As an example and not by way of limitation, network environment 1200 may include multiple client systems 1206, survey systems 1222, and networks 1204.

This disclosure contemplates any suitable network 1204. As an example and not by way of limitation, one or more portions of network 1204 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. Network 1204 may include one or more networks 1204.

Links may connect client system 1206 and survey system 1222 to communication network 1204 or to each other. This disclosure contemplates any suitable links.
In particular embodiments, one or more links include one or more wireline (such as, for example, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as, for example, Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as, for example, Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link, or a combination of two or more such links. Links need not necessarily be the same throughout network environment 1200. One or more first links may differ in one or more respects from one or more second links.

In particular embodiments, client system 1206 may be an electronic device including hardware, software, or embedded logic components, or a combination of two or more such components, and capable of carrying out the appropriate functionalities implemented or supported by client system 1206. As an example and not by way of limitation, a client system 1206 may include any of the computing devices discussed above in relation to FIG. 11. A client system 1206 may enable a network user at client system 1206 to access network 1204. A client system 1206 may enable its user to communicate with other users at other client systems 1206. In particular embodiments, client system 1206 may include a web browser, such as MICROSOFT INTERNET EXPLORER, GOOGLE CHROME, or MOZILLA FIREFOX, and may have one or more add-ons, plug-ins, or other extensions, such as TOOLBAR or YAHOO TOOLBAR.
A user at client system 1206 may enter a Uniform Resource Locator (URL) or other address directing the web browser to a particular server (such as a server, or a server associated with a third-party system), and the web browser may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to the server. The server may accept the HTTP request and communicate to client system 1206 one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. Client system 1206 may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such pages may also execute scripts such as, for example and without limitation, those written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup language and scripts such as AJAX (Asynchronous JAVASCRIPT and XML), and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate.

In particular embodiments, survey system 1222 may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, survey system 1222 may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store.
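The URL-to-HTTP exchange described above can be illustrated with a short sketch. The helper below builds the raw HTTP/1.1 GET request a browser would send after a URL is entered; the function name and the minimal header set are assumptions for illustration only (real browsers send many additional headers), not part of the disclosed system.

```python
def http_get_request(host: str, path: str = "/") -> bytes:
    """Build the raw bytes of a minimal HTTP/1.1 GET request.

    A browser directed to http://<host><path> would send a request
    of roughly this shape; the server replies with HTML files that
    the browser then renders as a webpage.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"  # blank line terminates the header section
    ).encode("ascii")

req = http_get_request("example.com", "/index.html")
assert req.startswith(b"GET /index.html HTTP/1.1\r\n")
```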
Survey system 1222 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, survey system 1222 may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. Additionally, a user profile may include financial and billing information of users (e.g., respondents, customers, etc.).

The foregoing specification is described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the disclosure are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. Additional or alternative embodiments may be embodied in other specific forms without departing from the spirit or essential characteristics of the disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
11943319

DETAILED DESCRIPTION

Conventional systems and methods are often not capable of efficiently isolating applications associated with multiple tenants within a multi-tenant computing platform. Conventional systems and methods typically partition the network of a computing platform and corresponding network addresses on a per-customer (e.g., per-tenant) basis by using subnets and firewalls. This increases the complexity and cost of managing applications in a multi-tenant environment across the platform and makes it difficult to scale the platform when the number of customers (e.g., the number of tenants) increases. For example, each customer is assigned its own subnets of network addresses and is responsible for configuring and managing the assigned subnets. In turn, the use of subnets by conventional systems and methods requires the use of security groups around each subnet, including, for example, firewalls, to guarantee the security of customer data communicated across the platform.

In some embodiments, one or more solutions rooted in computer technology overcome one or more problems specifically arising in the realm of computer technology, including that of security of customer data. Some embodiments are directed to computing platforms including hosts connected through a network. More particularly, some embodiments of the present invention provide systems and methods for isolating applications associated with multiple tenants within a computing platform. In some examples, the hosts of the computing platform include virtual servers and/or virtual machines. In certain examples, the computing platforms include a virtual computing environment that provides an operating system and/or an application server for running one or more containers. For example, a container includes a containerized application.
In some examples, one or more containers run on a server or host machine of the computing platform and are associated with particular resources that include CPU, memory, storage, and/or networking capacity. In certain examples, the hosts of the computing platform include physical servers and/or physical machines.

In certain embodiments, systems and methods are configured to isolate applications (e.g., containers) on a per-tenant and per-host basis by assigning to each application (e.g., each container) a unique tenant identification number corresponding to a particular tenant of the computing platform and embedding the unique tenant identification number in a network address of a host running the application (e.g., container). In some examples, the systems and methods are further configured to isolate applications (e.g., containers) associated with different tenants at the data link layer by generating a broadcast domain including the host, assigning the broadcast domain to the unique tenant identification number, and running the applications (e.g., the containers) associated with the unique tenant identification number in the broadcast domain of the host. In certain examples, the broadcast domain associated with the unique tenant identification number is mapped to the network address including the unique tenant identification number.

According to some embodiments, benefits include significant improvements, including, for example, increased efficiency, reduced complexity, and improved scalability, in managing an increased number of tenants across a multi-tenant computing platform. In certain embodiments, other benefits include increased data security for each tenant on a multi-tenant computing platform. In some embodiments, systems and methods are configured to isolate application data from different tenants across a multi-tenant computing platform.
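As a rough sketch of the address-embedding idea described above, the Python snippet below packs a tenant identification number into the subnet-identifier bits of an IPv6 host address and recovers it again. The specific bit layout (a /48 prefix, a 16-bit tenant field at bits 64-79, and a 64-bit host field) is an assumption chosen for illustration, not the layout defined by this disclosure.

```python
import ipaddress

def embed_tenant_id(prefix: str, tenant_id: int, host_id: int) -> ipaddress.IPv6Address:
    """Embed a tenant ID and a host ID into an IPv6 host address.

    Assumed layout: /48 network prefix | 16-bit tenant ID | 64-bit host ID.
    """
    net = ipaddress.IPv6Network(prefix)
    if not 0 <= tenant_id < 2**16:
        raise ValueError("tenant ID must fit in 16 bits")
    base = int(net.network_address)
    return ipaddress.IPv6Address(base | (tenant_id << 64) | host_id)

def tenant_of(addr: ipaddress.IPv6Address) -> int:
    """Recover the tenant ID from bits 64..79 of the address."""
    return (int(addr) >> 64) & 0xFFFF

# Any component seeing this address can read the tenant directly from it.
addr = embed_tenant_id("2001:db8::/48", tenant_id=7, host_id=42)
assert tenant_of(addr) == 7
```

Because the tenant number is part of the address itself, per-tenant filtering can key on address bits rather than on per-customer subnets and firewalls.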
FIG. 1 is a simplified diagram showing a system 100 for isolating applications associated with multiple tenants within a computing platform 102 according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

The system 100 includes the computing platform 102 and a network 104. In some examples, the computing platform 102 includes a plurality of hosts. For example, the plurality of hosts includes hosts 106(1-m), 108(1-n). As an example, hosts 106(1-m) represents hosts 106(1), . . . , 106(m) as shown in FIG. 1, and hosts 108(1-n) represents hosts 108(1), . . . , 108(n) as shown in FIG. 1. As an example, each host of the hosts 106(1-m), 108(1-n) is configured to be connected to other components of the computing platform 102 through the network 104. As an example, each host of the hosts 106(1-m), 108(1-n) is associated with a network address. In one example, each host of the hosts 106(1-m), 108(1-n) is configured to run applications associated with multiple tenants. In certain examples, the computing platform 102 includes one or more networking devices 110(1-N). For example, networking devices 110(1-N) represents networking devices 110(1), . . . , 110(N) as shown in FIG. 1. As an example, each networking device of the one or more networking devices 110(1-N) is configured to be connected through the network 104. In one example, each host of the hosts 106(1-m), 108(1-n) is configured to be connected to one or more networking devices 110(1-N) through the network 104. In certain examples, the network 104 includes at least three networking layers (e.g., a physical layer or layer 1, a data link layer or layer 2, and a network layer or layer 3). For example, the network 104 includes an IPv4 network, an IPv6 network, or any combination thereof. In some examples, the computing platform 102 includes a plurality of racks.
For example, each rack of the plurality of racks includes one or more hosts and one or more networking devices. As an example, the computing platform 102 includes N racks, with the first rack including the networking device 110(1) and the hosts 106(1-m), . . . , and the Nth rack including the networking device 110(N) and the hosts 108(1-n). In certain examples, the networking devices 110(1-N) of the racks include top-of-rack (ToR) switches.

In some embodiments, the computing platform 102 includes a cluster computing platform including clusters of one or more server or host machines (e.g., one or more hosts of the hosts 106(1-m), 108(1-n)). In some examples, the computing platform 102 includes a distributed computing platform that allows the one or more client devices 112(1-M) to distribute applications and/or data over the network 104 to the cluster of servers or host machines (e.g., clusters of the hosts 106(1-m), 108(1-n)). For example, client devices 112(1-M) represents client devices 112(1), . . . , 112(M) as shown in FIG. 1. In certain examples, the computing platform 102 includes a cloud computing platform that allows the one or more client devices 112(1-M) access to remote servers, data storages, networks, devices, applications and/or data resources over the network 104. For example, multiple customers (e.g., multiple tenants) through the one or more client devices 112(1-M) store data at the data storages of the cloud computing platform. In other examples, the computing platform 102 is associated with a platform provider that provides the platform to multiple customers (e.g., multiple tenants). For example, customers (e.g., tenants) of the computing platform 102 include individuals, organizations and/or commercial companies.

In certain embodiments, the one or more servers or host machines (e.g., the one or more hosts of the hosts 106(1-m), 108(1-n)) are divided into one or more regions. For example, a region represents a geographic area that the one or more servers or host machines are located within.
As an example, each region relates to a different geographic area. In other examples, each region of the one or more servers or host machines includes one or more separate zones. For example, each server or host machine within a region is associated with only one zone of the one or more separate zones associated with the region. As an example, each zone within a region is isolated from any other zone within the region. In one example, each zone within a region is connected with any other zone within the region through low-latency links. In some examples, the computing platform 102 is configured to not replicate applications and/or resources across different regions. For example, each region is completely independent from any other region of the computing platform 102.

According to some embodiments, the computing platform 102 includes a container-orchestration platform. In some examples, the container-orchestration platform allows for automated deployment, scaling and/or operations of containers across the platform. For example, the container-orchestration platform employs the containers across the one or more servers or host machines (e.g., one or more hosts of the hosts 106(1-m), 108(1-n)) of the computing platform 102. In some examples, a pod of the computing platform 102 represents a basic scheduling unit of work on the computing platform 102. In certain examples, the pod includes one or more containers. In other examples, one or more pods of the computing platform 102 provide a service to the one or more client devices 112(1-M).

According to certain embodiments, a container of the computing platform 102 includes one or more applications. In some examples, the container also includes data and libraries associated with the one or more applications. For example, the container allows the one or more applications and their associated data and libraries to be co-located on the same server or host machine (e.g., the same host of the hosts 106(1-m), 108(1-n)).
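The region-and-zone model above (each host sits in exactly one zone of exactly one region, and regions are fully independent) can be captured with a small data structure. The names and the sample fleet below are hypothetical, chosen only to illustrate the containment relationship.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    """A host pinned to exactly one zone within exactly one region."""
    name: str
    region: str
    zone: str

def zones_in_region(hosts: list[Host], region: str) -> set[str]:
    """Zones present in one region; zones never span regions."""
    return {h.zone for h in hosts if h.region == region}

# Hypothetical fleet: two zones in one region, one zone in another.
fleet = [
    Host("106-1", "region-east", "zone-a"),
    Host("106-2", "region-east", "zone-b"),
    Host("108-1", "region-west", "zone-a"),
]
assert zones_in_region(fleet, "region-east") == {"zone-a", "zone-b"}
```

Since each region is independent, a placement query like `zones_in_region` never needs to consult any other region's hosts.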
In one example, the container allows the one or more applications and their associated data and libraries to share resources. For example, the shared resources include CPU, memory, storage, and/or networking capacity. As an example, the container represents the lowest level of a micro-service of the computing platform 102. In one example, the micro-service includes the one or more applications, libraries and the applications' dependencies.

In some embodiments, the computing platform 102 includes a fleet controller 114. In some examples, the computing platform 102 includes a fleet catalog 116, a hardware (HW) controller 118, a hardware catalog 120, a control plane application 122, a fleet health component 124, a fleet scheduler 126, a hardware health component 128, and a hardware scheduler 130. For example, the control plane application 122 is configured to schedule and manage applications that run on the hosts 106(1-m), 108(1-n). As an example, the control plane application 122 is configured to manage regions, tenants and node (e.g., host) assignments of the computing platform 102. In certain examples, the control plane application 122 is configured to manage workloads and communications between applications running on the hosts 106(1-m), 108(1-n).

In certain embodiments, the computing platform 102 is configured to provide services to tenants based at least in part on two abstract layers including a fleet layer and a hardware layer. In some examples, the fleet layer includes logical states and entities of components of the computing platform 102. For example, logical entities include a logical entity associated with a cluster of 64 nodes (e.g., hosts). In one example, logical entities include a logical entity associated with three publicly routable IP addresses. As an example, the hardware layer includes actual physical components and resources (e.g., hardware components) of the computing platform 102.
In other examples, the organization of the two abstract levels of the computing platform 102 is symmetrical with respect to the services provided by the computing platform. In some examples, the fleet catalog 116 and the hardware catalog 120 store data and sources of truth relating to the state of the two abstract layers, respectively. In some examples, the fleet controller 114 is configured to actuate an actual logical state of the computing platform 102 that matches a desired logical state stored in the fleet catalog 116. In certain examples, the hardware controller 118 is configured to actuate a physical state of the computing platform 102 that matches a desired physical state stored in the hardware catalog 120. For example, the actual logical state represents a state that corresponds to the actual physical state of the computing platform 102.

According to some embodiments, the fleet controller 114 is configured to receive a request from a client (e.g., a client device) associated with a tenant for running an application on the computing platform 102. For example, each client device of the client devices 112(1-M) is associated with a different customer (e.g., a different tenant) of the multi-tenant computing platform 102. In some examples, the fleet controller 114 is configured to send the received request for storing to the fleet catalog 116. In certain examples, the fleet controller 114 is configured to queue requests received from the client devices 112(1-M) and/or other components of the computing platform 102. For example, the fleet controller 114 is configured to provide a control loop for ensuring that a declared logical state in the fleet catalog 116 is satisfied. In certain examples, the fleet catalog 116 is configured to provide a source of truth for states of resources of the computing platform 102. As an example, states of resources include logical assignment of the hosts 106(1-m), 108(1-n) and their status.
For example, the fleet catalog 116 provides information that associates a tenant with a cluster of the computing platform 102.

According to certain embodiments, the fleet controller 114 is configured to monitor changes and/or updates of the states of resources included in the fleet catalog 116. For example, the fleet controller 114 is configured to retrieve a declared state of a resource from the fleet catalog 116. In some examples, the fleet controller 114 is configured to query the hardware catalog 120 for available hardware components of the computing platform 102. For example, the hardware catalog 120 is configured to provide a source of truth for the hardware components of the computing platform 102 and their state. As an example, states of hardware components of the computing platform 102 include host serial numbers, rack locators, ports, MAC addresses, internet protocol (IP) addresses, host images, host health, and power status of hardware components of the computing platform 102.

In some embodiments, the fleet controller 114 is configured to allocate and assign hardware components (e.g., physical machines and/or hosts 106(1-m), 108(1-n)) to tenants and/or clusters of the computing platform 102. For example, the fleet controller 114 is configured to, in response to successfully allocating the hardware components of the computing platform 102, update the hardware catalog 120 to reflect the allocation of the hardware components. As an example, the fleet controller 114 is configured to send reservations for hardware components to the hardware controller 118. In one example, the fleet controller 114 is configured to map allocated compute nodes (e.g., hosts) to clusters of the computing platform 102. For example, a certificate residing on a compute node (e.g., a host) includes information that associates the compute node with a cluster of the computing platform 102.

In certain embodiments, the hardware controller 118 is configured to monitor state changes in the hardware catalog 120.
In certain examples, the hardware controller 118 is configured to, in response to determining state changes in the hardware catalog 120, actuate the changes in the corresponding hardware components of the computing platform 102. For example, state changes include assignments and/or reservations added by the fleet controller 114 to the hardware catalog 120. As an example, the hardware controller 118 is configured to provide a control loop for ensuring that a declared hardware state in the hardware catalog 120 is satisfied.

According to some embodiments, the hardware controller 118 is configured to, in response to the fleet controller 114 assigning the hardware components (e.g., the physical machines and/or the hosts 106(1-m), 108(1-n)) to tenants and/or clusters of the computing platform 102, configure and/or connect the corresponding hardware components. For example, the hardware controller 118 is configured to provide IP addresses to the hardware components and connect the ToR switches, network interface controllers (NICs) and other components of the computing platform 102 to the hardware components. In some examples, the hardware controller 118 is configured to assign IP addresses on a per-cluster basis. In certain examples, the hardware controller 118 is configured to read the hardware catalog 120 for reservations of compute nodes (e.g., hosts) of the computing platform 102. For example, the hardware controller 118 is configured to communicate to a networking device connected to one or more reserved compute nodes (e.g., hosts) a request for network addresses to be assigned to the one or more reserved compute nodes. As an example, the networking device is configured to, in response to receiving the request, allocate the network addresses and assign the allocated network addresses to the one or more reserved compute nodes (e.g., hosts) connected to the networking device.
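The catalog-driven control loops described above follow a common reconciliation pattern: compare the declared state in a catalog against the observed state and emit the actions needed to converge. The sketch below is a simplified, hypothetical rendition of that pattern, not the platform's actual controller API.

```python
def reconcile(desired: dict[str, str], actual: dict[str, str]) -> list[tuple]:
    """One pass of a controller-style reconciliation loop.

    `desired` maps host name -> declared assignment (from a catalog);
    `actual` maps host name -> observed assignment. Returns the actions
    needed so that the actual state matches the declared state.
    """
    actions = []
    for host, assignment in desired.items():
        if actual.get(host) != assignment:
            actions.append(("assign", host, assignment))  # actuate the declared state
    for host in actual:
        if host not in desired:
            actions.append(("release", host))  # reservation was removed from the catalog
    return actions

catalog = {"host-106-1": "tenant-7", "host-106-2": "tenant-7"}
observed = {"host-106-2": "tenant-7", "host-108-1": "tenant-9"}
assert reconcile(catalog, observed) == [
    ("assign", "host-106-1", "tenant-7"),
    ("release", "host-108-1"),
]
```

Running this pass repeatedly gives the "control loop for ensuring that a declared state is satisfied" behavior: any drift between catalog and hardware is detected and corrected on the next iteration.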
According to certain embodiments, the hardware catalog 120 is configured to store information associated with the allocated network addresses of the one or more reserved compute nodes (e.g., hosts), the networking device connected to the one or more reserved compute nodes, and/or the clusters associated with the one or more reserved compute nodes. In some examples, the hardware catalog 120 is configured to provide the actual hardware configuration of the computing platform 102 and record changes in the hardware configuration relating to the employment of physical servers and the association of services and servers with certain customers (e.g., tenants). For example, the hardware catalog 120 is configured to provide information associated with mapping allocated compute nodes (e.g., allocated hosts) to clusters of the computing platform 102.

In some embodiments, the fleet scheduler 126 is configured to identify one or more hosts of the hosts 106(1-m), 108(1-n) based at least in part on the request for resources by querying the fleet catalog 116 and/or the hardware catalog 120. In some examples, the hardware scheduler 130 is configured to determine a time when hardware components (e.g., physical machines and/or resources) are made available to the hardware controller 118 for employment to satisfy a declared state in the hardware catalog 120.

In some examples, the fleet health component 124 is configured to poll services running on hosts and/or other components of the computing platform 102 based at least in part on entries in the fleet catalog 116. For example, the fleet health component 124, in response to receiving results from the queried services, logs the health and state of the declared resource in the fleet catalog 116. In certain examples, the fleet health component 124 is configured to generate a custom logic for polling services regarding different types of resources. For example, types of resources include non-public types (e.g., resources of billing and logging components).
In other examples, the hardware health component128is configured to update the hardware catalog120regarding the health and the state of hardware components of one or more regions of the computing platform. For example, the health includes information about the hardware components being operational, allocated and/or ready to operate. In some examples, the hardware health component128is configured to poll components of the computing platform102regarding their health and/or state. As an example, the hardware health component128is configured to push hardware changes and/or updates in the hardware catalog120to components of the computing platform102based at least in part on features supported by hardware standards implemented on the computing platform. In certain embodiments, the fleet controller114is configured to read the fleet catalog116to determine changes in the state of resources requested by tenants of the multi-tenant computing platform102. For example, the fleet catalog116includes a request by a tenant for a cluster of 64 hosts in two zones. In some examples, the fleet controller114is configured to, in response to determining changes in the state of resources requested by tenants, request the fleet scheduler126for scheduling the requested resources. In certain examples, the fleet scheduler126is configured to query the fleet catalog116in response to receiving a request for resources from the fleet controller114. For example, the fleet scheduler126is configured to send a reply to the fleet controller114in response to querying the fleet catalog116. As an example, the reply includes a determination whether all the requested resources or a percentage of the requested resources are available for employment. FIG.2is a simplified diagram showing the system100for isolating applications associated with multiple tenants within the computing platform102according to one embodiment of the present invention. 
This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the fleet controller114is configured to receive a request from the client (e.g., the client device1121) associated with tenant2001for running the application202on the computing platform102. In certain examples, the fleet controller114is configured to identify one or more hosts1061, . . .106kof the plurality of hosts (e.g., the hosts1061-m) based at least in part on the request. For example, the fleet controller114is configured to store information associated with the identified hosts1061, . . .106kin the fleet catalog116and/or hardware catalog120for recording the state changes of the identified hosts1061, . . .106k. In some examples, the network104includes one or more broadcast domains of the computing platform102. For example, broadcast domains are isolated from other broadcast domains at the data link layer of the network104. In one example, each broadcast domain includes different hosts of the computing platform102. In certain examples, the broadcast domains are associated with different tenants of the multi-tenant computing platform102. For example, each broadcast domain is associated with one tenant. As an example, the one tenant associated with a broadcast domain is different from tenants associated with the other broadcast domains. In some examples, the broadcast domains include virtual local area networks (VLANs). In some embodiments, the fleet controller114is configured to generate the broadcast domain2041including the identified one or more hosts1061, . . .106k. For example, the fleet controller114is configured to store information associated with the broadcast domain2041in the fleet catalog116and/or hardware catalog120. In some examples, the broadcast domain2041is isolated from other broadcast domains in the network104at a data link layer206. 
In certain examples, the networking device1101is connected to other platform components at a network layer208of the computing platform102. In other examples, the broadcast domain2041includes a unique domain identification number (e.g., "1"). In certain embodiments, the fleet controller114is configured to assign to the broadcast domain2041a unique tenant identification number (e.g., "1") corresponding to tenant2001. For example, the fleet controller114is configured to store information associated with the unique tenant identification number (e.g., "1") and the assigned broadcast domain2041in the fleet catalog116. In some examples, the unique tenant identification number (e.g., "1") corresponds to the unique domain identification number (e.g., "1") associated with the broadcast domain2041. For example, the unique tenant identification number includes a bit sequence with a first portion of the bit sequence representing the unique domain identification number. In other examples, the unique tenant identification number (e.g., "1") represents a route identification number. As an example, a second portion of the bit sequence of the unique tenant identification number represents the route identification number. According to some embodiments, the fleet controller114is configured to launch the application202on at least one host of the identified hosts1061, . . .106k. For example, the fleet controller114is configured to store information associated with the at least one host in the fleet catalog116and/or hardware catalog120for recording the state change of the at least one host. According to certain embodiments, the fleet controller114is configured to, in response to launching the application202on the at least one host, assign the unique tenant identification number (e.g., "1") to the launched application. In some examples, the launched application is included in a container associated with tenant2001.
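The bit layout described above, in which a first portion of the unique tenant identification number carries the domain identification number and a second portion carries the route identification number, can be sketched as follows. The 8-bit field widths are illustrative assumptions only; the text does not fix them.

```python
# Sketch of the bit layout described above: the first portion of a unique
# tenant identification number carries the domain identification number and
# the second portion carries the route identification number. The 8-bit
# field widths are illustrative assumptions; the text does not fix them.

DOMAIN_BITS = 8
ROUTE_BITS = 8

def pack_tenant_id(domain_id, route_id):
    """Combine the two portions into one tenant identification number."""
    return (domain_id << ROUTE_BITS) | route_id

def unpack_tenant_id(tenant_id):
    """Recover (domain_id, route_id) from a tenant identification number."""
    return tenant_id >> ROUTE_BITS, tenant_id & ((1 << ROUTE_BITS) - 1)

tenant_id = pack_tenant_id(domain_id=1, route_id=1)
```

Packing both identifiers into one number lets a single value stored with the broadcast domain recover either portion by shifting and masking.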
In certain examples, the fleet controller114is configured to add the unique tenant identification number (e.g., "1") to the network address of the at least one host. In some embodiments, the network addresses of the hosts1061-minclude unique tenant identification numbers associated with one or more tenants of the multi-tenant computing platform102. In some examples, the unique tenant identification numbers relate to unique cluster identification numbers. For example, each network address includes a unique cluster identification number associated with a cluster of hosts of the computing platform102. As an example, each unique tenant identification number represents the unique cluster identification number. In certain examples, the unique cluster identification number is associated with one or more tenants of the multi-tenant computing platform102. In certain embodiments, each network address of the hosts1061-mincludes a region identification number, a rack identification number (e.g., a network device identification number), and/or a virtual interface of the corresponding host associated with the network address. In some examples, each network address includes an IPv6 address. In certain examples, each network address includes a predetermined network prefix. For example, the predetermined network prefix includes a /16 network prefix or a /28 network prefix. As an example, the predetermined network prefix is associated with a region of the computing platform102. In other examples, the predetermined network prefix associated with a region of the computing platform102represents the region identification number. In some examples, the network addresses include a bit sequence corresponding to subnets associated with the networking devices and/or the hosts of the computing platform102. For example, the bit sequence corresponds to 16-bit subnets associated with racks of the computing platform102.
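One way to picture the address structure described above is the following sketch, which packs a 16-bit region prefix, a 16-bit rack subnet, a 32-bit tenant identification field, and a 64-bit interface identifier into a 128-bit IPv6 address. The field widths are assumptions chosen from options the text mentions (a /16 prefix, 16-bit rack subnets), not a definitive encoding.

```python
# One plausible 128-bit layout for the IPv6 host addresses described above:
# a 16-bit region prefix, a 16-bit rack (networking-device) subnet, a 32-bit
# tenant identification field, and a 64-bit interface identifier. The field
# widths are assumptions drawn from options the text mentions, not a
# definitive encoding.

import ipaddress

def build_host_address(region, rack, tenant_id, interface_id):
    value = (region << 112) | (rack << 96) | (tenant_id << 64) | interface_id
    return ipaddress.IPv6Address(value)

def tenant_from_address(addr):
    """Extract the embedded tenant identification number."""
    return (int(addr) >> 64) & 0xFFFFFFFF

addr = build_host_address(region=0x2001, rack=0x0001, tenant_id=1,
                          interface_id=0x1)
# str(addr) == "2001:1:0:1::1"
```

Because the tenant field occupies fixed bit positions, any component holding only the address can recover the tenant identification number with a shift and mask.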
According to some embodiments, the network addresses of the hosts1061-minclude a bit sequence associated with the unique tenant identification number. For example, each network address includes a 20-bit sequence or a 32-bit sequence of the network address for embedding the unique tenant identification number. As an example, each pod running on the hosts1061-mis configured to use the bit sequence to identify the tenant associated with the unique tenant identification number. In some examples, each network address includes a bit sequence used to identify the virtual interface of the corresponding host for a particular tenant and for a particular networking device (e.g., a particular rack). For example, the bit sequence includes a 64-bit sequence. As an example, the bit sequence is configured to be used for assigning IP addresses of the host's virtual interface, which are generated, for example, by stateless address autoconfiguration (SLAAC). According to certain embodiments, the fleet controller114is configured to send the network address of the at least one host of the identified one or more hosts1061, . . .106kto the client (e.g., the client device1121) associated with tenant2001. FIG.3is a simplified diagram showing the system100for isolating applications associated with multiple tenants within the computing platform102according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the fleet controller114is configured to generate the broadcast domains3001and3021including the identified hosts1061, . . . ,106i,106i+1, . . . ,106k. For example, the broadcast domain3001includes the hosts1061, . . . ,106i, and the broadcast domain3021includes the hosts106i+1, . . . ,106k. 
As an example, the fleet controller114is configured to store information associated with the broadcast domains3001and3021in the fleet catalog116and/or hardware catalog120. In some embodiments, the broadcast domains3001and3021are connected through the network layer208of the network104. In some examples, the broadcast domains3001and3021are associated with one or more tenants of the computing platform102. In some examples, the broadcast domains3001and3021are associated with the same tenant of the computing platform102. In other examples, the broadcast domain3001is connected to the networking device1101at the data link layer of the network104. In some examples, the broadcast domain3021is connected to the networking device110Kat the data link layer of the network104. In certain examples, the networking device1101is configured to send data frames from the broadcast domain3001to the broadcast domain3021using the network layer208of the network104. In certain embodiments, the fleet controller114is configured to assign to the broadcast domains3001and3021a unique tenant identification number corresponding to one tenant of the multi-tenant computing platform102. For example, the fleet controller114is configured to store information associated with the unique tenant identification number and the assigned broadcast domains3001and3021in the fleet catalog116. As an example, the broadcast domains3001and3021are associated with the same tenant. In some examples, the unique tenant identification number corresponds to the unique domain identification numbers (e.g., "1") associated with the broadcast domains3001and3021. For example, the unique tenant identification number includes a bit sequence with a first portion of the bit sequence representing the unique domain identification number associated with the broadcast domains3001and3021.
In certain examples, the broadcast domains3001and3021include a virtual local area network (VLAN) extending over the network104through the network layer208. According to some embodiments, networking devices of the computing platform102are configured to provide, at the data link layer206of the network104, layer2isolation on a per-tenant basis by assigning each tenant to a separate VLAN. For example, networking devices1101, . . . ,110K(e.g., ToR ports) of the network104that are connecting down to the compute nodes (e.g., the hosts) include 802.1q trunks for carrying multiple VLANs. As an example, each compute node (e.g., each host) includes at least two VLANs with one VLAN for the control plane application and the other VLAN for the container associated with the primary tenant running on the compute node (e.g., the host). In some examples, the networking devices1101and110Kare configured to tag received network data for sending across broadcast domains of the computing platform102. FIG.4is a simplified diagram showing the system100for isolating applications associated with multiple tenants within the computing platform102according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In some examples, the host1061includes a container runtime component400. In certain examples, the container runtime component400is configured to manage running the containers4021and4022. For example, the containers4021and4022are associated with the pod404running on the host1061. As an example, each of the containers4021and4022includes a container network interface (CNI) and a container runtime interface (CRI). In some examples, the container4021and the client device1121are associated with tenant4061. In certain examples, the container4022and the client device1122are associated with tenant4062.
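The per-tenant layer-2 isolation described above can be sketched as follows: each 802.1q trunk port connecting down to a compute node carries one VLAN for the control plane application and one VLAN for the primary tenant's container. The VLAN numbers are illustrative assumptions.

```python
# Sketch of the per-tenant layer-2 isolation described above: each trunk
# port connecting down to a compute node carries one VLAN for the control
# plane application and one VLAN for the primary tenant's container. The
# VLAN numbers are illustrative assumptions.

CONTROL_PLANE_VLAN = 100

def trunk_config(host, tenant_vlan):
    """VLANs carried on the 802.1q trunk port facing one compute node."""
    return {"host": host, "trunk_vlans": [CONTROL_PLANE_VLAN, tenant_vlan]}

port_a = trunk_config("host-1", tenant_vlan=101)  # tenant 1
port_b = trunk_config("host-2", tenant_vlan=102)  # tenant 2
# Distinct tenant VLANs keep the two tenants isolated at the data link layer.
```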
In some embodiments, the container network interfaces of the containers4021and4022are configured to set up the host-level network layer of the host1061. In some examples, the container network interfaces of the containers4021and4022are configured to generate network interfaces for launching the containers4021and4022on the host1061by assigning network addresses to the network interfaces. In certain examples, the container network interfaces of the containers4021and4022are configured to generate the broadcast domains (e.g., the VLANs) for each container that map to the unique tenant identification number associated with the containers, respectively. For example, the container network interface of the container4021is configured to generate the broadcast domain (e.g., the VLAN) for the container4021that maps to the unique tenant identification number (e.g., "1") associated with the container4021. As an example, the unique tenant identification number (e.g., "1") associated with the container4021corresponds to tenant4061. In another example, the container network interface of the container4022is configured to generate the broadcast domain (e.g., the VLAN) for the container4022that maps to the unique tenant identification number (e.g., "2") associated with the container4022. As an example, the unique tenant identification number (e.g., "2") associated with the container4022corresponds to tenant4062. In some examples, each container network interface of the containers4021and4022is configured to set up a virtual interface of the host1061for the pod404running on the host1061. In certain embodiments, the container network interfaces of containers are configured to set up a pod-specific network for the corresponding pod at runtime of the pod. For example, the pod running on a host maps to the network address of the host. As an example, the container network interfaces are configured to set up each pod network with the corresponding VLAN.
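A hypothetical sketch of the container network interface setup described above: at container launch, a virtual interface is created on the host and attached to the broadcast domain (VLAN) that maps to the container's unique tenant identification number. The tenant-to-VLAN mapping rule and the interface naming are assumptions, not details given in the text.

```python
# Hypothetical sketch of the container network interface setup described
# above: at container launch, a virtual interface is created on the host and
# attached to the broadcast domain (VLAN) mapped to the container's unique
# tenant identification number. The mapping rule and names are assumptions.

VLAN_BASE = 100  # assumed offset: tenant n maps to VLAN 100 + n

def setup_container_network(container_name, tenant_id):
    return {
        "container": container_name,
        "vlan": VLAN_BASE + tenant_id,
        # Per-tenant virtual ethernet interface for the pod on the host.
        "veth": f"veth-{container_name}-t{tenant_id}",
    }

iface_1 = setup_container_network("container-4021", tenant_id=1)
iface_2 = setup_container_network("container-4022", tenant_id=2)
```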
In some examples, container network interfaces are configured to use annotations to the network addresses associated with the corresponding pod to map a virtual ethernet interface to the corresponding VLANs. In certain examples, the container network interface is configured to generate an interface list, IP configurations assigned to the interface, IPv6 addresses, and an internal domain name system. According to some embodiments, the container runtime component400is configured to receive the requests4081and4082from the client devices1121and1122, respectively. For example, the container runtime component400is configured to receive the requests4081and4082via an API server of the computing platform102. In some examples, the API server delegates authentication and authorization of received client requests to an authentication and authorization component of the computing platform102for evaluating the client requests and access requirements and for granting access of the clients associated with the client requests to applications running on hosts of the computing platform102. According to certain embodiments, the container runtime component400is configured to send the requests4081and4082to the containers4021and4022based at least in part on the unique tenant identification number associated with each request, respectively. For example, the container runtime component400is configured to send the request4081to the container4021based at least in part on the unique tenant identification number (e.g., "1") associated with the request4081. As an example, the container runtime component400is configured to send the request4082to the container4022based at least in part on the unique tenant identification number (e.g., "2") associated with the request4082. In some embodiments, the container runtime component400is configured to send the requests4081and4082to the containers4021and4022based at least in part on the network address associated with each request, respectively.
For example, each network address associated with the requests4081and4082includes the corresponding unique tenant identification number associated with the client devices1121and1122, respectively. As an example, the network address associated with the request4081includes the unique tenant identification number (e.g., "1") that is associated with the client device1121and relates to tenant4061. In another example, the network address associated with the request4082includes the unique tenant identification number (e.g., "2") that is associated with the client device1122and relates to tenant4062. In other examples, the container runtime component400is configured to isolate client requests from each other based at least in part on the tenants associated with each client request. For example, the container runtime component400is configured to extract the unique tenant identification number from the network address associated with a received client request and forward the client request to the container associated with the extracted unique tenant identification number. FIG.5is a simplified diagram showing a method for isolating applications associated with multiple tenants within a computing platform according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method500includes processes502-516that are performed using one or more processors. Although the above has been shown using a selected group of processes for the method, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted in addition to those noted above. Depending upon the embodiment, the sequence of processes may be interchanged, and/or some processes may be replaced with others.
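The request isolation described above can be sketched as follows: the container runtime component extracts the unique tenant identification number embedded in a request's source network address and forwards the request only to the container registered for that tenant. The assumed layout places a 32-bit tenant field at bits 64 through 95 of an IPv6 address; this is an illustrative choice and is not mandated by the text.

```python
# Sketch of the request isolation described above: the container runtime
# component extracts the unique tenant identification number embedded in a
# request's source network address and forwards the request only to the
# container registered for that tenant. The assumed layout places a 32-bit
# tenant field at bits 64-95 of an IPv6 address; this is illustrative only.

import ipaddress

def tenant_of(addr_str):
    return (int(ipaddress.IPv6Address(addr_str)) >> 64) & 0xFFFFFFFF

def dispatch(request, containers_by_tenant):
    tenant_id = tenant_of(request["source_address"])
    containers_by_tenant[tenant_id]["inbox"].append(request["payload"])
    return tenant_id

containers = {1: {"name": "container-4021", "inbox": []},
              2: {"name": "container-4022", "inbox": []}}
dispatch({"source_address": "2001:1:0:1::1", "payload": "req-4081"}, containers)
dispatch({"source_address": "2001:1:0:2::1", "payload": "req-4082"}, containers)
```

Each request reaches only the container whose tenant matches the identifier embedded in its source address, so requests from different tenants never share an inbox.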
In some embodiments, some or all processes (e.g., steps) of the method500are performed by the system100. In certain examples, some or all processes (e.g., steps) of the method500are performed by a computer and/or a processor directed by a code. For example, a computer includes a server computer and/or a client computer (e.g., a personal computer). In some examples, some or all processes (e.g., steps) of the method500are performed according to instructions included by a non-transitory computer-readable medium (e.g., in a computer program product, such as a computer-readable flash drive). For example, a non-transitory computer-readable medium is readable by a computer including a server computer and/or a client computer (e.g., a personal computer, and/or a server rack). As an example, instructions included by a non-transitory computer-readable medium are executed by a processor including a processor of a server computer and/or a processor of a client computer (e.g., a personal computer, and/or server rack). In some embodiments, at the process502, a request is received from a client for running an application on a computing platform. The client is associated with a tenant of the computing platform. The computing platform includes a plurality of hosts connected through a network. Each host is associated with a network address and configured to run applications associated with multiple tenants. At the process504, one or more hosts of the plurality of hosts are identified based at least in part on the request. At the process506, one or more broadcast domains including the identified one or more hosts are generated. The one or more broadcast domains are isolated in the network at the data link layer. At the process508, the one or more broadcast domains are assigned a unique tenant identification number corresponding to the tenant. At the process510, the application is launched on at least one host of the identified one or more hosts.
At the process512, in response to launching the application on the at least one host, the unique tenant identification number is assigned to the launched application. At the process514, the unique tenant identification number is added to the network address of the at least one host. At the process516, the network address of the at least one host is sent to the client associated with the tenant. FIG.6is a simplified diagram showing a computing system for implementing a system for isolating applications associated with multiple tenants within a computing platform according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The computing system600includes a bus602or other communication mechanism for communicating information, a processor604, a display606, a cursor control component608, an input device610, a main memory612, a read only memory (ROM)614, a storage unit616, and a network interface618. In some embodiments, some or all processes (e.g., steps) of the method500are performed by the computing system600. In some examples, the bus602is coupled to the processor604, the display606, the cursor control component608, the input device610, the main memory612, the read only memory (ROM)614, the storage unit616, and/or the network interface618. In certain examples, the network interface618is coupled to a network620. For example, the processor604includes one or more general purpose microprocessors. In some examples, the main memory612(e.g., random access memory (RAM), cache and/or other dynamic storage devices) is configured to store information and instructions to be executed by the processor604. In certain examples, the main memory612is configured to store temporary variables or other intermediate information during execution of instructions to be executed by processor604.
For example, the instructions, when stored in the storage unit616accessible to processor604, render the computing system600into a special-purpose machine that is customized to perform the operations specified in the instructions. In some examples, the ROM614is configured to store static information and instructions for the processor604. In certain examples, the storage unit616(e.g., a magnetic disk, optical disk, or flash drive) is configured to store information and instructions. In some embodiments, the display606(e.g., a cathode ray tube (CRT), an LCD display, or a touch screen) is configured to display information to a user of the computing system600. In some examples, the input device610(e.g., alphanumeric and other keys) is configured to communicate information and commands to the processor604. For example, the cursor control608(e.g., a mouse, a trackball, or cursor direction keys) is configured to communicate additional information and commands (e.g., to control cursor movements on the display606) to the processor604. According to some embodiments, a method is provided for isolating applications associated with multiple tenants within a computing platform. For example, a method includes receiving a request from a client associated with a tenant for running an application on a computing platform. The computing platform includes a plurality of hosts connected through a network. Each host is associated with a network address and configured to run applications associated with multiple tenants. The method further includes identifying one or more hosts of the plurality of hosts based at least in part on the request. The method further includes generating one or more broadcast domains including the identified one or more hosts. The one or more broadcast domains are isolated in the network at a data link layer. The method further includes assigning to the one or more broadcast domains a unique tenant identification number corresponding to the tenant.
The method further includes launching the application on at least one host of the identified one or more hosts. In response to launching the application on the at least one host: the unique tenant identification number is assigned to the launched application; the unique tenant identification number is added to the network address of the at least one host; and the network address of the at least one host is sent to the client associated with the tenant. The method is performed using one or more processors. For example, the method is implemented according to at leastFIG.1,FIG.2,FIG.3,FIG.4and/orFIG.5. In some examples, the network address further includes a unique cluster identification number. The unique cluster identification number is associated with a cluster of the computing platform. The cluster is associated with the tenant. In certain examples, the unique tenant identification number includes the unique cluster identification number. In other examples, the network address includes a plurality of bit sequences. One bit sequence of the plurality of bit sequences includes the unique tenant identification number. In certain examples, the plurality of bit sequences of the network address includes at least 128 bits. In some examples, the broadcast domains include virtual local area networks. In certain examples, the launched application is included in a container. The container is associated with the unique tenant identification number. For example, the container is included in a pod running on the at least one host. The pod includes one or more containers. Each container of the one or more containers is associated with one tenant of the computing platform. As an example, each container of the one or more containers is associated with a different tenant of the computing platform. In one example, the pod maps to the network address. 
In some examples, the network address further includes a region identification number, a network device identification number, or a virtual interface of the at least one host. In certain examples, the one or more broadcast domains map to the network address. According to certain embodiments, a system for isolating applications associated with multiple tenants within a computing platform includes a plurality of hosts connected through a network and a fleet controller. Each host is associated with a network address and configured to run applications associated with multiple tenants on a computing platform. The fleet controller is configured to, in response to receiving a first request from a client associated with a tenant for running an application on the computing platform, identify one or more hosts of the plurality of hosts based at least in part on the request. The fleet controller is further configured to generate one or more broadcast domains including the identified one or more hosts. The one or more broadcast domains are isolated in the network at a data link layer. The fleet controller is further configured to assign to the one or more broadcast domains a unique tenant identification number corresponding to the tenant. The fleet controller is further configured to send a second request to a scheduler for launching the application on at least one host of the identified one or more hosts. The fleet controller is further configured to, in response to receiving confirmation from the scheduler of the application being launched on the at least one host: assign the unique tenant identification number to the launched application; add the unique tenant identification number to the network address of the at least one host; and send the network address of the at least one host to the client associated with the tenant. For example, the system is implemented according to at leastFIG.1,FIG.2,FIG.3, and/orFIG.4.
In some examples, the network address further includes a unique cluster identification number. The unique cluster identification number is associated with a cluster of the computing platform. The cluster is associated with the tenant. In certain examples, the network address includes a plurality of bit sequences. One bit sequence of the plurality of bit sequences includes the unique tenant identification number. In other examples, the plurality of bit sequences of the network address includes at least 128 bits. In some examples, the broadcast domains include virtual local area networks. According to some embodiments, a system for isolating applications associated with multiple tenants within a computing platform includes a client associated with a tenant and configured to request running an application on a computing platform. The computing platform includes a plurality of hosts connected through a network. Each host is associated with a network address and is configured to run applications associated with multiple tenants. The client is further configured to send a request for running the application on the computing platform. The client is further configured to, in response to sending the request for running the application on the computing platform, receive the network address of at least one host of the plurality of hosts. One or more hosts of the plurality of hosts are identified based at least in part on the request. The identified one or more hosts include the at least one host. One or more broadcast domains are generated to include the identified one or more hosts. The one or more broadcast domains are isolated in the network at a data link layer. The one or more broadcast domains are assigned a unique tenant identification number corresponding to the tenant. The application is launched on the at least one host of the identified one or more hosts. The launched application is assigned the unique tenant identification number.
The unique tenant identification number is added to the network address of the at least one host. For example, the system is implemented according to at leastFIG.1,FIG.2,FIG.3, and/orFIG.4. In some examples, the network address further includes a unique cluster identification number. The unique cluster identification number is associated with a cluster of the computing platform. The cluster is associated with the tenant. In certain examples, the network address includes a plurality of bit sequences. One bit sequence of the plurality of bit sequences includes the unique tenant identification number. For example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented using one or more software components, one or more hardware components, and/or one or more combinations of software and hardware components. In another example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented in one or more circuits, such as one or more analog circuits and/or one or more digital circuits. In yet another example, while the embodiments described above refer to particular features, the scope of the present invention also includes embodiments having different combinations of features and embodiments that do not include all of the described features. In yet another example, various embodiments and/or examples of the present invention can be combined. Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. 
The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to perform the methods and systems described herein. The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, EEPROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, application programming interface, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program. The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, DVD, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein. The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. 
It is also noted that a module or processor includes a unit of code that performs a software operation and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand. The computing system can include client devices and servers. A client device and server are generally remote from each other and typically interact through a communication network. The relationship of client device and server arises by virtue of computer programs running on the respective computers and having a client device-server relationship to each other. This specification contains many specifics for particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, one or more features from a combination can in some cases be removed from the combination, and a combination may, for example, be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. 
Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.
11943320 | DETAILED DESCRIPTION Methods, systems, and computer readable media for managing content items having multiple resolutions are provided. Users may be able to interact and view content items, such as images, on a user device in a first resolution while a second resolution version of the content item or items downloads, or attempts to download, in the background. As used herein, a “background” process may be any process that is not directly related to the user's current interaction with the user device. It is noted that the terms “device” and “content management system” are used herein to refer broadly to a wide variety of storage providers and data management service providers, electronic devices and user devices. It is also noted that the term “content item” is used herein to refer broadly to a wide variety of digital data, documents, text content items, audio content items, video content items, portions of content items, and/or other types of data. Content items may also include files, folders or other mechanisms of grouping content items together with different behaviors, such as collections of content items, playlists, albums, etc. The term “user” is also used herein broadly, and may correspond to a single user, multiple users, authorized accounts, an application or program operating automatically on behalf of, or at the behest of a person, or any other user type, or any combination thereof. The terms “gesture” and “gestures” are also used herein broadly, and may correspond to one or more motions, movements, hoverings, inferences, signs, or any other such physical interactions with one or more sensors, or any combination thereof, including vocal commands or interpretations of eye movements based on retinal tracking.
The term “continuous real-time image” is also used herein broadly, and may correspond to live images captured via one or more image capturing components, continuously captured images, recorded images, or any other type of image that may be captured via an image capturing component, or any combination thereof. The present invention may take form in various components and arrangements of components, and in various techniques, methods, or procedures and arrangements of steps. The referenced drawings are only for the purpose of illustrating embodiments, and are not to be construed as limiting the present invention. Various inventive features are described below that can each be used independently of one another or in combination with other features. FIG. 1 shows an exemplary system in accordance with various embodiments. System 100 may include user devices 102a and 102b, which may communicate with content management system 104 across network 106. Persons of ordinary skill in the art will recognize that although only two user devices are shown within system 100, any number of user devices may interact with content management system 104 and/or network 106, and the aforementioned illustration is merely exemplary. Network 106 may support any number of protocols, including, but not limited to, Transmission Control Protocol and Internet Protocol (“TCP/IP”), Hypertext Transfer Protocol (“HTTP”), and/or wireless application protocol (“WAP”). For example, user device 102a and user device 102b (collectively 102) may communicate with content management system 104 using TCP/IP, and, at a higher level, use a web browser to communicate with a web server at content management system 104 using HTTP. A variety of user devices 102 may communicate with content management system 104, including, but not limited to, desktop computers, mobile computers, mobile communication devices (e.g., mobile phones, smart phones, tablets), televisions, set-top boxes, and/or any other network enabled device.
Various types of user devices may include, but are not limited to, smart phones, mobile phones, tablet computers, personal digital assistants (PDAs), laptop computers, digital music players, and/or any other type of user device capable of including a touch-sensing display interface. Various touch-sensing display interfaces may include, but are not limited to, liquid crystal displays (LCD), monochrome displays, color graphics adapter (CGA) displays, enhanced graphics adapter (EGA) displays, variable-graphics array (VGA) displays, or any other display, or any combination thereof. In some embodiments, the touch-sensing display interface may include a multi-touch panel coupled to one or more processors to receive and detect gestures. Multi-touch panels, for example, may include capacitive sensing mediums having one or more row traces and/or driving line traces, and one or more column traces and/or sensing lines. Although multi-touch panels are described herein as one example of a touch-sensing display interface, persons of ordinary skill in the art will recognize that any touch-sensing display interface may be used. Furthermore, various types of user devices may, in some embodiments, include one or more image capturing components. For example, user devices 102 may include a front-facing camera and/or a rear facing camera. Content management system 100 may allow a user with an authenticated account to store content, as well as perform management tasks, such as retrieve, modify, browse, synchronize, and/or share content with other accounts. In some embodiments, a counter-part user interface (e.g., stand-alone application, client application, etc.) on user devices 102 may be implemented using a content management interface module to allow a user to perform functions offered by modules of content management system 104. A more detailed description of system 100 is presented below, with reference to FIG. 11.
In some embodiments, user interface 200 may display a set of images, such as images 202. Images 202 may, for example, be displayed in a grid view, which may include rows and columns of images. Persons of ordinary skill in the art will recognize that any number of rows and columns may be used, and any number of images may be displayed within set 202. For example, set 202 may include nine (9), sixteen (16), one hundred (100), or one thousand (1,000) images, or any other amount of images. In some embodiments, the displayed set of images may include a certain amount of fully displayed images and some images that have only a portion displayed. The portion of images may correspond to images that are displayed within a proximate display window to the currently displayed window. In some embodiments, these images may be viewed in full by the user performing one or more actions, such as a swipe, a click, or a scroll. The images included within set 202 may be presented in any suitable format. For example, some or all of the images within set 202 may be high-definition images and/or videos, standard definition images and/or videos, or any other combination thereof. The various formats of each image within set 202 may correspond to the display resolution of the image or images. The display resolution of an image may correspond to the number of pixels in each dimension of the image that may be displayed. In some embodiments, the resolution of the images presented within set 202 may be limited by the display resolution capabilities of the user interface (and thus the display screen displaying the user interface). Various resolutions of the images may include, but are not limited to, standard definition (e.g., 480i, 576i), enhanced definition (e.g., 480p, 576p), high-definition (e.g., 720p, 1080i, 1080p), and/or ultra-high-definition (e.g., 2160p, 4320p, 8640p), or any other resolution. In some embodiments, a user may select one or more images from set 202 to view in a full-screen mode.
For example, the user may select image 204 from set 202 using finger 206. Any gesture or combination of gestures may be used to select images. For example, finger 206 may select image 204 by tapping thereon. Selected image 204 may initially be displayed within set 202 in a first resolution, such as a thumbnail resolution (e.g., 75×75 pixels). Persons of ordinary skill in the art will recognize that thumbnail resolution may encompass multiple pixel levels including, but not limited to, 100×100 pixels, 160×160 pixels, 200×200 pixels, or any other combination, permutation, or within the range thereof. Presenting image 204 in a thumbnail resolution may be due, at least in part, to the difficulty in presenting multiple or all images from set 202 in the highest resolution available. For example, if the device does not have enough storage space to store every image included within set 202 in high-definition (e.g., 1080p), set 202 may initially be displayed and stored in a lower resolution (e.g., a thumbnail resolution). Upon selection of image 204, a request to view the image in a full screen mode or a single image view may be sent to a content management system (e.g., content management system 100) across a network (e.g., network 104). For example, while locally the device may only store and display lower resolution images in set 202, a user may have higher resolution versions of the images stored within their account on the content management system. In response to detecting a selection of one or more images from set 202 by the user, the content management system may locate and send a high quality or high resolution version of image 204 to the user device. In some embodiments, the content management system may include its own separate interface 250, which may or may not be viewable to the user. In some embodiments, activities rendered on the content management system may be performed without a physical interaction from the user.
However, for illustrative purposes, content management system interface 250 may be presented herein. Interface 250 may receive the request to display a single image view of image 204 in response to its selection from set 202. Interface 250 (and content management system 100) may then locate high-quality image 254 within the user's account on the content management system. Image 254 may, in some embodiments, be substantially similar to image 204, with the exception that image 254 may be of a higher resolution. In some embodiments, image 254 may be sent to the user device and displayed within user interface 200. Set 202, which initially was displayed within user interface 200, may be replaced by image 254 on user interface 200, and may take up any amount of the display screen. For example, image 254 may be displayed in a “full screen” mode on the user interface, and may occupy all or substantially all of user interface 200. Image 254 may also occupy only a portion of the display space available on user interface 200. In some embodiments, both a portion of set 202 and image 254 may be displayed within user interface 200. In some embodiments, the process of selecting image 204, sending the request to the content management system for the higher resolution version (e.g., image 254), and the presenting of image 254 in the single image view, may all occur at a substantially same time. For example, a user may select image 204 and subsequently be presented with image 254 in the single image view, with a minimal time delay. However, this may depend on a level of connectivity between the user device and the content management system, as transmission of the request and downloading the higher resolution image may depend on the network connections and/or other performance characteristics. In some embodiments, this may be resolved by locally caching one or more of the images from set 202, thereby reducing any time delay between selection of image 204 and viewing of image 254.
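The select-then-upgrade flow described above, including the local caching used to reduce the delay between selecting image 204 and viewing image 254, can be sketched as follows. The class names and the in-memory store are hypothetical stand-ins for the content management system's actual interfaces.

```python
class ContentManagementSystem:
    """Stand-in for the user's account on the content management system."""
    def __init__(self):
        self._high_res = {}            # image id -> high-resolution version

    def store(self, image_id, data):
        self._high_res[image_id] = data

    def fetch_high_res(self, image_id):
        # Returns None when the high-resolution version is unavailable
        # (e.g., no network connectivity in a real deployment).
        return self._high_res.get(image_id)

class UserDevice:
    def __init__(self, cms):
        self.cms = cms
        self.cache = {}                # locally cached high-resolution versions

    def select_image(self, image_id, thumbnail):
        """Show the thumbnail immediately; swap in the high-resolution
        version from the local cache or the content management system."""
        high_res = self.cache.get(image_id)
        if high_res is None:
            high_res = self.cms.fetch_high_res(image_id)
        if high_res is None:
            return thumbnail           # fall back to the first resolution
        self.cache[image_id] = high_res  # reduce delay on reselection
        return high_res
```

Caching the fetched version locally is what lets a second selection of the same image skip the round trip entirely.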
In some embodiments, image 254 and/or other high-quality versions corresponding to the images included within set 202 may be downloaded in the background. For example, as the user interacts with set 202, various high-quality versions of images may download in the background, such as image 254. This may allow the user to view a selected image in the single image view without waiting for the request to be sent to the content management system and the image sent back. In some embodiments, the higher resolution versions of the images may be dynamically prioritized for download to the user device. For example, images that have recently been viewed, recently been stored, and/or recently uploaded, may be prioritized to be downloaded. FIG. 3 shows an illustrative flowchart assigning priority levels to images based on a downloading priority order in accordance with various embodiments. Process 300 may begin at step 302. At step 302, a determination may be made as to whether images selected to be viewed in a single view mode are available. The determination may be performed on the user device, the content management system, or may be split between both (e.g., may begin on the user device and may complete on the content management system). For example, the selected image (e.g., image 204) may not be available to be viewed in the single view mode because of a lack of network connectivity. The single view mode may require or may attempt to obtain a high-quality version of image 204 (e.g., image 254); however, due to a lack of network connectivity between the user device and the content management system, the high-quality image may not be immediately available. If at step 302, access is unavailable, process 300 may proceed to step 304. If access is available, process 300 may proceed to step 308, where a first priority level may be assigned to the one or more selected images.
At step 304, a determination may be made as to whether any additional images located within a same collection as the selected images are available. For example, image 204 may be located within a collection of images, such as set 202. If one or more of the additional images within collection 202 is accessible, then the selected image or images may be assigned a second priority level. If not, process 300 may proceed to step 306. At step 306, a determination may be made as to whether or not an action has been taken with the one or more selected images. For example, if the user has shared, edited, and/or performed any other task to a selected image, that image may be assigned a third priority level. If not, process 300 may proceed back to step 302 and wait for access to become available to the one or more images in the single view mode. In some embodiments, the priority level assigned to the one or more selected images may determine the order for downloading images from the user account on the content management system to the user device. Although the aforementioned example assigns a first, second, and third priority level to selected images, persons of ordinary skill in the art will recognize that any priority level and any assignment of priority levels may be performed based on any characteristic, and the previously described scenarios are merely exemplary. For example, many different paradigms of ordering may be used for dynamic prioritization. In some embodiments, the first priority level may be ranked higher than both the second priority level and the third priority level. For example, the ordering of dynamic prioritization may be the first priority level, the second priority level, and the third priority level. In this scenario, the first priority level is ranked higher, and therefore may download before the items assigned the second priority level.
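The assignment logic of steps 302 through 308 can be rendered as a small function. This is a sketch: the boolean inputs stand in for the availability and action checks described above, and the download queue shown after it assumes the first ordering (first, then second, then third priority level).

```python
def assign_priority(selected_available, collection_available, action_taken):
    """Sketch of process 300: return priority level 1, 2, or 3
    (1 ranked highest), or None to wait and retry step 302."""
    if selected_available:        # step 302 -> step 308: first priority level
        return 1
    if collection_available:      # step 304: second priority level
        return 2
    if action_taken:              # step 306: e.g., shared or edited -> third level
        return 3
    return None                   # loop back to step 302 until access is available

# Downloads then proceed in ascending priority-level order.
queue = sorted([("a", 3), ("b", 1), ("c", 2)], key=lambda item: item[1])
```

A different paradigm of ordering, such as ranking the third level above the second, would only change the sort key, not the assignment function.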
Also in this scenario, the second priority level may be ranked higher than the third priority level, and items assigned the second priority level may download before items assigned the third priority level. As another example, the ordering of the dynamic prioritization may have the first priority level ranked higher than the third priority level, which may be ranked higher than the second priority level. Persons of ordinary skill in the art will recognize that any permutation of priority levels may be implemented, and the aforementioned ordering and assigning of priority levels and rankings is merely exemplary. Furthermore, persons of ordinary skill in the art will also recognize that any number of items may be assigned any number of priority levels, and the use of three priority levels is merely exemplary. Persons of ordinary skill in the art will also recognize that any step from process 300 may be performed on the user device, an additional device, the content management system, or any combination thereof. FIG. 4 shows a schematic illustration of categorizing images within collections on a user device in accordance with various embodiments. Interface 400 may correspond to a particular portion of a content management system, and may include images having associated metadata. In some embodiments, interface 400 may correspond to a display output of a content management system (e.g., content management system 100), if a user were capable of directly accessing and interacting with the content management system. For example, a user may access and attempt to view images stored within their user account on content management system 100. In some embodiments, interface 400 may correspond to a display that the user may be presented with in response to an attempt to view some, or all, of the images stored within their user account. Interface 400 may display images 402 and 404.
Although only two images are included within interface 400, persons of ordinary skill in the art will recognize that any number of images may be presented to the user and/or stored within the user's account, and the use of two images is merely exemplary. In some embodiments, images displayed within interface 400 may include associated metadata. For example, images 402 and 404 may respectively include geographical information 402a and 404a, as well as temporal information 402b and 404b. Geographical information 402a may indicate the location where image 402 was captured (e.g., New York City), whereas temporal information 402b may indicate the time and/or date that image 402 was captured (e.g., Oct. 1, 2013, 12:00 PM). Geographical information 404a may indicate the location where image 404 was captured (e.g., San Francisco), whereas temporal information 404b may indicate the time and/or date that image 404 was captured (e.g., Jul. 4, 2011, 3:00 PM). In some embodiments, date and time handling may be synchronized across multiple time zones in order to provide uniformity for processing the images. The associated metadata may, in some embodiments, also include exchangeable image file format (Exif) data received from the camera. The camera may assign different extensions to images or videos, image types (e.g., panoramic), multi-shots, or any other capture content. In some embodiments, the associated metadata may correspond to recently uploaded, viewed, or shared images by the user. For example, images that have been recently shared may have a shared indicator flag showing the value 1 or True, indicating that those images have been shared. In some embodiments, the associated metadata may include information regarding a number of times a particular image or collection has been viewed. For example, each image may include a viewing history indicating when that image was viewed, how many times the image was viewed, or any other viewing criteria.
As another example, recently shared images may be categorized together in a collection. In some embodiments, images included within the user account may be categorized using the associated metadata, and collected into one or more collections based on the associated metadata. User interface 450 may correspond to a user interface displayed on a user's user device (e.g., devices 102). In some embodiments, user interface 450 may be substantially similar to user interface 200 of FIG. 2, with the exception that user interface 450 may include collections of images. User interface 450 may include collections 460 and 470, which may be formed based on one or more pieces of associated metadata stored within images on the user account. For example, collection 460 may correspond to images from a user's trip to San Francisco, and may include images 462, 464, 468, and 470, each having metadata substantially similar to geographical and/or temporal information 404a and 404b. As another example, collection 470 may correspond to images from the user's trip to New York City, and each image within collection 470 may have substantially similar geographical and/or temporal information 402a and 402b. In some embodiments, the categorization of images within a collection may be based on a user's recent viewing history, sharing history, any recently uploaded images, or content item type. For example, recently viewed images may be included within a specific collection. In some embodiments, the user may request to download one or more images from their user account to their user device. As the image or images download, they may be categorized by the content management system and/or the device based on the metadata associated with the images. For example, any images that include geographical information 404a and/or temporal information 404b may be collected and placed within collection 460 on the user device.
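A minimal sketch of this metadata-driven grouping follows. The dictionary field names, the month-level grouping rule, and the extra image 405 are assumptions for illustration; the location and date values reuse the examples above.

```python
from collections import defaultdict

def build_collections(images):
    """Group images whose geographical and temporal metadata are
    substantially similar (here: same location and same month)."""
    collections = defaultdict(list)
    for image in images:
        key = (image["location"], image["date"][:7])  # e.g., ("San Francisco", "2011-07")
        collections[key].append(image["id"])
    return dict(collections)

photos = [
    {"id": 402, "location": "New York City", "date": "2013-10-01"},  # info 402a/402b
    {"id": 404, "location": "San Francisco", "date": "2011-07-04"},  # info 404a/404b
    {"id": 405, "location": "San Francisco", "date": "2011-07-05"},  # hypothetical
]
grouped = build_collections(photos)
```

The San Francisco images land in one collection and the New York City image in another, mirroring collections 460 and 470.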
In some embodiments, one or more algorithms resident on the content management system, the user device, or split between both, may categorize and collect the images based on similar or substantially similar metadata. Any number of images may be categorized and collected into any number of collections. For example, all images stored within the user account may be categorized; however, only a predefined amount may be sent within one or more collections to the user device. In some embodiments, images stored within the user account may be displayed in a grid of images or a page including multiple grids of images. A grid of images may include any number of images in a low resolution, or a lower resolution than would be used to display an image in a single view mode. For example, in the single image view, an image may have a resolution of 154×154 pixels, whereas an image from within the grid may have a resolution of 75×75 pixels. The resolution values described above are merely exemplary, and persons of ordinary skill in the art will recognize that any pixel value may be used for both the single view mode and the grid. In some embodiments, all images stored within the user account may be categorized and collected into one or more grids, which may be sent to the user device. For example, there may be one thousand (1,000) images stored within the user account, and ten (10) grids including one hundred (100) images may be formed within the user account. Any number of the ten grids may be sent to the user device; however, certain factors (e.g., available storage space) may determine the amount sent at a single time. Collecting images into grids may be extremely beneficial because this may allow a user to see a large quantity of images that they have stored within their user account faster than if the images were to be viewed individually.
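The grid arrangement above, such as one thousand stored images split into ten grids of one hundred, amounts to simple chunking; the helper name below is illustrative, not from the text.

```python
def paginate(image_ids, per_grid=100):
    """Split the account's images into fixed-size grids (pages)."""
    return [image_ids[i:i + per_grid] for i in range(0, len(image_ids), per_grid)]

# 1,000 stored images -> 10 grids of 100 low-resolution thumbnails each.
grids = paginate(list(range(1000)))
```

Only as many of these grids as the device's available storage allows need to be sent at a single time.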
Furthermore, because a grid of images may include images having a lower resolution, the total storage size of the grid may be small, thus making it easier and faster to be viewed on the user device regardless of any network latency or storage constraints. For example, by collecting images in grids in a low resolution, larger amounts of images may be sent to the user device because each grid may include a large number of images all having a low resolution, which in aggregate, may equal one or more images in a single image view. As a particular example, a single grid including nine (9) images having a resolution of 64×64 pixels may require less bandwidth to send to a user device than one (1) 512×512 pixel image. Thus, the user may be able to view and interact with a larger quantity of images faster than they would normally be able to if only a high resolution image were sent. FIG. 5 shows an illustrative diagram of grid views capable of being displayed on a user interface in accordance with various embodiments. User interface 500 may be displayed on a display screen of a user device (e.g., a touch-sensing display interface located on a user device). In some embodiments, user interface 500 may be substantially similar to user interface 400, with the exception that user interface 500 may display various images in a grid view. Each grid may include any number of images, and the number of images may be set by the content management system and/or the user. For example, the system may calculate an amount of available storage space on the user device and, based on the calculation, create a number of grids to be sent to the user device. In some embodiments, the user may select how many images are to be included within a single grid view and/or may select the resolution of the images to be included within the grid.
For example, the user may decide to have one hundred (100) images included in one grid, and based on the number specified by the user, one hundred images of a specific resolution (e.g., 75×75 pixels) may be created in a grid. As another example, the user may determine that the images included within the grid view may have a certain resolution and, based on this determination, an amount of images may be included within a grid based on the size and resolution of each image. User interface 500 may include images 502 in a grid view. Images 502 may be displayed in a 10×10 grid or array (e.g., ten images per row, ten images per column). Although images 502 include a square grid (e.g., equal amount of images per row and column), persons of ordinary skill in the art will recognize that any number of images may be included within the grid, and any amount of images per row or per column may be used. For example, images 502 of the grid may include an array of 20×10 (twenty by ten) images, 5×5 (five by five) images, 15×8 (fifteen by eight) images, or any other collection of images, or any combination thereof. In some embodiments, images included in grid views may be sent in blocks or pages of grid views. For example, images 502 may be sent in one page, along with images 512 and images 522 in separate pages. Images 502 may be displayed within a window currently displayed on user interface 500, for example, while images 512 and 522 may be included in non-current windows that may be displayed within user interface 500. For example, while images 502 may initially be displayed in the current window of user interface 500, one or more user interactions (e.g., a swipe, flick, tap, etc.) may cause images 512 to be displayed within the current window of user interface 500. Images included within one of the non-current windows (e.g., images 512 and 522) may be formatted in a lower resolution than the images displayed within the current window.
For example, image 514 from images 512 of one of the non-current windows may be formatted in a lower resolution than image 504 from images 502 of the currently displayed window. For example, image 504 may have a thumbnail resolution of 75×75 pixels, whereas image 514 may have a resolution of 64×64 pixels. This may allow images included in non-current windows to occupy a smaller amount of storage space. In response to a gesture to transition from displaying a current window to a non-current window, images from the non-current window (e.g., images 512) may be downloaded from the content management system in a higher resolution, such as a thumbnail resolution, for example. In some embodiments, the resolutions of images included within various grid views may be dynamic, and may increase/decrease automatically in response to a detection of one or more gestures. For example, as previously mentioned, in response to a detected gesture, images 512 may be increased to have a higher resolution (e.g., from lower than a thumbnail resolution to a thumbnail resolution), whereas images 502 may have their resolution decreased accordingly. The amount of images displayable within one or more grid views may be dependent on a variety of factors. For example, the user may determine how many images are to be included within a grid based on an amount of available storage space, an amount of total images stored within the user account, and/or a resolution of images displayable within the grid. In some embodiments, the user device and/or the content management system may determine how many images may be viewed within a grid view. For example, the user device may determine that only nine (9) images may be capable of being displayed on the user interface in one (1) grid. FIG. 6 shows a schematic illustration describing a formation of one or more grids in accordance with various embodiments. User interface 600 may display a user's current storage setting on their user device.
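Before turning to the storage details, the resolution promotion and demotion of current and non-current windows described above can be sketched as follows. The pixel values 75×75 and 64×64 reuse the examples above; the pager class itself is a hypothetical construction.

```python
THUMBNAIL = (75, 75)   # resolution for the currently displayed window
REDUCED = (64, 64)     # lower resolution for non-current windows

class GridPager:
    """Holds one resolution per window (page) of grid images."""
    def __init__(self, num_pages):
        self.current = 0
        self.resolutions = [REDUCED] * num_pages
        self.resolutions[self.current] = THUMBNAIL

    def swipe_to(self, page):
        """On a detected gesture, demote the old current window and
        promote the newly displayed one to thumbnail resolution."""
        self.resolutions[self.current] = REDUCED
        self.current = page
        self.resolutions[page] = THUMBNAIL
```

In a real device the promotion step would trigger a background download of the thumbnail-resolution versions rather than a simple list update.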
For example, the user's user device may run a status check to determine an amount of available storage. In some embodiments, the amount of available storage may be dependent on specific file types. For example, there may be a first amount of storage space available for music, a second amount of storage space available for images, and a third amount of storage space available for documents. This is merely exemplary, and persons of ordinary skill in the art will recognize that any amount of storage availability may be used, any form of content may have a specific amount of storage space available for that content type, and the use of a first, second, and third amount of storage space for music, images, and/or documents is merely exemplary. User interface600may include storage indicator602. Storage indicator602may detail an amount of storage available to the user to download or add items to their device. Total storage606may correspond to the total amount of available storage space on the user's user device. For example, the user device may include 200 GB of storage space, which may be filled with any form of media, software, applications, or other items capable of being stored on the user's user device. Although total storage606corresponds to 200 GB, persons of ordinary skill in the art will recognize that any storage amount may be used including, but not limited to, 1 GB, 10 GB, 100 GB, or 1 TB. Current storage604may correspond to the total amount of storage currently occupied on the user device. For example, the user device may have 200 GB of total storage, with 100 GB currently occupied by one or more items stored on the user device. User interface650may correspond to a displayable portion of the user account on a content management system, such as content management system100.
In some embodiments, user interface650may be substantially similar to user interface250, with the exception that user interface650may include storage values of various media items within the user account. In some embodiments, user interface650may include photograph directory658and video directory668. Each directory may also respectively include its own storage indicator, such as storage indicator652corresponding to photograph directory658, and storage indicator662corresponding to video directory668. Storage indicators652and662may be substantially similar to storage indicator602, with the exception that each of storage indicators652and662may indicate an amount of available storage space occupied by various media items stored within the user account. Storage indicator652may include total storage value656, and storage indicator662may include total storage value666. Total storage value656may correspond to the total amount of storage available on the content management system for uploading photographs, whereas total storage value666may correspond to the total amount of storage available on the content management system for uploading videos. In some embodiments, total storage values656and666may be equal and may correspond to the total amount of storage available within the user account on the content management system. For example, the total storage available within the user account may be 8 GB, and total storage values656and666may indicate that each storage indicator may only have 8 GB to use for that particular media type. In some embodiments, current photograph storage value654may indicate the amount of storage occupied within the user account by photographs. For example, current photograph storage value654may indicate that the user has 1 GB of photographs stored within the user account. In some embodiments, current video storage value664may indicate the amount of storage occupied within the user account by videos.
For example, current video storage value664may indicate that the user has 2 GB of videos stored within the user account. In some embodiments, 1 GB of storage for photographs and 2 GB of storage for videos may indicate that the user has 3 GB of storage used for media items out of 8 GB of total available storage space. However, the two current storage values need not be aggregated against the total value, and there may be 8 GB of storage space available for both videos and/or photographs. Persons of ordinary skill in the art will also recognize that any amount of storage space and current storage levels may be used, and the use of 8 GB of total space, 1 GB for photographs, and 2 GB for videos is merely exemplary. In some embodiments, the content management system may run one or more performance or storage algorithms on the user device to determine the current storage level on the user device. For example, a performance algorithm may be sent to the user device from the content management system, which may be used to calculate the total amount of storage available on the user device. The algorithm may then cause the user device to send the calculations back to the content management system. In response to calculating the storage level of the user device, the content management system may send viewing options back to the user device for the user to decide how to view some or all of the content stored within their account on the content management system. Although the aforementioned example has the algorithm sent to the device, the calculations performed on the device, the calculations sent back to the content management system, and display options then sent back to the device, persons of ordinary skill in the art will recognize that any one of these steps may be performed by either the content management system and/or the user device.
In response to calculating the amount of storage space available on the user device, the user device may display various grid view options. The various grid view options may indicate to the user a variety of ways that some or all of the images stored within the user account may be viewed. Continuing with the aforementioned example, there may be 100 MB of storage space available on the user's user device, and 1 GB of photographs stored in the user account on the content management system. Display option610may indicate to the user that it may be possible to display 10 photographs in a grid view, with each image being 10 MB large, thus totaling 100 MB. Display option612may indicate that the user may view 100 photographs in a grid view, each being 1 MB large, thus totaling 100 MB. Furthermore, display option614may indicate that the user may view 1000 photographs in a grid view, each being 100 kB large, thus totaling 100 MB as well. A user may select any of these options, or any additional options for displaying any number of images in any size. For example, the user may select display option612, and user interface600may display image set622. Image set622may include 100 photographs each being 1 MB in size. FIG.7shows an illustrative block diagram displaying the resolution of various grid views based on the distance from a current display window in accordance with various embodiments. Current window702may correspond to an image or a set of images currently displayed by a user interface, such as user interface200ofFIG.2. The currently displayed images may be of a first resolution. For example, images displayed within the current user interface may be 1080p high-definition images (1920×1080, 2,073,600 pixels). As another example, images displayed within the current window may correspond to a grid of images, where each image in the grid has a resolution of 256×256 pixels.
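The display options described above (options610,612, and614) follow directly from dividing the free space by a candidate per-image size. A rough sketch of how the viewing options might be generated, using the 100 MB free / 1 GB stored example, could look like the following; the function name and candidate sizes are hypothetical:

```python
def display_options(free_bytes, total_image_bytes, sizes_mb=(10, 1, 0.1)):
    """Propose (count, per_image_bytes) pairs whose total fits in the
    device's free space -- cf. display options 610, 612, and 614."""
    options = []
    for size_mb in sizes_mb:
        per_image = int(size_mb * 1_000_000)
        # Limited by both free space and how many images actually exist.
        count = min(free_bytes // per_image, total_image_bytes // per_image)
        if count:
            options.append((count, per_image))
    return options

# 100 MB free on the device, 1 GB of photographs in the account:
# 10 images at 10 MB, 100 images at 1 MB, 1000 images at 100 kB.
for count, per_image in display_options(100_000_000, 1_000_000_000):
    print(count, "images at", per_image, "bytes each")
```

Each returned pair totals at most the available 100 MB, matching the three options in the example.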
In some embodiments, there may be one or more additional images and/or sets of images which may be displayed within a non-current window. Non-current windows may correspond to any window that is not currently displayed within the user interface, but may be displayed within the user interface at some point in time. For example, images512ofFIG.5may be displayed in response to a gesture by the user (e.g., a swipe) to transition from the current window displaying images502. In order to minimize the amount of storage space occupied on the user's user device, images included within non-current windows may be stored in a lower resolution. The resolution of images included within either a currently displayed window and/or a non-current window may depend on a distance between the currently displayed window and the corresponding non-current window. For example, display window704may correspond to a non-current display window having a distance d1away from current window702. In some embodiments, if a non-current window has a distance d1, the non-current window may include one or more images having a resolution lower than that of the resolution of the images within current window702. For example, non-current window704may include one or more images having a resolution of 900p (1600×900, 1,440,000 pixels). As another example, if current window702includes a grid of images having a resolution of 100×100 pixels, non-current window704may include a grid of images having a resolution of 75×75 pixels. In some embodiments, non-current windows706and708may respectively have distances d2and d3. The various distances may correspond to the resolution of the image or images included within the respective windows. For example, window706, which has distance d2, may include images having a resolution of 720p (1280×720, 921,600 pixels). As another example, window708, which has distance d3, may include images having an SVGA resolution (800×600, 480,000 pixels).
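The distance-to-resolution relationship above amounts to a lookup from a window's distance (in pages) away from the current window to a download resolution. Using the example tiers from the text (1080p current, then 900p, 720p, and SVGA at distances d1 through d3), a minimal sketch might be:

```python
# Hypothetical mapping from a window's page distance away from the
# current window to the resolution its images are fetched at.
RESOLUTION_BY_DISTANCE = {
    0: (1920, 1080),  # current window 702: 1080p
    1: (1600, 900),   # distance d1 (window 704): 900p
    2: (1280, 720),   # distance d2 (window 706): 720p
    3: (800, 600),    # distance d3 (window 708): SVGA
}

def resolution_for(distance):
    # Windows farther than the largest known distance fall into
    # the lowest tier.
    capped = min(distance, max(RESOLUTION_BY_DISTANCE))
    return RESOLUTION_BY_DISTANCE[capped]

print(resolution_for(1))  # (1600, 900)
print(resolution_for(7))  # (800, 600)
```

The grid-view variant would use the same structure with thumbnail-scale tiers (e.g., 100×100, 75×75, 64×64 pixels) instead of full-frame resolutions.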
As yet another example, non-current window706and/or708may include a grid of images having a resolution smaller than that of non-current window704, such as 64×64 pixels or smaller. In some embodiments, greater distances between a currently viewed window or image and a non-current window or image may lead to the non-current window or image having a smaller resolution. FIG.8shows an illustrative flowchart of a process for performing a background download of a content item, such as an image, in accordance with various embodiments. Process800may begin at step802. At step802, a user device may access a content management system over a communications network. For example, devices102may send one or more requests to access an authorized account (e.g., a user account) on content management system100across network104. In some embodiments, accessing the content management system may include a user inputting a user name, password, or any other identification information or credentials, or any combination thereof. In some embodiments, once the user is granted access to the content management system, the supplied login credentials may be stored on the user device and/or the content management system. At step804, a request to access one or more images from the content management system may be sent from the user device to the content management system. The request may be sent in response to the content management system granting access to the user device. For example, once logged into the user account on the content management system, the user may select one or more images stored within the user account. In response to selecting the one or more images, the user may request to download the selected images from the content management system to the user device. In some embodiments, the request may correspond to a request to download the one or more images from the content management system. 
At step806, the one or more images requested to be accessed may be categorized by an expected use for the one or more images. In some embodiments, the categorization may determine whether the expected use of the one or more images corresponds to the image(s) being downloaded in a first version. For example, the categorization may determine that the one or more images will be viewed in a single image view. In some embodiments, images viewed in a single image view may be downloaded in a high resolution, such as 1080p, as opposed to images included in a grid view, which may be viewable and/or downloaded in a lower resolution, such as a thumbnail resolution. In some embodiments, categorizing the one or more images may include collecting the one or more images based on various factors. For example, the categorization may collect the one or more images based on whether local access to the one or more images may be available on the user device. As another example, the categorization may collect the one or more images based on whether the user has scrolled a page displayable on a user interface of the user device where the one or more images appear. As yet another example, the categorization may collect the images based on whether an action has been taken on/with the one or more selected images. The categorization may collect the one or more images with other images that the user has previously or recently shared. In some embodiments, the one or more images may be collected with other shared images. The categorization may also collect images based on a recent upload by the user to the content management system, recently viewed images, or a time frame that an image was captured or viewed within. At step808, the one or more categorized images may be received in a second version.
For example, if the selected images are to be viewed in a single image view but the connection between the user device and the content management system is low, then the images may first be downloaded in a low resolution. This may allow the user to view the images at a substantially same time as the categorization, enabling the user to fully interact with a version of the images without any latency period. At step810, a background download of the first version of the one or more images may be performed. The background download may download the first version of the one or more images to the user device while the second version of the one or more images may be viewed on the user device. For example, while the level of connection between the user device and the content management system is low, the user may first download a low resolution version of the image. In some embodiments, the background download of the first version of the one or more images may occur while the second version of the one or more images continues to download. While the low resolution version of the image is viewed on the user device, a high resolution version of the image may download in the background. Due to the level of connectivity being low, the higher resolution image may, for example, take a longer time to download. Thus the user may be able to view and interact with the low resolution image first, while the high resolution version downloads. In some embodiments, the request to download may include a request to download multiple images, which may be dynamically prioritized for downloading within a collection. In some embodiments, categorizing and dynamically prioritizing the images may include displaying the images on a user interface within the collection. For example, image404ofFIG.4may be downloaded and displayed within collection460on user interface400. 
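Steps 808 and 810 describe showing a low-resolution version immediately while the high-resolution version downloads in the background. A minimal sketch of that pattern using a background thread is below; the `fetch` stand-in, class name, and resolution labels are assumptions, since the patent does not prescribe a threading model:

```python
import threading

def fetch(image_id, resolution):
    # Stand-in for a network request to the content management system.
    return f"{image_id}@{resolution}"

class ImageView:
    """Shows a low-resolution version immediately and swaps in the
    high-resolution version once its background download completes."""

    def __init__(self, image_id):
        self.displayed = fetch(image_id, "low")  # step 808: second version
        self._thread = threading.Thread(
            target=self._background_download, args=(image_id,))
        self._thread.start()                     # step 810: first version

    def _background_download(self, image_id):
        high_res = fetch(image_id, "high")
        self.displayed = high_res                # swap in when ready

    def wait(self):
        self._thread.join()

view = ImageView("img-404")
view.wait()
print(view.displayed)  # img-404@high
```

The user can interact with `view.displayed` as soon as the constructor returns; on a slow connection the low-resolution version simply remains on screen longer before the swap.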
In some embodiments, the categorization and dynamic prioritization may include associating any user actions performed on the images within the collection as if the content management system ran solely on the user device. This may allow the user to have a substantially local feel on their user device no matter the level of connectivity between the user device and the content management system. In some embodiments, the images may initially be presented within the user interface in a grid view. For example, each image included within the grid view may be presented in the second version (e.g., thumbnail resolution). In response to a selection by the user, the one or more images may be presented in the first resolution in the single image view. For example, a user may be presented with a plurality of images in grid view in a thumbnail resolution. In response to a user selection to view one or more of the plurality of images, the selected images may be presented in high-definition. In some embodiments, the images may be stored within the user account on the content management system. However, within the user account there may be more images than displayable within one window on the user interface displaying a grid view. For example, the grid view may be capable of displaying one hundred (100) images, whereas there may be one thousand (1,000) images in the user account. In this scenario, each image downloaded to be displayed within the grid view of a currently displayed window on the user interface may be in the second version (e.g., having a thumbnail resolution). In some embodiments, any of the additional images not currently displayed within the current window on the user interface may be downloaded in the second version, and the resolution type may be dependent on the proximity between the current window and the non-current window. 
For example, images included within a non-current but sequentially proximate window to the current window may have a lower resolution than the thumbnail resolution (e.g., 64×64 pixels), whereas images included within a non-current window that is not sequentially proximate to the current window may be downloaded in a lowest resolution available (e.g., 50×50 pixels). Thus, images having a small distance (e.g., close in proximity to the currently displayed window) may have a resolution that may be lower than the first resolution, whereas images having a large distance (e.g., not close or substantially far from the currently displayed window) may have a resolution that may be lower than the first resolution and the second resolution. For example, images not currently displayed on the user device may have a lower resolution than an image currently displayed on the user interface. The greater the distance between the currently viewed image and a particular image, the smaller that image's resolution may be. This may be beneficial to a user while they are scrolling through a large number of images, as the scrolling may be much smoother because the latency of accessing images may be minimized. In some embodiments, dynamically prioritizing the background download may be based on various factors. For example, the prioritization may be based on available access to the one or more images selected to be viewed in a single image view, and may be assigned a first priority level. As another example, the prioritization may be based on available access to one or more additional images located within the same collection to which the selected one or more images belong. In this scenario, a second priority level may be assigned to the one or more selected images. As yet another example, the prioritization may be based on whether an action with/to the one or more selected images has been performed. In this scenario, a third priority level may be assigned to the one or more images.
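The priority levels described above reduce to sorting the download queue by an assigned level. A minimal sketch, with the level names and the ordering (first level ranked highest) chosen as one of the permutations the text allows:

```python
# Hypothetical priority levels; a lower number downloads sooner.
SINGLE_VIEW = 1      # selected for viewing in a single image view
SAME_COLLECTION = 2  # in the same collection as a selected image
ACTED_UPON = 3       # an action was performed on/with the image

def prioritize(images):
    """Order download requests by their assigned priority level."""
    return sorted(images, key=lambda img: img["priority"])

queue = prioritize([
    {"id": "c", "priority": ACTED_UPON},
    {"id": "a", "priority": SINGLE_VIEW},
    {"id": "b", "priority": SAME_COLLECTION},
])
print([img["id"] for img in queue])  # ['a', 'b', 'c']
```

Because `sorted` is stable, images sharing a priority level keep their original relative order; any other ranking permutation is just a different assignment of the three constants.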
In some embodiments, the various priority levels may correspond to a ranking level for downloading images. For example, an image assigned a higher ranking level may be downloaded before an image with a lower ranking level. Any permutation of rankings may be used. For example, the first priority level may be ranked higher than the second and third priority levels, where the second priority level may be ranked higher than the third priority level or vice versa. Similarly, the second priority level may be ranked higher than the first and the third priority levels, where the first priority level may be ranked higher than the third, or vice versa. Additionally, the third priority level may be ranked higher than the first and second priority levels, where the first priority level may be ranked higher than the second, or vice versa. Persons of ordinary skill in the art will recognize that the use of a first, second, and third priority level is merely exemplary, and any number of priority levels, and any ordering of these levels, may be used. In some embodiments, metadata associated with the one or more images may be downloaded to the user device from the content management system. The metadata may be downloaded prior to the one or more images, or at a substantially same time. In some embodiments, the categorization of an expected use of the one or more images may be based, at least in part, on the downloaded metadata. For example, image404may include geographical information404aand temporal information404b. Geographical information404aand temporal information404bmay be downloaded along with, or prior to the download of, image404, and may be used to categorize image404within a collection (e.g., collection460). In some embodiments, the background download may be dynamically prioritized based, at least in part, on the downloaded metadata. FIG.9is an illustrative flowchart of a process for displaying images and caching images in accordance with various embodiments. 
Process900may begin at step902. At step902, a determination of an amount of images capable of being downloaded may be made. The determination may include calculating an amount of available storage space on a user device. The images capable of being downloaded may initially be stored on another user's or individual's device, one or more social media networks, and/or a content management system. In some embodiments, the user may have an authorized account on a content management system, and the quantity of images stored within the account that may be downloadable to the user device may be determined. In some embodiments, the determination may be based on one or more factors. For example, the amount of images that may be downloadable from the content management system may depend on a level of connectivity between the user device (e.g., client device102) and the content management system (e.g., content management system100). If the level of connectivity between the user device and the content management system is weak or low, then the amount of images capable of being downloaded may be lower than if the level of connectivity were high or strong. As another example, if there is no Wi-Fi signal available, and only a cellular data signal, then the amount of images capable of being downloaded may be modulated based on the user's current cellular data plan. In this way, the user may not use up a substantial portion of their monthly or annual data allowance, or incur any overage charges. In some embodiments, the user may be able to turn on a Wi-Fi only option, or a Wi-Fi preferred option. For example, the user may select an option within their user settings that would communicate to the device which network conditions to use for downloading one or more images. In some embodiments, the determination may be based on an amount of available storage space on the user device.
For example, the user may only have 100 MB of available storage space, whereas there may be 1 GB of images stored within the user account on the content management system. In this scenario, the user may be presented with options for downloading the images in various resolutions, such as options610,612, and614inFIG.6. As another example, the amount of storage space needed to download images may be less than the total amount of free storage space on the user device. The content management system and/or the user device may limit the amount of images to be downloaded to the user device to correspond with the total amount of free storage space. The content management system and/or the user device may also reduce the size or resolution of the images to correspond with the total amount of free storage space. In some embodiments, the determination may be based on an amount of available battery charge on the user device. For example, if the user device has a substantially low level of available battery charge, downloading a large number of images, or even a single high-resolution image, may take up a large amount of processing power. This may be a hindrance for the user, especially in the unfortunate situation that the user may need to use their device for an emergency. Thus, if the user device has a substantially low battery level, the amount of images downloadable and/or the size of the images downloadable may be modified to ensure that there may still be some remaining battery charge on the user device. In this way, the user may be able to use their device in an emergency situation no matter the image quality or quantity downloaded. At step904, the determined amount of images may be downloaded to the user device in a first resolution.
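The step-902 determination above combines several throttling factors. A rough sketch of how a download budget might be computed from free space, network type, and battery level is below; the function name and the cellular/battery cutoffs are hypothetical thresholds, not values from the patent:

```python
def downloadable_count(free_bytes, per_image_bytes,
                       connectivity="wifi", battery_fraction=1.0):
    """Estimate how many images to download, throttled by available
    storage, network type, and battery charge (thresholds hypothetical)."""
    count = free_bytes // per_image_bytes
    if connectivity == "cellular":
        count = min(count, 25)   # spare the user's cellular data plan
    if battery_fraction < 0.2:
        count = min(count, 10)   # preserve charge for emergency use
    return count

# 100 MB free, 1 MB images: 100 on Wi-Fi, throttled on cellular/low battery.
print(downloadable_count(100_000_000, 1_000_000, "wifi"))           # 100
print(downloadable_count(100_000_000, 1_000_000, "cellular"))       # 25
print(downloadable_count(100_000_000, 1_000_000, "cellular", 0.1))  # 10
```

A Wi-Fi-only or Wi-Fi-preferred user setting would simply gate whether the `connectivity="cellular"` path is ever taken.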
Continuing with the aforementioned example, option610may indicate to the user that, based on the available storage space on the user device, the user may download 10 photographs, each being 10 MB in size (e.g., 100 MB total of storage space). At step906, a first subset of images from the downloaded images may be displayed in a first grid view on the user device. The images displayed within the first grid view may be displayed in a second resolution, in some embodiments. For example, the user may select to download 100 images, each being 1 MB in size. The user may display some or all of the 100 images within a grid view, such as set622within user interface600. In some embodiments, the images displayed within the grid view may be of the same resolution as that of the downloaded images. For example, if the user selected option612and downloads 100 images, each 1 MB in size, set622may display 100 images each 1 MB in size. As another example, some or all of the 100 downloaded images may be displayed within the grid view in a lower resolution (e.g., 100 kB in size), in order to save space, or if a level of connectivity has decreased. At step908, any additional images from the downloaded images may be cached while the first subset of images may be displayed within the first grid view. For example, if the user downloads 100 images each 1 MB in size, but only displays 50 images, then the remaining 50 may be cached in temporary memory on the user device. This may allow the user to show or present only a select amount of images at a time, while still having the ability to display any of the other images at a later point. In some embodiments, the images initially downloaded may be of a thumbnail resolution (e.g., 75×75 pixels), while the images displayed within the grid view may be of a higher resolution than the thumbnail resolution (e.g., 256×256 pixels).
For example, the user may download 1,000 images having a thumbnail resolution and totaling 100 MB in storage space. However, in response to a user input or selection to display 100 of the images in a grid view, the 100 images may be presented and/or downloaded to the user device in a higher resolution than the originally stored versions. In some embodiments, an input may be detected that may display a second subset of images. The second subset of images may include images from the first subset and/or the additionally cached images. For example, the user may decide to view all of the user's images from a recent trip to San Francisco. The user may provide one or more inputs to the user device and images corresponding to the user's trip may be detected and organized for display. The images may be organized based on any images stored currently on the user device. Thus, the organizational mechanism (e.g., an organization algorithm) may select images from the currently displayed subset of images and any of the cached images. In some embodiments, a determination may be made of an aspect ratio of each image stored within the user account on the content management system. The determined aspect ratio may then be used to prioritize the downloading of images. In some embodiments, the aspect ratio may be a factor used in addition to other factors used to determine the amount of images capable of being downloaded to the user device. For example, although it may be determined that all of the user's images may be downloadable based on the level of connectivity, the amount of available storage space, the battery charge of the device, and/or the data plan, some or all of these images may have an aspect ratio non-conforming to the user's user device, and therefore may not be displayable. Thus, in some embodiments, only images having an appropriate aspect ratio may be downloadable.
In some embodiments, images having a best or most appropriate aspect ratio may be downloaded prior to any images which have an aspect ratio that may be displayable, but not optimal for viewing, on the user device. FIG.10is an illustrative flowchart of a process for downloading images based on determined distances between images in accordance with various embodiments. Process1000may begin at step1002. At step1002, a user may access an account on a content management system. In some embodiments, the account may be associated with the user and may be referred to as the user account. In some embodiments, the user may be required to provide login credentials (e.g., username, password, etc.) in order to be granted access to the user account. The login credentials may, in some embodiments, be stored on the user's user device and may be used for future access to the content management system. At step1004, a first collection of images may be downloaded from the content management system to the user device. In some embodiments, the first collection of images may be downloaded in a first resolution and may be stored within the user account. For example, the first collection of images may be downloaded in a high resolution, such as 1080p. Persons of ordinary skill in the art will recognize that any amount of images may be downloaded within the first collection of images, and the images may be of any resolution. In some embodiments, the resolution of the first collection of images may be based on one or more factors. For example, the downloading of the first collection of images may be based on a level of connectivity between the user device and the content management system. As another example, the downloading may be based on an amount of available storage space on the user device and/or an available amount of battery charge of the user device.
In some embodiments, the downloaded first collection of images may be displayed within a user interface presented on a display screen of the user device. For example, images622ofFIG.6may be displayed within user interface600. In some embodiments, the first collection of images may be downloaded in a block or a page of images. For example, multiple images may be downloaded from the content management system to the user device in blocks of images. The blocks may be compressed files, file directories, or any other blocking, or any combination thereof. In some embodiments, the first collection of images may be displayed in a grid view within the user interface. At step1006, a distance metric between the first collection of images and any of the additional images stored within the user account may be determined. In some embodiments, the distance metric may correspond to a distance between a currently viewed window and a non-current window. For example, one or more images may be displayed within current window702ofFIG.7. The image(s) displayed within window702may, for example, be high-definition images having a 1080p resolution. The content management system and/or the user device may determine a distance metric between the first collection of images and any of the additional images. For example, window704ofFIG.7may include one or more additional images stored within the user account and/or on the user device, and the content management system and/or the user device may determine a distance metric between the first collection of images, and any images included within window704. In some embodiments, images that may be displayable within a non-current window which may be sequentially proximate to the currently displayed window may be assigned a second resolution. However, in some embodiments, images that are displayable within a non-current window which may not be sequentially proximate to the currently displayed window may be assigned a third resolution. 
For example, images included within window704and/or706may be assigned a second resolution (e.g., 75×75 pixels, 256×256 pixels, etc.), as they may be non-current windows that may be sequentially proximate to window702. However, window708, which may be a non-current window but not sequentially proximate to current window702, may be assigned a third resolution (e.g., 64×64 pixels, 50×50 pixels, etc.). At step1008, the additional images may be downloaded in a second and/or third resolution based on the determined distance metric between the additional images and the images within the first collection. For example, images included within window704may be downloaded in a high-definition resolution (e.g., 900p), whereas images included within window708may be downloaded in an SVGA resolution. Thus, images having a small distance (e.g., close in proximity to the currently displayed window) may have a resolution that may be lower than the first resolution, whereas images having a large distance (e.g., not close or substantially far from the currently displayed window) may have a resolution that may be lower than the first resolution and the second resolution. This may aid in appropriately and efficiently downloading images in accordance with the likelihood and/or expectation that the image will be viewed in a relatively small time frame compared to the currently displayed images. Thus, images that may be unlikely to be viewed right after an image that is currently displayed may not be initially downloaded in a high resolution, but rather in a lower resolution to save space. In some embodiments, each downloaded image may be categorized within the first collection of images and/or the additional images. The categorization may be based on metadata associated with each image. For example, an image may include geographical information404aand temporal information404b. 
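One minimal way to realize the distance-to-resolution mapping above is to tier resolutions by a window's offset from the currently viewed window. The tier names and the cutoff of one window are assumptions for illustration, not values fixed by the description.

```python
def assign_resolution_tier(window_index, current_index):
    # Distance metric: how many windows away an image's window is
    # from the currently viewed window.
    distance = abs(window_index - current_index)
    if distance == 0:
        return "first"   # e.g., 1080p for the current window
    if distance == 1:
        return "second"  # e.g., 256x256 for sequentially proximate windows
    return "third"       # e.g., 64x64 for distant windows
```

The same function could drive both the initial download resolution and later upgrades as the user scrolls and the current window changes.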
Information404aand404bmay aid in categorizing image404within collection460, which may be displayed in a first resolution. In some embodiments, metadata associated with an additional image stored in the user account may categorize the additional image within a collection of additional images not displayed within the user interface. Thus, significant storage space may be saved on the user device by distinguishing between images that are required or requested for display and images that do not currently need to be viewed at the present moment in time. FIG.11shows an exemplary system in accordance with various embodiments. In some embodiments, system1100ofFIG.11may be substantially similar to system100ofFIG.1, with the exception that the former may present elements of system100at a more granular level (e.g., modules, applications, etc.). In some embodiments, user devices102may be used to create, access, modify, and manage content items, such as content items110aand110b(collectively110), stored locally within content item system108aand108b(collectively systems108) on user device102and/or stored remotely on content management system104(e.g., within data store118). For example, user device102amay access content items110bstored remotely with data store118of content management system104and may, or may not, store content item110blocally within content item system108aon user device102a. Continuing with the example, user device102amay temporarily store content item110bwithin a cache locally on user device102a, make revisions to content item110b, and the revisions to content item110bmay be communicated and stored in data store118of content management system104. Optionally, a local copy of content item110amay be stored on user device102a. In some embodiments, data store118may include one or more collections132of content items. For example, collections132may include one or more content items having similar properties (e.g., metadata) and/or including similar content. 
In some embodiments, user devices102may include camera138(e.g.,138aand138b) to capture and record digital images and/or videos. User devices102may capture, record, and/or store content items, such as images, using camera138. For example, camera138may capture and record images and store metadata with the images. Metadata may include, but is not limited to, the following: creation time timestamp, geolocation, orientation, rotation, title, and/or any other attributes or data relevant to the captured image. Metadata values may be stored as attribute112name-value pairs, tag-value pairs, and/or any other method, or any combination thereof, to associate the metadata with the content item and easily identify the type of metadata. In some embodiments, attributes112may be tag-value pairs defined by a particular standard, including, but not limited to, Exchangeable Image File Format (“Exif”), JPEG File Interchange Format (JFIF), and/or any other standard. In some embodiments, user devices102may include time normalization module146, and content management system104may include time normalization module148. Time normalization module146(e.g.,146aand146b) may be used to normalize dates and times stored with a content item. Time normalization module146, counterpart time normalization module148, and/or any combination thereof, may be used to normalize dates and times stored for content items. The normalized times and dates may be used to sort, group, perform comparisons, perform basic math, and/or cluster content items. In some embodiments, user devices102may include organization module136, and content management system104may include organization module140. Organization module136(e.g.,136aand136b) may be used to organize content items into clusters or collections of content items, organize content items to provide samplings of content items for display within user interfaces, and/or retrieve organized content items for presentation. 
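Time normalization as described for modules146and148might look like the following sketch. It parses the standard Exif "YYYY:MM:DD HH:MM:SS" timestamp format and shifts it to UTC; the offset parameter is an assumed input, since Exif timestamps carry no time zone.

```python
from datetime import datetime, timedelta, timezone

def normalize_capture_time(exif_datetime, utc_offset_hours=0):
    # Exif stores DateTimeOriginal as "YYYY:MM:DD HH:MM:SS" with no zone.
    local = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    # Shift to UTC so items captured on different devices can be sorted,
    # compared, and clustered consistently.
    return (local - timedelta(hours=utc_offset_hours)).replace(tzinfo=timezone.utc)
```

Once normalized, the resulting timezone-aware values support the sorting, grouping, comparison, and clustering operations mentioned above.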
Organization module136may utilize any clustering algorithm. Organization module136may be used to identify similar content items for clusters in order to organize content items for presentation within user interfaces on user devices102and content management system104. Similarity rules may be defined to create one or more numeric representations embodying information on similarities between each of the content items in accordance with various similarity rules. Organization module136may use numeric representations as a reference for similarity between content items in order to cluster the content items. In some embodiments, content items may be organized into clusters to aid with retrieval of similar content items in response to search requests. For example, organization module136amay identify that two images are similar and may group the images together in a cluster. Organization module136amay process content items to determine clusters independently and/or in conjunction with counterpart organization module (e.g.,140and/or136b). In other embodiments, organization module136amay only provide clusters identified with counterpart organization modules (e.g.,140and/or136b) for presentation. Continuing with this example, processing of content items to determine clusters may be an iterative process that may be executed upon receipt of new content items and/or new similarity rules. In some embodiments, user device102amay include classification module150a, while user device102bmay include classification module150b(collectively150), which may be used independently, in combination with classification module152included on content management system104, and/or any combination thereof to classify content items, rectify content items, and/or classify images. For example, the classification modules150and/or152may be used to determine if an image includes a document, and if so, determine a type of document stored therein. 
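As a concrete, assumed similarity rule, normalized capture times can serve as the numeric representation, with a gap threshold deciding where one cluster ends and the next begins. The one-hour gap below is illustrative, not a value taken from this description.

```python
def cluster_by_time(timestamps, gap_seconds=3600):
    # Numeric representation: capture time in seconds. A new cluster
    # starts whenever the gap to the previous item exceeds the threshold.
    clusters = []
    for t in sorted(timestamps):
        if clusters and t - clusters[-1][-1] <= gap_seconds:
            clusters[-1].append(t)   # similar enough: extend current cluster
        else:
            clusters.append([t])     # too far apart: start a new cluster
    return clusters
```

Re-running this over the full set when new items or new similarity rules arrive matches the iterative reprocessing described above.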
Content item rectification may be performed to correct, perform further transformations, and/or crop content items to improve the display of the content items (e.g., correct the display of a document within an image). In some embodiments, user device102amay include search module142a, while user device102bmay include search module142b, which collectively may be referred to as search modules142. Content management system104may also be provided with counterpart search module144. Each of search modules142and144may be capable of supporting searches for content items located on both user devices102and/or content management system104. A search request may be received by search module142and/or144that requests one or more content items. In some embodiments, the search may be handled by searching metadata and/or attributes assigned to content items during the provision of management services. For example, cluster markers stored with content items may be used to find content items by date. In this particular scenario, cluster markers may indicate an approximate time, or average time, for the content items stored with the cluster marker, and the marker may be used to speed the search and/or return the search results with the contents of the cluster with particular cluster markers. Content items110managed by content management system104may be stored locally within content item system108of respective user devices102and/or stored remotely within data store118of content management system104(e.g., content items134in data store118). Content management system104may provide synchronization of content items managed thereon. Attributes112aand112b(collectively112) or other metadata may also be stored with content items110. For example, a particular attribute may be stored with a content item to track content items locally stored on user devices102that are managed and/or synchronized by content management system104. 
In some embodiments, attributes112may be implemented using extended attributes, resource forks, or any other implementation that allows for storing metadata with a content item that is not interpreted by a content item system, such as content item system108. In particular, attributes112aand112bmay be content identifiers for content items. For example, the content identifier may be a unique or nearly unique identifier (e.g., number or string) that identifies the content item. By storing a content identifier with the content item, the content item may be tracked. For example, if a user moves the content item to another location within content item system108hierarchy and/or modifies the content item, then the content item may still be identified within content item system108of user device102. Any changes or modifications to the content item identified with the content identifier may be uploaded or provided for synchronization and/or version control services provided by content management system104. A stand-alone content management application114aand114b(collectively114), client application, and/or third-party application may be implemented on user devices102aand102b, respectively, to provide a user interface to a user for interacting with content management system104. Content management application114may expose the functionality provided with content management interface module154and accessible modules for user device102. Web browser116aand116b(collectively116) may be used to display a web page front end for a client application that may provide content management104functionality exposed/provided with content management interface module154. Content management system104may allow a user with an authenticated account to store content, as well as perform management tasks, such as retrieve, modify, browse, synchronize, and/or share content with other accounts. 
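The identifier-based tracking described above can be sketched with an in-memory stand-in for the content item system; a real implementation might persist the identifier via extended attributes or resource forks, as the text notes. The class and method names here are hypothetical.

```python
import uuid

class ContentItemSystem:
    # Minimal sketch: a content identifier travels with the item, so the
    # item can still be recognized after a move within the hierarchy.
    def __init__(self):
        self.items = {}  # path -> {"id": ..., "data": ...}

    def add(self, path, data):
        self.items[path] = {"id": uuid.uuid4().hex, "data": data}
        return self.items[path]["id"]

    def move(self, old_path, new_path):
        # The stored identifier moves with the entry, unchanged.
        self.items[new_path] = self.items.pop(old_path)

    def find_by_id(self, content_id):
        return next(p for p, v in self.items.items() if v["id"] == content_id)
```

Because lookup goes through the identifier rather than the path, moves and modifications can still be matched to the same item for synchronization and version control.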
Various embodiments of content management system104may have elements including, but not limited to, content management interface module154, account management module120, synchronization module122, collections module124, sharing module126, file system abstraction128, data store118, and organization module140. Content management interface module154may expose the server-side or back end functionality/capabilities of content management system104. For example, a counter-part user interface (e.g., stand-alone application, client application, etc.) on user devices102may be implemented using content management interface module154to allow a user to perform functions offered by modules of content management system104. The user interface displayed on user device102may be used to create an account for a user and/or authenticate the user to use the account using account management module120. Account management module120may provide the functionality for authenticating use of an account by a user and/or user device102with username/password, device identifiers, and/or any other authentication method. Account information130may be maintained in data store118for accounts. Account information may include, but is not limited to, personal information (e.g., an email address or username), account management information (e.g., account type, such as “free” or “paid”), usage information (e.g., content item edit history), maximum storage space authorized, storage space used, content storage locations, security settings, personal configuration settings, content sharing data, etc. An amount of storage space on content management system104may be reserved, allotted, allocated, stored, and/or may be accessed with an authenticated account. The account may be used to access content items134and/or content items110within data store118for the account, and/or content items134and/or content items110made accessible to the account that are shared from another account. 
In some embodiments, account management module120may interact with any number of other modules of content management system104. An account on content management system104may, in some embodiments, be used to store content such as documents, text items, audio items, video items, etc., from one or more user devices102authorized by the account. The content may also include collections of various types of content with different behaviors, or other mechanisms of grouping content items together. For example, an account may include a public collection that may be accessible to any user. In some embodiments, the public collection may be assigned a web-accessible address. A link to the web-accessible address may be used to access the contents of the public folder. In another example, an account may include a photos collection that may store photos and/or videos, and may provide specific attributes and actions tailored for photos and/or videos. The account may also include an audio collection that provides the ability to play back audio items and perform other audio related actions. The account may still further include a special purpose collection. An account may also include shared collections or group collections that may be linked with and available to multiple user accounts. In some embodiments, access to a shared collection may differ for different users that may be capable of accessing the shared collection. Content items110and/or content items134may be stored in data store118. Data store118may, in some embodiments, be a storage device, multiple storage devices, or a server. Alternatively, data store118may be a cloud storage provider or network storage accessible via one or more communications networks. 
Content management system104may hide the complexity and details from user devices102by using content item system abstraction128(e.g., a content item system database abstraction layer) so that user devices102do not need to know exactly where the content items are being stored by content management system104. Embodiments may store the content items in the same collections hierarchy as they appear on user device102. Alternatively, content management system104may store the content items in various orders, arrangements, and/or hierarchies. Content management system104may store the content items in a storage area network (SAN) device, in a redundant array of inexpensive disks (RAID), etc. Content management system104may store content items using one or more partition types, such as FAT, FAT32, NTFS, EXT2, EXT3, EXT4, ReiserFS, BTRFS, and so forth. Data store118may also store metadata describing content items, content item types, and the relationship of content items to various accounts, folders, collections, or groups. The metadata for a content item may be stored as part of the content item and/or may be stored separately. Metadata may be stored in an object-oriented database, a relational database, a content item system, or any other collection of data. In some embodiments, each content item stored in data store118may be assigned a system-wide unique identifier. Data store118may, in some embodiments, decrease the amount of storage space required by identifying duplicate content items or duplicate chunks of content items. Instead of storing multiple copies, data store118may store a single copy of content item134and then use a pointer or other mechanism to link the duplicates to the single copy. 
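The deduplication just described can be sketched as a content-addressed store: one blob per unique content, with per-item pointers to the shared copy. The use of SHA-256 digests as the identifying mechanism is an illustrative choice, not one specified here.

```python
import hashlib

class DedupStore:
    # One stored copy per unique content; entries point at the shared blob.
    def __init__(self):
        self.blobs = {}     # digest -> bytes (the single stored copy)
        self.pointers = {}  # item name -> digest

    def put(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # store only the first copy
        self.pointers[name] = digest

    def get(self, name):
        return self.blobs[self.pointers[name]]
```

Chunk-level deduplication would apply the same idea to fixed or content-defined slices of each item rather than whole items.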
Similarly, data store118may store content items134more efficiently, as well as provide the ability to undo operations, by using a content item version control that tracks changes to content items, different versions of content items (including diverging version trees), and a change history. The change history may include a set of changes that, when applied to the original content item version, produce the changed content item version. Content management system104may be configured to support automatic synchronization of content from one or more user devices102. The synchronization may be platform independent. That is, the content may be synchronized across multiple user devices102of varying type, capabilities, operating systems, etc. For example, user device102amay include client software, which synchronizes, via synchronization module122at content management system104, content in content item system108of user devices102with the content in an associated user account. In some cases, the client software may synchronize any changes to content in a designated collection and its sub-collection, such as new, deleted, modified, copied, or moved content items or folders. In one example of client software that integrates with an existing content management application, a user may manipulate content directly in a local folder, while a background process monitors the local content item for changes and synchronizes those changes to content management system104. In some embodiments, a background process may identify content that has been updated at content management system104and synchronize those changes to the local collection. The client software may provide notifications of synchronization operations, and may provide indications of content statuses directly within the content management application. In some embodiments, user device102may not have a network connection available. 
In this scenario, the client software may monitor the linked collection for content item changes and queue those changes for later synchronization to content management system104when a network connection is available. Similarly, a user may manually stop or pause synchronization with content management system104. A user may also view or manipulate content via a web interface generated and served by content management interface module154. For example, the user may navigate in a web browser to a web address provided by content management system104. Changes or updates to content in data store118made through the web interface, such as uploading a new version of a content item, may be propagated back to other user devices102associated with the user's account. For example, multiple user devices102, each with their own client software, may be associated with a single account, and content items in the account may be synchronized between each of user devices102. Content management system104may include sharing module126for managing sharing content and/or collections of content publicly or privately. Sharing module126may manage sharing independently or in conjunction with counterpart sharing module152a, located on user device102a, and sharing module152blocated on user device102b(collectively sharing modules152). Sharing content publicly may include making the content item and/or the collection accessible from any device in network communication with content management system104. Sharing content privately may include linking a content item and/or a collection in data store118with two or more user accounts so that each user account has access to the content item. The sharing may be performed in a platform independent manner. That is, the content may be shared across multiple user devices102of varying type, capabilities, operating systems, etc. For example, one or more share links may be provided to a user, or a contact of a user, to access a shared content item. 
The content may also be shared across varying types of user accounts. In particular, the sharing module126may be used with collections module124to allow sharing of a virtual collection with another user or user account. A virtual collection may be a collection of content identifiers that may be stored in various locations within content item systems108of user device102and/or stored remotely at content management system104. In some embodiments, the virtual collection for an account with a content management system may correspond to a collection of one or more identifiers for content items (e.g., identifying content items in storage). The virtual collection is created with collections module124by selecting from existing content items stored and/or managed by content management system and associating the existing content items within data storage (e.g., associating storage locations, content identifiers, or addresses of stored content items) with the virtual collection. By associating existing content items with the virtual collection, a content item may be designated as part of the virtual collection without having to store (e.g., copy and paste the content item to a directory) the content item in another location within data storage in order to place the content item in the collection. In some embodiments, content management system104may be configured to maintain a content directory or a database table/entity for content items where each entry or row identifies the location of each content item in data store118. In some embodiments, a unique or a nearly unique content identifier may be stored for each content item stored in data store118. In some embodiments, metadata may be stored for each content item. For example, metadata may include a content path that may be used to identify the content item. 
The content path may include the name of the content item and a content item hierarchy associated with the content item (e.g., the path for storage locally within a user device102). Content management system104may use the content path to present the content items in the appropriate content item hierarchy in a user interface with a traditional hierarchy view. A content pointer that identifies the location of the content item in data store118may also be stored with the content identifier. For example, the content pointer may include the exact storage address of the content item in memory. In some embodiments, the content pointer may point to multiple locations, each of which contains a portion of the content item. In addition to a content path and content pointer, a content item entry/database table row in a content item database entity may also include a user account identifier that identifies the user account that has access to the content item. In some embodiments, multiple user account identifiers may be associated with a single content entry indicating that the content item has shared access by the multiple user accounts. To share a content item privately, sharing module126may be configured to add a user account identifier to the content entry or database table row associated with the content item, thus granting the added user account access to the content item. Sharing module126may also be configured to remove user account identifiers from a content entry or database table rows to restrict a user account's access to the content item. The sharing module126may also be used to add and remove user account identifiers to a database table for virtual collections. To share content publicly, sharing module126may be configured to generate a custom network address, such as a uniform resource locator (“URL”), which allows any web browser to access the content in content management system104without any authentication. 
To accomplish this, sharing module126may be configured to include content identification data in the generated URL, which may later be used to properly identify and return the requested content item. For example, sharing module126may be configured to include the user account identifier and the content path in the generated URL. Upon selection of the URL, the content identification data included in the URL may be sent to content management system104which may use the received content identification data to identify the appropriate content entry and return the content item associated with the content entry. To share a virtual collection publicly, sharing module126may be configured to generate a custom network address, such as a uniform resource locator (URL), which allows any web browser to access the content in content management system104without any authentication. To accomplish this, sharing module126may be configured to include collection identification data in the generated URL, which may later be used to properly identify and return the requested content item. For example, sharing module126may be configured to include the user account identifier and the collection identifier in the generated URL. Upon selection of the URL, the content identification data included in the URL may be sent to content management system104which may use the received content identification data to identify the appropriate content entry or database row and return the content item associated with the content entry or database row. In addition to generating the URL, sharing module126may also be configured to record that a URL to the content item has been created. In some embodiments, the content entry associated with a content item may include a URL flag indicating whether a URL to the content item has been created. For example, the URL flag may be a Boolean value initially set to 0 or “false” to indicate that a URL to the content item has not been created. 
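A minimal sketch of this URL scheme encodes the account identifier and content path into the link so the service can later resolve the content entry. The host name and base64 token encoding are assumptions for illustration; a production system would also sign or otherwise protect the token, which is not shown here.

```python
import base64

BASE = "https://cms.example.com/s/"  # illustrative host

def generate_share_url(account_id, content_path):
    # Embed the content identification data in the URL itself.
    # Assumes account_id contains no ":" separator.
    token = base64.urlsafe_b64encode(f"{account_id}:{content_path}".encode()).decode()
    return BASE + token

def resolve_share_url(url):
    # Recover the identification data to look up the content entry.
    raw = base64.urlsafe_b64decode(url[len(BASE):]).decode()
    account_id, content_path = raw.split(":", 1)
    return account_id, content_path
```

The same encode/resolve pattern applies to virtual collections, with a collection identifier in place of the content path.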
Sharing module126may be configured to change the value of the flag to 1 or “true” after generating a URL to the content item. In some embodiments, sharing module126may also be configured to deactivate a generated URL. For example, each content entry may also include a URL active flag indicating whether the content should be returned in response to a request from the generated URL. For example, sharing module126may be configured to only return a content item requested by a generated link if the URL active flag is set to 1 or true. Changing the value of the URL active flag or Boolean value may easily restrict access to a content item or a collection for which a URL has been generated. This may allow a user to restrict access to the shared content item without having to move the content item or delete the generated URL. Likewise, sharing module126may reactivate the URL by again changing the value of the URL active flag to 1 or true. A user may thus easily restore access to the content item without the need to generate a new URL. Exemplary Systems In exemplary embodiments of the present invention, any suitable programming language may be used to implement the routines of particular embodiments including C, C++, Java, JavaScript, Python, Ruby, CoffeeScript, assembly language, etc. Different programming techniques may be employed such as procedural or object oriented. The routines may execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different particular embodiments. In some particular embodiments, multiple steps shown as sequential in this specification may be performed at the same time. Particular embodiments may be implemented in a computer-readable storage device or non-transitory computer readable medium for use by or in connection with the instruction execution system, apparatus, system, or device. 
Particular embodiments may be implemented in the form of control logic in software or hardware or a combination of both. The control logic, when executed by one or more processors, may be operable to perform that which is described in particular embodiments. Particular embodiments may be implemented by using a programmed general purpose digital computer, application specific integrated circuits, programmable logic devices, or field programmable gate arrays; optical, chemical, biological, quantum or nanoengineered systems, components, and mechanisms may also be used. In general, the functions of particular embodiments may be achieved by any means as is known in the art. Distributed, networked systems, components, and/or circuits may be used. Communication, or transfer, of data may be wired, wireless, or by any other means. It will also be appreciated that one or more of the elements depicted in the drawings/figures may also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope to implement a program or code that may be stored in a machine-readable medium, such as a storage device, to permit a computer to perform any of the methods described above. As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. While there have been described methods for managing content items having multiple resolutions, it is to be understood that many changes may be made therein without departing from the spirit and scope of the invention. 
Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. The described embodiments of the invention are presented for the purpose of illustration and not of limitation.
11943321 | DETAILED DESCRIPTION Techniques described herein support cross-platform compatibility between a communication process flow management service and a communication platform. A communication process flow management service may support creation, configuration, management, and deployment of a communication process flow that manages communications between a set of users and a tenant or organization. For example, an organization or tenant may use the communication process flow management service to schedule and manage communications between the organization and a set of users, which may be examples of subscribers, customers, or prospective customers of the organization. Users may receive electronic communications (e.g., emails, messages, advertisements) according to a communication process flow. The communication process flow may include various actions and message configurations, and a user's receipt of various communications may be dependent on attribute data associated with the users and user web behavior, among other parameters. Administrative users or employees associated with the tenant may access various services that monitor communication metrics associated with a communication process flow. For example, some services may provide statistics, such as open rate, click rate, unsubscribe rate, and the like, associated with one or more electronic communications controlled by a communication process flow. These statistics or metrics may be used to manually or automatically tweak aspects of the communication process flow. For example, these metrics may be used to support changing of content items (e.g., subject lines, images) included in an electronic communication, changing of communication frequency or transmission times, and other various communication configurations. The same or other services may also monitor these metrics to detect anomalies associated with the communications. 
For example, if the service detects that an open rate drops well below an expected open rate, then an alert may be surfaced to one or more administrative users. Thus, various aspects may be used to support communication process flow management and optimization. In some cases, these administrative users or employees associated with the tenant (e.g., a marketing team) may communicate, plan, and monitor aspects of a communication process flow using an external communication platform. For example, the external communication platform may support communication channels that are organized by topic, and team members may use these channels (e.g., chat rooms) to perform business communications associated with a communication process flow. However, because the external communication platform is separate from the communication process flow management service, the data associated with the communication process flow (e.g., communication metrics, events, anomalies) is siloed within the computing systems supporting the communication process flow management service. Additionally, the data may support decisions associated with a communication process flow, such as stopping, pausing, or modifying configurations of the process flow. Again, because these decisions may occur within the communication platform that is separate from the communication process flow, a user may be required to access the communication process flow management service to activate such changes or actions. Techniques described herein support cross-platform compatibility between a communication process flow management service and an external communication platform. In some cases, the techniques described herein support posting of various communication metrics, events, objects, and the like occurring in association with a communication process flow into the external communication platform, as well as interaction with the communication process flow from the communication platform.
These techniques thereby support improved workflow efficiencies as well as reduced communication resource overhead. Specifically, the techniques described herein may support posting of events or objects associated with a communication process flow into a communication channel of a communication platform that is associated with the tenant. Events associated with the communication process flow that may be posted into the communication platform may include a change to the configuration of an action of a communication process flow, creation of a communication process flow, deletion of an action, or the like. In some cases, the events are posted to the communication platform according to a configuration associated with the communication process flow. In some cases, a user may select aspects of a communication process flow object, or portions thereof, and selectively cause posting of metadata associated with the object into the communication platform. Event and object data may be transmitted to the communication platform using a data object and a request to an endpoint at the communication platform. These and other techniques are described in further detail with respect to the figures. Aspects of the disclosure are initially described in the context of an environment supporting an on-demand database service. Aspects of the disclosure are further described with respect to computing architectures illustrating cross-platform compatibility and process flow diagrams. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to techniques for cross-platform communication process flow object posting. FIG. 1 illustrates an example of a system 100 for cloud computing that supports techniques for cross-platform communication process flow object posting in accordance with various aspects of the present disclosure.
The system 100 includes cloud clients 105, contacts 110, cloud platform 115, and data center 120. Cloud platform 115 may be an example of a public or private cloud network. A cloud client 105 may access cloud platform 115 over network connection 135. The network may implement transmission control protocol and internet protocol (TCP/IP), such as the Internet, or may implement other network protocols. A cloud client 105 may be an example of a user device, such as a server (e.g., cloud client 105-a), a smartphone (e.g., cloud client 105-b), or a laptop (e.g., cloud client 105-c). In other examples, a cloud client 105 may be a desktop computer, a tablet, a sensor, or another computing device or system capable of generating, analyzing, transmitting, or receiving communications. In some examples, a cloud client 105 may be operated by a user that is part of a business, an enterprise, a non-profit, a startup, or any other organization type. A cloud client 105 may interact with multiple contacts 110. The interactions 130 may include communications, opportunities, purchases, sales, or any other interaction between a cloud client 105 and a contact 110. Data may be associated with the interactions 130. A cloud client 105 may access cloud platform 115 to store, manage, and process the data associated with the interactions 130. In some cases, the cloud client 105 may have an associated security or permission level. A cloud client 105 may have access to certain applications, data, and database information within cloud platform 115 based on the associated security or permission level, and may not have access to others. Contacts 110 may interact with the cloud client 105 in person or via phone, email, web, text messages, mail, or any other appropriate form of interaction (e.g., interactions 130-a, 130-b, 130-c, and 130-d). The interaction 130 may be a business-to-business (B2B) interaction or a business-to-consumer (B2C) interaction.
A contact 110 may also be referred to as a customer, a potential customer, a lead, a client, or some other suitable terminology. In some cases, the contact 110 may be an example of a user device, such as a server (e.g., contact 110-a), a laptop (e.g., contact 110-b), a smartphone (e.g., contact 110-c), or a sensor (e.g., contact 110-d). In other cases, the contact 110 may be another computing system. In some cases, the contact 110 may be operated by a user or group of users. The user or group of users may be associated with a business, a manufacturer, or any other appropriate organization. Cloud platform 115 may offer an on-demand database service to the cloud client 105. In some cases, cloud platform 115 may be an example of a multi-tenant database system. In this case, cloud platform 115 may serve multiple cloud clients 105 with a single instance of software. However, other types of systems may be implemented, including—but not limited to—client-server systems, mobile device systems, and mobile network systems. In some cases, cloud platform 115 may support CRM solutions. This may include support for sales, service, marketing, community, analytics, applications, and the Internet of Things. Cloud platform 115 may receive data associated with contact interactions 130 from the cloud client 105 over network connection 135, and may store and analyze the data. In some cases, cloud platform 115 may receive data directly from an interaction 130 between a contact 110 and the cloud client 105. In some cases, the cloud client 105 may develop applications to run on cloud platform 115. Cloud platform 115 may be implemented using remote servers. In some cases, the remote servers may be located at one or more data centers 120. Data center 120 may include multiple servers. The multiple servers may be used for data storage, management, and processing. Data center 120 may receive data from cloud platform 115 via connection 140, or directly from the cloud client 105 or an interaction 130 between a contact 110 and the cloud client 105.
Data center 120 may utilize multiple redundancies for security purposes. In some cases, the data stored at data center 120 may be backed up by copies of the data at a different data center (not pictured). Subsystem 125 may include cloud clients 105, cloud platform 115, and data center 120. In some cases, data processing may occur at any of the components of subsystem 125, or at a combination of these components. In some cases, servers may perform the data processing. The servers may be a cloud client 105 or located at data center 120. The cloud platform 115 and/or subsystem 125 may support a communication process flow management service. The communication process flow management service may be used to configure a communication process flow that manages electronic communications (e.g., emails, messages, advertisements) between a tenant (e.g., client 105) of a multitenant system and a set of users (e.g., contacts 110) associated with the tenant. The communication process flow may include various actions that are used to manage the electronic communications. The actions may include send-email actions, decision splits, wait periods, and the like, and the communication process flow may include multiple routes (or sets of actions) that are configured via the management service. Whether a user receives messages according to various routes may depend on attribute and behavior data associated with the user. Such data may be stored in association with user identifiers at the data center 120. Communication process flows may be configured by teams of administrators or users associated with the tenant (e.g., employees of the tenant organization). In some cases, various levels of configuration, review, activation, and monitoring may be performed by multiple users using the communication process flow management service.
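As an illustrative sketch only (the action types, attribute names, and routing logic below are assumptions, not the disclosed implementation), a flow of entry rules, send actions, and decision splits might route a user like this:

```python
from dataclasses import dataclass, field

# Hypothetical action record: kind names and config keys are illustrative.
@dataclass
class Action:
    kind: str                      # "entry", "email", or "split"
    config: dict = field(default_factory=dict)

def route_user(user: dict, actions: list[Action]) -> list[str]:
    """Walk a user through a linear flow, returning the steps taken."""
    steps = []
    for action in actions:
        if action.kind == "entry":
            # Entry rule filters on an attribute (e.g., purchased recently);
            # users failing the rule never "enter" the flow.
            if not user.get(action.config["attribute"]):
                return steps
            steps.append("entered")
        elif action.kind == "email":
            steps.append(f"email:{action.config['subject']}")
        elif action.kind == "split":
            # Decision split on logged engagement (e.g., an open event).
            branch = "opened" if user.get("opened_last_email") else "unopened"
            steps.append(f"split:{branch}")
    return steps

flow = [
    Action("entry", {"attribute": "purchased_recently"}),
    Action("email", {"subject": "Welcome back"}),
    Action("split", {}),
]

print(route_user({"purchased_recently": True, "opened_last_email": True}, flow))
# ['entered', 'email:Welcome back', 'split:opened']
```

A user who fails the entry rule takes no steps, mirroring the filtering behavior of an entry-source action.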
These multiple users may communicate regarding performance and planning associated with a communication process flow via communication platforms that are external from the cloud platform 115 and/or subsystem 125 that supports the communication process flow and the communication process flow management service. For example, the users may communicate via a communication platform that supports chat rooms or channels that may be organized by topic, teams, or the like. However, because the communication platform is external to the communication process flow management service, limited cross-platform compatibility may exist. For example, data associated with an active communication process flow (e.g., a flow that is managing current and future communications) may only be accessible at the communication process flow management service. Thus, discussion of such data at a communication platform may require a user to manually post the data into the communication platform. Further, such discussion in a communication platform may result in a decision to interact with the communication process flow (e.g., by modifying, activating, or pausing the communication process flow). As such discussion and decisions may occur within the communication platform, the user is required to access the communication process flow management service to modify the communication process flow or interact with the communication process flow. Thus, the separation of data and access between the communication process flow and the communication platform may result in workflow inefficiencies and limited cross-platform compatibility.
Additionally, because a user may be required to interact with a communication process flow directly within the communication process flow management service, the communication process flow may utilize significant processing and communication resources by transmitting electronic communications before a user is able to interact with the communication process flow. Real time or near-real time interaction with a communication process flow may reduce wasteful communications. Techniques described herein may support cross-platform interaction and data access between a communication process flow management service supported by the cloud platform 115 and an external communication platform. In some cases, the communication process flow management service and the communication platform may be linked for intercommunication and interaction. The communication process flow management service may periodically, or upon satisfaction of some condition, post communication metrics associated with a communication process flow into one or more channels of the communication platform. The communication process flow management service may also post logs, updates, events, or the like associated with the communication process flow into one or more channels of the communication platform. The communication metrics and/or logs may be posted in the form of text, graphs, or a combination thereof. Additionally, a user may interact with the communication process flow management service directly from the communication platform. The interactions with the communication process flow from the communication platform may be performed in response to the posting of the metrics and/or events into the communication platform by the communication process flow management service. Various events occurring with respect to a communication process flow may be posted to a communication platform using the cross-platform compatibility techniques described herein.
Events may include creation events, update events, or delete events occurring with respect to an action of the communication process flow or the communication process flow itself. Upon detection of such events, the communication process flow management platform may generate a data object that includes metadata associated with the event and transmit the data object with a request to the communication platform. The request may cause an entry to be posted into a channel of the communication platform. The entry may include metadata such as the event type and the user that performed or caused the event, among other information. In some cases, the events are posted to the communication platform according to a configuration associated with the communication process flow. In some cases, a user may selectively cause posting of a communication process flow object into the communication platform. For example, a user may select at least a portion of a communication process flow object to cause transmission of a data object to the communication platform to cause posting of an entry into the communication platform. The entry may display metadata associated with the selected object, which may result in discussion of the object in the communication platform. Example objects may include actions or activities, content items (e.g., emails, pictures within emails, SMS messages, etc.), events occurring in association with transmitted communications (e.g., logged open events), behavior objects (e.g., user behavior rules for triggering actions), and other communication process flow objects. Cross-platform interaction between the communication process flow management service and the communication platform may support improved workflow efficiencies and reduced processing overhead by reducing wasteful communications and data access requests.
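The event-to-entry path described above might look like the following minimal sketch. The payload field names, the event-type string, and the injected transport are all assumptions for illustration; the disclosure does not specify a wire format:

```python
import json
from datetime import datetime, timezone

def build_event_object(event_type, flow_id, action_id, user):
    """Assemble a JSON-serializable data object describing a flow event.

    Field names here are hypothetical; the point is that the object carries
    the event type, the affected flow/action, the acting user, and a timestamp.
    """
    return {
        "event_type": event_type,          # e.g. "action.updated"
        "flow_id": flow_id,
        "action_id": action_id,
        "performed_by": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def post_event(event, channel, send):
    """Wrap the data object in a channel-addressed request and hand it to an
    injected transport (standing in for an HTTP call to the platform endpoint)."""
    request = {"channel": channel, "payload": event}
    return send(json.dumps(request))

# A fake transport collects the serialized request instead of sending it.
sent = []
post_event(build_event_object("action.updated", "flow-1", "act-230b", "alice"),
           "#marketing", sent.append)
print(json.loads(sent[0])["payload"]["event_type"])  # action.updated
```

Injecting the transport keeps the serialization logic testable without a live endpoint; a real deployment would substitute an HTTP POST to the configured webhook.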
For example, as the techniques described herein support data associated with a communication process flow being accessible from the communication platform, the techniques may support reduced data access requests at the communication platform. As another example, as the techniques described herein support interaction with a communication process flow directly from the communication platform, the techniques may support reduced use of wasteful communication resources. Additionally, the techniques support reduced overhead associated with switching between various platforms to perform various tasks. It should be appreciated by a person skilled in the art that one or more aspects of the disclosure may be implemented in a system 100 to additionally or alternatively solve other problems than those described above. Furthermore, aspects of the disclosure may provide technical improvements to “conventional” systems or processes as described herein. However, the description and appended drawings only include example technical improvements resulting from implementing aspects of the disclosure, and accordingly do not represent all of the technical improvements provided within the scope of the claims. FIG. 2 illustrates an example of a computing architecture 200 that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The computing architecture 200 includes a communication process flow management service 210, a communication platform 215, and a data services platform 220. Each of the communication process flow management service 210, the communication platform 215, and the data services platform 220 may be implemented on a respective server. In some cases, the server that supports the communication process flow management service 210 may represent aspects of the cloud platform 115 and subsystem 125 of FIG. 1. The data services platform 220 may also be implemented in aspects of the cloud platform 115 and subsystem 125 of FIG. 1.
The systems supporting the communication platform 215 may be logically or physically separate computing systems from the systems supporting the communication process flow management service 210 and/or data services platform 220. As described herein, the communication process flow management service 210 may support creation, configuration, and implementation of various communication process flows (e.g., a communication process flow 225) that control electronic communications between a tenant and a set of users associated with the tenant. For example, users associated with the tenant may use the communication process flow management service 210 to configure actions (e.g., actions 230) that are associated with processor executable instructions for management of electronic communications. For example, action 230-a may be associated with instructions that are used to filter users into the communication process flow 225. That is, action 230-a may define a rule that is used to determine whether a user of a set of users (e.g., associated with a tenant) is to receive electronic communications based on the communication process flow 225. The rule may be based on attribute data and/or web behavior data. For example, users that have purchased a product from the tenant organization in the last six months may receive electronic communications from the tenant based on the communication process flow 225. Users that do not satisfy this rule may not "enter" this example communication process flow 225. Other actions 230 define message transmissions, decision splits, and other processes. For example, each user that satisfies the rule of action 230-a may receive an email according to action 230-b. The action 230-b may be associated with specific content that is to be emailed to the users. Action 230-c may define a decision split between users.
For example, users that opened the email corresponding to action 230-b may be routed to action 230-d, while users that did not open the email corresponding to action 230-b may be routed to action 230-e. Additionally or alternatively, the decision split action 230-c may consider attribute data associated with users, web behavior data (e.g., web page interaction), among other parameters, to route users through the communication process flow 225. Data services platform 220 may correspond to various services that monitor, aggregate, and display various metrics associated with the communication process flows supported by the communication process flow management service 210. For example, the data services platform 220 may include a metric engine 235 that generates and/or displays engagement metrics, such as open rate, click rate, unsubscribe rate, send rate, and the like associated with one or more electronic communications of the communication process flows supported by the communication process flow management service 210. The engagement metrics may be displayed in charts or graphs. The data services platform 220 may also support an artificial intelligence (AI) service 240 that analyzes communication data associated with the communication process flow supported by the communication process flow management service 210. In some cases, the AI service 240 may identify, using AI techniques, anomalies associated with the communications. For example, if a communication metric (e.g., open rate) for communication process flow 225 falls below an expected threshold, then the AI service 240 may surface an alert. The metric engine 235 and the AI service 240 may be implemented as part of the same service (e.g., supported by the same server) or as separate/distinct services. The data services platform 220 may transmit queries or requests to a data store associated with or managed by the communication process flow management service 210 to support metrics and anomaly detection.
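The anomaly check the AI service performs on an engagement metric can be approximated with a simple threshold rule. The tolerance value and the threshold-based method here are assumptions; the disclosure only says a metric falling below an expected threshold surfaces an alert:

```python
def detect_anomaly(observed_rate: float, expected_rate: float,
                   tolerance: float = 0.5) -> bool:
    """Flag a metric that falls well below expectation.

    An observed rate below `tolerance` * expected_rate is treated as an
    anomaly worth surfacing as an alert (tolerance chosen arbitrarily).
    """
    return observed_rate < expected_rate * tolerance

# Expected 40% open rate: 12% observed is anomalous, 35% is not.
print(detect_anomaly(0.12, 0.40))  # True
print(detect_anomaly(0.35, 0.40))  # False
```

A production AI service would likely replace the fixed tolerance with a learned or statistical baseline, but the alerting contract (boolean: surface an alert or not) stays the same.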
That is, the data services platform 220 may receive communication data from the communication process flow management service 210 to support metric generation and AI services. The communication platform 215 may represent a chat or instant messaging service that is used to support business functions. For example, teams associated with a tenant may use the communication platform 215 to communicate regarding various business functions, including communication process flows supported by the communication process flow management service 210. The teams may use the communication platform to hold a continuous discussion regarding aspects of the communication process flow 225, make decisions regarding the communication process flow 225, and the like. For example, based on data generated by the data services platform 220, the users may decide to reconfigure or interact with the communication process flow 225. However, as described herein, the communication process flow management service 210 and the communication platform 215 are separate platforms, and as such, have limited cross-platform compatibility. Thus, if a decision is made regarding the communication process flow 225 within the communication platform 215, a user may be required to separately access the communication process flow management service 210 to change or interact with the communication process flow 225. Further, the data services platform 220 and the communication platform 215 may be separate systems, and as such, a user may be required to manually input data (e.g., metrics and/or anomalies) regarding the communication process flow 225 into a channel of the communication platform 215 to impact discussions. Techniques described herein support cross-platform compatibility between the communication process flow management service 210 and the communication platform 215, and between the data services platform 220 and the communication platform 215.
To support such compatibility, the communication platform 215 may be configured with endpoints (e.g., a webhook or application) that are used by the communication process flow management service 210 and/or the data services platform 220 to transmit requests to the communication platform 215. The requests may include data objects (e.g., data object 250) that are ingestible by the communication platform 215 for posting into one or more channels. Thus, the data objects 250 may include data regarding events occurring at the communication process flow management service 210, metrics detected by the metric engine 235, anomalies detected by the AI service 240, and/or data associated with selected communication process flow objects. Further, the communication platform may be configured to transmit requests to the communication process flow management service 210 and/or the data services platform 220. For example, a user may enter a command or activate a user interface (UI) component within the communication platform 215 to request additional data associated with the communication process flow 225 (e.g., refined metrics or additional data associated with the anomaly). In some cases, a user may interact directly with the communication process flow 225 by entering a command or activating a UI component within the communication platform 215. The interaction may include pausing the communication process flow 225 in response to data being posted within the communication platform 215. To support the cross-platform compatibility, the various services may be configured with endpoints and authorizations. For example, a user may manually enter an endpoint associated with a workspace (e.g., a collection of communication channels) or a particular channel at the communication platform into the communication process flow management service 210 and/or the data services platform 220. In some cases, an application may be downloaded to interact with the communication platform 215.
The application may include various authentication flows and setup flows to configure the endpoints for the various services. Thus, when setting up the application, the user may log into the account for the communication process flow management service 210 to authenticate the user and to set up the respective endpoints. After configuring the respective services with the endpoints, the endpoints may be used to transmit requests with data objects to post the data into the communication platform. The entries (e.g., an entry 255) may be posted by a participant to the channel (e.g., a bot that is configured to post into the channel). As described herein, various events associated with the communication process flow may be relayed to the communication platform 215 for posting into a channel. For example, the communication process flow management service may monitor changes associated with the communication process flow 225 and post the changes to the communication platform 215 such that the events are effectively logged. As such, the team members may monitor the changes to the communication process flow 225 without having to access the communication process flow management service to determine changes. In some cases, a configuration at the communication process flow management service indicates whether various changes are to be relayed to the communication platform 215. Thus, when a change is detected that satisfies the configuration parameters, the communication process flow management service or platform may generate data object 250. The data object 250 may include metadata associated with the change event. For example, the metadata may include the user that implemented the changes, the action 230 that was impacted, a timestamp, or the like. Users of the communication platform 215 may then discuss the changes. In some examples, the data object 250 is configured with a link (e.g., a uniform resource locator (URL)) to the communication process flow management service 210.
As such, the users of the communication platform 215 may efficiently access the communication process flow management service 210 based on the events. In some examples, a user may selectively share a communication process flow object associated with the communication process flow to the communication platform 215. For example, a user may highlight a portion of an object, which may trigger activation of a UI component (e.g., a tooltip) that displays a share button. The share button may be clicked to trigger generation of data object 250 that includes metadata associated with the selected communication process flow object such that the metadata is displayed in an entry into the communication platform 215. In some cases, the user may select a channel from the communication platform 215 where the object/entry is to be posted. For example, after selection of the share button on the UI component, a modal may be displayed that prompts the user to select one or more channels for sharing the object. Thus, using these techniques, a user may selectively post objects from the communication process flow management service 210 or the data services platform 220 to the communication platform 215, for further improved communication and cross-platform compatibility. FIG. 3 illustrates an example of a computing architecture 300 that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The computing architecture 300 includes a communication process flow management service 310 and a communication platform 315, which may be examples of the corresponding systems as described with respect to FIGS. 1 and 2. As described with respect to FIG. 2, the communication process flow management service 310 supports creation, configuration, and deployment of communication process flows. Additionally, the communication process flow management service 310 and the communication platform 315 are linked for cross-platform compatibility.
For example, the communication process flow management service 310 and the communication platform 315 are configured with respective endpoints for communicating with the other systems. The communication process flow management service 310 supports a configuration page 325 that is used to configure object posting to the communication platform 315. As illustrated in the configuration page 325, the user may select which events are posted by selecting check boxes. Events may be associated with the communication process flow object itself (e.g., created, versioned, paused, activated, stopped) as well as various actions (e.g., actions 230 of FIG. 2) of the communication process flow. Actions may include entry sources, email messages, short message service (SMS) messages, push messages, decision splits, wait by attribute, wait by duration, engagement splits, or other types of actions that may be configured for a communication process flow. For each type of action for a communication process flow, the user may selectively activate object posting for creation, update, and delete events associated with the action. The user may also enter a channel name of the communication platform 315 where the events are to be posted. The channel name may be used as part of the request to the endpoint such that the event is posted in the correct channel. A user may create, update, or otherwise modify a communication process flow. If object posting is activated, then the communication process flow management service 310 may determine whether the event satisfies the configuration parameters as configured at configuration page 325. If so, then the communication process flow management service 310 may generate a data object 350 and transmit the data object 350 to the communication platform 315 for posting. In some examples, the detection of the event is triggered by a save action at the communication process flow management service 310.
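The configuration page's checkbox matrix might translate to a structure like the one below, where an event is relayed only if its (action type, event kind) pair is enabled. The dictionary encoding and the key names are one possible sketch, not the disclosed data model:

```python
# Hypothetical posting configuration: per action type, which event kinds
# are relayed, plus the target channel entered by the user.
config = {
    "channel": "#journey-updates",
    "events": {
        "email":          {"create": True, "update": True, "delete": False},
        "decision_split": {"create": True, "update": False, "delete": False},
    },
}

def should_post(config: dict, action_type: str, event_kind: str) -> bool:
    """Check the configured checkbox for this action/event combination.

    Unconfigured action types or event kinds default to not posting.
    """
    return config["events"].get(action_type, {}).get(event_kind, False)

print(should_post(config, "email", "update"))           # True
print(should_post(config, "decision_split", "delete"))  # False
print(should_post(config, "sms", "create"))             # False (not configured)
```

The channel value would then accompany the data object in the request so the entry lands in the correct channel.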
A user may be performing various configurations for the communication process flow, such as adding new actions, linking actions, and the like. In some examples, the changes are not final changes. As such, rather than creating an event for each new configuration, the communication process flow management service 310 may wait until the communication process flow is saved. In response, the communication process flow management service 310 may perform a differential operation to determine the updates to the communication process flow and detect the corresponding events/updates. As described, the communication process flow management service 310 may transmit a request to the communication platform 315. The request may include the data object 350, which may be an example of a JavaScript Object Notation (JSON) object. The JSON object may include attribute-value pairs with indications of the event and information about the event (e.g., user identifier, action type). In some examples, the data object 350 may include content associated with the action or event. For example, if the user adds an email action or updates an email action with new content (e.g., email subject line, images), then the data object may include indications of the content of the email. As illustrated in FIG. 3, the communication platform posts an entry 355 (e.g., by user “Journey bot”) that includes an indication of the action as well as the content associated with the action, which may be identified from the data object 350 included in the request. The entry 355 also includes a UI component (e.g., a button) that is associated with a link to the communication process flow management service 310. Thus, a user may activate the UI component to be directed to the communication process flow management service 310, and more specifically, to a page associated with the communication process flow where the action or event occurred.
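The save-triggered differential operation could be sketched as a comparison of the action sets in two saved versions, emitting one event per created, deleted, or updated action. The version representation (a mapping from action id to configuration) is an assumption:

```python
def diff_flow_versions(before: dict, after: dict) -> list[tuple]:
    """Compare two saved flow versions (action-id -> action config) and
    emit one (event_kind, action_id) pair per detected change."""
    events = []
    for action_id in after.keys() - before.keys():
        events.append(("create", action_id))       # newly added action
    for action_id in before.keys() - after.keys():
        events.append(("delete", action_id))       # removed action
    for action_id in before.keys() & after.keys():
        if before[action_id] != after[action_id]:
            events.append(("update", action_id))   # reconfigured action
    return sorted(events)

v1 = {"a1": {"kind": "email", "subject": "Hi"}, "a2": {"kind": "wait"}}
v2 = {"a1": {"kind": "email", "subject": "Hello"}, "a3": {"kind": "split"}}
print(diff_flow_versions(v1, v2))
# [('create', 'a3'), ('delete', 'a2'), ('update', 'a1')]
```

Batching the diff at save time, rather than emitting an event per keystroke, matches the described behavior of skipping non-final intermediate changes.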
In some examples, the entry may include a UI component that a user may activate to view more details associated with the event. If activated, the communication platform315may transmit a request330(e.g., for more metadata) to the communication process flow management service310. In response, the communication process flow management service310may generate a new data object350with more metadata associated with the event or the impacted action. The new data object350may be transmitted to the communication platform315in a request and displayed to the user. When the additional data is requested, the additional metadata may be displayed in a new UI in the communication platform315. For example, the additional metadata may be displayed in a modal UI component. In some examples, updates to various actions are to be approved by an authorized user. In such cases, the data object350may be configured such that the posted entry355includes approval buttons that are to be activated by an authorized user. If approved, then the communication platform315may transmit the request330that includes an approval indication. If rejected, then the communication platform315may transmit the request330that includes a rejection indication. If rejected, then the communication process flow management service310may undo the event associated with the action. In some examples, the communication process flow management service310and/or the communication platform315may determine that the user that activated the approval or rejection UI component is authorized (e.g., has permissions) to do so before processing the associated action. As described herein, the user may selectively share objects associated with a communication process flow to the communication platform315. For example, a user that is configuring or otherwise interacting with a communication process flow at the communication process flow management service310may select an object to share it to the communication platform315.
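The approval/rejection handling described above can be sketched as follows. The function name, request fields, and undo callback are all assumptions made for illustration; the document does not specify the actual interface:

```python
# Hypothetical handler for an approve/reject request (cf. request 330) arriving
# from the communication platform after a user activates an approval button.
def handle_review(decision_request, authorized_users, undo_event):
    user = decision_request["user"]
    decision = decision_request["decision"]
    if user not in authorized_users:
        # Only users with permissions may approve or reject the update.
        return "ignored"
    if decision == "reject":
        # A rejection undoes the event associated with the action.
        undo_event(decision_request["event_id"])
        return "rejected"
    return "approved"
```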
As illustrated in UI345, a user is viewing a pane associated with a data extension action340-a, which may be an example of an action that determines whether a user is routed into a communication process flow. The user may highlight (e.g., using a cursor or touchscreen) a portion of the aspects of the data extension action object. InFIG.3, the user highlights the title of the pane "Data Extension Summary," and the highlighting may trigger display of a UI component360. The UI component360may be referred to as a "tooltip" in some cases. The UI component360includes buttons for sharing and copying. If the user activates the button for sharing, then data object350may be generated to include metadata associated with the data extension action340-aand transmitted to the communication platform to be posted as an entry, such as entry355. In some examples, activation of the share button at the communication process flow management service310may cause a modal365to be displayed in UI345, and the modal365may include a field that prompts the user to enter a channel of the communication platform at which the object is to be shared. The modal365may also be referred to as a prompt. In some examples, a user may enter a message or user mentions into the modal365, such that the message is displayed in the communication platform315entry and/or such that the users are tagged. Further, the modal365may display a message preview that previews the entry into the communication platform315. Various types of objects may be shared to the communication platform. Example objects may include user or subscriber objects, activity objects, objects corresponding to a whole communication process flow, objects corresponding to content (e.g., email objects, image objects, text objects) that may be included in messages controlled by the communication process flow, event objects, etc.
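The modal inputs described above (a channel, an optional message, and user mentions) might be assembled into a share request along these lines; all names in this sketch are illustrative rather than taken from an actual API:

```python
# Hypothetical assembly of a share payload from modal 365's inputs.
def build_share_payload(object_meta, channel, message="", mentions=None):
    return {
        "channel": channel,          # channel entered by the user in the modal
        "text": message,             # optional message shown in the entry
        "mentions": mentions or [],  # users to be tagged in the posted entry
        "object": object_meta,       # metadata for the shared object
    }
```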
A user may highlight just a portion of the displayed aspects of the object to trigger the UI component360for sharing to the communication platform315. In some examples, a share button or UI component may be included with each displayed object for sharing. Thus, a user may not be required to highlight to share. Other sharing triggering techniques are contemplated within the scope of the present disclosure. FIG.4illustrates an example of a process flow400that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The process flow400includes a communication process flow management service405and a communication platform410, which may be examples of the corresponding systems as described with respect toFIGS.1through3. At415, the communication process flow management service405may receive an indication of activation of event logging in the communication platform for the communication process flow. For example, a user may activate a UI component to trigger event logging. The activation may be performed before creation of the communication process flow, during configuration of the communication process flow, or after the communication process flow is active. In some cases, activation of the UI component prompts the user to enter a webhook URL for the communication process flow. The webhook URL may link to a workplace (e.g., a set of channels) at the communication platform, a particular channel, or a combination thereof. In some cases, to identify the webhook URL, the user provisions the communication platform410with webhook configurations by downloading an application, selecting a menu item, or the like, at the communication platform. The communication platform410may be provisioned with the webhook endpoint thereafter, and the user may post the webhook endpoint to the communication process flow management service405.
In some cases, receiving the indication of activation of event logging includes receiving a request from the communication platform410to activate event logging. In such cases, the user may download an application to the communication platform, and the application may configure the endpoints for cross-platform compatibility between the communication process flow management service405and the communication platform410. For example, during setup of the application, the user may be prompted to authenticate to the communication process flow management service405, where the user may enter login information. That is, at420, the communication process flow management service405activates an authentication flow for the communication platform. If login is successful, then the communication process flow management service405and the communication platform410may programmatically configure the endpoints that each system is to use for communication with the other system. At425, the communication process flow management service405may detect a save action corresponding to the communication process flow. The communication process flow management service405may perform a differential operation to detect an event associated with the communication process flow. The differential operation may compare a saved communication process flow to a last version of the communication process flow to detect changed aspects of the communication process flow. At430, the communication process flow management service405may detect an event associated with the communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. As described herein, the event may be detected after detecting a save action and performing a differential operation.
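The differential operation at425can be sketched as a comparison between the last version of the flow and the newly saved version. The flat action-map structure below is an assumption made purely to keep the illustration short:

```python
# Hypothetical diff: each flow version is a dict mapping action IDs to action
# configurations; the diff yields create/update/delete events per action.
def diff_flow(last_version, saved_version):
    events = []
    for action_id, action in saved_version.items():
        if action_id not in last_version:
            events.append(("create", action_id))
        elif action != last_version[action_id]:
            events.append(("update", action_id))
    for action_id in last_version:
        if action_id not in saved_version:
            events.append(("delete", action_id))
    return events
```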
The event may be a create event (e.g., creation of a communication process flow or creation of a new action), an update event (e.g., changed action configurations or an update to the communication process flow), or a delete event (e.g., deletion of an action) associated with a plurality of action types of the communication process flow. In some cases, the communication process flow management service405may detect the event based on a user selectively sharing an object. For example, the communication process flow management service405may detect, from a user, an indication of selection of a communication process flow object associated with the communication process flow. Detection of the event may include receiving, at the communication process flow management service405, an indication to share a communication process flow object associated with the communication process flow to the communication platform. Receiving the indication to share may include receiving an indication of activation of a UI component (e.g., a button in a tooltip) displayed in association with the communication process flow object. In some cases, the communication process flow management service may cause, based at least in part on detecting the event, display of a prompt (e.g., a prompt in a modal) associated with the communication platform. At435, the communication process flow management service405may determine, based at least in part on detecting the event, that the event is configured to be posted in the communication channel in accordance with one or more configuration parameters associated with the communication process flow. In some cases, the determination may include determining that the event corresponds to an action of the communication process flow that is enabled for posting into the communication channel in accordance with the one or more configuration parameters. The one or more configuration parameters may include an indication of a channel name of the communication channel.
The one or more configuration parameters may indicate whether a create event, an update event, or a delete event associated with a plurality of action types of the communication process flow are configured to be posted into the communication channel. At440, the communication process flow management service405may generate, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The data object may be an example of a JSON object that is ingestible by the communication platform for posting the entry associated with the event into the communication channel. The metadata may include the user that caused the event, the event type, a timestamp, changed configurations, or the like. The data object may also include a link (URL) to the configuration page of the communication process flow management service405. At445, the communication process flow management service405may transmit, to the communication platform410, a request that includes the data object. The request may be configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. The request may be transmitted via a webhook endpoint of the communication platform410. At450, the communication platform410may post the entry into the channel. The entry may display the metadata included in the data object. In some examples, the entry may include a button or UI component that links (e.g., via the URL included in the data object) to the communication process flow management service405.
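The transmission at445might be sketched as a plain HTTP POST of the JSON data object to the platform's webhook endpoint. The URL and payload here are illustrative, and the injectable opener is only a testing convenience, not part of the described system:

```python
import json
import urllib.request

# Hypothetical sketch of posting a data object to a webhook endpoint; the
# endpoint URL and payload fields are assumptions for illustration.
def post_entry(webhook_url, data_object, opener=urllib.request.urlopen):
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(data_object).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # The opener performs the actual HTTP POST (urlopen by default).
    return opener(request)
```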
In some examples, the entry may include a button or UI component that may be activated to view additional details. In such cases, if the user clicks the UI component, then the communication platform410may generate and transmit, and the communication process flow management service405may receive, a request for additional metadata associated with the event. In response, the communication process flow management service405may generate a second data object and transmit a second request to the communication platform410. The second request may cause a UI component (e.g., a modal window) to display the additional metadata. FIG.5shows a block diagram500of a device505that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The device505may include an input module510, an output module515, and an object posting manager520. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The input module510may manage input signals for the device505. For example, the input module510may identify input signals based on an interaction with a modem, a keyboard, a mouse, a touchscreen, or a similar device. These input signals may be associated with user input or processing at other components or devices. In some cases, the input module510may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system to handle input signals. The input module510may send aspects of these input signals to other components of the device505for processing. For example, the input module510may transmit input signals to the object posting manager520to support techniques for cross-platform communication process flow object posting. In some cases, the input module510may be a component of an I/O controller710as described with reference toFIG.7.
The output module515may manage output signals for the device505. For example, the output module515may receive signals from other components of the device505, such as the object posting manager520, and may transmit these signals to other components or devices. In some examples, the output module515may transmit output signals for display in a user interface, for storage in a database or data store, for further processing at a server or server cluster, or for any other processes at any number of devices or systems. In some cases, the output module515may be a component of an I/O controller710as described with reference toFIG.7. For example, the object posting manager520may include an event detection component525, a data object generation component530, a request interface535, or any combination thereof. In some examples, the object posting manager520, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the input module510, the output module515, or both. For example, the object posting manager520may receive information from the input module510, send information to the output module515, or be integrated in combination with the input module510, the output module515, or both to receive information, transmit information, or perform various other operations as described herein. The object posting manager520may support data processing in accordance with examples as disclosed herein. The event detection component525may be configured as or otherwise support a means for detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. 
The data object generation component530may be configured as or otherwise support a means for generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The request interface535may be configured as or otherwise support a means for transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. FIG.6shows a block diagram600of an object posting manager620that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The object posting manager620may be an example of aspects of an object posting manager or an object posting manager520, or both, as described herein. The object posting manager620, or various components thereof, may be an example of means for performing various aspects of techniques for cross-platform communication process flow object posting as described herein. For example, the object posting manager620may include an event detection component625, a data object generation component630, a request interface635, an activation component640, a save detection component645, a configuration component650, a UI component655, an object selection component660, an object sharing component665, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The object posting manager620may support data processing in accordance with examples as disclosed herein. 
The event detection component625may be configured as or otherwise support a means for detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The data object generation component630may be configured as or otherwise support a means for generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The request interface635may be configured as or otherwise support a means for transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. In some examples, the activation component640may be configured as or otherwise support a means for receiving an indication of activation of event logging in the communication platform for the communication process flow, wherein the request is transmitted based at least in part on the indication of activation. In some examples, to support receiving the indication, the activation component640may be configured as or otherwise support a means for receiving the indication of a uniform resource locator for a webhook associated with the communication channel, a workspace in the communication platform, or a combination thereof.
In some examples, the activation component640may be configured as or otherwise support a means for activating an authentication flow for the communication platform. In some examples, the activation component640may be configured as or otherwise support a means for receiving an indication of a uniform resource locator for a webhook that is associated with the communication channel, a workspace in the communication platform, or a combination thereof via the authentication flow. In some examples, to support generating the data object, the data object generation component630may be configured as or otherwise support a means for generating a JavaScript object notation (JSON) object that is ingestible by the communication platform for posting the entry associated with the event into the communication channel. In some examples, the JSON object includes attribute value pairs corresponding to the metadata associated with the event. In some examples, to support generating the data object, the data object generation component630may be configured as or otherwise support a means for generating the data object that includes an indication of a link to the communication process flow management service. In some examples, the request interface635may be configured as or otherwise support a means for receiving, at the communication process flow management service and from the communication platform in response to transmitting the data object, a second request to view a user interface associated with the detected event. In some examples, the UI component655may be configured as or otherwise support a means for displaying, based at least in part on receiving the second request, the user interface associated with the detected event. In some examples, the save detection component645may be configured as or otherwise support a means for detecting a save action corresponding to the communication process flow.
In some examples, the event detection component625may be configured as or otherwise support a means for performing, based at least in part on detecting the save operation, a differential operation to detect the event. In some examples, the configuration component650may be configured as or otherwise support a means for determining, based at least in part on detecting the event, that the event is configured to be posted in the communication channel in accordance with one or more configuration parameters associated with the communication process flow, wherein the data object is transmitted based at least in part on determining that the event is configured to be posted. In some examples, to support determining that the event is configured to be posted, the configuration component650may be configured as or otherwise support a means for determining that the event corresponds to an action of the communication process flow that is enabled for posting into the communication channel in accordance with the one or more configuration parameters. In some examples, the one or more configuration parameters include an indication of a channel name of the communication channel. In some examples, the one or more configuration parameters indicate whether a create event, an update event, or a delete event associated with a plurality of action types of the communication process flow are configured to be posted into the communication channel. In some examples, the one or more configuration parameters indicate whether a creation event, a versioned event, a draft created event, a paused event, a resumed event, a started event, or a stopped event associated with the communication process flow are configured to be posted in the communication channel.
In some examples, to support detecting the event, the object selection component660may be configured as or otherwise support a means for detecting, from a user, an indication of selection of a communication process flow object associated with the communication process flow, wherein the request is transmitted based at least in part on detecting the indication of selection of the communication process flow object. In some examples, the communication process flow object is an email object, an action object, a content object, or a combination thereof. In some examples, to support detecting the event, the object sharing component665may be configured as or otherwise support a means for receiving, at the communication process flow management service, an indication to share a communication process flow object associated with the communication process flow to the communication platform, wherein the request is transmitted based at least in part on detecting the indication to share. In some examples, to support receiving the indication to share, the UI component655may be configured as or otherwise support a means for receiving an indication of activation of a user interface (UI) component displayed in association with the communication process flow object. In some examples, the UI component655may be configured as or otherwise support a means for causing, based at least in part on detecting the event, display of a prompt associated with the communication platform, wherein the request is transmitted based at least in part on causing display of the prompt. In some examples, the UI component655may be configured as or otherwise support a means for receiving, at the prompt, an indication of the communication channel of the communication platform, wherein the request is transmitted to cause posting of the entry into the communication channel of the communication platform based at least in part on receiving the indication of the communication channel.
In some examples, the request interface635may be configured as or otherwise support a means for receiving, in response to transmitting the request, a request for additional metadata associated with the event. In some examples, the data object generation component630may be configured as or otherwise support a means for generating, in response to receiving the request for the additional metadata, a second data object including the additional metadata. In some examples, the request interface635may be configured as or otherwise support a means for transmitting, from the communication process flow management service and to the communication platform, a second request that includes the second data object, wherein the second request is configured to cause a user interface including an indication of the additional metadata to be displayed in the communication platform. FIG.7shows a diagram of a system700including a device705that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The device705may be an example of or include the components of a device505as described herein. The device705may include components for bi-directional data communications including components for transmitting and receiving communications, such as an object posting manager720, an I/O controller710, a database controller715, a memory725, a processor730, and a database735. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus740). The I/O controller710may manage input signals745and output signals750for the device705. The I/O controller710may also manage peripherals not integrated into the device705. In some cases, the I/O controller710may represent a physical connection or port to an external peripheral.
In some cases, the I/O controller710may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller710may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller710may be implemented as part of a processor730. In some examples, a user may interact with the device705via the I/O controller710or via hardware components controlled by the I/O controller710. The database controller715may manage data storage and processing in a database735. In some cases, a user may interact with the database controller715. In other cases, the database controller715may operate automatically without user interaction. The database735may be an example of a single database, a distributed database, multiple distributed databases, a data store, a data lake, or an emergency backup database. Memory725may include random-access memory (RAM) and read-only memory (ROM). The memory725may store computer-readable, computer-executable software including instructions that, when executed, cause the processor730to perform various functions described herein. In some cases, the memory725may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor730may include an intelligent hardware device (e.g., a general-purpose processor, a digital signal processor (DSP), a CPU, a microcontroller, an ASIC, a field programmable gate array (FPGA), a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor730may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor730.
The processor730may be configured to execute computer-readable instructions stored in a memory725to perform various functions (e.g., functions or tasks supporting techniques for cross-platform communication process flow object posting). The object posting manager720may support data processing in accordance with examples as disclosed herein. For example, the object posting manager720may be configured as or otherwise support a means for detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The object posting manager720may be configured as or otherwise support a means for generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The object posting manager720may be configured as or otherwise support a means for transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. FIG.8shows a flowchart illustrating a method800that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The operations of the method800may be implemented by a server or its components as described herein. For example, the operations of the method800may be performed by a server as described with reference toFIGS.1through7. In some examples, a server may execute a set of instructions to control the functional elements of the server to perform the described functions. Additionally or alternatively, the server may perform aspects of the described functions using special-purpose hardware. 
At805, the method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The operations of805may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of805may be performed by an event detection component625as described with reference toFIG.6. At810, the method may include generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The operations of810may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of810may be performed by a data object generation component630as described with reference toFIG.6. At815, the method may include transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. The operations of815may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of815may be performed by a request interface635as described with reference toFIG.6. FIG.9shows a flowchart illustrating a method900that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The operations of the method900may be implemented by a server or its components as described herein. For example, the operations of the method900may be performed by a server as described with reference toFIGS.1through7. In some examples, a server may execute a set of instructions to control the functional elements of the server to perform the described functions. 
Additionally or alternatively, the server may perform aspects of the described functions using special-purpose hardware. At905, the method may include receiving an indication of activation of event logging in the communication platform for the communication process flow, wherein the request is transmitted based at least in part on the indication of activation. The operations of905may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of905may be performed by an activation component640as described with reference toFIG.6. At910, the method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The operations of910may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of910may be performed by an event detection component625as described with reference toFIG.6. At915, the method may include generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The operations of915may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of915may be performed by a data object generation component630as described with reference toFIG.6. At920, the method may include determining, based at least in part on detecting the event, that the event is configured to be posted in the communication channel in accordance with one or more configuration parameters associated with the communication process flow, wherein the data object is transmitted based at least in part on determining that the event is configured to be posted. The operations of920may be performed in accordance with examples as disclosed herein. 
In some examples, aspects of the operations of920may be performed by a configuration component650as described with reference toFIG.6. At925, the method may include transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. The operations of925may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of925may be performed by a request interface635as described with reference toFIG.6. FIG.10shows a flowchart illustrating a method1000that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The operations of the method1000may be implemented by a server or its components as described herein. For example, the operations of the method1000may be performed by a server as described with reference toFIGS.1through7. In some examples, a server may execute a set of instructions to control the functional elements of the server to perform the described functions. Additionally or alternatively, the server may perform aspects of the described functions using special-purpose hardware. At1005, the method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The operations of1005may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1005may be performed by an event detection component625as described with reference toFIG.6. 
At1010, the method may include generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The operations of1010may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1010may be performed by a data object generation component630as described with reference toFIG.6. At1015, the method may include generating the data object that includes an indication of a link to the communication process flow management service. The operations of1015may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1015may be performed by a data object generation component630as described with reference toFIG.6. At1020, the method may include transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. The operations of1020may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1020may be performed by a request interface635as described with reference toFIG.6. FIG.11shows a flowchart illustrating a method1100that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The operations of the method1100may be implemented by a server or its components as described herein. For example, the operations of the method1100may be performed by a server as described with reference toFIGS.1through7. In some examples, a server may execute a set of instructions to control the functional elements of the server to perform the described functions. 
Additionally or alternatively, the server may perform aspects of the described functions using special-purpose hardware. At1105, the method may include detecting a save action corresponding to the communication process flow. The operations of1105may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1105may be performed by a save detection component645as described with reference toFIG.6. At1110, the method may include performing, based at least in part on detecting the save operation, a differential operation to detect the event. The operations of1110may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1110may be performed by an event detection component625as described with reference toFIG.6. At1115, the method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The operations of1115may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1115may be performed by an event detection component625as described with reference toFIG.6. At1120, the method may include generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The operations of1120may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1120may be performed by a data object generation component630as described with reference toFIG.6.
At1125, the method may include transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. The operations of1125may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1125may be performed by a request interface635as described with reference toFIG.6. FIG.12shows a flowchart illustrating a method1200that supports techniques for cross-platform communication process flow object posting in accordance with aspects of the present disclosure. The operations of the method1200may be implemented by a server or its components as described herein. For example, the operations of the method1200may be performed by a server as described with reference toFIGS.1through7. In some examples, a server may execute a set of instructions to control the functional elements of the server to perform the described functions. Additionally or alternatively, the server may perform aspects of the described functions using special-purpose hardware. At1205, the method may include detecting, from a user, an indication of selection of a communication process flow object associated with the communication process flow. The operations of1205may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1205may be performed by an object selection component660as described with reference toFIG.6. At1210, the method may include causing, based at least in part on detecting the event, display of a prompt associated with the communication platform. 
The operations of1210may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1210may be performed by a UI component655as described with reference toFIG.6. At1215, the method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant. The operations of1215may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1215may be performed by an event detection component625as described with reference toFIG.6. At1220, the method may include receiving, at the communication process flow management service, an indication to share a communication process flow object associated with the communication process flow to the communication platform. The operations of1220may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1220may be performed by an object sharing component665as described with reference toFIG.6. At1225, the method may include generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event. The operations of1225may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1225may be performed by a data object generation component630as described with reference toFIG.6. At1230, the method may include transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant.
The request may be transmitted based on detecting selection of the communication process flow object, based on display of the prompt, based on receiving the indication to share, or a combination thereof. The operations of1230may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1230may be performed by a request interface635as described with reference toFIG.6. A method for data processing is described. The method may include detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant, generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event, and transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. An apparatus for data processing is described. The apparatus may include a processor, memory coupled with the processor, and instructions stored in the memory. 
The instructions may be executable by the processor to cause the apparatus to detect an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant, generate, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event, and transmit, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. Another apparatus for data processing is described. The apparatus may include means for detecting an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant, means for generating, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event, and means for transmitting, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. A non-transitory computer-readable medium storing code for data processing is described. 
The code may include instructions executable by a processor to detect an event associated with a communication process flow that controls electronic communications between a tenant of a multitenant system and a set of users corresponding to the tenant, generate, based at least in part on detecting the event, a data object corresponding to the event and that includes metadata associated with the event, and transmit, from a communication process flow management service and to a communication platform, a request that includes the data object, wherein the request is configured to cause posting of an entry associated with the event into a communication channel of the communication platform that is associated with the tenant. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving an indication of activation of event logging in the communication platform for the communication process flow, wherein the request may be transmitted based at least in part on the indication of activation. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, receiving the indication may include operations, features, means, or instructions for receiving the indication of a uniform resource locator for a webhook associated with the communication channel, a workspace in the communication platform, or a combination thereof. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for activating an authentication flow for the communication platform and receiving an indication of a uniform resource locator for a webhook that may be associated with the communication channel, a workspace in the communication platform, or a combination thereof via the authentication flow. 
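As a concrete illustration of the activation and authentication flow described above, the indicated webhook uniform resource locator could be recorded per channel or workspace and looked up whenever a request is to be transmitted. The class and method names below are assumptions for illustration, not the actual service:

```python
class WebhookRegistry:
    """Hypothetical store mapping a (tenant, channel-or-workspace) pair to
    the webhook URL received via activation or the authentication flow."""

    def __init__(self):
        self._hooks = {}

    def activate(self, tenant_id, target, url):
        # Record the webhook URL indicated during activation of event logging.
        self._hooks[(tenant_id, target)] = url

    def url_for(self, tenant_id, target):
        # Return the webhook URL for the channel/workspace, or None if
        # event logging has not been activated for that target.
        return self._hooks.get((tenant_id, target))
```

Under this sketch, a request would only be transmitted when `url_for` returns a URL, reflecting that transmission is based at least in part on the indication of activation.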
In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, generating the data object may include operations, features, means, or instructions for generating a JavaScript object notation (JSON) object that may be ingestible by the communication platform for posting the entry associated with the event into the communication channel. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the JSON object includes attribute-value pairs corresponding to the metadata associated with the event. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, generating the data object may include operations, features, means, or instructions for generating the data object that includes an indication of a link to the communication process flow management service. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, at the communication process flow management service and from the communication platform in response to transmitting the data object, a second request to view a user interface associated with the detected event and displaying, based at least in part on receiving the second request, the user interface associated with the detected event. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for detecting a save action corresponding to the communication process flow and performing, based at least in part on detecting the save operation, a differential operation to detect the event.
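The save-then-diff approach in the preceding paragraph can be pictured as comparing the saved flow against its prior snapshot. The flat-dictionary representation of a flow is an assumption made purely for illustration:

```python
def detect_events_on_save(previous, current):
    """Hypothetical differential operation: derive create, delete, and
    update events from two snapshots of a communication process flow."""
    events = []
    # Elements present only in the newly saved snapshot were created.
    for key in sorted(current.keys() - previous.keys()):
        events.append({"kind": "create", "element": key})
    # Elements present only in the prior snapshot were deleted.
    for key in sorted(previous.keys() - current.keys()):
        events.append({"kind": "delete", "element": key})
    # Elements in both snapshots whose values differ were updated.
    for key in sorted(current.keys() & previous.keys()):
        if current[key] != previous[key]:
            events.append({"kind": "update", "element": key})
    return events
```

Each detected event could then feed the data object generation described above.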
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for determining, based at least in part on detecting the event, that the event may be configured to be posted in the communication channel in accordance with one or more configuration parameters associated with the communication process flow, wherein the data object may be transmitted based at least in part on determining that the event may be configured to be posted. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, determining that the event may be configured to be posted may include operations, features, means, or instructions for determining that the event corresponds to an action of the communication process flow that may be enabled for posting into the communication channel in accordance with the one or more configuration parameters. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the one or more configuration parameters include an indication of a channel name of the communication channel. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the one or more configuration parameters indicate whether a create event, an update event, or a delete event associated with a plurality of action types of the communication process flow may be configured to be posted into the communication channel. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the one or more configuration parameters indicate whether a creation event, a versioned event, a draft created event, a paused event, a resumed event, a started event, or a stopped event associated with the communication process flow may be configured to be posted in the communication channel. 
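One way to picture the configuration parameters described above is as a channel name plus a per-action-type mapping of enabled event kinds. The structure and names below are illustrative assumptions, not the actual configuration schema:

```python
def should_post(event, config):
    """Hypothetical check of whether an event is configured to be posted:
    the event's kind must be enabled for its action type."""
    enabled_kinds = config.get("actions", {}).get(event["action_type"], set())
    return event["kind"] in enabled_kinds

# Example configuration: a channel name plus, for each action type,
# the set of event kinds enabled for posting into the channel.
example_config = {
    "channel": "#flow-events",
    "actions": {"email": {"create", "update"}, "decision": {"delete"}},
}
```

Per the description above, the data object would be transmitted only when such a check succeeds.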
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, in response to transmitting the request, a request for additional metadata associated with the event, generating, in response to receiving the request for the additional metadata, a second data object including the additional metadata, and transmitting, from the communication process flow management service and to the communication platform, a second request that includes the second data object, wherein the request may be configured to cause a user interface including an indication of the additional metadata to be displayed in the communication platform. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, detecting the event may include operations, features, means, or instructions for detecting, from a user, an indication of selection of a communication process flow object associated with the communication process flow, wherein the request may be transmitted based at least in part on detecting the indication of selection of the communication process flow object. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the communication process flow object may be an email object, an action object, a content object, or a combination thereof. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, detecting the event may include operations, features, means, or instructions for receiving, at the communication process flow management service, an indication to share a communication process flow object associated with the communication process flow to the communication platform, wherein the request may be transmitted based at least in part on detecting the indication to share.
In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, receiving the indication to share may include operations, features, means, or instructions for receiving an indication of activation of a user interface (UI) component displayed in association with the communication process flow object. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for causing, based at least in part on detecting the event, display of a prompt associated with the communication platform, wherein the request may be transmitted based at least in part on causing display of the prompt. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, at the prompt, an indication of the communication channel of the communication platform, wherein the request may be transmitted to cause posting of the entry into the communication channel of the communication platform based at least in part on receiving the indication of the communication channel. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, aspects from two or more of the methods may be combined. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. 
These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable ROM (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
11943322 | DETAILED DESCRIPTION Throughout this specification, the terms “interceptor hub application” and “interceptor hub” are defined as an interceptor hub application executing on a computing device. The terms may be used interchangeably. Reference will now be made in detail to the present examples of embodiments of the disclosure, several examples of which are illustrated in the accompanying drawings. FIG.1illustrates an example environment100in which various embodiments may be implemented. Application systems102,104,106,108,110,112may include one or more service-consuming applications and/or one or more service-providing applications (i.e., services). In some embodiments, some application systems such as, for example, application systems102,104,106may connect to an interceptor hub114via API management services118,120,122, respectively. Other application systems such as, for example, application systems108,110,112may connect to interceptor hub114via the network without using API management services. Interceptor hub114may include a computing device that intercepts a call to a service from a service-consuming application and accesses a database116to determine whether the service-consuming application is authorized to use the service, an address of an application system on which the service resides, characteristics of a first API and a first protocol used by the service consuming application to call the service, and characteristics of a second API and a second protocol used by the service to return a service response to interceptor hub114. Interceptor hub114may receive the call from the service-consuming application via the first API and the first protocol, may perform any required data conversions, and forward the call to the requested service using the second API and the second protocol. The requested service may generate a service response and provide the service response to interceptor hub114via the second API and the second protocol. 
Interceptor hub114may perform any required data conversions and may forward the service response to the service-consuming application via the first API and the first protocol. In embodiments in which service-consuming applications executing on at least some application systems communicate with interceptor hub114via a corresponding API management service, the service-consuming applications or users may log into a corresponding application executing on the corresponding API management service before any calls are made from the service-consuming applications to services via interceptor hub114. The corresponding API management services may authenticate the service-consuming applications or the users before accepting service calls from the service-consuming applications. In some embodiments, the API management services may include a DataPower® appliance (DataPower is a registered trademark of International Business Machines Corporation of Armonk, NY). The DataPower® appliance may provide security services such as, for example, authentication of service-consuming applications or users. In embodiments in which service-consuming applications do not communicate with corresponding API management services, authentication services may be provided by, for example, interceptor hub114or other servers within the network. FIG.2illustrates an example computing system200that may implement any of application systems102-112, API management services118-122, and interceptor hub114. Computing system200is shown in a form of a general-purpose computing device. Components of computing system200may include, but are not limited to, one or more processing units216, a system memory228, and a bus218that couples various system components including system memory228to one or more processing units216. 
Bus218represents any one or more of several bus structure types, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. Such architectures may include, but not be limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computing system200may include various non-transitory computer system readable media, which may be any available non-transitory media accessible by computing system200. The computer system readable media may include volatile and non-volatile non-transitory media as well as removable and non-removable non-transitory media. System memory228may include non-transitory volatile memory, such as random access memory (RAM)230and cache memory234. System memory228also may include non-transitory non-volatile memory including, but not limited to, read-only memory (ROM)232and storage system236. Storage system236may be provided for reading from and writing to a nonremovable, non-volatile magnetic medium, which may include a hard drive or a Secure Digital (SD) card. In addition, a magnetic disk drive, not shown, may be provided for reading from and writing to a removable, non-volatile magnetic disk such as, for example, a floppy disk, and an optical disk drive for reading from or writing to a removable non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media. Each memory device may be connected to bus218by at least one data media interface. System memory228further may include instructions for processing unit(s)216to configure computing system200to perform functions of embodiments of the invention. 
For example, system memory 228 also may include, but not be limited to, processor instructions for an operating system, at least one application program, other program modules, program data, and an implementation of a networking environment. Computing system 200 may communicate with one or more external devices 214 including, but not limited to, one or more displays, a keyboard, a pointing device, a speaker, at least one device that enables a user to interact with computing system 200, and any devices including, but not limited to, a network card, a modem, etc., that enable computing system 200 to communicate with one or more other computing devices. The communication can occur via Input/Output (I/O) interfaces 222. Computing system 200 can communicate with one or more networks including, but not limited to, a local area network (LAN), a general wide area network (WAN), a packet-switched data network (PSDN) and/or a public network such as, for example, the Internet, via network adapter 220. As depicted, network adapter 220 communicates with the other components of computing system 200 via bus 218. It should be understood that, although not shown, other hardware and/or software components could be used in conjunction with computing system 200. Examples include, but are not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems. In some embodiments, a service-consuming application executing on an application system may send a request to interceptor hub 114, which may be an interceptor hub application executing on a computing device, for one or more services. In this example, the service-consuming application sends a single request that includes calls for multiple services. FIGS. 3-5 are flowcharts that illustrate an example process that may be performed by interceptor hub 114 in various embodiments when a request including calls for multiple services is received by interceptor hub 114.
The process may begin with interceptor hub114receiving a single request for multiple services from a service-consuming application executing on an application system (act302). Interceptor hub114may receive the request via an API between the service-consuming application and interceptor hub114, and if needed, may perform a data conversion to convert data included in the request to a format understood by interceptor hub114. In some embodiments, interceptor hub114may access database116to determine whether the service-consuming application is authorized to call each of the multiple services (act304). If interceptor hub114determines that the service-consuming application is not authorized to request, or call, each of the multiple services, then an error code or message may be provided to the service-consuming application. Otherwise, if interceptor hub114determines that the service-consuming application is authorized to request, or call, each of the multiple services, then interceptor hub114may call each of the multiple services at a corresponding address that each of the service-consuming applications is authorized to call, as indicated in, for example, database116(act306). Responsive to calling each of the multiple services, interceptor hub114may receive service responses from each of the multiple services (act308) and may provide the responses to the service-consuming application. FIG.4is a flowchart illustrating act306in more detail. The process may begin with interceptor hub114examining a first call for service (act402). Interceptor hub114then may determine whether any data in the call for service needs to be converted to a different format or protocol (act404). For example, a service caller, or requester, may want a service response to be in a particular format such as JSON, XML, or another format that may have delimited values, fixed length fixed position, or even a partial payload. 
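The authorization and dispatch flow of acts 302, 304, 306, and 308 described above can be sketched as follows. This is a minimal illustration under assumptions, not the patented implementation: the `AUTH_REGISTRY` dictionary stands in for database 116, and the names `dispatch_request` and `invoke` are hypothetical.

```python
# Minimal sketch of acts 302-308: check that the consuming application is
# authorized for every requested service, then call each service at its
# registered address and collect the responses. AUTH_REGISTRY stands in
# for database 116; all names here are illustrative.
AUTH_REGISTRY = {
    # (consuming application, service name) -> authorized service address
    ("app1", "payroll"): "http://host-b/payroll",
    ("app1", "schedule"): "http://host-c/schedule",
}

def dispatch_request(app_name, service_calls, invoke):
    """Authorize every call first (act 304), then invoke each service at
    its registered address (act 306) and collect responses (act 308)."""
    addresses = []
    for service_name in service_calls:
        address = AUTH_REGISTRY.get((app_name, service_name))
        if address is None:
            # Not authorized for at least one service: return an error.
            return {"error": f"{app_name} is not authorized to call {service_name}"}
        addresses.append((service_name, address))
    return {name: invoke(addr) for name, addr in addresses}

print(dispatch_request("app1", ["payroll"], lambda addr: "ok"))  # {'payroll': 'ok'}
```

The `invoke` callable is injected so the sketch stays transport-agnostic; a real hub would issue each call over the service's own protocol and perform any needed conversion first.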
As an example, a data set {a,bc,def} may be requested as comma-separated values (as shown) or may be separated by another delimiter {a/bc/def}. The service response may be requested in a fixed length format such as, for example, (a^^^^bc^^^def^^), where ^ represents a space character and each field may have a length of five characters as shown, or another length. A request may be made for a generic provided service to provide a partial payload that is relevant to a requesting application. For example, if only fields 1 and 2 are relevant to an application, application 1, then interceptor hub 114 may return only (a,bc) to application 1 instead of a complete payload. If any of the data in the call for service needs to be converted to a different format or to a different protocol, then interceptor hub 114 may perform the conversion (act 406). Interceptor hub 114 then may send the call for the service to an application system on which the service is executing, as indicated by database 116 (act 408). Interceptor hub 114 then may determine whether there are any additional calls for services (act 410). If there are no additional calls for services, then the process may be completed. Otherwise, a next call for a service may be examined by interceptor hub 114 (act 412) and acts 404-410 again may be performed. FIG. 5 is a flowchart that illustrates example processing of act 308 in more detail. The process may begin with interceptor hub 114 receiving a service response from a service executing on an application system (act 502). The service response is provided via a protocol and is in a format that interceptor hub 114 expects from the service.
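The format conversions described above (delimited values, fixed-length fields, and partial payloads) can be sketched as follows. The helper names and the five-character field width are taken from the example above; everything else is an illustrative assumption, not the patented conversion code.

```python
# Sketch of the response-format conversions described above (act 406).
# Helper names are illustrative, not from the patent.
def to_delimited(fields, delimiter=","):
    # {a,bc,def} or, with another delimiter, {a/bc/def}
    return "{" + delimiter.join(fields) + "}"

def to_fixed_length(fields, width=5, pad="^"):
    # "^" stands for the space character, as in the example above;
    # each field is padded to a fixed width of five characters.
    return "(" + "".join(f.ljust(width, pad) for f in fields) + ")"

def partial_payload(fields, wanted_indexes):
    # Return only the fields relevant to the requesting application.
    return "(" + ",".join(fields[i] for i in wanted_indexes) + ")"

data = ["a", "bc", "def"]
print(to_delimited(data))             # {a,bc,def}
print(to_delimited(data, "/"))        # {a/bc/def}
print(to_fixed_length(data))          # (a^^^^bc^^^def^^)
print(partial_payload(data, [0, 1]))  # (a,bc)
```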
Interceptor hub then may determine whether the service response needs to be converted to a protocol and a format that an originating service-consuming application expects (act504). If a conversion is needed to convert the format of the service response or change the protocol for the service response, then interceptor hub114may perform the conversion (act506) and may then provide the service response to the originating service-consuming application (act508). Interceptor hub114then may determine whether any more service responses are expected (act510). If more service responses are expected, interceptor hub114may wait for and process a next received service response (act502). In an alternative embodiment, instead of separately providing each service response to the originating service-consuming application, interceptor hub114may buffer each of the received service responses corresponding to each of the calls for service included in the call for services received from the originating service-consuming application during act302. Once all of the service responses have been received from the multiple services and buffered, interceptor hub114may combine the service responses into a single combined service response that is sent to the originating service-consuming application. Thus, for example, if a service provides employee name, scheduled start time, and last clocked action given for a request, the original service-consuming application may receive a response for a single employee or may request and receive a combined response for multiple employees. If the combined response is requested, interceptor hub114would recognize the request for the combined response as a batch type request and would submit individual requests to the service. Interceptor hub114then would consolidate individual responses from the service and send the consolidated individual responses back to the originating service-consuming application in aggregate. 
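The alternative embodiment described above, in which the hub recognizes a batch-type request, submits individual requests to the service, buffers the responses, and returns one consolidated response, can be sketched as follows. The service stub and field names are hypothetical stand-ins for the employee service in the example.

```python
# Sketch of the batch embodiment described above: individual requests are
# submitted to one service, buffered, and consolidated into a single
# combined response. The service stub and its fields are illustrative.
def employee_service(employee_id):
    # Stand-in for a service returning employee name, scheduled start
    # time, and last clocked action for one employee.
    return {"id": employee_id, "name": f"emp-{employee_id}",
            "start": "09:00", "last_action": "clock-in"}

def handle_batch(employee_ids, service=employee_service):
    buffered = [service(eid) for eid in employee_ids]  # one call per employee
    return {"batch": True, "responses": buffered}      # single combined response

combined = handle_batch([101, 102])
print(len(combined["responses"]))  # 2
```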
In some embodiments, a service called by a service-consuming application may call another service executing on a same or other application system.FIG.6illustrates example processing with respect to a service that calls another service. The process may start with an application system or other computing system such as, for example, a computing system that executes an interceptor application as well as the service-consuming application, receiving a call for the service from the service-consuming application via the interceptor application (act602). If the service is executing on a same computing system as the interceptor hub application, the call for the service from the service-consuming application may be received by the service via a second interceptor hub application executing on a second computing system. In this example, the call for the service is processed by the service, which sends a call for a different service while performing processing related to the service (act604). The call for the different service may be received by the interceptor hub application executing in the same computing system as the service or by a different interceptor hub application executing on another computing system, depending on a configuration for the service. The interceptor hub application receiving the call for the different service may determine whether the calling application is authorized to call the different service, and if the application is authorized, may perform a conversion, as previously discussed, and may send the call to the different service at a network address indicated by a database such as, for example, database116. The service may receive a service response from the different service (act606) via an interceptor hub application that provided the call to the different service. 
After receiving the service response from the different service, the service may form a second service response and may send the second service response to the interceptor hub application that provided the call to the service (act 608). The interceptor hub application that receives the service response from the service may perform any required conversions (acts 610-612), and may deliver the service response to the originating service-consuming application (act 614). In some embodiments, a service-consuming application and/or interceptor hub 114 (i.e., the interceptor hub application) may use representational state transfer (REST) to access Web services. A RESTful web service provides an application access to web resources in a textual representation. Reading and modification of web resources of a RESTful web service are performed by using a stateless protocol and a predefined operation set. Via a RESTful web service, a request to a resource's uniform resource identifier (URI) may produce a response formatted in Hypertext Markup Language (HTML), eXtensible Markup Language (XML), JavaScript Object Notation (JSON), or another format. Hypertext Transfer Protocol (HTTP) is commonly used, and available HTTP methods include GET, HEAD, POST, PUT, PATCH, DELETE, CONNECT, OPTIONS and TRACE, but other protocols (e.g., HTTPS) may be implemented without departing from the scope of the present disclosure. FIG. 7 illustrates an example of a service-consuming application calling a RESTful web service via interceptor hub 114, and for purposes of explanation, the HTTP protocol is described. This is illustrative only and not intended to be limiting. At step 702, a service-consuming application executing in an application system makes a call to a service using the HTTP method GET to request data from the service. The call to the service may be routed to interceptor hub 114 based on a configuration of the application system on which the service-consuming application executes.
Interceptor hub 114 may receive the call to the service at step 703, and may access database 116 to determine whether the service-consuming application is authorized to call the service. If the service-consuming application is authorized to call the service, then interceptor hub 114 may call the service based on an address obtained from database 116 using the HTTP method GET at step 704. The service may process the call to the service at step 705 and may provide a service response fulfilling the request for data by using the POST HTTP method at step 706. At step 707, interceptor hub 114 may receive the service response and may provide the service response to the service-consuming application using a second POST HTTP method at steps 708-709. In some embodiments, interceptor hub 114 may receive a call to a service that is an aggregated group of calls to multiple services. FIG. 8 illustrates a dataflow regarding such a call. Although interceptor hub 114 may perform protocol and data conversion, as previously discussed, the details of such conversion are not mentioned in this example in order to simplify the explanation of this example dataflow. At step 802, a service-consuming application makes a single call that includes a call to multiple services. At step 803, interceptor hub 114 receives the call, performs any conversion needed for interceptor hub 114 to understand the call, and determines whether the service-consuming application is authorized to call the multiple services by accessing database 116. Assuming that the service-consuming application is authorized to call the multiple services, interceptor hub 114 may send calls to the multiple services at addresses obtained from database 116 (steps 804, 806, and 808). In the example shown in FIG. 8, three calls are made to three different services. However, this is illustrative only, and fewer calls to services or more calls to services could be made based on a single call to a service received from a service-consuming application.
At steps810,812,814interceptor hub114receives service responses from the three different services. AlthoughFIG.8shows the service responses being received in an order in which the calls to services were sent, the service responses may be received by interceptor hub114in a different order. After all of the service responses corresponding to the calls to services are received by interceptor hub114from the corresponding services, interceptor hub114may provide, to the service-consuming application, a single service response that combines the service responses from each of the multiple services (step816). Thus, interceptor hub114may aggregate responses from multiple services that provide different data. The aggregated responses then may be sent back to the requesting service-consuming application as a single response to a single request. For example, assume that application A requests an amount of unpaid leave used year to date, annual salary, and current occupation code in a single request. The services may be provided by application B, application C, and application D, respectively. Through configuration or coding, interceptor hub114knows which services to request and makes the request or calls for the services. When the requests have been fulfilled, interceptor hub114may aggregate service responses into a single response to send back to the requesting service-consuming application. By avoiding a tight coupling between service-consuming applications and services, the abovementioned embodiments provide flexibility in maintenance and support of application systems utilizing interface technologies that change in a manner that can be managed by interceptor hub114with little or no impact to application systems exchanging information. 
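The FIG. 8 dataflow described above, where a single request from application A fans out to services provided by applications B, C, and D and the hub returns one aggregated response, can be sketched as follows. The provider callables and their return values are illustrative assumptions, not data from the patent.

```python
# Sketch of the FIG. 8 dataflow: one request fans out to multiple services
# and the hub returns a single aggregated response. Provider names and
# payloads are illustrative.
def fan_out(request_fields, providers):
    responses = {}
    for field in request_fields:   # steps 804, 806, 808: call each service
        responses[field] = providers[field]()
    return responses               # step 816: single combined response

providers = {
    "unpaid_leave_ytd": lambda: 16,     # served by application B
    "annual_salary": lambda: 90000,     # served by application C
    "occupation_code": lambda: "ENG1",  # served by application D
}
print(fan_out(["unpaid_leave_ytd", "annual_salary", "occupation_code"], providers))
```

In practice the three calls could be issued concurrently and the responses may arrive in any order, as noted above; the hub only aggregates once all responses are in.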
For example, because of a loose coupling between a service-consuming application and a service, if a particular application system changes, only an interface between an interceptor hub and the particular application system may be impacted instead of affecting other application systems that partner with the particular application system. In some embodiments, the interceptor hub may provide a single interface to a particular application system for use by other application systems that utilize different technologies and protocols. An interceptor hub can be implemented in a number of topologies including, but not limited to: a local network implementation (on premises), serving local partners only, remote partners only, or local and remote partners; and a cloud implementation (Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS)), serving local partners only, remote partners only, or local and remote partners. The flexibility of various embodiments is best illustrated by the following example scenarios. In a first scenario, application system ABC provides a service A. Application system XYZ uses service A via API management service A and protocol A. Application system XYZ changes to use API management service B. There are no changes to an interface to application system ABC. As application system XYZ is migrated to API management service B, the interceptor hub can plug into API management service B to deliver service A data to application system XYZ. During a phased migration, the interceptor hub can use both API management service A and API management service B to deliver the service A data to application system XYZ. Sometime later, application system EFG would like to use service A provided by application system ABC with no changes to the interface to application system ABC. Application system EFG could use API management service A, API management service B, or another API management service.
The interceptor hub may plug into one of the API management services used by application system EFG. Sometime later, application system ABC migrates from technical stack 1 to technical stack 2; both hardware and technology stacks change. The interceptor hub can plug into application system ABC to use technical stack 2 and provide service responses from service A to application system XYZ and application system EFG without any changes being made to application system EFG and application system XYZ. The interceptor hub can provide service responses from application system ABC, service A, from both technical stack 1 and technical stack 2 through configuration options that enable a phased migration of application system ABC from technical stack 1 to technical stack 2. When the migration is complete, the interceptor hub system may be unplugged from application system ABC, service A, technical stack 1 with no impact to usage of application system ABC, service A by application system EFG and application system XYZ. In a second scenario, application system XYZ utilizes application system ABC, service A, via API management service B utilizing protocol A. Application system XYZ elects to migrate to protocol B. There is no impact to application system ABC. In this scenario, the interceptor hub implements multiple protocols for delivering services. The interceptor hub can configure delivery of service responses from application system ABC, service A, to application system XYZ utilizing protocol B. The migration could be configured as a phased migration. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms “comprises”, “comprising”, “includes”, “including”, “has”, “have”, “having”, “with” and the like, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or improvement over conventional technologies, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer-readable storage devices having instructions stored therein for carrying out functions according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. Each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The various functions of the computer or other processing systems may be distributed in any manner among any number of software and/or hardware modules or units, processing or computer systems and/or circuitry, where the computer or processing systems may be disposed locally or remotely of each other and communicate via any suitable communications medium (e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.). 
For example, the functions of the present invention embodiments may be distributed in any manner among the various end-user/client and server systems, and/or any other intermediary processing devices. The software and/or algorithms described above and illustrated in the flowcharts may be modified in any manner that accomplishes the functions described herein. In addition, the functions in the flowcharts or description may be performed in any order that accomplishes a desired operation. In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. | 29,395 |
11943323

DETAILED DESCRIPTION The invention and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known starting materials, processing techniques, components, and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating some embodiments of the invention, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions, and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure. An ECM server can provide management capabilities for all types of content. The core of an ECM server can be a repository in which the content is stored securely under compliance rules in a unified environment, although content may reside on multiple servers and physical storage devices within a networked computing environment. FIG. 1 depicts a diagrammatic representation of an example of an ECM server architecture in which components of an ECM server 100 are contained in a monolithic structure (i.e., a container). A container is a standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. In the example of FIG. 1, ECM server 100 is configured with modules for the functionalities of ECM server 100, i.e., workflow, federation, migration, log purge, filescan, audit-trail, job scheduler, method launcher, audit purge, and replication.
Because ECM server100is structured to perform these functionalities at any given time, ECM server100has a container size that requires a minimum of 2 GB of storage space and uses at least 8 GB of RAM while deploying and running. This makes applying a patch to ECM server100a complex and time consuming operation. Further, if the load for a specific module is more than other modules in ECM server100, an administrator or authorized user needs to enable the load balancing and high availability feature before deploying a new ECM server container. Often times, deploying an instance of ECM server100requires a new server machine with a huge RAM (e.g., at least 8 GB of RAM) and a large storage space (e.g., at least 2 GB of storage space). In the case of reduced load, ECM server100still requires the same huge amount of RAM and storage space to run and consumes the same large amount of resources. Consequently, the ECM server architecture shown inFIG.1is not ideal or suitable for scaling. FIG.2depicts a diagrammatic representation of an example of a new content server architecture and framework200according to some embodiments. In the example ofFIG.2, container210is instantiated without all the functionalities of a content server and only has a content server API212. This makes container210a lightweight container. In the example ofFIG.2, content server API212has two components: a caching component214and a controller216. Caching component214operates an in-memory cache and controller216works with master worker module(s)240in framework200. In some embodiments, ECM server components, such as those shown inFIG.1inside the monolithic structure of ECM server container100, are decomposed into microservices (e.g., a workflow microservice, a job scheduler service, a file scan service, a migration service, a method launcher service, etc.) provided through framework200. 
In computer programming, the term "framework" refers to a universal, reusable computing environment that provides particular functionality as part of a larger computing platform such as an ECM system. A framework usually includes support programs (e.g., for housekeeping), compilers, code libraries, tool sets, and APIs to help bootstrap user applications. Framework implementations are known to those skilled in the art and thus are not further described herein. As illustrated in FIG. 2, framework 200 defines the overall architecture of a content server. The basic components of framework 200 and the relationships between them remain unchanged in any instantiation of framework 200. When framework 200 is started up, content server API 212 is started as an ECM API service. The ECM API service receives requests from the client side (e.g., via an object-oriented API and framework called "Documentum Foundation Classes" (DFC)). DFC is a set of Java classes that make essentially all the ECM server functionalities described above available to client programs through a published set of APIs. DFC allows for accessing, customizing, and extending ECM server functionalities and can be described as an Object-Relational-Mapper for programmatically accessing and manipulating objects stored in a secure content repository. As a non-limiting example, when a request is received by the ECM API service, controller 216 routes the request to an appropriate microservice in framework 200. In some embodiments, controller 216 is configured for creating instances of active controller applications 240 and for monitoring the load and status (e.g., using caching component 214 to store the load and status metadata in database 220) of each instantiated microservice container (e.g., controller application 250 instantiated by controller 216 from a master worker module 280 in framework 200).
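The controller's routing and monitoring role described above can be sketched as follows. The `Controller` class and its in-memory `cache` dictionary stand in for controller 216, caching component 214, and the load/status metadata kept in database 220; all names are illustrative assumptions.

```python
# Sketch of the controller role described above: route each request to the
# matching microservice and keep load/status metadata in an in-memory
# cache. The class and field names are illustrative.
class Controller:
    def __init__(self):
        self.routes = {}  # function name -> microservice callable
        self.cache = {}   # microservice name -> load/status metadata

    def register(self, name, microservice):
        self.routes[name] = microservice
        self.cache[name] = {"in_flight": 0, "completed": 0}

    def handle(self, name, payload):
        self.cache[name]["in_flight"] += 1       # monitor load
        try:
            return self.routes[name](payload)    # route to the microservice
        finally:
            self.cache[name]["in_flight"] -= 1
            self.cache[name]["completed"] += 1   # track status

ctrl = Controller()
ctrl.register("workflow", lambda p: {"started": p["id"]})
print(ctrl.handle("workflow", {"id": 7}))   # {'started': 7}
print(ctrl.cache["workflow"]["completed"])  # 1
```

A real controller would use these metrics to decide when a microservice's load cap is reached, as described below.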
If any microservice load (e.g., for controller application 250) reaches a predetermined load cap, then an additional microservice container (e.g., controller application 260) is instantiated. From this perspective, providing a microservice using framework 200 mainly involves two components: content server API 212 (or, more particularly, controller 216) and master worker module 280 (from which instances of microservices are instantiated as controller applications such as controller applications 250, 260). As illustrated in the example of FIG. 2, master worker module 280 contains one master 282 and a number of workers 284 to handle the load. Each instance of a controller application 240 has a particular object type (e.g., "dm_controller_app" type) that can be used for keeping track of details of the master, a worker thread count, and plugin details (e.g., using database 220). It can also be used to keep track of current running instances. Other types of metadata such as user attributes, system attributes, application attributes, and so on may also be included and stored in database 220. A non-limiting example list of attributes can be found in Appendix A. To utilize the microservices provided by framework 200, a user system should have two plugin modules: a master plugin and a worker plugin. The master plugin contains master plugin code for fetching activity requests/tasks from a content server (e.g., a controller application 240), and for sending the activity requests/tasks to a queue. The worker plugin contains worker plugin code for fetching activity requests/tasks from the queue and processing each task by a worker. These plugin details should be configured in controller application objects (e.g., objects of the "dm_controller_app" type) along with worker thread counts. Once configured, microservices corresponding to the controller application objects are ready for use.
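The master/worker split described above, where a master plugin fetches tasks and enqueues them while worker threads dequeue and process them, can be sketched as follows. The worker count of 3 and queue capacity of 90 follow the defaults discussed below; the task contents and doubling "processing" step are illustrative assumptions.

```python
# Sketch of the master-worker pattern described above: a master fetches
# tasks and puts them on a queue; worker threads take tasks off the queue
# and process them. Task contents are illustrative.
import queue
import threading

task_queue = queue.Queue(maxsize=90)   # 3 workers * 30 = 90 tasks per queue
results = []
results_lock = threading.Lock()

def master(tasks):
    for task in tasks:                 # master plugin: fetch and enqueue
        task_queue.put(task)
    for _ in range(3):                 # one stop sentinel per worker
        task_queue.put(None)

def worker():
    while True:                        # worker plugin: dequeue and process
        task = task_queue.get()
        if task is None:
            break
        with results_lock:
            results.append(task * 2)   # stand-in for real task processing

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
master(range(10))
for w in workers:
    w.join()
print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because `queue.Queue` blocks producers when full, a bounded queue like this also gives the master a natural point at which to notify the controller that the queue is full, as described below.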
FIG. 3 depicts a sequence diagram illustrating an example of the content server architecture and framework shown in FIG. 2 in operation according to some embodiments. In the example of FIG. 3, when framework 300 (which defines the overall architecture of a content server) is started up, a content server API service (which is a functionality provided by a content server API in a lightweight container referred to as CS 310 in FIG. 3) and a set of default microservices are launched (e.g., by a controller of the content server API in CS 310). In some embodiments, CS 310 is operable to check its configuration file (in an object of the "dm_controller_app" object type), determine which microservices have to be launched, and launch all the specified microservices along with process names. As a non-limiting example, a process name can be an object name followed by an index (e.g., 1, 2, 3), indicating an order by which the process is started. As illustrated in FIG. 3, CS 310 works with at least two modules: a master module 352 and worker module(s) 354 (which run in a container 350). The number of worker modules in framework 300 can be based on a configuration parameter or metadata in the dm_controller_app object. As a non-limiting example, the default value is 3 worker modules. This default value is configurable. In some embodiments, CS 310 receives (via its content server API) a request for a content server function from a DFC 390 and routes (via its controller) the request to a microservice in framework 300 that corresponds to the requested content server function. The microservice stores the request in a repository (e.g., file store 230 shown in FIG. 2). Master module 352 retrieves the request from the repository and places the request in a queue 356. A worker 354 picks up the request from queue 356 and processes it. In some embodiments, the queue size is calculated using the formula below.
Worker thread count (which has a configurable default value of 3) * 30 (a fixed value that is not modifiable by a user) = 90 tasks per queue by default. Accordingly, master module 352 will try to fetch the top 90 tasks from the repository and put them into queue 356. If queue 356 is full, master module 352 sends a notification back to the controller in CS 310. The controller in CS 310 is operable to check the frequency of the queue-full notification. If the frequency value is high (e.g., as compared to a predetermined threshold value), it launches a new instance of a master-worker module automatically, as shown in FIG. 3. If queue 356 is not full, then worker threads 354 will fetch tasks from queue 356 and process the tasks one by one. If queue 356 is empty, then the worker module sends a notification to the controller in CS 310 which, in turn, sends a shutdown request to framework 300 to shut down the master module (i.e., to kill container 350, which is an instance of the master module of framework 300). Embodiments disclosed herein can provide many advantages and benefits. For example, decomposing a monolithic ECM server into smaller microservices can improve modularity, make the system easier to manage, consume fewer resources, and provide substantial cost savings. For instance, when the framework is started, the container size for a content server is less than 1 GB, which is an approximately 50% reduction in container size, and the launch speed is approximately 70-80% faster. The reduction in container size and the increase in launch speed allow multiple content server containers to be launched at the same time and/or on the same server machine. Further, because ECM functionalities are no longer bound by a monolithic structure and can run as microservices, applying a patch becomes a streamlined, efficient process. If a microservice's load reaches or exceeds a predetermined threshold, the framework can automatically scale up by launching a new instance of the master-worker module.
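The controller's scaling decisions described above can be sketched as a small decision function. This is an illustrative approximation: the threshold value and function name are hypothetical; the two rules themselves (frequent queue-full notifications trigger a new master-worker instance, an empty queue triggers a shutdown of the instance) come from the text.

```python
# Illustrative scale-up/scale-down decision logic for the controller in CS 310.
# The threshold value is hypothetical; the rules come from the description.
QUEUE_FULL_THRESHOLD = 5  # hypothetical predetermined frequency threshold

def controller_decision(queue_full_notifications: int, queue_empty: bool) -> str:
    if queue_full_notifications > QUEUE_FULL_THRESHOLD:
        return "launch new master-worker instance"   # scale up
    if queue_empty:
        return "shut down master-worker instance"    # scale down (kill container)
    return "no action"

print(controller_decision(8, False))  # sustained back-pressure: scale up
print(controller_decision(0, True))   # idle queue: scale down
```

Keeping the decision in the controller, rather than in the master or workers, matches the notification flow in the description: both modules report conditions upward and the controller acts on them.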
When an instance of the master-worker module is no longer needed (e.g., when its queue is empty), the framework can automatically scale down by killing the instance that is no longer needed. This automated scalability allows the framework to utilize computational resources efficiently and, consequently, eliminates the need for large amounts of memory and storage space for content server deployment. The new framework architecture described above facilitates containerization of ECM server components in a cloud computing environment, resulting in horizontal scaling of required ECM server components. The new framework architecture also makes a content server application easier to understand, develop, and test, and more resilient to architecture erosion. In summary, the new framework and design disclosed herein can help run the following modules as microservices:
Workflow service (a Business Process Management (BPM) process)
Job Scheduling service
Clean service
Migration service
Method Execution service
Audit trail service
Method server service
Easily customizable for each service
This new microservice-based ECM approach can result in the following gains:
Load-based service optimization
Support for lightweight containers
Memory optimization
Horizontal scalability for each service
Services organized around business capabilities
Easy deployment on a per-service basis
FIG. 4 depicts a diagrammatic representation of a data processing system for implementing an embodiment disclosed herein. As shown in FIG. 4, data processing system 400 may include one or more central processing units (CPUs) or processors 401 coupled to one or more user input/output (I/O) devices 402 and memory devices 403. Examples of I/O devices 402 may include, but are not limited to, keyboards, displays, monitors, touch screens, printers, electronic pointing devices such as mice, trackballs, styluses, touch pads, or the like.
Examples of memory devices 403 may include, but are not limited to, hard drives (HDs), magnetic disk drives, optical disk drives, magnetic cassettes, tape drives, flash memory cards, random access memories (RAMs), read-only memories (ROMs), smart cards, etc. Data processing system 400 can be coupled to display 406, information device 407, and various peripheral devices (not shown), such as printers, plotters, speakers, etc., through I/O devices 402. Data processing system 400 may also be coupled to external computers or other devices through network interface 404, wireless transceiver 405, or other means coupled to a network such as a local area network (LAN), wide area network (WAN), or the Internet. Those skilled in the relevant art will appreciate that the invention can be implemented or practiced with other computer system configurations, including, without limitation, multi-processor systems, network devices, mini-computers, mainframe computers, data processors, and the like. The invention can be embodied in a computer, or a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform the functions described in detail herein. The invention can also be employed in distributed computing environments, where tasks or modules are performed by remote processing devices, which are linked through a communications network such as a LAN, WAN, and/or the Internet. In a distributed computing environment, program modules or subroutines may be located in both local and remote memory storage devices. These program modules or subroutines may, for example, be stored or distributed on computer-readable media, including magnetic and optically readable and removable computer discs, stored as firmware in chips, as well as distributed electronically over the Internet or over other networks (including wireless networks). Example chips may include Electrically Erasable Programmable Read-Only Memory (EEPROM) chips.
Embodiments discussed herein can be implemented in suitable instructions that may reside on a non-transitory computer readable medium, hardware circuitry or the like, or any combination thereof, and that may be translatable by one or more server machines. Examples of a non-transitory computer readable medium are provided below in this disclosure. Suitable computer-executable instructions may reside on a non-transitory computer readable medium (e.g., ROM, RAM, and/or HD), hardware circuitry or the like, or any combination thereof. Within this disclosure, the term "non-transitory computer readable medium" is not limited to ROM, RAM, and HD and can include any type of data storage medium that can be read by a processor. Examples of non-transitory computer-readable storage media can include, but are not limited to, volatile and non-volatile computer memories and storage devices such as random access memories, read-only memories, hard drives, data cartridges, direct access storage device arrays, magnetic tapes, floppy diskettes, flash memory drives, optical data storage devices, compact-disc read-only memories, and other appropriate computer memories and data storage devices. Thus, a computer-readable medium may refer to a data cartridge, a data backup magnetic tape, a floppy diskette, a flash memory drive, an optical data storage drive, a CD-ROM, ROM, RAM, HD, or the like. The processes described herein may be implemented in suitable computer-executable instructions that may reside on a computer readable medium (for example, a disk, CD-ROM, a memory, etc.). Alternatively, the computer-executable instructions may be stored as software code components on a direct access storage device array, magnetic tape, floppy diskette, optical storage device, or other appropriate computer-readable medium or storage device.
Any suitable programming language can be used to implement the routines, methods, or programs of embodiments of the invention described herein, including C, C++, Java, JavaScript, HTML, or any other programming or scripting code, etc. Other software/hardware/network architectures may be used. For example, the functions of the disclosed embodiments may be implemented on one computer or shared/distributed among two or more computers in or across a network. Communications between computers implementing embodiments can be accomplished using any electronic, optical, radio frequency signals, or other suitable methods and tools of communication in compliance with known network protocols. Different programming techniques can be employed, such as procedural or object-oriented techniques. Any particular routine can execute on a single computer processing device or multiple computer processing devices, a single computer processor or multiple computer processors. Data may be stored in a single storage medium or distributed through multiple storage mediums, and may reside in a single database or multiple databases (or other data storage techniques). Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different embodiments. In some embodiments, to the extent multiple steps are shown as sequential in this specification, some combination of such steps in alternative embodiments may be performed at the same time. The sequence of operations described herein can be interrupted, suspended, or otherwise controlled by another process, such as an operating system, kernel, etc. The routines can operate in an operating system environment or as stand-alone routines. Functions, routines, methods, steps, and operations described herein can be performed in hardware, software, firmware, or any combination thereof. Embodiments described herein can be implemented in the form of control logic in software or hardware or a combination of both.
The control logic may be stored in an information storage medium, such as a computer-readable medium, as a plurality of instructions adapted to direct an information processing device to perform a set of steps disclosed in the various embodiments. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the invention. It is also within the spirit and scope of the invention to implement in software programming or code any of the steps, operations, methods, routines, or portions thereof described herein, where such software programming or code can be stored in a computer-readable medium and can be operated on by a processor to permit a computer to perform any of the steps, operations, methods, routines, or portions thereof described herein. The invention may be implemented by using software programming or code in one or more digital computers or by using application specific integrated circuits, programmable logic devices, or field programmable gate arrays; optical, chemical, biological, quantum, or nano-engineered systems, components, and mechanisms may also be used. In general, the functions of the invention can be achieved by any means as is known in the art. For example, distributed or networked systems, components, and circuits can be used. In another example, communication or transfer (or otherwise moving from one place to another) of data may be wired, wireless, or by any other means. A "computer-readable medium" may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system, or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
Such computer-readable medium shall generally be machine readable and include software programming or code that can be human readable (e.g., source code) or machine readable (e.g., object code). Examples of non-transitory computer-readable media can include random access memories, read-only memories, hard drives, data cartridges, magnetic tapes, floppy diskettes, flash memory drives, optical data storage devices, compact-disc read-only memories, and other appropriate computer memories and data storage devices. In an illustrative embodiment, some or all of the software components may reside on a single server computer or on any combination of separate server computers. As one skilled in the art can appreciate, a computer program product implementing an embodiment disclosed herein may comprise one or more non-transitory computer readable media storing computer instructions translatable by one or more processors in a computing environment. A "processor" includes any hardware system, mechanism, or component that processes data, signals, or other information. A processor can include a system with a central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location or have temporal limitations. For example, a processor can perform its functions in "real-time," "offline," in a "batch mode," etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems. As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus.
Furthermore, the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). As used herein, including the accompanying appendices, a term preceded by “a” or “an” (and “the” when antecedent basis is “a” or “an”) includes both singular and plural of such term, unless clearly indicated otherwise (i.e., that the reference “a” or “an” clearly indicates only the singular or only the plural). Also, as used in the description herein and in the accompanying appendices, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Although the foregoing specification describes specific embodiments, numerous changes in the details of the embodiments disclosed herein and additional embodiments will be apparent to, and may be made by, persons of ordinary skill in the art having reference to this disclosure. In this context, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of this disclosure. The scope of the present disclosure should be determined by the following claims and their legal equivalents. | 24,281 |
11943324 | DESCRIPTION OF EXEMPLARY EMBODIMENTS The above-described features and the following detailed description are exemplary contents for helping the description and understanding of the present specification. That is, the present specification is not limited to these embodiments and may be embodied in other forms. The following embodiments are merely examples to fully disclose the present specification and are provided to convey the present specification to those skilled in the art. Therefore, when there are several methods for implementing components of the present specification, it is necessary to clarify that the present specification may be implemented with a specific one of these methods or an equivalent thereof. In the present specification, when there is a description in which a configuration includes specific elements, or when there is a description in which a process includes specific steps, it means that other elements or other steps may be further included. That is, the terms used in the present specification are only for describing specific embodiments and are not intended to limit the concept of the present specification. Furthermore, the examples described to aid the understanding of the present specification also include complementary embodiments thereof. The terms used in the present specification have the meaning commonly understood by one of ordinary skill in the art to which the present specification belongs. Terms commonly used should be interpreted in a consistent sense in the context of the present specification. Further, terms used in the present specification should not be interpreted in an idealistic or formal sense unless the meaning is clearly defined. Hereinafter, embodiments of the present specification will be described with reference to the accompanying drawings. FIG. 1 is a conceptual diagram showing a structure of a wireless LAN system.
FIG. 1(A) shows a structure of an infrastructure network of Institute of Electrical and Electronics Engineers (IEEE) 802.11. Referring to (A) of FIG. 1, the wireless system (10) shown in (A) of FIG. 1 may include at least one basic service set (BSS) (100,105). A BSS is a set of an access point (hereinafter referred to as 'AP') and a station (hereinafter referred to as 'STA') that can perform communication between one another by successfully establishing synchronization, and does not refer to a specific area. For example, a first BSS (100) may include a first AP (110) and a single first STA (100-1). A second BSS (105) may include a second AP (130) and one or more STAs (105-1,105-2). The infrastructure BSSs (100,105) may include at least one STA, APs (110,130) providing a distribution service, and a distribution system (DS) (120) that connects the multiple APs. The distribution system (120) may implement an extended service set (ESS) (140) by connecting the plurality of BSSs (100,105). The ESS (140) may be used as a term indicating a network that connects one or more APs (110,130) through the distribution system (120). One or more APs included in the single ESS (140) may have the same service set identifier (hereinafter referred to as 'SSID'). A portal (150) may serve as a bridge for connecting the wireless LAN network (IEEE 802.11) to another network (e.g., 802.X). In the wireless LAN system having the structure shown in (A) of FIG. 1, a network between the APs (110,130) and a network between the APs (110,130) and the STAs (100-1,105-1,105-2) can be implemented. (B) of FIG. 1 is a conceptual diagram showing an independent BSS. Referring to (B) of FIG. 1, a wireless LAN system (15) shown in (B) of FIG. 1 may establish a network between STAs without the APs (110,130) such that the STAs can perform communication, unlike the wireless LAN system of (A) of FIG. 1.
A network established between STAs without the APs (110,130) for communication is defined as an ad-hoc network or an independent basic service set (hereinafter referred to as 'IBSS'). Referring to (B) of FIG. 1, the IBSS (15) is a BSS operating in an ad-hoc mode. The IBSS does not have a centralized management entity because an AP is not included therein. Accordingly, STAs (150-1,150-2,150-3,155-4,155-5) are managed in a distributed manner in the IBSS (15). All STAs (150-1,150-2,150-3,155-4,155-5) of the IBSS may be configured as mobile STAs and are not allowed to access a distributed system. All STAs of the IBSS configure a self-contained network. An STA mentioned in the present disclosure is an arbitrary functional medium including medium access control (MAC) conforming to regulations of Institute of Electrical and Electronics Engineers (IEEE) 802.11 and a physical layer interface for a wireless medium, and a broad meaning of this term may include both an AP and a non-AP station. The STA mentioned in the present disclosure may also be referred to by using various terms, such as a mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, and, simply, a user. FIG. 2 is a conceptual diagram of a hierarchical architecture of a wireless LAN system supported by IEEE 802.11. Referring to FIG. 2, the hierarchical architecture of the wireless LAN system may include a physical medium dependent (PMD) sublayer (200), a physical layer convergence procedure (PLCP) sublayer (210), and a medium access control (MAC) sublayer (220). The PMD sublayer (200) may serve as a transport interface for transmitting and receiving data between STAs. The PLCP sublayer (210) is implemented such that the MAC sublayer (220) can operate with minimum dependency on the PMD sublayer (200). The PMD sublayer (200), the PLCP sublayer (210), and the MAC sublayer (220) may conceptually include a management entity.
For example, a manager of the MAC sublayer (220) is called a MAC layer management entity (MLME) (225). A manager of the physical layer is called a PHY layer management entity (PLME) (215). These managers may provide interfaces for performing layer management operations. For example, the PLME (215) may be connected to the MLME (225) to perform a management operation of the PLCP sublayer (210) and the PMD sublayer (200). The MLME (225) may be connected to the PLME (215) to perform a management operation of the MAC sublayer (220). In order to perform correct MAC layer operation, an STA management entity (SME) (250) may be provided. The SME (250) may be operated as an independent component in each layer. The PLME (215), the MLME (225), and the SME (250) may transmit and receive information based on primitives. The operation in each sublayer will be briefly described below. For example, the PLCP sublayer (210) transfers a MAC protocol data unit (MPDU) received from the MAC sublayer (220) to the PMD sublayer (200) or transfers a frame from the PMD sublayer (200) to the MAC sublayer (220) between the MAC sublayer (220) and the PMD sublayer (200) according to an instruction of the MAC layer. The PMD sublayer (200) is a sublayer of PLCP and may perform data transmission and reception between STAs through a wireless medium. An MPDU transferred from the MAC sublayer (220) is referred to as a physical service data unit (PSDU) in the PLCP sublayer (210). Although the MPDU is similar to the PSDU, an individual MPDU may differ from an individual PSDU when an aggregated MPDU corresponding to an aggregation of a plurality of MPDUs is transferred. The PLCP sublayer (210) adds an additional field including information that is needed by a transceiver of the physical layer during a process of receiving a PSDU from the MAC sublayer (220) and transferring the PSDU to the PMD sublayer (200).
At this point, the added fields may be a PLCP preamble and a PLCP header added to the PSDU, tail bits needed for returning a convolution encoder to a zero state, and the like. The PLCP sublayer (210) adds the aforementioned fields to the PSDU to generate a PLCP protocol data unit (PPDU) and transmits the PPDU to a receiving station through the PMD sublayer (200), and the receiving station receives the PPDU and obtains information needed for data restoration from the PLCP preamble and the PLCP header in order to restore (or recover) data. FIG. 3 is a diagram for describing an access period within a beacon interval. Referring to FIG. 3, time on a wireless medium may be defined based on a beacon interval, i.e., the interval between one beacon frame and the next beacon frame. For example, a beacon interval may be 1024 milliseconds (msec). A plurality of sub-periods within a beacon interval may be referred to as access periods. Different access periods within one beacon interval may have different access rules. For example, information on an access period may be transmitted, by an AP or Personal basic service set Control Point (PCP), to a non-AP STA or non-PCP. Referring to FIG. 3, one beacon interval may include a Beacon Header Interval (hereinafter referred to as 'BHI') and a Data Transfer Interval (hereinafter referred to as 'DTI'). For example, a BHI may be a time period starting from a target beacon transmission time (hereinafter referred to as 'TBTT') of a beacon interval and ending before the start (or beginning) of a DTI. The BHI of FIG. 3 may include a Beacon Transmission Interval (hereinafter referred to as 'BTI'), an Association Beamforming Training (hereinafter referred to as 'A-BFT'), and an Announcement Transmission Interval (hereinafter referred to as 'ATI'). For example, a BTI may be a time period starting from the beginning (or start) of a first beacon frame to the end of a last beacon frame, which are transmitted by a wireless UE within a beacon interval.
That is, a BTI may be a period during which one or more DMG beacon frames may be transmitted. For example, an A-BFT may be a period during which beamforming training is performed by the STA that has transmitted the DMG beacon frame(s) during the preceding BTI. For example, an ATI may be a Request-Response based management access period between a PCP/AP and a non-PCP/non-AP STA. The Data Transfer Interval (hereinafter referred to as 'DTI') of FIG. 3 may be a period during which frames are exchanged between multiple STAs. As shown in FIG. 3, one or more Contention Based Access Periods (hereinafter referred to as 'CBAP') and one or more Service Periods (hereinafter referred to as 'SP') may be allocated to the DTI. A DTI schedule of the beacon interval shown in FIG. 3 may be communicated through an Extended Schedule element, which is included in the beacon frame (or Announcement frame). That is, an Extended Schedule element may include schedule information for defining multiple allocations that are included in the beacon interval. Detailed descriptions of the beacon frame are mentioned in Section 9.4.2.132 of the IEEE Draft P802.11-REVmc™/D8.0, August 2016 'IEEE Standard for Information Technology Telecommunications and information exchange between systems—Local and metropolitan area networks—Specific requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications (hereinafter referred to as IEEE 802.11)', which was disclosed in August 2016. Although FIG. 3 illustrates an example of two CBAPs and two SPs being allocated for one DTI, this is merely exemplary. And, therefore, it shall be understood that the present specification will not be limited only to this. FIG. 4 is a conceptual diagram of a time division duplex (TDD) SP structure.
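The beacon interval layout described above can be summarized as a small data structure: a BHI composed of BTI, A-BFT, and ATI, followed by a DTI carrying CBAP and SP allocations. This is an illustrative sketch only; the example DTI allocation mirrors the two-CBAP/two-SP figure, and none of the names beyond those in the text are normative.

```python
# Illustrative layout of one beacon interval: BHI (BTI, A-BFT, ATI) then DTI.
# The DTI allocation below mirrors the exemplary two-CBAP/two-SP figure.
beacon_interval = [
    ("BTI",   "one or more DMG beacon frames"),
    ("A-BFT", "beamforming training with the STA that sent the beacons"),
    ("ATI",   "request/response management access between PCP/AP and STA"),
    ("DTI",   ["CBAP 1", "SP 1", "CBAP 2", "SP 2"]),
]
print([period for period, _ in beacon_interval])  # ['BTI', 'A-BFT', 'ATI', 'DTI']
```

The ordering (BHI sub-periods first, DTI last) is the only constraint taken from the text; actual allocations are signaled per beacon interval by the Extended Schedule element.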
Referring to FIG. 1 to FIG. 4, among a plurality of allocation fields (not shown) that are included in the Extended Schedule element, which is included in a beacon frame, an allocation field for a second Service Period (SP2) of FIG. 4 may include a first subfield and a second subfield. For example, the first subfield being included in the allocation field for the second Service Period (SP2) of FIG. 4 may be set to a value indicating SP allocation. Additionally, the second subfield being included in the allocation field for the second Service Period (SP2) of FIG. 4 may be set to a value indicating that the second service period (SP2) is a TDD SP that is based on TDD channel access. In the present specification, when information for a TDD SP is included in the Extended Schedule element, the Extended Schedule element may be included in each beacon frame that is being transmitted. Additionally, when an Extended Schedule element is transmitted at least one time from a beacon interval, with the exception for any special cases, the content of the Extended Schedule element may not be changed. Referring to FIG. 4, the structure of the second service period (SP2), which is a TDD SP, may include a plurality of consecutive and adjacent TDD intervals (TDD interval 1 to TDD interval Q, wherein Q is an integer). For example, the number of the plurality of TDD intervals of FIG. 4 may be equal to Q. Additionally, each of the plurality of TDD intervals may include one or more TDD slots. For example, a first TDD interval (TDD interval 1) may include M+1 (wherein M is an integer) slots. For example, a time interval starting from a start point of the first TDD interval (TDD interval 1) up to before a start point of a first TDD slot (i.e., TDD slot 0) may be defined as a first guard time (hereinafter referred to as 'GT1'). For example, a time interval between each TDD slot included in the first TDD interval (TDD interval 1) may be defined as a second guard time (GT2).
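The slot layout inside one TDD interval (GT1 before TDD slot 0, GT2 between adjacent slots) can be sketched numerically. This is an illustrative approximation: the microsecond values and per-slot lengths are hypothetical, and only the layout rule comes from the description, which also allows slots within an interval to have different lengths.

```python
# Illustrative start times of TDD slots within one TDD interval:
# GT1 precedes slot 0, and GT2 separates each pair of adjacent slots.
GT1 = 10                             # guard time before TDD slot 0 (hypothetical, us)
GT2 = 5                              # guard time between adjacent slots (hypothetical, us)
slot_lengths = [100, 120, 100, 80]   # per-slot lengths may differ (hypothetical, us)

def slot_start_times(gt1, gt2, lengths):
    starts, t = [], gt1
    for length in lengths:
        starts.append(t)
        t += length + gt2   # next slot begins after this slot plus GT2
    return starts

print(slot_start_times(GT1, GT2, slot_lengths))  # [10, 115, 240, 345]
```

A trailing guard time would then fill the remainder of the interval, so every TDD interval keeps the same overall length even when slot lengths differ.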
For example, a time interval starting from an end point of an (M+1)th TDD slot (TDD slot M) up to an end point of the first TDD interval (TDD interval 1) may be defined as a third guard time (GT3). For example, each of the plurality of TDD intervals (TDD interval 1 to TDD interval Q) may have the same length. Each of the M+1 TDD slots (e.g., TDD slot 0 to TDD slot M of FIG. 4) included in one TDD interval (e.g., TDD interval 1 of FIG. 4) may have a different length. Referring to FIG. 4, the structure(s) of one or more TDD slots being included in the first TDD interval (i.e., TDD interval 1) may be repeatedly applied to the remaining TDD intervals (i.e., TDD interval 2 to TDD interval Q). FIG. 5 is a diagram for describing a physical configuration of a related art radio frame. Referring to FIG. 5, it is assumed that a Directional Multi-Gigabit (DMG) physical layer commonly includes the fields shown in FIG. 5. However, depending upon each mode, there may be differences in the regulation method and modulation/coding scheme(s) used for each separate field. A preamble of the radio frame shown in FIG. 5 may include a Short Training Field (STF) and a Channel Estimation (CE) field. Additionally, the radio frame may include a header field, a data field for a payload, and a Training (TRN) field for beamforming. FIG. 6 and FIG. 7 are detailed diagrams showing a header field included in the radio frame of FIG. 5. Referring to FIG. 6, the diagram shows a case where a Single Carrier (SC) mode is used.
In the SC mode, the header field may include information such as information indicating an initial value for scrambling, a Modulation and Coding Scheme (MCS), information indicating data length, information indicating the presence or absence of an additional Physical Protocol Data Unit (PPDU), a packet type, a training length, performance or non-performance of aggregation, presence or absence of a beam training request, a last Received Signal Strength Indicator (RSSI), performance or non-performance of truncation, a Header Check Sequence (HCS), and so on. Additionally, as shown in FIG. 6, the header has 4 reserved bits, and such reserved bits may be used as described below in the following description. Referring to FIG. 7, the diagram shows a detailed configuration of the header field when an OFDM mode is applied. For example, when the OFDM mode is applied, the header field may include information such as information indicating an initial value for scrambling, an MCS, information indicating data length, information indicating the presence or absence of an additional PPDU, a packet type, a training length, performance or non-performance of aggregation, presence or absence of a beam training request, a last RSSI, performance or non-performance of truncation, a Header Check Sequence (HCS), and so on. As shown in FIG. 7, the header has 2 reserved bits, and such reserved bits may be used as described below in the following description, just as in the case of FIG. 6. Channel bonding and MIMO technology are adopted in IEEE 802.11ay. In order to implement the channel bonding and MIMO technology in 11ay, a new PPDU structure is needed. That is, when using the conventional (or existing) 11ad PPDU structure, there are limitations in implementing the channel bonding and MIMO technology while supporting a legacy UE at the same time.
In the present specification, a new field for an 11ay UE may be defined after the legacy preamble and legacy header field that are used for supporting the legacy UE. Herein, the channel bonding and MIMO technology may be supported based on the newly defined field. FIG. 8 is a diagram showing a PPDU structure according to an embodiment of the present disclosure. In FIG. 8, a horizontal axis may correspond to a time domain, and a vertical axis may correspond to a frequency domain. When the channel bonding scheme is applied for two or more channels (e.g., CH1, CH2 of FIG. 8), a frequency band having a predetermined size (e.g., a 400 MHz band) may exist between frequency bands (e.g., 1.83 GHz) being used in each channel. In case of a Mixed mode, when a legacy preamble (e.g., L-STF, L-CE of FIG. 8) is duplicated and transmitted through each channel, the present embodiment may consider a transmission of a new STF and CE field (i.e., gap filling) together with the legacy preamble at the same time through a 400 MHz band between each channel. In this case, as shown in FIG. 8, the PPDU structure according to the present disclosure may have a structure of transmitting an ay STF, ay CE, ay header B, and payload through a wideband after the legacy preamble, legacy header, and ay header A. Therefore, the ay header, ay payload fields, and so on being transmitted after the header field may be transmitted through channels that are used for bonding. Hereinafter, in order to differentiate the ay header from the legacy header, the ay header may also be referred to as an enhanced directional multi-gigabit (EDMG) header, and the corresponding term may be interchangeably used. For example, a total of 6 or 8 channels (each 2.16 GHz) may exist in 11ay, and a maximum of 4 channels may be bonded and transmitted to a single STA. Accordingly, the ay header and ay payload may be transmitted through 2.16 GHz, 4.32 GHz, 6.48 GHz, and 8.64 GHz bandwidths.
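The bonded bandwidths listed above follow directly from the 2.16 GHz channel width and the maximum of four bonded channels; the short sketch below just makes that arithmetic explicit.

```python
# Bonded bandwidths for 1 to 4 bonded 11ay channels of 2.16 GHz each,
# matching the 2.16 / 4.32 / 6.48 / 8.64 GHz values in the description.
CHANNEL_WIDTH_GHZ = 2.16
MAX_BONDED_CHANNELS = 4

bonded = [round(n * CHANNEL_WIDTH_GHZ, 2) for n in range(1, MAX_BONDED_CHANNELS + 1)]
print(bonded)  # [2.16, 4.32, 6.48, 8.64]
```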
Alternatively, a PPDU format corresponding to a case where the legacy preamble is repeatedly transmitted without performing Gap-Filling may also be considered. In this case, since Gap-Filling is not performed, without the GF-STF and GF-CE fields, which are marked in dotted lines inFIG.8, the ay STF, ay CE, and ay header B are transmitted through a wideband after the legacy preamble, legacy header, and ay header A. FIG.9is a diagram showing a PPDU structure according to the present embodiment. When briefly summarizing the aforementioned PPDU format, the PPDU format may be as shown inFIG.9. As shown inFIG.9, the PPDU format that is applicable to the 11ay system may include L-STF, L-CEF, L-Header, EDMG-Header-A, EDMG-STF, EDMG-CEF, EDMG-Header-B, Data, and TRN fields, and the aforementioned fields may be selectively included in accordance with the format of the PPDU (e.g., SU PPDU, MU PPDU, and so on). Herein, a portion including the L-STF, L-CEF, and L-header fields may be referred to as a non-EDMG portion, and the remaining portion may be referred to as an EDMG portion. Additionally, the L-STF, L-CEF, L-Header, and EDMG-Header-A fields may be referred to as pre-EDMG modulated fields, and the remaining portions may be referred to as EDMG modulated fields. As described above, methods such as channel bonding, channel aggregation, and/or FDMA, which transmit data by using multiple channels at the same time, may be applied in the 11ay system that can apply the present disclosure. In particular, since the 11ay system uses signals of a high frequency band, a beamforming operation may be applied in order to transmit and/or receive signals at a high reliability level. FIG.10illustrates operations of performing beamforming for a channel according to the embodiment of the present disclosure. 
Referring toFIG.10, an STA that intends to transmit data through the beamforming operation is referred to as an initiator, and an STA that receives the data from the initiator is referred to as a responder. Additionally, althoughFIG.10shows only a total of two channels (e.g., CH1, CH2), it shall be understood that the structure of the present specification may be extendedly applied also to channel bonding/channel aggregation through 3 or more channels. As shown inFIG.10, beamforming training according to the present embodiment may be configured of a Sector Level Sweep (SLS) phase (or step), a channel bonding setup phase, and a channel bonding transmission phase. For reference, the SLS phase has the following characteristics. In order to communicate (or transfer) data or control information, and so on, with higher reliability in a 60 GHz band that is supported in the 11ay system, a directional transmission method may be applied instead of an omni-transmission method. STAs intending to transmit/receive data in the 11ay system may respectively know a TX best sector or RX best sector for the initiator and the responder through the SLS process. For reference, the SLS phase will hereinafter be described in more detail with reference toFIG.12andFIG.13, which will be described later on. FIG.11shows an example of a beamforming training process according to an embodiment of the present disclosure. Referring toFIG.1toFIG.11, in BF training, which occurs during Association Beamforming Training (A-BFT) allocation, an AP or PCP/AP is the initiator, and a non-AP and non-PCP/AP STA is the responder. In BF training, which occurs during SP allocation, a source (EDMG) STA of the SP is the initiator, and a target STA of the SP is the responder. In BF training, which occurs during Transmission Opportunity (TXOP) allocation, a TXOP holder is the initiator, and a TXOP responder is the responder. 
A link from the initiator to the responder will be referred to as an initiator link, and a link from the responder to the initiator will be referred to as a responder link. The BF training process starts with an SLS from the initiator. The purpose of the SLS phase is to enable communication between two STAs at a control PHY rate or higher MCS. Most particularly, the SLS phase provides only the transmission of BF training. Additionally, when a request is made by the initiator or responder, a Beam Refinement Protocol (or Beam Refinement Phase) (BRP) may be performed subsequent to the SLS phase. The purpose of the BRP phase is to enable reception (RX) training and to enable iterative refinement of the Antenna Weight Vector (AWV) of all transmitters and receivers within all STAs. If one of the STAs participating in the beam training chooses to use a single transmit (TX) antenna pattern, the RX training may be performed as part of the SLS phase. A more detailed description of the SLS phase is as follows. The SLS phase may include: 1) an Initiator Sector Sweep (ISS) for training the initiator link, 2) a Responder Sector Sweep (RSS) for training the responder link, 3) SSW feedback, and 4) SSW ACK. The initiator may start the SLS phase by transmitting the frame (or frames) of the ISS. The responder does not start the transmission of the frame (or frames) of the RSS before the ISS is successfully completed, except where the ISS occurs within a BTI. The initiator may not start the SSW feedback before the RSS phase is successfully completed, except where the RSS occurs within an A-BFT. The responder does not start the SSW ACK of the initiator within the A-BFT. The responder starts the SSW ACK of the initiator immediately after the SSW feedback of the initiator has been successfully completed.
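The sequencing constraints above (RSS not before ISS completes, SSW feedback not before RSS completes, SSW ACK following the feedback) can be sketched as a small state machine. This is a simplified illustration only; the BTI and A-BFT exceptions noted in the text are not modeled:

```python
# Ordered steps of the SLS phase as described above.
SLS_ORDER = ["ISS", "RSS", "SSW_FEEDBACK", "SSW_ACK"]

class SlsPhase:
    """Enforces that each SLS step starts only after the previous one completes."""

    def __init__(self):
        self.completed = []

    def complete(self, step: str):
        expected = SLS_ORDER[len(self.completed)]
        if step != expected:
            raise RuntimeError(f"{step} may not start before {expected} completes")
        self.completed.append(step)

phase = SlsPhase()
for step in SLS_ORDER:
    phase.complete(step)
print(phase.completed[-1])  # SSW_ACK
```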
The BF frames being transmitted by the initiator during the SLS phase may include an (EDMG) beacon frame, an SSW frame, and an SSW feedback frame. During the SLS phase, the BF frames being transmitted by the responder may include an SSW frame and an SSW-ACK frame. If each of the initiator and the responder performs a Transmit Sector Sweep (TXSS) during the SLS phase, at the end of the SLS phase, each of the initiator and the responder may possess its own transmit sector. If the ISS or RSS employs a receive sector sweep, the responder or initiator may possess its own receive sector. An STA does not change its transmission power (or transmit power) during a sector sweep. FIG.12andFIG.13are drawings showing examples of the SLS phase. Referring toFIG.12, the initiator has a large number of sectors, and the responder has one transmit sector and the receive sector that is used in the RSS. Accordingly, the responder transmits SSW frames through the same transmit sector while the initiator switches its receive antenna(s). Referring toFIG.13, the initiator has a large number of sectors, and the responder has one transmit sector and the receive sector that is used in the RSS. In this case, the receive training for the initiator may be performed during the BRP phase. The SLS phase according to the present embodiment may be summarized as follows. The SLS phase is a protocol performing link detection in an 802.11ay system according to the present embodiment. Herein, the SLS phase is a beam training method wherein the network nodes contiguously (or consecutively) transmit/receive frames including the same information over a reception channel link while changing only the beam direction, and wherein the best beam direction is selected based on an indicator (e.g., Signal to Noise Ratio (SNR), Received Signal Strength Indicator (RSSI), and so on) indicating the performance of the receive channel link among the successfully received frames.
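The sector-selection rule of the SLS summary above can be sketched as an argmax over the link-quality indicators of the successfully received frames. The measurement values below are illustrative, not from the specification:

```python
def select_best_sector(measurements: dict) -> int:
    """Return the sector id with the highest quality indicator.

    `measurements` maps sector id -> indicator (e.g., SNR in dB or RSSI) for
    frames that were successfully received; sectors whose frames were lost
    are simply absent from the mapping.
    """
    if not measurements:
        raise ValueError("no frames were successfully received")
    return max(measurements, key=measurements.get)

# Example: the SSW frame swept through sector 3 arrived with the best SNR.
snr_by_sector = {0: 11.5, 1: 14.2, 3: 19.8, 5: 17.1}
print(select_best_sector(snr_by_sector))  # 3
```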
Additionally, the BRP phase may be summarized as follows. The BRP phase is a protocol finely adjusting the beam direction that may maximize the data transmission rate from the beam direction, which is selected in the SLS phase or by a different means, and the BRP phase may be performed when needed. The BRP phase performs the BRP training by using a BRP frame including beam training information and information reporting the training result, wherein the BRP frame is defined for the BRP protocol. For example, BRP is a beam training method, wherein a BRP frame is transmitted/received by using a beam that is determined during a previous beam training, and wherein beam training is substantially performed by using a beam training sequence, which is included at an end part of a successfully transmitted/received BRP frame. Although the SLS uses a whole frame (or the frame itself) for the beam training, BRP may be different from SLS in that it uses only the beam training sequence. The above-described SLS phase may be performed within a Beacon Header Interval (BHI) and/or Data Transfer Interval (DTI). Firstly, the SLS phase that is performed during a BHI may be the same as the SLS phase, which is defined in the 11ad system for its coexistence with the 11ad system. Subsequently, the SLS phase that is performed during a DTI may be performed in case beamforming training is not performed between an initiator and a responder, or in case a beamforming (BF) link is lost. At this point, if the initiator and the responder are 11ay STAs, the initiator and the responder may transmit a short SSW frame for the SLS phase instead of the SSW frame. Herein, the short SSW frame may be defined as a frame including a short SSW packet in a Data field of a DMG control PHY or DMG control mode PPDU. At this point, a detailed format of the short SSW packet may be configured differently depending upon the transmission purpose (e.g., I-TXSS, R-TXSS, and so on) of the short SSW packet. 
The characteristics of the above-described SLS phase may also be applied to all SLS phases that will hereinafter be described. FIG.14is a block diagram showing the inside of an electronic device supporting a variable compression rate based on a radio link condition according to the present embodiment. An electronic device1400according to the present embodiment may include a first device1410corresponding to a TV main body and a second device1420corresponding to a TV panel. The first device1410according to the present embodiment may include a processor1411, an audio video (AV) compression module1412, a Tx module1413, and an Rx module1414. The processor1411may control the overall operation of the electronic device1400. For example, upon determining that the radio link condition is good, the image data may be compressed relatively less (e.g., at a compression rate of 1/4) so that an image of best quality is provided under the control of the second device1420. As another example, upon determining that the radio link condition is not good, the image data may be compressed more (e.g., at a compression rate of 1/10) so that screen stuttering is minimized under the control of the second device1420. For example, the processor1411may transfer a first control signal ‘Tx AWV_C’ associated with a transmit antenna weight vector (hereinafter, Tx AWV) to the Tx module1413. In this case, the first control signal ‘Tx AWV_C’ may be used to monitor the radio link condition with respect to the second device1420. In addition, the processor1411may receive monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition from the Rx module1414. In this case, the monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition may be used to determine an available MCS based thereon.
In addition, the processor1411may determine the available MCS based on the monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition. For reference, a process of determining an available MCS based on the monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition will be described in greater detail with reference toFIG.15to be described below. In addition, the processor1411may determine a compression rate for a data stream, based on the determined MCS. For example, the compression rate for the data stream may be 1/4, 1/5, 1/6, or 1/10. For example, the compression rate for the data stream may be associated with a ratio for image compression and image restoration. For reference, the process of determining the compression rate for the data stream, based on the determined MCS, will be described in greater detail with reference toFIG.15described below. In addition, the processor1411may transfer a second control signal ‘Sel’ associated with the determined compression rate to the AV compression module1412. The AV compression module1412may compress the data stream according to the received second control signal ‘Sel’. In addition, the AV compression module1412may transfer a compressed data stream Data_C to the Tx module1413in the form of a bit stream. In the present specification, the AV compression module1412may be referred to as a data compression module. The Tx module1413may be associated with an Rx module1421of the second device1420. The Tx module1413may support a TDD scheme in which the same frequency is time-divided. In the present specification, the Tx module1413may be referred to as a first Tx antenna module. The Tx module1413may transmit a signal to the second device1420according to the received first control signal ‘Tx AWV_C’. 
For example, the signal transmitted to the second device1420according to the first control signal ‘Tx AWV_C’ may be a signal transmitted in the SLS phase mentioned above with reference toFIG.10toFIG.13. In addition, the Tx module1413includes two antennas to increase a transfer rate, and may simultaneously transmit two bit streams. For example, the Tx module1413may transmit to the second device1420a compressed data stream Data_C received based on the two antennas. The Rx module1414may be associated with a Tx module1422of the second device1420. The Rx module1414may support a TDD scheme in which the same frequency is time-divided. In the present specification, the Rx module1414may be referred to as a first Rx antenna module. In addition, the Rx module1414includes two antennas, and may simultaneously receive two bit streams. The Rx module1414may receive monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition from the Tx module1422. For example, the first monitoring result information M_r1′ received from the first device1410through a radio link between the first device1410and the second device1420may be the same information as the first monitoring result information M_r1 generated by the Rx module1421. In addition, the second monitoring result information M_r2′ received from the first device1410through the radio link between the first device1410and the second device1420may be the same information as the second monitoring result information M_r2 generated by the Rx module1421. In addition, the third monitoring result information M_r3′ received from the first device1410through the radio link between the first device1410and the second device1420may be the same information as the third monitoring result information M_r3 generated by the Rx module1421. In addition, the Rx module1414may transfer the received monitoring result information M_r1′, M_r2′, and M_r3′ to the processor1411. 
The second device1420according to the present embodiment may include the Rx module1421, the Tx module1422, an AV restoration module1423, and a screen module1424. The Rx module1421may be associated with the Tx module1413of the first device1410. The Rx module1421may support the TDD scheme in which the same frequency is time-divided. In the present specification, the Rx module1421may be referred to as a second Rx antenna module. In addition, the Rx module1421includes two antennas, and may simultaneously receive two bit streams. The Rx module1421may receive a signal for monitoring the radio link condition, based on a receive antenna weight vector (hereinafter, Rx AWV). For example, the Rx module1421may receive the signal for monitoring the radio link condition while changing the Rx AWV. In addition, the Rx module1421may measure channel capacity depending on a combination of the Tx AWV and the Rx AWV. Accordingly, the first monitoring result information M_r1 associated with the channel capacity depending on the combination of the Tx AWV and the Rx AWV and the second monitoring result information M_r2 associated with line of sight (LOS)/non-line of sight (NLOS) depending on the combination of the Tx AWV and the Rx AWV may be obtained. For example, an operation of determining the LOS and NLOS by means of the Rx module1421may be performed by using a profile of graphs illustrated inFIG.17andFIG.18. Further, the third monitoring result information M_r3 associated with maximum channel capacity depending on the combination of the Tx AWV and the Rx AWV may be obtained. In addition, the Rx module1421may transmit the first to third monitoring result information M_r1, M_r2, and M_r3 to the Tx module1422. In addition, the Rx module1421may transmit the received compressed data stream Data_C′ to the AV restoration module1423. The Tx module1422may be associated with the Rx module1414of the first device1410. 
The Tx module1422may support the TDD scheme in which the same frequency is time-divided. In the present specification, the Tx module1422may be referred to as a second Tx antenna module. In addition, the Tx module1422includes two antennas to increase a transfer rate, and may simultaneously transmit two bit streams. The Tx module1422may transmit to the first device1410the received first to third monitoring result information M_r1, M_r2, and M_r3. In this case, the first to third monitoring result information M_r1, M_r2, and M_r3 may be transmitted to the Rx module1414in the form of a bit stream. The AV restoration module1423may perform an operation of restoring the received compressed data stream Data_C′ to generate restored data stream Data_R. In the present specification, the AV restoration module1423may be referred to as a data restoration module. In addition, the AV restoration module1423may transfer the restored data stream Data_R to the screen module1424. For example, the AV restoration module1423may obtain information on a compression rate determined by the first device1410through a syntax of a low level. Alternatively, the AV restoration module1423may obtain the information on the compression rate determined by the first device1410through header information of the received compressed data stream Data_C. The screen module1424may include a plurality of organic light emitting diode (OLED) elements. For example, the screen module1424may reproduce an image, based on the received restored data stream Data_R. It will be understood that the internal block diagram ofFIG.14is for exemplary purposes only, and the present specification is not limited thereto. For example, the AV compression module1412may be a component included in the processor1411. FIG.15is a flowchart showing an operation method of an electronic device supporting a variable compression rate based on a radio link condition according to the present embodiment, from a viewpoint of a main body device. 
Referring toFIG.14andFIG.15, in step S1510, a first device (e.g.,1410) corresponding to a TV main body of an electronic device (e.g.,1400) according to the present embodiment may perform monitoring on a radio link condition with respect to a second device (e.g.,1420) corresponding to a TV panel. For example, according to a first control signal (e.g., the Tx AWV_C ofFIG.14) of the electronic device (e.g.,1400), the first device (e.g.,1410) may transmit to the second device (e.g.,1420) a signal for monitoring the radio link condition, based on a Tx AWV associated with a Tx module (e.g.,1413) (i.e., while changing the Tx AWV according to the Tx AWV_C). In this case, the signal for monitoring the radio link condition, transmitted according to the first control signal (e.g., the Tx AWV_C ofFIG.14), may correspond to a signal transmitted in the SLS phase mentioned above with reference toFIG.10toFIG.13. In step S1520, the first device (e.g.,1410) may receive a plurality of pieces of monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition from the second device (e.g.,1420). For example, the plurality of pieces of monitoring result information M_r1′, M_r2′, and M_r3′ may be received based on the Rx module1414. For example, the received first monitoring result information M_r1′ may be associated with channel capacity depending on a combination of the Tx AWV of the first device (e.g.,1410) and the Rx AWV of the second device (e.g.,1420). For example, the received second monitoring result information M_r2′ may be associated with LOS/NLOS depending on the combination of the Tx AWV of the first device (e.g.,1410) and the Rx AWV of the second device (e.g.,1420). For example, the received third monitoring result information M_r3′ may be associated with maximum channel capacity depending on the combination of the Tx AWV of the first device (e.g.,1410) and the Rx AWV of the second device (e.g.,1420).
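The monitoring of steps S1510 and S1520 sweeps combinations of the Tx AWV and Rx AWV and derives the per-combination capacity (M_r1) and the maximum capacity (M_r3). A hypothetical sketch, in which `measure_capacity` stands in for the actual PHY measurement and the AWV values are illustrative:

```python
def monitor_link(tx_awvs, rx_awvs, measure_capacity):
    """Sweep all (Tx AWV, Rx AWV) combinations and summarize channel capacity."""
    capacities = {}                          # M_r1: capacity per AWV combination
    for tx in tx_awvs:
        for rx in rx_awvs:
            capacities[(tx, rx)] = measure_capacity(tx, rx)
    best_combo = max(capacities, key=capacities.get)
    max_capacity = capacities[best_combo]    # M_r3: maximum channel capacity
    return capacities, best_combo, max_capacity

# Illustrative stand-in measurement: capacity peaks when the beams align.
fake_measure = lambda tx, rx: 10.0 - abs(tx - rx)
caps, best, peak = monitor_link(range(3), range(3), fake_measure)
print(best, peak)  # (0, 0) 10.0
```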
In step S1530, the first device (e.g.,1410) may determine an MCS for the radio link condition, based on the plurality of pieces of monitoring result information M_r1′, M_r2′, and M_r3′ associated with the radio link condition. The first device (e.g.,1410) according to the present embodiment may map the received first monitoring result information M_r1′ to a signal to noise ratio (SNR) value. In addition, the first device (e.g.,1410) according to the present embodiment may determine whether the radio link condition is an LOS condition or an NLOS condition with respect to the second device (e.g.,1420), based on the second monitoring result information M_r2′. For example, referring toFIG.15andFIG.17, when it is determined as an LOS environment according to the second monitoring result information M_r2′ and when the SNR value based on the first monitoring result information M_r1′ is 23 dB, the MCS that can be used in the radio link condition may be determined as an MCS21ofFIG.17. For example, referring toFIG.15andFIG.18, when it is determined as an NLOS environment according to the second monitoring result information M_r2′ and when the SNR value based on the first monitoring result information M_r1′ is 27 dB, the MCS that can be used in the radio link condition may be determined as an MCS17ofFIG.18. In step S1540, the first device (e.g.,1410) may determine an available compression rate for a data stream in the radio link condition, based on the determined MCS. Table 1 and Table 2 below may be used to determine a compression rate at which real-time transmission is possible in the radio link condition. In addition, information as shown in Table 1 and Table 2 below may be information pre-stored in the first device (e.g.,1410). Table 1 below shows an MCS capable of real-time transmission when the compression ratio is 1/4. An operation time of Table 1 below may be associated with a time for exchanging a packet between two electronic devices. 
For reference, the operation time of Table 1 may be set to 0.5 ms.

TABLE 1: MPDU Transmission Case (MPDU size 1,670,528; Operation Time 0.500 ms)

MCS  Modulation  Transmission Time (ms)
 2   BPSK        4.414
 3   BPSK        3.541
 4   BPSK        2.958
 5   BPSK        2.734
 6   BPSK        2.542
 7   QPSK        2.230
 8   QPSK        1.793
 9   QPSK        1.502
10   QPSK        1.390
11   QPSK        1.294
12   16-QAM      1.137
13   16-QAM      0.919
14   16-QAM      0.774
15   16-QAM      0.717
16   16-QAM      0.669
17   64-QAM      0.773
18   64-QAM      0.628
19   64-QAM      0.531
20   64-QAM      0.493
21   64-QAM      0.461

Table 2 below shows an MCS capable of real-time transmission when the compression ratio is 1/6. An operation time of Table 2 below may be associated with a time for exchanging a packet between two electronic devices. For reference, the operation time of Table 2 may be set to 0.5 ms.

TABLE 2: MPDU Transmission Case (MPDU size 1,177,528; Operation Time 0.500 ms)

MCS  Modulation  Transmission Time (ms)
 2   BPSK        3.121
 3   BPSK        2.506
 4   BPSK        2.096
 5   BPSK        1.938
 6   BPSK        1.803
 7   QPSK        1.583
 8   QPSK        1.276
 9   QPSK        1.071
10   QPSK        0.992
11   QPSK        0.924
12   16-QAM      0.814
13   16-QAM      0.660
14   16-QAM      0.558
15   16-QAM      0.519
16   16-QAM      0.485
17   64-QAM      0.558
18   64-QAM      0.455
19   64-QAM      0.387
20   64-QAM      0.361
21   64-QAM      0.338

For example, when the MCS value determined in step S1530is 20 or 21, the compression rate for the data stream may be determined to be 1/4, associated with Table 1, in consideration of the operation time. As another example, when the MCS value determined in step S1530is 16, 18, or 19, the compression rate for the data stream may be determined to be 1/6, associated with Table 2, in consideration of the operation time. However, when the MCS value determined in step S1530is 20 or 21, the compression rate for the data stream may also be determined to be 1/6, associated with Table 2, in consideration of the radio link condition and the operation time. It will be understood that Table 1 and Table 2 above are for exemplary purposes only, and the present specification is not limited thereto. In step S1550, the first device (e.g.,1410) may compress the data stream, based on the determined compression rate.
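Steps S1530 and S1540 can be sketched together: pick an MCS from the SNR and the LOS/NLOS determination, then pick the lightest compression whose per-MCS transmission time fits within the 0.5 ms operation time. Only the two operating points stated in the text (LOS/23 dB giving MCS 21, NLOS/27 dB giving MCS 17), the Table 1 and Table 2 times for MCS 16 to 21, and the 0.5 ms budget are taken from the description; the remaining SNR thresholds and the fallback MCS are illustrative assumptions:

```python
from fractions import Fraction

# (minimum SNR in dB, MCS), ordered by descending SNR requirement.
# Hypothetical thresholds; the real ones come from the PER curves of
# FIG. 17 (LOS) and FIG. 18 (NLOS).
LOS_THRESHOLDS = [(23.0, 21), (18.0, 16)]
NLOS_THRESHOLDS = [(30.0, 21), (27.0, 17), (24.0, 16)]

def determine_mcs(snr_db, is_los):
    for min_snr, mcs in (LOS_THRESHOLDS if is_los else NLOS_THRESHOLDS):
        if snr_db >= min_snr:
            return mcs
    return 12  # illustrative fallback for poor links

OPERATION_TIME_MS = 0.5
# Transmission times (ms) for MCS 16-21, from Table 1 (rate 1/4) and Table 2 (rate 1/6).
TX_TIME_MS = {
    Fraction(1, 4): {16: 0.669, 17: 0.773, 18: 0.628, 19: 0.531, 20: 0.493, 21: 0.461},
    Fraction(1, 6): {16: 0.485, 17: 0.558, 18: 0.455, 19: 0.387, 20: 0.361, 21: 0.338},
}

def determine_compression_rate(mcs):
    # Try the lightest compression first: 1/4 preserves more image data than 1/6.
    for rate in sorted(TX_TIME_MS, reverse=True):
        if TX_TIME_MS[rate].get(mcs, float("inf")) <= OPERATION_TIME_MS:
            return rate
    raise ValueError(f"no stored table supports real-time transmission at MCS {mcs}")

print(determine_mcs(23.0, is_los=True))   # 21 (LOS example from the text)
print(determine_compression_rate(21))     # 1/4 (0.461 ms fits the 0.5 ms budget)
print(determine_compression_rate(19))     # 1/6 (the Table 1 time of 0.531 ms does not fit)
```

Note that MCS 17 exceeds the 0.5 ms budget in both tables (0.773 ms and 0.558 ms), which matches its absence from the "16, 18, or 19" example in the text.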
For example, the first device (e.g.,1410) may control the AV compression module1412to perform compression on the data stream, based on a second control signal ‘Sel’ associated with the compression rate determined in the previous step. In step S1560, the first device (e.g.,1410) may transmit the compressed data stream to the second device (e.g.,1420). For example, the first device (e.g.,1410) may control the Tx module1413to transmit the compressed data stream Data_C. FIG.16is a flowchart showing an operation method of an electronic device supporting a variable compression rate based on a radio link condition according to the present embodiment, from a viewpoint of a panel device. Referring toFIG.15andFIG.16, in step S1610, a second device (e.g.,1420) of an electronic device (e.g.,1400) according to the present embodiment may perform a monitoring operation on a radio link condition with respect to a first device (e.g.,1410) corresponding to a TV main body. For example, according to an Rx AWV pre-set in the second device (e.g.,1420), the second device (e.g.,1420) may receive from the first device (e.g.,1410) a signal for monitoring the radio link condition while changing the Rx AWV associated with the Rx module (e.g.,1421). According to the present embodiment, the second device (e.g.,1420) may obtain a plurality of pieces of monitoring result information (e.g., M_r1, M_r2, and M_r3 ofFIG.14) by measuring channel capacity depending on a combination of the Tx AWV and the Rx AWV. For example, the second device (e.g.,1420) may obtain first monitoring result information M_r1 associated with channel capacity based on the Tx AWV and the Rx AWV and second monitoring result information M_r2 associated with LOS/NLOS depending on the combination of the Tx AWV and the Rx AWV according to the monitoring operation of the previous step.
For example, the operation of determining the LOS and NLOS by means of the Rx module1421may be performed by using a profile of the graphs ofFIG.17andFIG.18. Further, the second device (e.g.,1420) may obtain third monitoring result information M_r3 associated with maximum channel capacity depending on the combination of the Tx AWV and the Rx AWV according to the monitoring operation of the previous step. In step S1620, the second device (e.g.,1420) according to the present embodiment may transmit the plurality of pieces of monitoring result information (e.g., M_r1, M_r2, and M_r3 ofFIG.14) to the first device (e.g.,1410). In step S1630, the second device (e.g.,1420) according to the present embodiment may obtain information on the compression rate determined by the first device1410. The information on the compression rate determined by the first device1410may be obtained through a syntax of a low level or may be obtained based on header information of the compressed data stream. In addition, the second device (e.g.,1420) may receive the compressed data stream from the first device (e.g.,1410). FIG.17andFIG.18show graphs used to select an MCS according to the present embodiment. Referring toFIG.17, the packet error rate (PER) performance on the vertical axis is shown according to the SNR on the horizontal axis, when the radio link condition is associated with LOS and when 16QAM is applied for the radio link. As described above, when the SNR is 23 dB in the LOS condition ofFIG.17, the MCS may be determined to be 21. Referring toFIG.18, the PER performance on the vertical axis is shown according to the SNR on the horizontal axis, when the radio link condition is associated with NLOS and when 64QAM is applied for the radio link. As described above, when the SNR is 27 dB in the NLOS condition ofFIG.18, the MCS may be determined to be 17. FIG.19shows an example of applying an electronic device supporting a variable compression rate, based on a radio link condition, according to the present embodiment.
An electronic device1900according to the present embodiment may be a display device such as a TV. The electronic device1900may include a first device1910corresponding to a main body device and a second device1920corresponding to a TV panel. For example, the first device1910may be understood based on the description of the first device1410ofFIG.14. In addition, the second device1920may be understood based on the description of the second device1420ofFIG.14. Although a detailed embodiment is described in the detailed description of the present specification, it will be apparent that various modifications can be made without departing from the scope of the present specification. And, therefore, the scope of the present specification shall not be limited only to the above-described embodiment and shall rather be determined based on the scope of the claims that will hereinafter be described as well as the equivalents of the scope of the claims of the present disclosure.
11943325 DETAILED DESCRIPTION In the following description, for purposes of explanation and non-limitation, specific details are set forth, such as particular nodes, functional entities, techniques, protocols, etc. in order to provide an understanding of the described technology. It will be apparent to one skilled in the art that other embodiments may be practiced apart from the specific details described below. In other instances, detailed descriptions of well-known methods, devices, techniques, etc. are omitted so as not to obscure the description with unnecessary detail. Sections are used in this Detailed Description solely in order to orient the reader as to the general subject matter of each section; as will be seen below, the description of many features spans multiple sections, and headings should not be read as affecting the meaning of the description included in any section. Overview Client computer systems communicate with one or more transaction processing systems using different communication pathways (e.g., different communication protocols). Submissions of new data transaction requests to transaction processing systems can be performed using one of at least two different communication protocols. A first communication protocol is slower than the second. How the transaction processing system processes new data transaction requests can be based on which of the protocols has been used for the submission process. In particular, those data transaction requests submitted with the slower protocol may be processed by taking into account private attributes (in addition to public attributes) while those requests submitted with the faster protocol may only be processed by using public attributes. Updates regarding how data transaction requests have been processed by a given transaction processing system can also be performed by using one of multiple different communication protocols.
In certain examples, the type of information included in an update for a given protocol is based on whether the protocol is “fast” or “slow.” A slower protocol may include additional private attribute information related to the update. In contrast, an update sent out via the faster protocol may include only public attribute information. In certain examples, a slower protocol may include using a session manager component that handles communications from a given client to a given transaction processing system and updates from the given transaction processing system to the given client. In other words, this protocol may handle data transaction requests submissions and data feed updates. In certain examples, a faster submission protocol may use the FIX messaging standard. A faster update protocol may use the ITCH messaging protocol. FIG.1 By way of introduction,FIG.1illustrates a non-limiting example function block diagram that includes computer-implemented transaction processing system(s) (100A and100B) that communicate with client systems (110and120) using different communication protocols. Transaction processing systems100A and100B may be automated exchange computer systems (“exchange”(s) hereafter). Exchange100A includes a matching engine102A, order book104A, and network interfaces106A and108A. Similarly, exchange100B includes matching engine102B, order book104B and network interfaces106B and108B. As discussed herein, the similarly situated components of respective exchanges100A and100B may operate in similar manners and when one component is discussed (matching engine102A), the described functionality may apply to the other component (matching engine102B). In certain examples, different exchanges may process different types of resources (e.g., in a security example, different types of bonds may be handled by respectively different exchanges). 
In certain examples, there may be 10s or 100s of different exchanges included in an overall distributed computer system that handles data transaction requests. For ease of explanation, two separate exchanges are shown inFIG.1—but systems that implement one or more than two exchanges are contemplated. Exchanges may be implemented on one or more computer systems, such as the system shown inFIG.4. Matching engine102A can be a combination of hardware (e.g., a hardware processor, such as a central processing unit) and software or just hardware (e.g., a suitably designed application-specific integrated circuit or field programmable gate array). The matching engine102A handles incoming data transaction requests (e.g., data transaction request112A or112B) and attempts to match incoming data transaction requests against those data transaction requests stored in the electronic order book104A. In certain example embodiments, in addition or alternatively to matching incoming data transaction requests, the matching engine102A attempts to match data transaction requests already stored in the electronic order book104A (e.g., it attempts to match two “resting” or “passive” data transaction requests). In certain examples, market conditions (e.g., the state of the order book for a particular instrument) may change and cause two data transaction requests that were previously stored in the order book104A to match (or cross). In response to such a change, the matching engine102A may identify two data transaction requests that can match and trigger the execution (e.g., trade) between those two data transaction requests. Electronic order book104A can be a data structure (e.g., a database, flat file, list, etc. . . . ) that holds multiple entries in electronically accessible memory (e.g., RAM, cache, registers, hard disk drives, etc. . . . ).
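As a non-authoritative illustration only (not the patent's actual implementation), an electronic order book of this kind — a pair of per-side sorted lists — might be sketched in Python as follows; all names are assumptions for illustration:

```python
import bisect

class OrderBook:
    """Sketch of a two-sided electronic order book (e.g., order book104A).

    Each side is kept as a sorted list: bids best-first (highest price
    first), offers best-first (lowest price first).
    """

    def __init__(self):
        self.bids = []    # (price, order) tuples, descending by price
        self.offers = []  # (price, order) tuples, ascending by price

    def add_bid(self, price, order):
        # Negate prices so bisect keeps the bid side sorted best-first.
        keys = [-p for p, _ in self.bids]
        self.bids.insert(bisect.bisect_right(keys, -price), (price, order))

    def add_offer(self, price, order):
        keys = [p for p, _ in self.offers]
        self.offers.insert(bisect.bisect_right(keys, price), (price, order))

    def best_bid(self):
        return self.bids[0] if self.bids else None

    def best_offer(self):
        return self.offers[0] if self.offers else None
```

A real matching engine would additionally keep time priority within a price level and index entries for cancellation; this sketch shows only the sorted, two-sided shape.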
Typically, an electronic order book104A has two sides, side X and side Y, which can be bid and offer/ask or buy and sell sides for the same instrument in the same electronic order book. The two sides of the order book may be represented as a list pair data structure (one list for the buy side and one list for the sell side). In certain examples, each list of the list pair may be sorted. Further discussion of order books is described in U.S. Publication No. 2017/0004563, the entire contents of which are hereby incorporated by reference. Data transaction requests are initially submitted by client systems (110and/or120). Client systems can include personal computers, mobile devices, automated computer systems, and the like. Generally, client systems102can include any computer system programmed to interface with exchange(s)100A/100B (or components associated therewith) for the purpose of submitting data transaction requests. Data transaction requests include information that can specify an “order” (also called an electronic order herein). Orders are requests for one or more data transaction systems to take a given action (e.g., buy/sell) with respect to an identified resource (e.g., a ticker symbol). For ease of description, “orders” (or electronic orders) are used herein to also refer to the submitted “data transaction requests.” Electronic Orders112A and112B are submitted in the form of electronic data messages (e.g., data messages constructed according to a communication protocol) for a corresponding order (e.g., a data transaction request to match the order to a pending, or future, order). The electronic order may specify a client ID that identifies the client sending the request (e.g., a company, person, etc. . . .
), an instrument ID that identifies a particular instrument (e.g., a ticker symbol or the like), transaction type ID that may identify, for example, whether the request is associated with a sell or buy instruction, an order attribute that specifies whether this is a regular order, a discretion order, a midpoint order, or the like, a quantity value that indicates the quantity of the order, a MinOrder value that indicates a minimum order amount that this order can be matched against, and a price value that indicates a particular price for the order subject to the data transaction request. In certain examples, other fields may be defined in the electronic order and/or some may be optional. In certain examples (e.g., in the case of a discretion order), an electronic order may also specify the price for the displayed quantity of the order, the quantity to be publically shown, a number of discretion ticks for the hidden portion (a private attribute of the order) of the order (e.g., tick amount that is less than, such as a fraction of, a standard screen tick quantity), the total size of the order (a private attribute of the order), and the amount of the order that is eligible to be matched with discretion (a private attribute of the order). In certain examples, the electronic order may specify a total size value and a publically visible value (e.g., such that the size of the private part of the order is the difference between the total size and the visible value). In certain examples, the electronic order may specify a hidden size value (a private attribute of the order) and a publically visible value (e.g., such that the total size of the order is the sum of the hidden and visible values). In certain examples, an order may specify the amount of the order that may be matched at the price allowed by the discretion. This amount may be less than the total amount of the order.
Thus, for example, a discretion order may display 25, have a total quantity of 1000 (e.g., 975 of the 1000 is a private attribute), and have 500 of that 1000 eligible to be traded at 1 discretion tick from the displayed or publically broadcast price. In certain examples, a private attribute includes a number of discretion ticks. Based on this value and the public price of the order, a discretion price can be calculated. As discussed herein, public refers to information (e.g., price information, size information, trade information, etc. . . . ) that is generally made available to third parties via an electronic data feed (e.g., via ITCH gateway124). The parties that receive this information are generally not associated with the order and/or trade. This aspect may also be referred to as “displaying” information to such third parties. Conversely, private (sometimes also referred to as hidden or dark herein) information generally refers to information for which the details are not (at least initially) provided to such third-parties. In certain instances, private information includes information regarding just the existence of private attributes of an order. Private information is made available to the parties associated with the trade, order, match, etc . . . . This also includes the exchange itself, third party regulatory systems, and the like. Thus, for example, an order may have a public price (e.g. that is “displayed” or provided to third parties), a private price (e.g. that is not expressly displayed or provided to third parties), or a combination thereof (e.g., a public price value and a private price value). In certain examples, when a match is determined between a “public” price and a “private” price, such match information is not made public. The “public” information in such a case is transformed into private information. 
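The discretion-price derivation described above (a private price computed from the public price and a number of discretion ticks) might look like the following sketch. The 1/256 sub-tick size and all field names are assumptions for illustration, not values taken from this description:

```python
DISCRETION_TICK = 1 / 256  # assumed sub-tick size (a fraction of a screen tick)

def discretion_price(public_price: float, discretion_ticks: int, side: str) -> float:
    """Derive the private discretion price from public attributes.

    A bid with discretion is willing to trade up to `discretion_ticks`
    sub-ticks above its displayed price; an offer, the same distance below.
    """
    delta = discretion_ticks * DISCRETION_TICK
    return public_price + delta if side == "bid" else public_price - delta

# The discretion order from the example above: displays 25, total quantity
# 1000 (so 975 is a private attribute), 500 eligible at 1 discretion tick.
order = {
    "public_price": 100.0,
    "displayed_qty": 25,             # public attribute
    "total_qty": 1000,               # total size (private attribute)
    "discretion_eligible_qty": 500,  # private attribute
    "discretion_ticks": 1,           # private attribute
}
hidden_qty = order["total_qty"] - order["displayed_qty"]  # 975
```

With a 1/256 sub-tick, one discretion tick on a 100.0 bid yields approximately 100.0039, consistent with the BID_DISCR_PRICE values in the worked feed examples herein.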
It will be appreciated that discretion type orders may have private attributes (e.g., private discretion attributes) and midpoint type orders may also have private attributes (e.g., private midpoint attributes). Other order types with private attributes may be similarly used. In certain examples, private information regarding orders may be provided via one of a plurality of communication protocols used to deliver updates from exchange(s) to client computer systems—including client computer systems that are controlled by third parties (e.g., entities that are not a party to a trade or match, or other action to an order book). In certain examples, orders may only match against resting orders by using private information if the submitted order is marked as being from a specific source type. Examples of such matching are shown in FIG. 6 of U.S. Publication No. 2017/0004563 where orders may be marked as being from different types of clients via a specific data field of the order. This may include clients that access the exchanges via certain types of protocols, but not others. For example, orders may be flagged as being from a “GUI user” or a SessionManager user. A matching engine may use or not use private attributes as part of a matching process based on the order being submitted from a GUI user or a SessionManager. There are different communication protocols that clients can use to connect/communicate with exchanges100A/100B. A first communication protocol (e.g., included in a first communication path) uses session manager114and gateways (116A and116B) that respectively correspond to individual exchanges (100A and100B). A second communication protocol (e.g., included in a second communication path) uses gateways (e.g., FIX gateway122) that determine to which exchange (among exchanges100A and100B) an incoming order (e.g.,112B) should be routed. For the first communication protocol, as shown inFIG.1, client110communicates with session manager114.
Session manager114is generally responsible for managing a “session” that client110has with one or more of the exchange systems shown inFIG.1. This includes handling orders submitted by client110and providing information to client110from the various exchanges (e.g., market data feed information). With the first communication protocol, client110generates a new order and submits it to exchange(s)100A/100B via the session manager114. In certain examples, the session manager114receives the new order112A and then determines, based on the contents of the order and/or associated information, where order112A should be sent. Specifically, the session manager determines if order112A should be sent to exchange100A via gateway116A or should be sent to exchange100B via gateway116B. Gateways116A and116B are each associated with a corresponding exchange—100A and100B—such that exchanges100xmay have a 1-to-1 relationship with gateways116x. In certain example embodiments, the session manager114may annotate a newly received order with additional information indicating that the order has been received via the session manager. For example, a data field of the order may be updated with a value that indicates that this order is being submitted through the session manager114. As discussed herein, this information may be used to determine how the order may be processed by the corresponding exchange. In general, session manager114and gateways116A/116B are separate computer systems that communicate with one another. Thus, session manager114may be implemented on a first computer system that communicates with gateways116A/116B that are implemented on separate computer systems (e.g., separate ones of the computer system shown inFIG.4). However, in certain examples, session manager114, gateways116A/116B, and/or exchanges100A/100B may be separate components (e.g., computer processes) implemented on the same physical computer system.
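The annotate-and-route behavior of the session manager described above might be sketched as follows; the instrument-to-exchange mapping and all names are hypothetical, introduced only for illustration:

```python
# Hypothetical mapping of instruments to exchanges (e.g., different bonds
# handled by respectively different exchanges, per the earlier example).
INSTRUMENT_TO_EXCHANGE = {"BOND_A": "100A", "BOND_B": "100B"}

class Gateway:
    """Stand-in for a per-exchange gateway (e.g., gateway116A or116B)."""
    def __init__(self):
        self.sent = []

    def send(self, order):
        self.sent.append(order)

def session_manager_route(order: dict, gateways: dict) -> str:
    """Annotate a new order as session-manager-submitted, then route it
    to the gateway of the exchange that handles its instrument."""
    annotated = dict(order, source="SessionManager")
    exchange_id = INSTRUMENT_TO_EXCHANGE[annotated["instrument"]]
    gateways[exchange_id].send(annotated)
    return exchange_id
```

The `source` field is the kind of annotation the receiving exchange could later consult when deciding whether the order may interact with private attributes.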
In certain examples, each of the components may be implemented on different virtual machines that may or may not be running on the same underlying hardware as other components. In any event, orders that are received by gateways A or B are then transmitted to the respective exchanges100A or100B where the network interface (106A and106B) of those exchanges receive the new order. The new order is then passed to the matching engine (102A or102B) of the exchange that performs a matching process as discussed herein. In certain examples, the matching process results in a match being found and subsequently executed. In other examples, the matching process results in a new order being added to the order book. Additionally, whenever there is an update to the state of the order book (e.g.,104A or104B) of an exchange (e.g., a new order is added, an order is canceled, modified, etc. . . . ), an electronic data message may be generated that is based on how the order book has been updated. The electronic data message may be transmitted back to client110via network interface108A/108B, the gateway116A/116B, and the session manager114. Such information may include updates to the order book caused by orders from other clients interacting with the order book and/or updates or confirmations that have been generated based on those orders submitted by client110. For example, if a first client submits an order that is added to the order book, an update may be transmitted to a second client that indicates that the order book includes that new order. In certain example embodiments, updates that are provided via this first communication path may include information that is “private” in addition to information that is public.
For example, if a first client submits a new order of quantity 1000 @ 100 that is added to the order book, where only 100 of that 1000 is “public”, information regarding the full 1000 may be included in the electronic data message that is transmitted out via this communication path. As explained herein, the “private” information may specify that additional quantity exists—but without reference to an exact number, or may include a range (e.g., that between 1000 and 10000 additional quantity is available), or may provide the exact amount that is available. In contrast to the first communication protocol that makes use of the session manager114, the second communication protocol uses a general gateway122and does not include the session manager component in its communication pathway. General gateway122may implement the FIX (Financial Information eXchange) messaging protocol for sending and receiving messages to/from an exchange. Accordingly, client120may generate an electronic data message (e.g., using the FIX messaging protocol)112B and submit that message to gateway122. The message may be to submit a new order, modify an existing order, cancel an existing order, or request information from the exchange regarding resting orders. Gateway122receives the message and determines which of the exchanges (e.g., among100A and100B) will receive the message. In other words, gateway122may include functionality that is similar in nature to that found in the session manager114and gateways116A/116B. Accordingly, the newly received message may be routed to the appropriate exchange system based on the contents of the new data transaction request112B and/or other factors. A third communication protocol may be used to communicate data regarding the state of the order book to clients. In particular, an electronic data feed gateway124may communicate with network interfaces108A/108B to provide updates on the state of the order book(s) to client120.
In certain examples, the electronic data feed gateway124may implement the ITCH protocol. Typically, with this third communication protocol only “public” data will be included in electronic data feed updates. Thus, for example, for the above submitted order with 1000 total quantity, of which only 100 is “public,” only the 100 will be included in updates provided via gateway124. In certain examples, private information may be included in the third communication protocol. In certain examples, the level of private information provided via the first communication protocol may be greater than the level of information provided via the third communication protocol. For example, the third communication protocol may generally indicate that additional quantity is available for a given order (or at a price level) while the first communication protocol includes information on the exact amount of quantity that is available. In certain examples, client120and gateways122and124may be co-located (as indicated by dashed line126) with one another (e.g. within the same data center). In certain examples, client110, session manager114, and gateways116A and116B may also be co-located with one another. In certain examples, exchanges100A and100B may be co-located with these components. It will be appreciated that even when the session manager, gateways, and clients are all co-located, the first communication protocol is generally slower than the second or third communication protocols discussed above. In certain examples, the same client computer system may use multiple different communication protocols. For example, client110may also be in communication with gateway122and gateway124. Accordingly, as the communication protocols operate at different speeds, data regarding the same updates may be received at client110at different times. In other words, the “state” of the order book for an exchange may be different depending on what communication protocol is being used by a given client.
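The difference between the update protocols — private detail on the slower session-manager path, public-only data (or, in certain examples, a bare more-is-available hint) on the faster ITCH-style path — might be sketched as follows; the field names are illustrative assumptions:

```python
def feed_view(order: dict, protocol: str) -> dict:
    """Build an order book update for a given feed protocol.

    Session-manager (slower) updates carry private detail; ITCH-style
    (faster) updates carry public attributes only, here optionally with
    a bare flag that additional quantity exists.
    """
    update = {"price": order["price"], "size": order["public_qty"]}
    hidden = order["total_qty"] - order["public_qty"]
    if protocol == "session":
        update["hidden_size"] = hidden  # exact private quantity
    elif hidden > 0:
        update["has_more"] = True       # existence only, no exact amount
    return update

# The order from the example above: 1000 total, of which 100 is public.
order = {"price": 100, "public_qty": 100, "total_qty": 1000}
```

A strictly public-only fast feed would simply omit the `has_more` branch, matching the typical behavior described above.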
Furthermore, order submission times (e.g., the time from when client110transmits a new data transaction request to the time it “hits” the matching engine of an exchange) may be different. FIGS.2A-2B FIGS.2A-2Bshow a signal diagram of how data transaction requests from clients110and120are submitted and how updates back to those clients are communicated according to certain example embodiments. At200, an update to the order book occurs on one of the exchanges100x(A or B). For example, a new order is added to the order book or an order has been removed from the order book (e.g., because it was matched against an incoming order). As a result, an electronic data feed update is sent out at202. This electronic data feed is provided via gateway124to all clients (including client120that subscribes to that data feed). As discussed above, typically, the information provided via this data feed will only include information that is classified as “public” data. Thus, hidden size, price, discretion attributes, or other private or hidden information will typically not be included in the data messages that are published via this data feed. At204, information may also be transmitted out via the first communication protocol. In particular, those clients that have an active session with the exchange and/or order book for which the update relates may receive a session update message204that includes information regarding the order book update. The update message may be routed through gateway116A and session manager114before being received by client110. As noted herein, the session update message204may include private or hidden attribute information regarding orders that are in the order book. However, in this particular example, the update that triggered the session update204did not contain any private information (e.g., the update to the order book was based on lit or publically visible order information). At206, client120may generate an order that is to be submitted to exchange100x.
The order may be generated based on the information received via the electronic data feed at202. As discussed herein, by the time client2has generated (and submitted) an order, the session update message204may still be “in-flight” and in the process of being delivered to client1. At208, client2submits an order to exchange100xvia gateway122. In certain examples, when clients submit orders via gateway122(e.g., a FIX gateway) the order may be automatically marked as such (e.g., that the order was submitted using the FIX messaging protocol). At210, exchange100xprocesses the order submitted at208. Whether or not a submitted order can match against the private attributes of other resting orders may be determined based on whether or not the order has been submitted via FIX or via the Session manager. Specifically, in certain examples, orders submitted via FIX may not match against the private data of resting orders, while orders (as discussed below) that are submitted via the session manager114may match against such data. At212, as a result of processing the newly submitted order, the order book of the corresponding exchange is updated and that update is then transmitted out via the ITCH electronic data feed at214and the session manager at216. As shown inFIG.3, the time between200and212may be less than the time it takes for session update204to arrive at client110. In other words, the “state” of the order book represented by204may be stale or old by the time client110receives the update message. Of course, client110may also be configured to receive updates via gateway124. FIG.2Bshows an example of how session manager based orders may be processed. At250, an update to the order book occurs and those updates cause updates to be propagated out to clients via different communication protocols at252and254. Unlike the update that was triggered inFIG.2A, this update includes order information that is both public data and private data. 
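The matching restriction described at210— orders submitted via FIX may not match against the private data of resting orders, while session-manager orders may — can be sketched as a simplified, quantity-only rule; the field and source names are assumptions for illustration:

```python
def eligible_quantity(resting_order: dict, incoming_source: str) -> int:
    """Quantity of a resting order that an incoming order may interact
    with, based on the protocol used to submit the incoming order."""
    public = resting_order["public_qty"]
    private = resting_order["total_qty"] - public
    if incoming_source == "SessionManager":
        return public + private  # may also reach private/reserve quantity
    return public                # FIX-submitted orders see only lit quantity
```

A full matching engine would apply the analogous rule to private price attributes (e.g., discretion prices), not just quantity.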
For example, the update to the order book may include information regarding an order added to the order book at a displayed price of101(e.g., public data) that also included two ticks of discretion (e.g., private data). At252, the information is propagated via gateway116A and session manager114to client110. The update includes both the publically displayed price for the order and the amount of discretion that the order can trade at (or simply an indicator that it can trade with discretion—but not the price level at which that would be). This information is transmitted from the exchange100xto gateway116A, to session manager114and finally to the client110. In certain examples, updates provided via252may only include the “top” price level for a given security or instrument. Thus, orders with hidden attributes that are not at the top price level may not be included in such updates (e.g., an order that does not have the best price, but has 100 k hidden quantity will not be included in an update). In certain examples, the private information that is included in the update may simply indicate that an order with private attributes exists—but not indicate the precise details of the order. For example, the information provided via the update may indicate that additional quantity is available—but not provide an exact quantity amount. The information provided via252may be displayed to users on respective client computer systems using a graphical user interface. The graphical user interface may show an indicator next to a price for an order (or price level) if discretion (either quantity or price) is available. The indicator may be an “*” or a “+” sign to indicate that additional “hidden” order flow is available for a given security.
The information that is included in an update communicated via the session manager may include one or more of the following fields: 1) BID/OFFER disc_lit price−tick price level in the system to which discretion is applied; 2) BID/OFFER discretion ticks−number of ticks of discretion for “discretion price”; 3) BID/OFFER discretion price−Actual discretion price, for applications that don't want to compute based on discretion ticks; 4) BID/OFFER discretion size−Actual best discretion size (e.g., if this value is set to ‘0’ it will indicate that there is no price availability for this price level). Further, if the BID/OFFER discretion size field is greater than zero then discretion is present. But if it is set to zero then there is currently no discretion for that side at that level. At254, the ITCH data feed (e.g., using the ITCH messaging protocol) is propagated to client120via the ITCH gateway124. However, unlike the update provided by252, this update will not include any indication that discretion is available for the order that is displayed at101. At256, client110has received the update252and then decides to generate and submit, at258, a new order. For example, client110may decide to submit an order to match against the order that caused the update at250at a price level between the bid/offer spread. This submitted order is initially submitted to the session manager114. In certain examples, once the session manager receives the new order it may be automatically annotated with information indicating that this order is an order that is being submitted via the session manager114. Session manager114then routes that order to gateway116A, which then routes it to exchange100A. Once exchange100A receives the order, then it is processed at260(e.g., by the matching engine). Examples of how such processing may occur for orders that have hidden attributes are described in U.S. Publication No. 2017/0004563, the entire contents of which are hereby incorporated by reference.
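The per-side discretion fields enumerated above, together with the size-greater-than-zero rule for discretion presence, might be modeled as follows; this is a sketch, not the actual message layout:

```python
from dataclasses import dataclass

@dataclass
class DiscretionUpdate:
    """One side (BID or OFFER) of a session-manager update."""
    side: str
    disc_lit_price: float    # price level to which discretion is applied
    discretion_ticks: int    # number of ticks of discretion
    discretion_price: float  # precomputed, for clients that don't derive it
    discretion_size: int     # best discretion size; 0 means none at this level

    @property
    def discretion_present(self) -> bool:
        # Per the rule above: a size greater than zero means discretion
        # exists for this side at this price level.
        return self.discretion_size > 0
```

A client GUI could use `discretion_present` to decide whether to show the “*” or “+” indicator described earlier.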
For example, a match may be determined at a price using the discretion attribute of the resting order. In any event, the order book is updated at262, which triggers or causes the updates via the separate communication protocols at264and266. The update that is provided via266may include private information regarding the just performed match (e.g., simply an indicator that the match occurred with private information or more precise information regarding the price and/or quantity). In certain examples, the update266is provided to any client computer systems that are using the communication protocol used for the 266 update. In other words, the update that includes the private information may be provided to third party client computer systems (e.g., those systems that are not associated with the cause of the update to the order book at262). The following is an example sequence of transactions for how updates and order submissions may be realized according to the example embodiments described herein. Initially, an order book includes a bid that publically shows 5@100. A data transaction request is submitted that is for a non-elect order with discretion. That order is 10@100 with 1 discretion tick. The market data API (e.g., provided via the session manager) may show “15@100, BID_DISCR_LIT_PRICE=100, BID_DISCR_TICK=1, BID_DISCR_PRICE=100.0039, BID_DISCR_SIZE=10.” In other words, there is 15 available at100. Further, the data feed includes information that there is 1 discretion tick available, and 10 (of the 15) is available using the discretion tick at the indicated discretion price. Next, another order with the same discretion is received. In the order book is a Bid showing 15@100 (with 10 M having 1 Discr tick). A new request is received for a non-elect discretion order and is entered for 10@100 with 1 discretion tick. The electronic data feed will then be updated with the following information: 25@100, BID_DISCR_SIZE=20.
Note that in this instance the other discretion fields are not included in the update because they have not changed. Next, another order is received with better discretion. The order book shows 25@100 (20 M with 1 Discr tick). The new request is a bid non-elect discretion for 10@100 with 2 discretion ticks. The electronic data feed is updated with the following, “35@100, BID_DISCR_TICK=2, BID_DISCR_PRICE=100.0020, BID_DISCR_SIZE=10.” Next, a counter side order is received. The order book shows (20 M with 1 Discr tick, 10 M with 2 Discr ticks). The next order is an offer 10@100 that is executed with discretion. The electronic data feed may be updated as follows “100, BID_DISCR_TICK=1, BID_DISCR_PRICE=100.0039, BID_DISCR_SIZE=20” (basically back to a prior state). Note that volumes may be updated on a delayed basis. Next the top level of the bid side is topped. The current state of the order book is 25@100 (20 M with 1 Discr tick). A new bid order is received for 10@100+. The electronic data feed sends out updates like— “BID1 will show 10@100+, BID2 will show 25@100, BID_DISCR_SIZE=0.” Here the discretion no longer applies to the top level of the book, so the discretion size further down in the order book is, essentially, listed as zero. The order book shows: 1) 10@100+; and then 2) 25@100 (20 M with 1 Discr tick—not displayed though). An offer is received and entered for 10@100+. This matches against the top order and executes at the full bid size. The electronic data feed is then updated as follows: BUY1 will show 0@100, BID2 shows 25@100, BID_DISCR_LIT_PRICE=100, BID_DISCR_TICK=1, BID_DISCR_PRICE=100.0039, BID_DISCR_SIZE=20. With the order book having 25@100 (20 M with 1 Discr tick), a new non-elect order with discretion is received and entered for 20@100 with 1 discretion tick showing 1 with a reserve of 19. The electronic data feed may be updated as follows: 26@100+, BID_DISCR_SIZE=40.
Note here that the discretion price/tick has not changed, but 20 M has been added to discretion. Additionally, as this is a non-elect discretionary reserve, the reserve portion of the new order (19) is not shown in the SIZE field—instead the publically visible “1” is added to the previously available25. In another example (not continuing with the above order book state), an order book shows 25@100 (20 M with 1 Discr tick). A new bid order is received. The new order is specified as a SessionManager Elect Reserve order and is entered for 50@100 with 1 discretion tick showing 1 with 49 in reserve. Accordingly, the reserve aspects of this order will only be available to other orders that have been submitted using the session manager communication path. The session manager market data will then be updated to show 75@100, BID_DISCR_SIZE=70. Furthermore, the ITCH based market data feed may only be updated based on the “displayed” size (e.g., only 26 will be included in the ITCH based data feed). Next an order modification is received for the session manager elect reserve order. The order is modified from 50@100 (1/49), to 20@100 (1/19). Basically, 30 is removed from the reserve and the market data API provided via the session manager is updated to show 45@100, BID_DISCR_SIZE=40. Note that in this instance, no updates are provided over the ITCH electronic data feed because the “public” aspects of the order have not been modified. With the order book having order1 showing 25@100 (20 M with 1 Discr tick) and order2 showing 20 M SessionManager Elect Reserve (for API)/1 Shown/19 Reserve (for ITCH) (e.g., the session manager API shows the top bid being 45@100), a new order is received. The new order is a SessionManager Elect order and is entered for 30@100 showing 5 with 25 in reserve. The session manager market data feed will then be updated to show 70@100. In other words, the size 70@100 is changed to indicate new SessionManager Elect size.
Further, at the same time, the ITCH data feed will be updated to add 5 M (the displayed size). In this example, discretion did not change so there is no update for it. FIG.3 FIG.3is a signal diagram that illustrates example timings of communication protocols used according to certain example embodiments. In a first example, an update from exchange100A to client120using the ITCH gateway124(300) may happen within 130 microseconds. In contrast, a second example for that same order book update using gateway116A, session manager114, to client110(302A,302B,302C) may occur over 600 microseconds. A similar difference in timings is seen for order submissions where orders submitted from client120via FIX gateway122to exchange100A (310) may take around 100 microseconds. However, orders submitted from client110, via session manager114(312A), gateway116A (312B), to exchange (312C) may take around 650 microseconds. Naturally, other timings may come about depending on the speed of the communication protocols involved. It will be appreciated that the total time it takes for information to be received via the ITCH gateway and acted upon by clients in the form of an order submission via the FIX gateway (e.g., 100+130) is less than just the update time for updates that occur via the session manager. In general, the techniques herein of providing hidden attribute data via a slower communication protocol (e.g., session manager updates) are beneficial when the timing for such updates is greater than the total time for acting upon updates using a faster communication protocol (e.g., receiving updates, submitting a new order based on those updates, and having it processed by the exchange). Such benefits may be particularly relevant in multiple exchange environments where updates in one exchange may cause clients to submit orders to another exchange (or order books).
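The timing relationship described above can be checked with simple arithmetic; the microsecond figures are the approximate values given for FIG.3:

```python
# Approximate one-way latencies (microseconds) from the FIG.3 discussion.
ITCH_UPDATE = 130      # exchange -> client via ITCH gateway (300)
FIX_SUBMIT = 100       # client -> exchange via FIX gateway (310)
SESSION_UPDATE = 600   # exchange -> client via gateway + session manager (302A-C)

# A fast-path client can receive an update AND land a new order at the
# matching engine before the slow-path session update even arrives:
fast_react_time = ITCH_UPDATE + FIX_SUBMIT  # 230 microseconds
assert fast_react_time < SESSION_UPDATE
```

This inequality is the condition under which carrying hidden attribute data only on the slower path is beneficial, as the paragraph above explains.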
With the techniques herein, matching based on private attribute information may be restricted to those orders submitted via the slower communication protocol (e.g., using the session manager). This allows, for example, clients to adjust, cancel, or otherwise modify orders that have private attributes (e.g., via the FIX gateway) before any orders that could potentially match against those orders based on quickly changing updates can be submitted (e.g., via the session manager). FIG.4: FIG.4is a block diagram of an example computing device400(which may also be referred to, for example, as a “computing device,” “computer system,” or “computing system”) according to some embodiments. In some embodiments, the computing device400includes one or more of the following: one or more processors402; one or more memory devices404; one or more network interface devices406; one or more display interfaces408; and one or more user input adapters410. Additionally, in some embodiments, the computing device400is connected to or includes a display device412. As will be explained below, these elements (e.g., the processors402, memory devices404, network interface devices406, display interfaces408, user input adapters410, display device412) are hardware devices (for example, electronic circuits or combinations of circuits) that are configured to perform various different functions for the computing device400. In some embodiments, each or any of the processors402(a hardware processor) is or includes, for example, a single- or multi-core processor, a microprocessor (e.g., which may be referred to as a central processing unit or CPU), a digital signal processor (DSP), a microprocessor in association with a DSP core, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) circuit, or a system-on-a-chip (SOC) (e.g., an integrated circuit that includes a CPU and other hardware components such as memory, networking interfaces, and the like). 
And/or, in some embodiments, each or any of the processors402uses an instruction set architecture such as x86 or Advanced RISC Machine (ARM). In some embodiments, each or any of the memory devices404is or includes a random access memory (RAM) (such as a Dynamic RAM (DRAM) or Static RAM (SRAM)), a flash memory (based on, e.g., NAND or NOR technology), a hard disk, a magneto-optical medium, an optical medium, cache memory, a register (e.g., that holds instructions), or other type of device that performs the volatile or non-volatile storage of data and/or instructions (e.g., software that is executed on or by processors402). Memory devices404are examples of non-transitory computer-readable storage media. In some embodiments, each or any of the network interface devices406includes one or more circuits (such as a baseband processor and/or a wired or wireless transceiver), and implements layer one, layer two, and/or higher layers for one or more wired communications technologies (such as Ethernet (IEEE 802.3)) and/or wireless communications technologies (such as Bluetooth, WiFi (IEEE 802.11), GSM, CDMA2000, UMTS, LTE, LTE-Advanced (LTE-A), and/or other short-range, mid-range, and/or long-range wireless communications technologies). Transceivers may comprise circuitry for a transmitter and a receiver. The transmitter and receiver may share a common housing and may share some or all of the circuitry in the housing to perform transmission and reception. In some embodiments, the transmitter and receiver of a transceiver may not share any common circuitry and/or may be in the same or separate housings. 
In some embodiments, each or any of the display interfaces408is or includes one or more circuits that receive data from the processors402, generate (e.g., via a discrete GPU, an integrated GPU, a CPU executing graphical processing, or the like) corresponding image data based on the received data, and/or output (e.g., via a High-Definition Multimedia Interface (HDMI), a DisplayPort interface, a Video Graphics Array (VGA) interface, a Digital Video Interface (DVI), or the like) the generated image data to the display device412, which displays the image data. Alternatively or additionally, in some embodiments, each or any of the display interfaces408is or includes, for example, a video card, video adapter, or graphics processing unit (GPU). In some embodiments, each or any of the user input adapters410is or includes one or more circuits that receive and process user input data from one or more user input devices (not shown inFIG.4) that are included in, attached to, or otherwise in communication with the computing device400, and that output data based on the received input data to the processors402. Alternatively or additionally, in some embodiments each or any of the user input adapters410is or includes, for example, a PS/2 interface, a USB interface, a touchscreen controller, or the like; and/or the user input adapters410facilitate input from user input devices (not shown inFIG.4) such as, for example, a keyboard, mouse, trackpad, touchscreen, etc. In some embodiments, the display device412may be a Liquid Crystal Display (LCD) display, Light Emitting Diode (LED) display, or other type of display device. In embodiments where the display device412is a component of the computing device400(e.g., the computing device and the display device are included in a unified housing), the display device412may be a touchscreen display or non-touchscreen display. 
In embodiments where the display device412is connected to the computing device400(e.g., is external to the computing device400and communicates with the computing device400via a wire and/or via wireless communication technology), the display device412is, for example, an external monitor, projector, television, display screen, etc. In various embodiments, the computing device400includes one, two, three, four, or more of each or any of the above-mentioned elements (e.g., the processors402, memory devices404, network interface devices406, display interfaces408, and user input adapters410). Alternatively or additionally, in some embodiments, the computing device400includes one or more of: a processing system that includes the processors402; a memory or storage system that includes the memory devices404; and a network interface system that includes the network interface devices406. The computing device400may be arranged, in various embodiments, in many different ways. As just one example, the computing device400may be arranged such that it includes: a multi (or single)-core processor; a first network interface device (which implements, for example, WiFi, Bluetooth, NFC, etc.); a second network interface device that implements one or more cellular communication technologies (e.g., 3G, 4G LTE, CDMA, etc.); and memory or storage devices (e.g., RAM, flash memory, or a hard disk). The processor, the first network interface device, the second network interface device, and the memory devices may be integrated as part of the same SOC (e.g., one integrated circuit chip). 
As another example, the computing device400may be arranged such that: the processors402include two, three, four, five, or more multi-core processors; the network interface devices406include a first network interface device that implements Ethernet and a second network interface device that implements WiFi and/or Bluetooth; and the memory devices404include a RAM and a flash memory or hard disk. As previously noted, whenever it is described in this document that a software module or software process performs any action, the action is in actuality performed by underlying hardware elements according to the instructions that comprise the software module. Consistent with the foregoing, in various embodiments, each or any combination of clients110/120, session manager114, gateways116A/116B/122/124, transaction processing computer systems100A and100B, matching engines102A/102B, order books104A/104B, and network interfaces106A/106B/108A/108B, each of which will be referred to individually for clarity as a “component” for the remainder of this paragraph, are implemented using an example of the computing device400ofFIG.4. 
In such embodiments, the following applies for each component: (a) the elements of the computing device400shown inFIG.4(i.e., the one or more processors402, one or more memory devices404, one or more network interface devices406, one or more display interfaces408, and one or more user input adapters410), or appropriate combinations or subsets of the foregoing, are configured to, adapted to, and/or programmed to implement each or any combination of the actions, activities, or features described herein as performed by the component and/or by any software modules described herein as included within the component; (b) alternatively or additionally, to the extent it is described herein that one or more software modules exist within the component, in some embodiments, such software modules (as well as any data described herein as handled and/or used by the software modules) are stored in the memory devices404(e.g., in various embodiments, in a volatile memory device such as a RAM or an instruction register and/or in a non-volatile memory device such as a flash memory or hard disk) and all actions described herein as performed by the software modules are performed by the processors402in conjunction with, as appropriate, the other elements in and/or connected to the computing device400(i.e., the network interface devices406, display interfaces408, user input adapters410, and/or display device412); (c) alternatively or additionally, to the extent it is described herein that the component processes and/or otherwise handles data, in some embodiments, such data is stored in the memory devices404(e.g., in some embodiments, in a volatile memory device such as a RAM and/or in a non-volatile memory device such as a flash memory or hard disk) and/or is processed/handled by the processors402in conjunction with, as appropriate, the other elements in and/or connected to the computing device400(i.e., the network interface devices406, display interfaces408, user input adapters410, and/or 
display device412); (d) alternatively or additionally, in some embodiments, the memory devices404store instructions that, when executed by the processors402, cause the processors402to perform, in conjunction with, as appropriate, the other elements in and/or connected to the computing device400(i.e., the memory devices404, network interface devices406, display interfaces408, user input adapters410, and/or display device412), each or any combination of actions described herein as performed by the component and/or by any software modules described herein as included within the component. Consistent with the preceding paragraph, as one example, in an embodiment where an instance of the computing device400is used to implement transaction processing system100A, memory devices404could store data for order book104A and executable code for matching engine102A. Processors402may be used to execute the executable code for matching engine102A. In another example, ITCH gateway124may be implemented as an FPGA and the session manager may be implemented as a software module that executes on a computing device400that is separate from the computing device executing the matching engine. The hardware configurations shown inFIG.4and described above are provided as examples, and the subject matter described herein may be utilized in conjunction with a variety of different hardware architectures and elements. 
For example: in many of the Figures in this document, individual functional/action blocks are shown; in various embodiments, the functions of those blocks may be implemented (a) using individual hardware circuits, (b) using an application specific integrated circuit (ASIC) or FPGA specifically configured to perform the described functions/actions, (c) using one or more digital signal processors (DSPs) specifically configured to perform the described functions/actions, (d) using the hardware configuration described above with reference toFIG.4, (e) via other hardware arrangements, architectures, and configurations, and/or via combinations of the technology described in (a) through (e). Technical Advantages of Described Subject Matter In certain example embodiments, the subject matter described herein provides for alternative communication pathways (e.g., which may include different communications protocols and/or different physical attributes, like being co-located) for communicating information from a transaction processing system to client computer systems (and from client computer systems to the transaction processing system). Certain processing options (e.g., those that rely on private attributes) are only available for data transaction requests submitted via communication pathways that are slower than a faster communication option. This dual (or more) communication pathway implementation allows client computer systems more flexibility in how data transaction requests that are resting with the transaction processing system may be interacted with. In certain examples, providing additional information via a slower communication path can alleviate the latency stress (e.g., a latency race) on client computer systems. Such latency concerns may also be alleviated by requiring that data transaction requests be submitted through a slower communication pathway. 
Client systems can receive information quickly and act on that information quickly, before the newly received information can be acted upon against the private attributes of a data transaction request. This increases the stability of the transaction processing system. Selected Terminology Whenever it is described in this document that a given item is present in “some embodiments,” “various embodiments,” “certain embodiments,” “certain example embodiments,” “some example embodiments,” “an exemplary embodiment,” or whenever any other similar language is used, it should be understood that the given item is present in at least one embodiment, though it is not necessarily present in all embodiments. Consistent with the foregoing, whenever it is described in this document that an action “may,” “can,” or “could” be performed, that a feature, element, or component “may,” “can,” or “could” be included in or is applicable to a given context, that a given item “may,” “can,” or “could” possess a given attribute, or whenever any similar phrase involving the term “may,” “can,” or “could” is used, it should be understood that the given action, feature, element, component, attribute, etc. is present in at least one embodiment, though it is not necessarily present in all embodiments. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open-ended rather than limiting. 
As examples of the foregoing: “and/or” includes any and all combinations of one or more of the associated listed items (e.g., a and/or b means a, b, or a and b); the singular forms “a”, “an” and “the” should be read as meaning “at least one,” “one or more,” or the like; the term “example” is used to provide examples of the subject under discussion, not an exhaustive or limiting list thereof; the terms “comprise” and “include” (and other conjugations and other variations thereof) specify the presence of the associated listed items but do not preclude the presence or addition of one or more other items; and if an item is described as “optional,” such description should not be understood to indicate that other items are also not optional. As used herein, the term “non-transitory computer-readable storage medium” includes a register, a cache memory, a ROM, a semiconductor memory device (such as a D-RAM, S-RAM, or other RAM), a flash memory, a magnetic medium such as a hard disk, a magneto-optical medium, an optical medium such as a CD-ROM, a DVD, or Blu-Ray Disc, or other type of device for non-transitory electronic data storage. The term “non-transitory computer-readable storage medium” does not include a transitory, propagating electromagnetic signal. Individual function or process blocks are shown in the figures. Those skilled in the art will appreciate that the functions of those blocks may be implemented using individual hardware circuits, using software programs and data in conjunction with suitably programmed hardware, using application-specific integrated circuitry (ASIC), and/or using one or more digital signal processors (DSPs). The software program instructions and data may be stored on a non-transitory computer-readable storage medium, and when the instructions are executed by a computer or other suitable hardware processor, they control the computer or hardware processor to perform the functionality defined in the program instructions. 
Although process steps, algorithms or the like may be described or claimed in a particular sequential order, such processes may be configured to work in different orders. In other words, any sequence or order of steps that may be explicitly described or claimed does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order possible. Further, some steps may be performed simultaneously (or in parallel) despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to the invention(s), and does not imply that the illustrated process is preferred. A description of a process is a description of an apparatus for performing the process. The apparatus that performs the process may include, e.g., a processor and those input devices and output devices that are appropriate to perform the process. Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above description should be read as implying that any particular element, step, range, or function is essential. All structural and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention, for it to be encompassed by the invention. No embodiment, feature, component, or step in this specification is intended to be dedicated to the public. 
11943326 DETAILED DESCRIPTION FIG.2is a block diagram that illustrates communication system200in accordance with one or more exemplary embodiments of the present invention. Generally speaking, communication system200includes client202and remote server160. Client device202may be, for example, a mobile communications device. Remote server160is a source of data that may be desired by client device202. Remote server160may be any source of data. In one or more exemplary embodiments of the present invention, remote server160is a source of video streaming. Various video streaming providers that provide video streaming services are known. Thus, in one example, client device or ‘client’202desires to receive video content from a video streaming provider. Communication system200further includes a virtual private network (VPN). In addition to client202communicating with remote server160, client202also wishes to communicate with a source of data via a VPN. The VPN, for example, provides data encapsulation (which may or may not include data encryption via encapsulate/decapsulate module122). One exemplary use of a VPN is to provide secure, encrypted data. Thus, client202wishes to communicate with remote server160as well as to communicate over a VPN. The above objective, to communicate with remote server160as well as to communicate over VPN250, may take several forms. In one form, communication with remote server160is outside of VPN250, while further communication takes place with the VPN. The communication that takes place with VPN250may be with remote server160or with another remote server170. In another example, communication with remote server160may be over a VPN while additional communication occurs with a VPN (the same VPN that is communicating with remote server160or a different VPN). 
In the explanation set forth below, communication with remote server160is outside of a VPN while communication to remote server160or to additional remote server170occurs with a VPN, but this is merely an example. FIG.2illustrates a client device202communicating with the network via a VPN client220. Initially,FIG.2illustrates that client202is attempting to communicate with remote server160. As shown, client202may be, for example, a mobile communications device that wirelessly communicates with network135via one or more access points (that may include Ethernet, modem, cellular, Wi-Fi, etc.). ISP135and ISP136may each permit public access or restricted access. As an example, ISP135may include a communications network that is typically accessed over a wired connection, while ISP136may include a communications network that is accessed by a cellular communications provider. Alternatively, or in addition, an ISP may be provided that permits both forms of communication and perhaps another form of communication. ISP135and ISP136are shown coupled to Internet140through communication protocols that are well known to one of ordinary skill in the art. In one example, ISP135and ISP136interface with Internet140via a fiber-optic or Ethernet Internet connection. While in one example ISP136is accessed by a cellular access point, ISP136may also be accessed via other methods, such as a LAN (e.g., a wireless home network), a combination of wired and/or wireless connections, and perhaps one or more intervening networks (such as a wide area network), so that access to Internet140may be obtained. In the example above, a user may use client202for voice communication. Assume client202is a cell phone such as a smartphone, and communication occurs via a Voice over IP (VoIP) application. 
Client application110communicates with ISP135, ISP136, or both (alternatively or simultaneously, using technology such as channel bonding) via one or more access points, and a digitized form of the user's voice is then transmitted to Internet140. From Internet140, the data that represents the user's voice is transmitted to remote server170. From remote server170, the data may be transmitted to another user (not shown) so that voice communication between the two users may occur. In another embodiment, a user may use client device202for secure voice communication. Data from application(s)110enters VPN client220via driver/receiver221. Voice communication data is encapsulated (which may or may not include encryption) via encapsulate/decapsulate (encap/decap) module223. Encapsulated data is then transmitted to ISP135(and/or ISP136) via one or more access points before reaching Internet140. From Internet140, the encapsulated data (i.e., the encapsulated voice communication data) is transmitted to VPN server250. Data is then decapsulated (which may or may not include decryption) via decapsulate/encapsulate (decap/encap) module252before being retransmitted to Internet140and remote server170. From remote server170, the data may be transmitted to another user (not shown) so that voice communication between two users may occur via a VPN. In another embodiment, client202streams video data from remote server160. Client202requests the video data from remote server160by transmitting a request through ISP135(and/or ISP136) and Internet140. Remote server160responds to the request by transmitting video via Internet140, and back to ISP135(and/or ISP136), so that it is eventually received by client202. Such video streaming may occur outside of the VPN250. The request to stream data may or may not be preceded by a DNS request to provide the IP address of remote server160. The DNS request can be received and processed by DNS server180. 
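As a rough sketch of the encapsulation path just described (client, VPN server250, remote server170), the snippet below models packets as dictionaries. The field names and string identifiers are hypothetical stand-ins for illustration; a real VPN would use a protocol such as IPsec or WireGuard with actual encryption, which is omitted here.

```python
def encapsulate(inner: dict, vpn_server: str) -> dict:
    # VPN client 220: wrap the original packet inside an outer packet
    # addressed to the VPN server (cf. encap/decap module 223).
    return {"src": inner["src"], "dst": vpn_server, "payload": dict(inner)}

def decapsulate_and_forward(outer: dict, vpn_server: str) -> dict:
    # VPN server 250: unwrap and retransmit with the VPN server's address
    # as the source, so the destination never sees the client's address.
    inner = dict(outer["payload"])
    inner["src"] = vpn_server
    return inner

pkt = {"src": "client-202", "dst": "remote-170", "payload": "voip-frame"}
fwd = decapsulate_and_forward(encapsulate(pkt, "vpn-250"), "vpn-250")
print(fwd["src"], fwd["dst"])   # vpn-250 remote-170
```

The forwarded packet reaches remote server170with the VPN server's address as its source, which is why the destination only "identifies" the VPN server, as discussed below.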
In yet another example, remote server160serves two purposes: first, it is used as the source of streaming data (inside or outside a VPN), and second, it is used in combination with data that has been transmitted via the VPN. In another example, data is transmitted via VPN250, and further data is transmitted outside of the VPN (or outside of the VPN on another VPN). The data may be transmitted to at least two different servers (a remote server and a VPN server). Alternatively, the data transmitted via the VPN and outside of the VPN (or outside on another VPN) may be transmitted to the same server. In the above description, when the phrase “outside of the VPN” is used, this may include non-encapsulated/unencrypted data (i.e., data not encapsulated/encrypted by a VPN) and/or encapsulated/encrypted data that has been encapsulated/encrypted by another VPN. Referring again toFIG.2, application(s)110participate in communications that include Internet140. In particular, application(s)110participate in communications that include VPN client220. At least one type of communication that includes VPN client220also includes encapsulation/encryption. At least another type of communication that includes VPN client220omits encapsulation/encryption (at least by VPN client220). First, a description is provided of communication that includes VPN client220and that omits encapsulation/encryption (at least by VPN client220). VPN client220includes driver (driver/receiver)221that receives data from one or more applications110. Driver221may be, for example, a TUN/TAP driver. A request for data (such as a request for streaming data) to be returned to application110(or data being provided by application110) is transmitted from driver221and is received by routing module222. 
The purpose of routing module222is to determine whether the request for data will be encapsulated (for purposes of being transmitted via the VPN) or whether the request for data will be transmitted to local proxy224and not encapsulated (at least within VPN client220). In addition, when the request for data that is transmitted via the VPN arrives at its destination, the destination is advised that the source of the data was a VPN server (and not the actual source of the data) because the destination receives the IP address of the VPN server as the source, and client202(the actual source of the data) may be hidden to the destination, as the destination will only “identify” the IP address associated with VPN server250. By contrast, when a request for data that is not transmitted via the VPN arrives at its destination, the destination identifies that the source of the data was client202. Among other things, when the request for data (or the data itself) has been received by routing module222, routing module222directs the request in one of two separate directions depending upon user selection. The first scenario to be described is with a VPN enabled. When a VPN is enabled, routing module222routes the request for data to VPN server250via encap/decap module223. From VPN server250, the request for data is further forwarded depending upon whether or not the request for data is a DNS request. If the request for data is a DNS request, VPN server250routes the request to DNS server180(because the IP address of the DNS server is in the packet header as the destination). If the request for data is a data (non-DNS) request, VPN server250routes the request to remote server160(when the IP address of remote server160is in the packet header as the destination). If the request is received by DNS server180, DNS server180resolves the DNS request and transmits the corresponding IP address to VPN server250. 
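The two-way decision made by routing module222(VPN path versus local proxy) and the VPN server's onward forwarding can be sketched as a small dispatch function. All identifiers below are illustrative labels, not real addresses or component APIs.

```python
DNS_SERVER = "dns-180"   # illustrative label for DNS server 180

def next_hops(request_dst: str, vpn_enabled: bool) -> list:
    """Sketch of routing module 222 plus VPN server 250 forwarding."""
    if not vpn_enabled:
        # Non-VPN path: hand the request to the local proxy, unencapsulated.
        return ["local-proxy-224", request_dst]
    # VPN path: encapsulate and send to the VPN server, which forwards
    # based on the destination in the packet header (DNS or data request).
    return ["encap-223", "vpn-250", request_dst]

print(next_hops(DNS_SERVER, vpn_enabled=True))    # DNS request via the VPN
print(next_hops("remote-160", vpn_enabled=False)) # data request outside the VPN
```

Either way, the final hop is taken from the packet header's destination, mirroring how VPN server250forwards DNS requests to DNS server180and non-DNS requests to remote server160.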
VPN server250then transmits the IP address via switch241and port299to encap/decap223. The IP address is subsequently transmitted to driver/receiver221and back to the application110that initiated the DNS request. If the request is received by remote server160, remote server160responds to the request by transmitting data (e.g., streaming data) via switch241and port298to local proxy224. The data is subsequently transmitted to driver/receiver221and back to the application110that initiated the data request. The cellular access may be performed by driver/receiver232. The above explanation has been with regard to the transfer of data between client and server via a single TCP stream. In a further exemplary embodiment of the present invention, data is transferred between the client and the server(s) over multiple TCP streams, and data transfer on multiple streams may occur concurrently. By creating multiple TCP streams, multiple scenarios may be achieved. For example, the use of multiple TCP streams permits a more consistent transmission rate to be obtained by enabling transmission and receiving procedures which utilize more than one TCP stream. As a further example, if packets are dropped while transmitting on a TCP stream, a TCP congestion control procedure may slow TCP transmission on a single TCP stream from a first rate to a new rate that is slower than the first rate, even if the cause of the problem is unrelated to congestion. By creating multiple TCP connections, while one TCP transmission stream may slow down (due to the congestion management procedure), the existence of one or more further TCP connections will lessen the total impact of the single slow TCP transmission. Data leaving a client application110is received by demultiplexer305inFIG.3. Demultiplexer305separates the data into multiple streams, and then passes the data to TCP/IP stack310. 
TCP/IP stack310assigns a different respective source port to each of the multiple streams it receives and transmits via multiple TCP connections to VPN client220. Data to be transmitted to remote server160(FIG.2) via VPN250is encapsulated by encap/decap223(FIG.2). Data to be transmitted to remote server160(FIG.2) outside of a VPN is transmitted to remote server160via local proxy224. Data proceeds through driver/receiver241(included in a NIC) and is transmitted to ISP135, ISP136, etc. (depending upon the port selected by VPN client220). Data within the VPN is transmitted to server160via VPN server250. Data outside the VPN is transmitted to server160without going through a VPN server. The data is communicated across the Internet320. FIG.3also illustrates NIC325, VPN client252, TCP/IP stack330, and multiplexer335, which are situated at the receiving end of the data communication. This operation may take place in VPN server250or remote server160. Decoded data obtained by VPN server250may be re-encoded and passed to remote server160. Decoded data obtained by remote server160may be used to access data on remote server160, which in turn is transmitted back to application(s)110. Data on multiple TCP connections is thus received by NIC325(with a driver/receiver) and is decapsulated before being forwarded to remote server160. Data on multiple TCP connections that is transmitted towards remote server160(without going through a VPN) is decoded, and remote server160issues an appropriate response. Because each TCP connection is formed with a respectively different source port, data transmitted back to the client can be directed to the source port from which the data was transmitted. The above explanation has described the formation of multiple TCP connections, but it is also possible to form a UDP connection concurrently with multiple TCP connections. 
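A toy round trip for demultiplexer305and multiplexer335might look like the following. The round-robin chunking is an assumption for illustration only; the description does not specify how the demultiplexer actually splits data across streams.

```python
def demultiplex(data: bytes, n_streams: int, chunk: int = 4):
    """Round-robin split of outgoing bytes across n TCP streams (cf. 305)."""
    streams = [bytearray() for _ in range(n_streams)]
    for i in range(0, len(data), chunk):
        streams[(i // chunk) % n_streams].extend(data[i:i + chunk])
    return streams

def multiplex(streams, total_len: int, chunk: int = 4) -> bytes:
    """Reassemble the original byte order at the receiving end (cf. 335)."""
    out, cursors, i = bytearray(), [0] * len(streams), 0
    while len(out) < total_len:
        s = i % len(streams)                       # visit streams in order
        out.extend(streams[s][cursors[s]:cursors[s] + chunk])
        cursors[s] += chunk
        i += 1
    return bytes(out)

msg = b"data sent over multiple concurrent TCP streams"
parts = demultiplex(msg, 3)
print(multiplex(parts, len(msg)) == msg)   # True: original order is preserved
```

In a real implementation, per-stream sequence numbers rather than a fixed visiting order would be needed, since TCP guarantees ordering only within a single connection, not across connections.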
Depending upon numerous factors (for example, the type(s) of data being transmitted, the type(s) of application(s) using the data, etc.), a UDP connection may have certain advantages over a TCP connection. For this purpose, optional UDP/IP stacks312,332are illustrated inFIG.3. Thus, it may be desirable for data to be transmitted over a UDP connection concurrently with data transmission over multiple TCP connections. Also, the UDP connection may transmit data as an independent channel managing UDP traffic for UDP-specific applications. In order to use multiple TCP connections, multiple TCP sockets are created. As is known to one of ordinary skill in the art, the definition of a TCP socket is: (source IP, source port, destination IP, destination port). Thus, in accordance with an exemplary embodiment of the present invention, in order to create multiple TCP sockets, the client operating system creates TCP sockets with respectively different source port numbers on the client for each socket. The destination (server) port number is specified when each connection is created. By creating sockets with respectively different source port numbers, it is possible to differentiate between the multiple TCP connections. In other words, when the operating system creates each socket, and when packets are read from and written to each socket, the operating system remembers the combination of the four socket attributes in order to differentiate between sockets. A flowchart diagram that illustrates operation of an exemplary embodiment of the present invention is illustrated inFIG.4. At operation405, source port numbers are selected for each respective TCP socket. At operation410, TCP sockets (which will be used for TCP connections) are created. Operation405may occur before operation410, or source ports may be assigned at the time that the respective TCP sockets are created. At operation415, communication occurs between client and server over open TCP connections.
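The creation of multiple TCP sockets that differ only in their source port can be illustrated with standard sockets over loopback. This is a hedged sketch: the loopback server, the function name open_connections, and the connection count are assumptions for demonstration; in practice the operating system assigns each socket a fresh ephemeral source port, so the four-attribute tuple distinguishes the connections.

```python
import socket

def open_connections(host, port, count):
    """Open `count` TCP connections to (host, port); each receives a
    distinct OS-assigned ephemeral source port."""
    return [socket.create_connection((host, port)) for _ in range(count)]

# Minimal loopback listener so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(5)
host, port = server.getsockname()

conns = open_connections(host, port, 3)
# Each connection's (source IP, source port, destination IP,
# destination port) tuple differs only in its source port.
source_ports = [c.getsockname()[1] for c in conns]

for c in conns:
    c.close()
server.close()
```

Binding to port 0 and letting the OS choose ephemeral ports mirrors the description above, where the client operating system creates sockets with respectively different source port numbers.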
At operation420, the quality of the communication between the client and the server may be evaluated. Exemplary parameters that may be evaluated to determine quality of the communication include delay, latency, jitter, error rate, QoS, etc. Any parameter that is evaluated is compared with a corresponding threshold. If a parameter is identified as having passed its threshold (i.e., the parameter indicates that quality levels are not being met), then at optional operation435it is determined whether the number of connections currently being used for communication between the client and the server is at a maximum. If the number of connections is not at a maximum value (e.g., 2 TCP connections, 2 TCP connections and 1 UDP connection (3 total), 3 TCP connections and 1 UDP connection (4 total), etc.), or if optional operation435is not included, processing proceeds to operations405and410to create an additional TCP connection. As each TCP connection is added, if one or more quality thresholds have not been met, and as long as there is no maximum number of TCP connections permitted between client and server, additional TCP connections are added. Processing then proceeds to operation425. At operation425, if communication between client and server is complete then processing proceeds to operation430and all open connections between client and server may be closed (although in some embodiments multiple connections may be kept open pending future communication). If communication is not completed then processing proceeds from operation425back to operation420. In a further exemplary embodiment, if communication between the client and the server is ongoing and one or more quality thresholds have been met, one or more of the multiple TCP connections may be closed.
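The threshold comparison of operation420and the maximum-connection check of optional operation435can be sketched as follows. The function name, parameter names, and threshold values are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical decision step for the FIG.4 control loop: add a TCP
# connection when any measured parameter exceeds its threshold
# (operation 420) and the connection count is below the cap (435).

MAX_CONNECTIONS = 4  # optional cap checked at operation 435 (assumed value)

def should_add_connection(measurements, thresholds, current_count,
                          max_connections=MAX_CONNECTIONS):
    """Return True when quality levels are not being met and another
    TCP connection may still be added."""
    if current_count >= max_connections:
        return False
    return any(measurements.get(name, 0) > limit
               for name, limit in thresholds.items())

thresholds = {"latency_ms": 100, "jitter_ms": 20, "error_rate": 0.01}
```

A parameter absent from the measurements is treated as passing, so only observed degradations trigger a new connection.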
In a further embodiment, after one or more TCP connections have been closed, one or more attributes can be evaluated again to determine whether quality thresholds have been met, and if quality thresholds have not been met, then one or more TCP connections can be added to the one or more TCP connections that are currently being used. The quality thresholds may include a maximum number of lost/dropped packets over a period of time, a latency time period, a jitter time period, and a data rate identified as a number of packets received over a period of time. FIG.5is a flowchart diagram that illustrates operation of a further exemplary embodiment of the present invention. Many of the operations illustrated inFIG.5are analogous to operations illustrated inFIG.4, such as505,515,520,525,535and530.FIG.5differs fromFIG.4, however, in thatFIG.5includes operations511and512. WhileFIG.4relates to creating multiple TCP connections, or increasing the number of TCP connections,FIG.5relates to creating/increasing not only the number of TCP connections but also creating a UDP connection. Certain types of data transfer may be better suited for transfer over a UDP connection than over a TCP connection. For example, live real-time events (e.g., sporting events) may be transmitted more effectively via a UDP connection than via a TCP connection. Alternatively, use of a UDP connection may result in better quality of the data transfer between the client and the server. Again, quality may relate to one or more parameters including delay, latency, jitter, error rate, QoS, etc. Thus,FIG.5includes the operation of creating a UDP connection512which is used concurrently with multiple TCP connections. Whether or not to create a UDP connection may be based on a number of factors. In one example, the UDP connection is merely created on a “try it” basis to see if the UDP connection can maintain one or more quality attributes or thresholds.
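The add-when-thresholds-missed, close-when-thresholds-met behavior described above can be expressed as a single adjustment step. The function name, bounds, and the one-connection-per-step policy are assumptions; a practical controller would likely add hysteresis so the pool does not oscillate.

```python
# Illustrative sketch of growing and shrinking the connection pool:
# add a connection when quality thresholds are missed, remove one
# when they are met again.

def adjust_pool(pool_size, quality_ok, min_size=1, max_size=4):
    """One adjustment step: return the new pool size."""
    if not quality_ok and pool_size < max_size:
        return pool_size + 1   # thresholds missed: add a TCP connection
    if quality_ok and pool_size > min_size:
        return pool_size - 1   # thresholds met: close one connection
    return pool_size           # already at a bound: leave unchanged
```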
In another embodiment, characteristics of the data being transmitted may trigger the creation of the UDP connection shown at operation520. An example would be if data is continuously being transmitted from server to client (instead of being transmitted in spurts), which may indicate live video (such as a sporting event). Thus, in some situations a UDP connection may be created at operation520and processing proceeds to operation525in a manner analogous to operation415. According to one example embodiment, when a first TCP connection is established and used to exchange data between a client device and a server, the first TCP connection may be monitored for data network characteristics, such as one or more of packet latency as measured over time, packet exchange round trip time, packet jitter as measured over time, bandwidth throughput as a data rate achieved over a period of time, packet loss as a number of packets over a period of time, etc. One common TCP connection concern may be, for example, latency. Also, one or more of the network data characteristics may be linked to another and may cause proprietary or default protocol actions which may include, for example, the automated slowing of a data rate or latency (time of one packet) of that first TCP connection. It may not be feasible to identify when the first TCP connection will automatically slow, although detecting the current data rate of the first TCP connection can be a routine process. When the TCP connection is identified as having slowed down from a target data rate or target latency value, one or more additional TCP connections may be added. The process of adding additional TCP connections may include adding one TCP connection at a time, detecting a slow-down condition of the previously added TCP connection, then adding another TCP connection. Ultimately, there is no fixed limit on the number of TCP connections used during a single data session.
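Detecting that a connection has slowed below a target data rate, as described above, might be sketched as follows. The class name and explicit time handling are assumptions chosen to keep the sketch deterministic; a real monitor would sample a clock and likely use a sliding window.

```python
# Hypothetical per-connection data-rate monitor: track bytes seen since
# a start time and flag a slowdown against a target rate.

class RateMonitor:
    """Flags when a connection's observed byte rate falls below target."""

    def __init__(self, target_bytes_per_sec, start_time=0.0):
        self.target = target_bytes_per_sec
        self.start = start_time
        self.bytes_seen = 0

    def record(self, nbytes):
        """Account for bytes transferred on the monitored connection."""
        self.bytes_seen += nbytes

    def slowed(self, now):
        """True when the observed rate has fallen below the target."""
        elapsed = now - self.start
        if elapsed <= 0:
            return False  # no meaningful rate yet
        return (self.bytes_seen / elapsed) < self.target
```

When `slowed()` returns True, the controller described above would add one TCP connection, then continue monitoring the newly added connection.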
In practice, the limit may be a very large number, such as thousands of connections. The term channel is used to denote the group of one or more connections during a data session. A channel may include connection mirroring, where one set of data is sent via two or more connections as the same data for redundancy, such as to avoid packet loss. A channel may also use bonding, where the data sent across multiple connections is different, so the total amount of data exchanged can be larger than if only one connection is used. The channel may include multiple (‘N’) TCP connections and/or UDP connections. Bonding and mirroring are generally used when more than one network provides independent connections, which may be bonded together to forward and receive data as a common bonded channel that spans the different networks and, in turn, their different network connections. The decision to add additional TCP connections may be based on a previously added TCP connection being slowed below a threshold data rate and/or latency rate or via a combination of other data network characteristics. Also, when adding the second, third, etc., TCP connections, the client device may be using an application that requires UDP, another TCP connection, or a TCP connection which is operating at a faster rate than the currently available rate. This may invoke connection bonding so the channel includes multiple TCP connections. Also, the UDP packet data of a particular UDP application may be detected as it is forwarded via a TCP channel. In this example, the UDP data may be forwarded to a dedicated UDP connection. The UDP connection may be created when the first or second TCP connection is created, and may then remain dormant until the UDP-specific application requires the UDP connection.
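The distinction between mirroring and bonding can be sketched as follows. The helper names are hypothetical; a real channel would also schedule payloads by connection capacity rather than simple round-robin.

```python
# Sketch contrasting the two channel modes described above.

def mirror(payloads, num_conns):
    """Mirroring: the same data is sent on every connection, for
    redundancy (e.g., to ride out packet loss on one path)."""
    return [list(payloads) for _ in range(num_conns)]

def bond(payloads, num_conns):
    """Bonding: different data per connection, so aggregate throughput
    can exceed that of any single connection."""
    return [payloads[i::num_conns] for i in range(num_conns)]
```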
In this example, a first TCP connection is established, a UDP connection is established, and a second TCP connection is established when the first TCP connection is not maintaining the network characteristics parameters desired by a target data rate, latency rate, etc. Additionally, the VPN server250may be used to provide data security and data management between the client device and the data server by encrypting the data packets over the TCP and UDP connections. The VPN server250may be connected to the client device and the server during the data communication exchange over the ‘channel’ of connections. In one example, an application operating on a client device will use a VPN to provide data access to and from a remote source (i.e., a server), and this may include multiple TCP and/or UDP connections all managed by the same VPN. An application on the client device may create one network connection that is used to pass data through the VPN. The VPN can split the packets received over the network connection into multiple connections. However, there may be multiple TCP/UDP connections managed by the VPN which are available to use on a common network platform. The ‘connection’ is identified as a single connection or channel, and the specific connections used to provide data access remain anonymous or hidden from the application operating on the client device. For example, a single network adapter may be used and the VPN may have its own virtual network adapter or virtual network interface as another individual component providing multiple connections. Additionally, the VPN server may provide data packet management services, such as encryption and other security measures to protect the integrity of data exchanged between client and server. FIG.6illustrates an example process600of creating multiple connections according to example embodiments.
Referring toFIG.6, the process may include establishing a first transmission control protocol (TCP) connection between a client device and a server to form a virtual private network (VPN)612, permitting communication between the client device and the server on the first TCP connection614, monitoring communication over the first TCP connection to identify one or more connection parameters616, and establishing a second TCP connection between the client device and the server when the one or more connection parameters indicate a slowing of the first TCP connection below a threshold and below a previously measured connection rate618. The one or more connection parameters can include one or more of a data rate, an error rate, and a latency value, and the threshold is a data rate threshold, an error rate threshold, or a latency threshold. The previously measured connection rate could be a previously noted data rate, packet error/loss rate, or another rate that was measured previously and is now different. The measuring of the rate and threshold(s) may be performed periodically to ensure TCP connection compliance and to determine whether to add new connections in the event that the measurements do not meet one or more expected values. Responsive to establishing the first TCP connection or the second TCP connection, a user datagram protocol (UDP) connection may be established. The process may also include identifying UDP packets created by an application on the client device, forwarding the UDP packets identified from the client device on the UDP connection, and forwarding TCP packets identified from another application on the client device via one or more of the first and second TCP connections. When an application is using UDP packets for data packaging and transmission, the packets may be forwarded on a UDP connection that is created as a standby connection awaiting UDP packets.
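The forwarding step described above, in which identified UDP packets go to the dedicated UDP connection while TCP packets use the available TCP connections, could be sketched as follows. The function name, the (protocol, payload) representation, and the round-robin choice over TCP connections are assumptions.

```python
# Illustrative router: UDP packets go to the standby UDP connection,
# TCP packets are spread round-robin across the TCP connections.

def route(packets, num_tcp_conns):
    """Return (udp_queue, tcp_queues) for a list of (proto, payload)."""
    udp_queue = []
    tcp_queues = [[] for _ in range(num_tcp_conns)]
    tcp_i = 0
    for proto, payload in packets:
        if proto == "udp":
            udp_queue.append(payload)      # dedicated UDP connection
        else:
            tcp_queues[tcp_i % num_tcp_conns].append(payload)
            tcp_i += 1
    return udp_queue, tcp_queues
```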
The UDP connection may be used as part of a bonded channel that bonds one or more TCP connections with the UDP connection, or as a stand-alone channel dedicated for UDP traffic or other types of data traffic. The process may also include determining that no UDP packets are being sent by the client device for a period of time, and closing the UDP connection. The UDP connection can be maintained for a period of time and removed when not in use. The process may also include adding at least one third TCP connection, and bonding the first, second and third TCP connections together as a single bonded channel. This may also include additional TCP connections which are added each time a data network condition is not maintained. The process may also include determining that the first or second TCP connection has resumed maintaining the one or more connection parameters above the threshold, and removing one of the three TCP connections. In this example, if the remaining connections are maintaining the parameters being monitored, a connection can be released pending additional monitoring results. The decision to add or remove connections may be based on current connection performance of one or more of the connections. The above explanation has included examples in which various operations are taken to improve data connections, such as data streaming to a client such as a mobile device. It is understood, however, that the above examples may relate to the streaming of data to other devices, such as to a server, or to data exchanges which do not include streaming. The above explanation has included multiple examples and multiple embodiments. It is understood by one of ordinary skill in the art that more than one of these examples and more than one of these embodiments can be combined in order to create further examples and embodiments. Also, disclosed features can be eliminated from various embodiments as desired.
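The idle-timeout rule for removing an unused UDP connection might be expressed as follows. The function name and the timeout value are assumptions for illustration.

```python
# Sketch of the idle-timeout rule: close the UDP connection when no UDP
# packets have been sent for a configured period.

IDLE_TIMEOUT_SEC = 30.0  # assumed value; the disclosure does not fix one

def udp_should_close(last_packet_time, now, timeout=IDLE_TIMEOUT_SEC):
    """True when the UDP connection has been idle past the timeout."""
    return (now - last_packet_time) > timeout
```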
Also, some features of one embodiment may be combined with some features of another embodiment. In an exemplary embodiment of the present invention, a computer system may be included and/or operated within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The exemplary computer system includes a processing device, a main memory (e.g., read-only memory (ROM), flash memory, or dynamic random access memory (DRAM), such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device, which communicate with each other via a bus. Processing device represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like.
More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processing device is configured to execute listings manager logic for performing the operations discussed herein. Computer system may further include a network interface device. Computer system also may include a video display unit (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), and a signal generation device (e.g., a speaker). Data storage device may include a machine-readable storage medium (or more specifically a computer-readable storage medium) having one or more sets of instructions embodying any one or more of the methodologies of functions described herein. Data storage may also reside, completely or at least partially, within main memory and/or within processing device during execution thereof by computer system; main memory and processing device also constituting machine-readable storage media. Virtual private network (VPN) device/server may indicate any similar system that encapsulates packets to transmit them to and from a client device and to and from a remote server. For example, a VPN may be a software defined network (SDN) or SD wide area network (SD-WAN), or a multi-path TCP (MPTCP) proxy device. Machine-readable storage medium may also be used to store the device queue manager logic persistently.
While a non-transitory machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present invention. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. The components and other features described herein can be implemented as discrete hardware components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs or similar devices. In addition, these components can be implemented as firmware or functional circuitry within hardware devices. Further, these components can be implemented in any combination of hardware devices and software components. Some portions of the detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. In the aforementioned description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the disclosure.
FIG.7is a computer readable medium and corresponding system configuration of an example device(s) configured to perform one or more operations associated with exemplary embodiments of the present invention. The operations of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a computer program executed by a processor, or in a combination of the two. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium known in the art. FIG.7illustrates an example network entity device configured to store instructions, software, and corresponding hardware for executing the same according to example embodiments.FIG.7is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the application described herein.
Regardless, the computing node700is capable of being implemented and/or performing any of the functionality set forth hereinabove. In computing node700there is a computer system/server702, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server702include, but are not limited to, personal computer systems, server computer systems, thin clients, rich clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server702may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server702may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As displayed inFIG.7, computer system/server702in cloud computing node700is displayed in the form of a general-purpose computing device. The components of computer system/server702may include, but are not limited to, one or more processors or processing units704, a system memory706, and a bus that couples various system components including system memory706to processor704. 
The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server702typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server702, and it includes both volatile and non-volatile media, removable and non-removable media. System memory706, in one embodiment, implements the flow diagrams of the other figures. The system memory706can include computer system readable media in the form of volatile memory, such as random-access memory (RAM)710and/or cache memory712. Computer system/server702may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system714can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not displayed and typically called a “hard drive”). Although not displayed, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus by one or more data media interfaces. As will be further depicted and described below, memory706may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments of the application. 
Program/utility716, having a set (at least one) of program modules718, may be stored in memory706by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules718generally carry out the functions and/or methodologies of various embodiments of the application as described herein. As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or computer program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Computer system/server702may also communicate with one or more external devices720such as a keyboard, a pointing device, a display722, etc.; one or more devices that enable a user to interact with computer system/server702; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server702to communicate with one or more other computing devices. Such communication can occur via I/O interfaces724. Still yet, computer system/server702can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter(s)726. 
As depicted, network adapter(s)726communicates with the other components of computer system/server702via a bus. It should be understood that although not displayed, other hardware and/or software components could be used in conjunction with computer system/server702. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. One skilled in the art will appreciate that a “system” could be embodied as a personal computer, a server, a console, a personal digital assistant (PDA), a cell phone, a tablet computing device, a smartphone or any other suitable computing device, or combination of devices. Presenting the above-described functions as being performed by a “system” is not intended to limit the scope of the present application in any way but is intended to provide one example of many embodiments. Indeed, methods, systems and apparatuses disclosed herein may be implemented in localized and distributed forms consistent with computing technology. It should be noted that some of the system features described in this specification have been presented as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, graphics processing units, or the like. A module may also be at least partially implemented in software for execution by various types of processors. 
An identified unit of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. It will be readily understood that the components of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments of the application. One having ordinary skill in the art will readily understand that the above may be practiced with operations in a different order, and/or with hardware elements in configurations that are different than those which are disclosed. 
Therefore, although the application has been described based upon these preferred embodiments, certain modifications, variations, and alternative constructions would be apparent to those of skill in the art. While preferred embodiments of the present application have been described, it is to be understood that the embodiments described are illustrative only and the scope of the application is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms, etc.) thereto.
11943327

DESCRIPTION OF EXEMPLARY EMBODIMENTS

Here, embodiments of the present disclosure will be described in the following order.

1. Communication System Configuration
2. Communication Processing
3. Other Embodiments

1. COMMUNICATION SYSTEM CONFIGURATION

FIG.1is a block diagram showing a configuration of a communication system10, a browser terminal device100, and an administrator terminal device600according to an embodiment. In the present embodiment, the communication system10is a system that performs communication with the browser terminal device100. The communication system10includes a server device20, a terminal device400, and an electronic device500. The server device20is an information processing device including two server devices, a first server device200and a second server device300. The browser terminal device100, the first server device200, the second server device300, the terminal device400, and the administrator terminal device600are each connected to a network700. Further, the terminal device400and the electronic device500are each connected to a network800. The network700is a network outside the network800, and is, for example, a network such as the Internet, a local area network (LAN), or a wide area network (WAN). The network800is a network such as a LAN or a WAN. The terminal device400may serve as a gateway of the network800. In the present embodiment, each device in the network800is configured so as not to be able to receive a request according to the first protocol, which is a predetermined protocol, from outside the network800. In the present embodiment, the first protocol is hypertext transfer protocol (HTTP), but other protocols such as Telnet may be used. Further, in the present embodiment, HTTP is a concept including hypertext transfer protocol secure (HTTPS). The processing request and response data according to HTTP include a request line or a response line, an HTTP header, and a body, which is the data to be exchanged.
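As a concrete illustration of the message structure just described, the sketch below splits a raw HTTP message into its request line (or response line), HTTP header, and body. This is an illustrative helper, not part of the claimed embodiment; the sample request and host address are assumptions.

```python
# Illustrative only: split a raw HTTP message into the three parts named
# above -- the request (or response) line, the HTTP header, and the body.
def parse_http_message(raw: bytes):
    head, _, body = raw.partition(b"\r\n\r\n")   # header block ends at blank line
    lines = head.decode("iso-8859-1").split("\r\n")
    start_line = lines[0]                        # request line or response line
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return start_line, headers, body

# A request with no data to be exchanged carries an empty body.
start, hdrs, body = parse_http_message(
    b"GET /status HTTP/1.1\r\nHost: 192.168.0.10\r\n\r\n"
)
```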
When there is no data to be exchanged, the processing request and response data according to HTTP may not include the body. In the present embodiment, the terminal device400, which is the gateway of the network800, is configured to block the request according to the first protocol from outside the network800to devices inside the network800. However, other configurations may be used as long as each device in the network800can be configured so as not to accept the request according to the first protocol from outside the network800. For example, each device in the network800may be configured to, when receiving the request according to the first protocol from outside the network800, discard the received request. In the present embodiment, the browser terminal device100and the electronic device500indirectly perform communication according to the first protocol with the second server device300and the terminal device400as relays. FIG.2is a diagram showing an outline of communication between the browser terminal device100and each element of the communication system10. Hereinafter, each element of the browser terminal device100and the communication system10will be described with reference toFIGS.1and2. The browser terminal device100is an information processing device having a browser function. In the present embodiment, the browser terminal device100is used for remote maintenance of the electronic device500, but may be used for other purposes such as remote use of the electronic device500, confirmation of the usage status of the electronic device500, or the like. The browser terminal device100indirectly transmits, to the electronic device500, a processing request according to the first protocol, and acquires response data from the electronic device500, by performing communication with the first server device200and the second server device300through the browser. 
Here, the processing request is a request for executing processing, and in the present embodiment, the processing request is a request for providing a screen used for controlling the setting change of the electronic device500, the execution of a specified processing, and the like. In the following, the screen used for controlling the electronic device500will be referred to as a control screen. However, the processing request may be a request for other processing such as processing for providing a confirmation screen for setting information, processing for changing the setting, and processing for providing information indicating the usage status of the electronic device500. In the present embodiment, the electronic device500transmits, to a request source, the response data including screen data for the control screen as a response to the processing request. In the present embodiment, the response data is data for the web page displayed in the browser of the browser terminal device100, but may be other data such as a dialog. The screen data for the control screen is, for example, image information about the control screen, information indicating each object disposed in the control screen, and the like. Further, when the request source of the processing request knows the layout of each display object such as a button or a display block on the control screen in advance, the screen data for the control screen may be text data such as characters and numerical values displayed on the objects. The browser terminal device100includes a controller110, a communicator120, an input/output portion130, and a recording medium (not shown). The controller110includes a central processing unit (CPU), a random access memory (RAM), a read only memory (ROM), and the like. The communicator120includes a circuit for communicating with another device by wire or wirelessly. The controller110communicates with an external device through the communicator120.
The input/output portion130includes an input portion such as a mouse, a keyboard, and a touch panel, and a display portion such as a display. The controller110implements the function of a browser controller111by executing a program recorded in the ROM or the recording medium. The browser controller111is a functional portion for displaying a browser on the input/output portion130and communicating with an external device through the browser. The browser controller111includes a request transmission portion111aand a display controller111b. The controller110displays the browser on the input/output portion130and accesses the first server device200through the displayed browser, by the function of the browser controller111. When accessing the first server device200, the controller110notifies the first server device200of a user's authentication information including the user name and the password. In response to the notification of the authentication information, the first server device200determines whether the user is to be authenticated, and when the user is authenticated, transmits, to the browser terminal device100, a screen used for instructing transmission of the processing request for the electronic device500. Hereinafter, this screen will be referred to as an instruction screen. In the present embodiment, the instruction screen is a screen including a list of candidates for transmission targets of the processing request, including the electronic device500. The controller110displays the instruction screen acquired from the first server device200in the browser displayed on the input/output portion130. Then, when the controller110detects the selection of the electronic device500through the instruction screen, the controller110determines that the electronic device500is selected as the target for transmitting the processing request.
The request transmission portion111ais a functional portion for transmitting the processing request according to the first protocol to the first server device200and the second server device300different from the electronic device500when receiving the instruction to transmit the processing request through the instruction screen. When the electronic device500is selected as the target for transmitting the processing request through the instruction screen, the controller110transmits the processing request for the electronic device500to the first server device200by the function of the request transmission portion111a, as shown in (1) ofFIG.2. Here, the processing request to be transmitted includes information indicating the electronic device500that is the processing request destination and information indicating the browser terminal device100that is the request source of the processing request. In the present embodiment, the controller110transmits the processing request according to the first protocol to the first server device200. However, the controller110may transmit a processing request according to a protocol different from the first protocol, such as SPDY, to the first server device200. Then, as shown in (3) ofFIG.2, the controller110acquires an authentication key transmitted from the first server device200as a response to the processing request. Here, the authentication key is a key used for authenticating the browser terminal device100in the second server device300. In the present embodiment, the authentication key is a default character string unique to the corresponding processing request. The character string is, for example, a predetermined number of characters such as 3 characters, 5 characters, and 10 characters. However, the authentication key may be other information such as a bit string as long as it is unique to the corresponding processing request. In the present embodiment, the authentication key is a one-time key that is valid only once.
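The processing request of (1) ofFIG.2names both the request destination and the request source. A minimal sketch of such a payload follows; the field names and identifier strings are illustrative assumptions, not taken from the embodiment.

```python
# Hypothetical payload for the processing request sent to the first
# server: it identifies the request destination (the electronic device)
# and the request source (the browser terminal device). All field names
# and values here are illustrative, not from the patent.
import json

processing_request = {
    "target_device": "device-500",      # processing request destination
    "request_source": "browser-100",    # request source
    "operation": "provide_control_screen",
}
payload = json.dumps(processing_request)
```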
However, the authentication key may be a key that is valid twice or more. When the controller110receives the authentication key from the first server device200, the controller110inputs the URL of the second server device300and the received authentication key into an address bar of the browser. However, the controller110may input the URL of the second server device300in the address bar of the browser based on the operation by the user through the input/output portion130. In that case, the controller110does not have to input the received authentication key in the address bar of the browser. Then, the controller110generates data for the processing request according to the first protocol for the electronic device500, including the URL input in the address bar of the browser. In the present embodiment, the controller110may generate HTTP request data of the GET method as the data for the processing request according to the first protocol for the electronic device500, but may generate the HTTP request of another method such as the POST method. Then, the controller110transmits, to the second server device300, instead of the electronic device500, the generated data for the processing request according to the first protocol with the authentication key input in the address bar included. The authentication key transmitted from the controller110to the second server device300together with the data for the processing request is an example of the second key. (4) ofFIG.2illustrates the data for the processing request according to the first protocol and the authentication key transmitted from the browser terminal device100to the second server device300. Thereby, the controller110transmits, to the electronic device500, the data for processing request according to the first protocol with the second server device300and the terminal device400as relays. 
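The GET request of (4) ofFIG.2, with the authentication key carried in the URL entered in the address bar, might look like the following sketch. The path, query-parameter name, and host name are assumptions for illustration; only the "AAA" key value comes from the embodiment described later.

```python
# Sketch of (4) in FIG. 2: a GET-method request to the second server
# carrying the one-time authentication key in the requested URL.
# "/relay", "key", and the host name are illustrative assumptions.
AUTH_KEY = "AAA"                       # authentication key from the first server
url_path = f"/relay?key={AUTH_KEY}"    # what the browser's address bar would hold
request_data = (
    f"GET {url_path} HTTP/1.1\r\n"
    f"Host: second-server.example\r\n"
    f"\r\n"
)
```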
As illustrated in (9) ofFIG.2, the display controller111bis a functional portion for displaying the control screen on the display portion of the input/output portion130based on response data from the electronic device500to the data for the processing request transmitted from the second server device300. By the function of the display controller111b, the controller110receives, from the second server device300, response data, which is response data of the electronic device500to the data for the processing request transmitted by the function of the request transmission portion111aand includes the information about the control screen. The controller110displays the control screen as a web page indicated by the received response data in the browser displayed on the display portion of the input/output portion130. Then, when the controller110receives a control instruction for the electronic device500through the control screen, the controller110transmits, to the electronic device500, the processing request based on the control instruction for the electronic device500with the second server device300and the terminal device400as relays, in the same manner as the processing request for the processing of providing the control screen. The first server device200is an information processing device for receiving the processing request for the electronic device500from the browser terminal device100, generating an authentication key associated with the electronic device500, and transmitting the generated authentication key to the browser terminal device100and the second server device300. The first server device200includes a controller210, a communicator220, and a recording medium230. The controller210includes a CPU, RAM, ROM, and the like. The communicator220includes a circuit for communicating with another device by wire or wirelessly. The controller210communicates with an external device through the communicator220. 
The recording medium230records correspondence information231indicating correspondence between the electronic device500in the network800and the terminal device400that manages the electronic device500, various programs, and the like. The controller210implements functions of an authentication portion211, a key generation portion212, and a key transmission portion213by executing the program recorded in the ROM or the recording medium230. The authentication portion211is a functional portion for performing authentication of the user of the browser terminal device100. The controller210acquires user authentication information including, for example, a user name and a password, from the browser terminal device100, and performs authentication of the user based on the acquired authentication information, by the function of the authentication portion211. The key generation portion212is a functional portion for generating an authentication key associated with the electronic device500when receiving the processing request for the electronic device500. The authentication key generated by the function of the key generation portion212is an example of the first key. The controller210receives a processing request for the electronic device500from the browser terminal device100by the function of the key generation portion212. Then, the controller210asks the administrator terminal device600whether or not to permit the browser terminal device100, which is the request source of the processing request, to handle the processing request for the electronic device500, which is the request destination indicated by the processing request. The administrator terminal device600is an information processing device used by the administrator of the electronic device500. 
For example, the controller210notifies the administrator terminal device600of a screen used for selecting whether or not to permit the browser terminal device100to handle the processing request for the electronic device500, and acquires the selection result through the screen from the administrator terminal device600. When the administrator terminal device600selects to permit the processing request for the electronic device500, the controller210generates an authentication key used for authenticating the browser terminal device100in the second server device300. In the present embodiment, the controller210generates a character string “AAA” as the authentication key, but may generate data in another format such as another character string or bit string as the authentication key. However, when the information about a target which is permitted to handle the processing request for the electronic device500is recorded in advance on the recording medium230, the controller210does not have to ask the administrator terminal device600. In that case, when, for example, the request source of the processing request received from the browser terminal device100is recorded on the recording medium230as the target which is permitted to handle the processing request for the electronic device500, the controller210may generate the authentication key. Then, the controller210associates the generated authentication key with the electronic device500, which is indicated by the processing request received from the browser terminal device100. Further, the controller210specifies the terminal device corresponding to the electronic device500, that is, the terminal device that manages the electronic device500, from the correspondence information231. In the present embodiment, the terminal device400manages the electronic device500. Therefore, the controller210specifies the terminal device400. 
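The key generation portion212described above can be sketched as follows. The embodiment's example key is the fixed string "AAA"; drawing random characters of a predetermined length, as done here, is an assumption made to keep each key unique to its processing request.

```python
# Sketch of authentication key generation: a short character string of a
# predetermined length (e.g., 3, 5, or 10 characters). Random selection
# is an assumption; the patent's example value is simply "AAA".
import secrets
import string

def generate_auth_key(length: int = 10) -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

key = generate_auth_key(5)
```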
The key transmission portion213is a functional portion that transmits the authentication key generated by the function of the key generation portion212to the browser terminal device100that is the request source of the processing request for the electronic device500. The controller210transmits, to the second server device300, the authentication key generated by the function of key generation portion212, information about the electronic device500associated with the authentication key, and information about the terminal device400corresponding to the electronic device500, by the function of the key transmission portion213. In the present embodiment, the controller210transmits the IP address in the network800of the electronic device500and the identification information about the terminal device400as the information about the electronic device500and the information about the terminal device400, respectively. Hereinafter, the identification information is referred to as a terminal ID. (2) ofFIG.2shows the authentication key transmitted from the first server device200to the second server device300, the IP address of the electronic device500, and the terminal ID of the terminal device400. In the second server device300, the transmitted authentication key is registered in association with the information about the electronic device500and the information about the terminal device400. Further, as shown in (3) ofFIG.2, the controller210transmits the authentication key generated by the function of the key generation portion212to the browser terminal device100, and instructs the browser terminal device100to transmit data for the processing request to the second server device300using the transmitted authentication key. 
In response to the instruction, as shown in (4) ofFIG.2, the browser terminal device100transmits, to the second server device300, the data for the processing request for the electronic device500according to the first protocol together with the received authentication key. However, the controller210does not have to instruct the browser terminal device100to transmit the data for the processing request to the second server device300. In that case, the browser terminal device100may transmit the processing request to the second server device300at a timing specified by the user, for example, through the input/output portion130. The timing specified by the user includes, for example, a timing at which the URL of the second server device300is input to the address bar of the browser. The second server device300is an information processing device for receiving the data for the processing request according to the first protocol for the electronic device500and the authentication key from the browser terminal device100, and performing authentication using the authentication key and communication with the terminal device400that manages the electronic device500. The second server device300includes a controller310, a communicator320, and a recording medium330. The controller310includes a CPU, RAM, ROM, and the like. The communicator320includes a circuit for communicating with another device by wire or wirelessly. The controller310communicates with an external device through the communicator320. The recording medium330records key information331including authentication key information, various programs, and the like. The controller310implements the functions of a registration portion311, a connection controller312, and a response notification portion313by executing a program recorded in the ROM or the recording medium330. The registration portion311is a functional portion for registering the authentication key information received from the first server device200. 
The controller310receives the authentication key, the information about the electronic device500, and the information about the terminal device400from the first server device200, by the function of the registration portion311. Then, the controller310associates the received authentication key with the information about the electronic device500and the information about the terminal device400, and records it on the recording medium330as the key information331. The connection controller312is a function of performing connection to the electronic device500corresponding to the authentication key received from the browser terminal device100, and includes a data transmission portion312a. The data transmission portion312ais a functional portion for transmitting, to the terminal device400, data according to a second protocol, which is a predetermined protocol different from the first protocol, including the processing request according to the first protocol. In the present embodiment, the second protocol is Message Queue Telemetry Transport (MQTT) protocol, which is a stateful protocol, but may be another protocol as long as it can deliver information from a device outside the network800to a device inside the network800. For example, the second protocol may be Constrained Application Protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), or the like. The controller310performs authentication of the browser terminal device100by using the authentication key received from the browser terminal device100together with the data for the processing request according to the first protocol, by the function of the connection controller312. More specifically, the controller310determines whether or not the same authentication key as the received authentication key is recorded on the recording medium330as the key information331. 
The controller310authenticates the browser terminal device100when it determines that the received authentication key is recorded, and does not authenticate the browser terminal device100when it determines that the received authentication key is not recorded. When the browser terminal device100is authenticated, the controller310specifies, from the key information331, the IP address of the electronic device500, which is the information about the electronic device500associated with the authentication key, and the terminal ID of the terminal device400. Then, the controller310establishes communication with the terminal device400indicated by the specified terminal ID according to the second protocol. Then, the controller310generates data according to the second protocol to be transmitted to the terminal device400by the function of the data transmission portion312a. More specifically, the controller310modifies the data for the processing request according to the first protocol received from the browser terminal device100as follows. That is, the controller310removes the authentication key from the data for the processing request and changes the URL part indicating the second server device300to the IP address of the electronic device500. Then, in the present embodiment, as the data for the main body, the controller310generates, in a predetermined data format used in the second protocol, data including the modified data for the processing request according to the first protocol, identification information about the processing request, the IP address of the electronic device500acquired from the key information331by the function of the connection controller312, the predetermined port number used for communication related to the data for the processing request, and information indicating whether the data for the main body is Base64 format data. Hereinafter, the identification information about the processing request will be referred to as a request ID.
In the present embodiment, the predetermined data format is JavaScript (registered trademark) Object Notation (JSON) format, but may be other data formats such as XML format and Base64. When the controller310receives the data for the processing request for the electronic device500from the browser terminal device100, the controller310assigns a request ID that uniquely identifies the received data for the processing request. An example of the data according to the second protocol generated by the function of the data transmission portion312ais shown below.

Data example: {"request_id": "abcdefghij", "host": "**.**.**.**", "port": "***", "Base64": false, "body": "GET/http/."}

In the example, the "request_id" key is a key indicating the request ID of the data for the processing request. In the example, the request ID for the data for the processing request is "abcdefghij". The "host" key is a key indicating the IP address of the electronic device500that is the request destination of the data for the processing request. In the example, the IP address indicating the request destination of the data for the processing request is "**.**.**.**". The "port" key is a key indicating a predetermined port number used for communication related to the data for the processing request. In the example, the predetermined port number used for communication related to the processing request is "***". The "Base64" key is a key indicating whether or not the data for the main body is Base64 format data. In the example, since the value of the "Base64" key is "false", the data for the main body is not in the Base64 format. The "body" key is a key indicating data for the main body. In the example, the data for the main body is "GET/http/" which is the data according to the first protocol indicating the processing request for the electronic device500.
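Building the data according to the second protocol described above can be sketched as follows. The concrete host, port, and request body are placeholders standing in for the masked "**"/"***" fields of the example; they are not values from the embodiment.

```python
# Sketch of the data transmission portion 312a: wrap the rewritten
# first-protocol request in the JSON format of the data example, with
# the request ID, device IP, port, and Base64 flag.
import json

def wrap_request(request_id, host, port, http_request, is_base64=False):
    return json.dumps({
        "request_id": request_id,   # uniquely identifies this processing request
        "host": host,               # IP address of the electronic device
        "port": port,               # predetermined port for this request
        "Base64": is_base64,        # whether the body is Base64 format data
        "body": http_request,       # the first-protocol processing request
    })

# Placeholder values standing in for the masked fields of the example.
mqtt_payload = wrap_request("abcdefghij", "192.168.0.10", "80",
                            "GET / HTTP/1.1\r\n\r\n")
```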
In the example, the data for the processing request according to the first protocol included in the data is not in the Base64 format, but may be in the Base64 format. In that case, for example, the terminal device400that receives this data may extract, from the received data, the processing request in the Base64 format and may convert the extracted data for the processing request into the original format. The controller310transmits the generated data according to the second protocol to the terminal device400, as a predetermined communication destination, indicated by the terminal ID acquired from the key information331by the function of the connection controller312, as shown in (5) ofFIG.2. In this way, the controller310transmits the processing request according to the first protocol to a device in the network800by wrapping and encapsulating the processing request according to the first protocol with the data according to the second protocol. The terminal device400extracts the processing request according to the first protocol from the transmitted data and transmits the extracted processing request to the electronic device500. In this way, the controller310connects to the electronic device500with the terminal device400as a relay. The response notification portion313is a functional portion that transmits, to the browser terminal device100, which is the request source of the processing request, response data of the electronic device500to the processing request, which is transmitted from the terminal device400in response to the transmission of the data according to the second protocol by the function of the data transmission portion312a. The controller310receives the response data of the electronic device500to the processing request from the terminal device400by the function of the response notification portion313.
In the present embodiment, when the terminal device400receives the data according to the second protocol transmitted by the function of the data transmission portion312a, the terminal device400extracts the processing request of the first protocol from the received data. Then, the terminal device400transmits the extracted processing request of the first protocol to the electronic device500as shown in (6) ofFIG.2, and receives response data according to the first protocol to the processing request from the electronic device500as shown in (7) ofFIG.2. The response data includes an HTTP header and a data main body. The terminal device400generates data according to a predetermined third protocol including the received response data according to the first protocol and the request ID of the processing request. In the present embodiment, the predetermined third protocol is HTTP, but may be another protocol such as MQTT. In the present embodiment, the terminal device400converts the response data according to the first protocol received from the electronic device500into the Base64 format so as to be included in the data according to HTTP. As shown in (8) ofFIG.2, the terminal device400transmits the generated data according to the third protocol as a response to the data transmitted by the function of the data transmission portion312a. The controller310extracts the response data of the electronic device500according to the first protocol and the request ID of the processing request, from the data according to the third protocol received from the terminal device400. The controller310specifies, based on the extracted request ID, to which processing request the extracted response data corresponds.
Then, the controller310converts the Base64 format of the extracted response data according to the first protocol into the original format, and transmits the converted data to the browser terminal device100, which is the request source of the specified processing request, as shown in (9) ofFIG.2. The terminal device400is an information processing device for managing the electronic device500and relaying communication between the second server device300and the electronic device500. The terminal device400includes a controller410, a communicator420, and a recording medium (not shown). The controller410includes a CPU, RAM, ROM, and the like. The communicator420includes a circuit for communicating with another device by wire or wirelessly. The controller410communicates with an external device through the communicator420. The controller410implements the functions of a request notification portion411and a response data transmission portion412by executing a program recorded in the ROM or the recording medium. The request notification portion411is a functional portion for transmitting the processing request according to the first protocol to the electronic device500based on the data according to the second protocol transmitted from the second server device300. The controller410extracts the processing request according to the first protocol for the electronic device500from the data according to the second protocol received from the second server device300, by the function of the request notification portion411. Further, the controller410also extracts the request ID of the processing request, the port number, and the IP address of the electronic device500from the data according to the second protocol received from the second server device300. Then, the controller410records the extracted request ID in the RAM. 
Further, as shown in (6) ofFIG.2, the controller410transmits the extracted processing request according to the first protocol to the extracted port number of the electronic device500of the extracted IP address. In this way, the second server device300and the terminal device400transmit the processing request according to the first protocol, which is transmitted from the browser terminal device100outside the network800, to the electronic device500in the network800, by the functions of the data transmission portion312aand the request notification portion411. The response data transmission portion412is a functional portion for transmitting, to the second server device300, the response data transmitted from the electronic device500, as a response to the processing request transmitted by the function of the request notification portion411. As shown in (7) ofFIG.2, the controller410receives, from the electronic device500, the response data according to the first protocol of the electronic device500to the processing request transmitted by the function of the request notification portion411, by the function of the response data transmission portion412. The controller410converts the format of the received response data according to the first protocol into the Base64 format. By converting the format of the response data into the Base64 format, the response data can be included in the data according to the third protocol even when the response data includes binary data. Then, the controller410generates data according to the third protocol including the converted response data and the request ID of the processing request recorded in the RAM by the function of the request notification portion411. Then, as shown in (8) ofFIG.2, the controller410transmits the generated data according to the third protocol to the second server device300. 
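The response path just described — Base64-encoding a possibly binary first-protocol response and pairing it with the request ID recorded in the RAM — can be sketched as below. The JSON carrier and the field names are illustrative assumptions, not details from the embodiment.

```python
import base64
import json

def wrap_response(response: bytes, request_id: str) -> str:
    """Terminal-device side: Base64 lets binary response bodies travel
    inside a text-based third-protocol message."""
    return json.dumps({
        "request_id": request_id,  # recorded when the request was relayed
        "response": base64.b64encode(response).decode("ascii"),
    })

def unwrap_response(payload: str):
    """Second-server side: restore the first-protocol response."""
    msg = json.loads(payload)
    return msg["request_id"], base64.b64decode(msg["response"])
```

Because Base64 round-trips arbitrary bytes, the response can include binary data (for example, images on a control screen) without corrupting the text-based carrier.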
In this way, the controller410transmits the data according to the third protocol that wraps the response data converted into the Base64 format to the second server device300, thereby transmitting the response data according to the first protocol to the second server device300. The electronic device500is a web server device having a function of receiving the processing request through the network and performing processing according to the received processing request, and performs communication with a device in the network800according to the first protocol. However, the electronic device500does not have to be a web server device as long as it has a function of receiving the processing request according to the first protocol and executing processing according to the received processing request. In the present embodiment, the electronic device500is a printer, but may be another device such as a multifunction device, a scanner, a projector, a PC, an air conditioner, or a lighting device. The electronic device500includes a controller510, a communicator520, and a recording medium (not shown). The controller510includes a CPU, RAM, ROM, and the like. The communicator520includes a circuit for communicating with another device by wire or wirelessly. The controller510implements the function of a response portion511by executing a program recorded in the ROM or the recording medium. The response portion511is a functional portion for transmitting, to the transmission source of the processing request, response data to the processing request according to the first protocol from an external device. As shown in (6) ofFIG.2, when the controller510receives the processing request according to the first protocol from the terminal device400, the controller510generates response data according to the first protocol including the information about the control screen, by the function of the response portion511.
Then, as shown in (7) ofFIG.2, the controller510transmits the generated response data according to the first protocol to the terminal device400, which is the transmission source of the processing request. As described above, the communication by the first protocol between the browser terminal device100and the electronic device500is indirectly carried out by the communication system10, and the control screen is displayed on the browser terminal device100. With the above configuration of the present embodiment, the communication system10transmits, to the electronic device500in the network800, the processing request according to the first protocol from the browser terminal device100outside the network800, through the second server device300and the terminal device400. Thereby, the communication system10can cause the electronic device500to execute the processing according to the processing request according to the first protocol even from outside the network800, which makes it possible to improve the convenience of the electronic device500. Further, in the present embodiment, the second server device300transmits the response data of the electronic device500to the processing request, as data according to the first protocol, to the browser terminal device100, which is the request source of the processing request. Thereby, the browser terminal device100transmits the processing request according to the first protocol, and receives response data according to the first protocol as a response to the transmitted processing request. Thereby, from the perspective of the browser terminal device100, communication with the electronic device500is carried out according to the first protocol.
Thereby, the browser terminal device100only needs to exchange information with the electronic device500according to the first protocol; thus, a separate protocol for transmitting and receiving information is not necessary, and it is possible to communicate with the electronic device500more easily. Further, in the present embodiment, the first protocol is HTTP, and the response data from the electronic device500is the data for the web page. Therefore, the browser terminal device100can perform communication with the electronic device500through the browser according to HTTP and display the response data on the browser. Thereby, the user of the browser terminal device100can interact with the electronic device500by operating the browser. Further, in the present embodiment, the second protocol is the MQTT protocol. Thereby, the communication system10can implement relatively lightweight communication between the second server device300and the terminal device400, and can speed up the communication between the browser terminal device100and the electronic device500. Further, in the present embodiment, the first server device200transmits, to the second server device300, the information about the electronic device500and the terminal ID of the terminal device400together with the generated authentication key. The first server device200does not transmit, to the browser terminal device100, the information about the electronic device500and the terminal ID of the terminal device400together with the generated authentication key. Thereby, the information about the terminal device400in the network800and the electronic device500is not transmitted to the browser terminal device100, which is the request source of the processing request. Thereby, the communication system10can reduce the possibility that the information about the terminal device400and the electronic device500is leaked through the browser terminal device100.
Further, in the present embodiment, the first server device200generates an authentication key used for authentication of the browser terminal device100, and the second server device300registers, as the key information331, the authentication key generated by the first server device200in association with the electronic device500of the processing request destination and the corresponding terminal device400. Then, the second server device300connects to the electronic device500associated with the same authentication key as the authentication key received from the browser terminal device100together with the processing request, among authentication keys registered as key information331. Thereby, the second server device300connects to the request destination of the processing request associated with the authentication key by receiving the authentication key. Thereby, when the second server device300receives a processing request without an authentication key in the communication relayed by another device between the browser terminal device100and the electronic device500, the second server device300does not connect to the electronic device500, which makes it possible to reduce unauthorized connections, and to improve security.
2. COMMUNICATION PROCESSING
FIGS.3and4are sequence diagrams showing communication processing executed by the communication system10and the browser terminal device100. The sequence diagram ofFIG.3and the sequence diagram ofFIG.4are continuous, from the lower end of the sequence diagram ofFIG.3to the upper end of the sequence diagram ofFIG.4. In S100, the controller110of the browser terminal device100notifies the first server device200of the user authentication information about the browser terminal device100and accesses the first server device200, by the function of the browser controller111.
In S105, the first server device200performs authentication of the user of the browser terminal device100based on the authentication information notified in S100, and when the authentication is performed, transmits an instruction screen to the browser terminal device100. In S110, the controller110acquires the instruction screen transmitted from the first server device200in response to the access in S100by the function of the browser controller111. In S115, the controller110displays the instruction screen acquired in S110by the function of the browser controller111in the browser displayed on the input/output portion130. In S120, when the electronic device500is selected as the target for transmitting the processing request through the instruction screen displayed in S115, the controller110transmits the processing request for the electronic device500to the first server device200, by the function of the request transmission portion111a. In S125, the controller210of the first server device200receives a processing request for the electronic device500from the browser terminal device100by the function of the key generation portion212. Then, the controller210inquires of the administrator terminal device600whether or not to permit the processing request for the electronic device500from the browser terminal device100. In S130, when, as a result of the inquiry in S125, it is selected to permit the processing request for the electronic device500, the controller210generates an authentication key used for authentication of the browser terminal device100in the second server device300by the function of the key generation portion212, and associates the generated authentication key with the electronic device500, which is the request destination of the processing request received in S125. Further, the controller210specifies the terminal device400corresponding to the electronic device500from the correspondence information231.
In S135, the controller210transmits, to the second server device300, the authentication key generated in S130, information about the electronic device500associated with the authentication key, and information about the terminal device400corresponding to the electronic device500, by the function of the key transmission portion213. In S140, the controller310of the second server device300receives the authentication key, the information about the electronic device500, and the information about the terminal device400from the first server device200, by the function of the registration portion311. Then, the controller310associates the received authentication key with the information about the electronic device500and the information about the terminal device400, and records it on the recording medium330as the key information331. In S145, the controller210of the first server device200transmits the authentication key generated in S130to the browser terminal device100by the function of the key transmission portion213, and instructs the browser terminal device100to transmit the processing request to the second server device300by using the transmitted authentication key. In S150, the controller110of the browser terminal device100generates data for the processing request according to the first protocol for the electronic device500by the function of the request transmission portion111a. Then, the controller110transmits, to the second server device300, the generated data for the processing request together with the authentication key transmitted in S145. In S155, the controller310of the second server device300determines whether or not the same authentication key as the authentication key received from the browser terminal device100in S150is recorded on the recording medium330as key information331, by the function of the connection controller312. 
The controller310authenticates the browser terminal device100when it determines that the received authentication key is recorded, and does not authenticate the browser terminal device100when it determines that the received authentication key is not recorded. When the browser terminal device100is authenticated, the controller310acquires, from the key information331, the information about the electronic device500associated with the authentication key and the information about the terminal device400. In S160, the controller310generates data according to the second protocol, including the data for the processing request according to the first protocol received from the browser terminal device100, the request ID of the processing request, the information about the electronic device500of the request destination of the processing request, the predetermined port number, and information indicating whether or not the data for the processing request is Base64 format data, by the function of the data transmission portion312a. In S165, the controller310transmits the data according to the second protocol generated in S160to the terminal device400by the function of the data transmission portion312a. In S170, the controller410of the terminal device400receives the data according to the second protocol transmitted in S165by the function of the request notification portion411. The controller410extracts the data for the processing request according to the first protocol for the electronic device500from the received data. Further, the controller410also extracts the request ID of the processing request, the port number, and the IP address which is the information about the electronic device500from the received data according to the second protocol. Then, the controller410records the extracted request ID in the RAM. 
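The registration of S140 and the authentication-key check of S155 can be sketched as follows. The class name and the storage layout are illustrative; the pop-on-use behavior models a one-time key that is valid only once.

```python
import secrets

class KeyInformation:
    """Sketch of the key information kept by the second server:
    authentication keys mapped to the request-destination device and
    its managing terminal."""
    def __init__(self):
        self._keys = {}

    def register(self, auth_key, device_info, terminal_id):
        # S140: record the key together with device and terminal info
        self._keys[auth_key] = (device_info, terminal_id)

    def authenticate(self, auth_key):
        """S155: return (device_info, terminal_id) if the key is
        registered, else None. pop() makes the key one-time."""
        return self._keys.pop(auth_key, None)

key_info = KeyInformation()
auth_key = secrets.token_urlsafe(16)   # generated by the first server
key_info.register(auth_key, {"ip": "192.168.0.10", "port": 80}, "terminal-400")
```

A request arriving with an unregistered (or already used) key is simply not connected, matching the behavior described for S155.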
In S175, the controller410transmits the processing request according to the first protocol extracted in S170to the port number extracted in S170of the electronic device500at the IP address extracted in S170, by the function of the request notification portion411. In S180, the controller510of the electronic device500receives the processing request according to the first protocol transmitted in S175, by the function of the response portion511. Then, the controller510generates the response data according to the first protocol including the information on the control screen. In S185, the controller510transmits the response data according to the first protocol generated in S180to the terminal device400, by the function of the response portion511. In S190, the controller410of the terminal device400receives the response data according to the first protocol transmitted in S185from the electronic device500, by the function of the response data transmission portion412. The controller410converts the format of the received response data according to the first protocol into the Base64 format. Then, the controller410generates data according to the third protocol including the converted response data and the request ID of the processing request recorded in the RAM in S170. In S195, the controller410transmits the data according to the third protocol generated in S190to the second server device300, by the function of the response data transmission portion412. In S200, the controller310of the second server device300receives the data according to the third protocol transmitted from the terminal device400in S195, by the function of the response notification portion313. The controller310extracts the response data according to the first protocol and the request ID of the processing request, from the data according to the third protocol received from the terminal device400.
The controller310specifies which processing request the extracted response data according to the first protocol corresponds to, based on the extracted request ID. In S205, the controller310converts the Base64 format of the response data according to the first protocol extracted in S200to the original format, and transmits the format-converted data to the browser terminal device100which is the request source of the processing request specified in S200. In S210, the controller110of the browser terminal device100receives the response data according to the first protocol transmitted from the second server device300in S205, by the function of the display controller111b. Then, the controller110displays the control screen in the browser displayed on the display portion of the input/output portion130based on the received response data according to the first protocol.
3. OTHER EMBODIMENTS
The above embodiment is an example, and various aspects can be employed. Further, the communication system10shown inFIG.1may be composed of a larger number of systems. For example, at least one of the elements200,300, and400of the communication system10may be composed of a plurality of devices. In the above-described embodiment, the server device20is composed of two server devices, the first server device200and the second server device300. However, the server device20may be one server device having functions of the first server device200and functions of the second server device300. Further, the server device20may be composed of three or more server devices among which functions of the first server device200and functions of the second server device300are distributed. Further, in the above-described embodiment, the second server device300performs authentication of the browser terminal device100using the authentication key. However, the second server device300does not have to perform authentication of the browser terminal device100.
In that case, the communication system10may not include the first server device200. In that case, the browser terminal device100transmits, for example, a processing request according to the first protocol to the second server device300without communicating with the first server device200. Further, in the above-described embodiment, the terminal device400is present for managing the electronic device500. However, the terminal device400that manages the electronic device500may not be present. In that case, in the network800, another gateway different from the terminal device400is configured. Then, the electronic device500can communicate according to the second protocol, and the second server device300may transmit the data according to the second protocol including the processing request according to the first protocol, to the electronic device500associated with the authentication key received from the browser terminal device100. Then, the electronic device500may extract the processing request according to the first protocol from the received data according to the second protocol, and transmit the response data including the control screen to the second server device300in response to the extracted processing request. Further, in that case, the second server device300registers the authentication key in association with the information on the request destination of the processing request, without association with information on the terminal device400that manages the electronic device500, which is the request destination of the processing request. Further, in the above-described embodiment, the authentication key generated by the first server device200is a one-time key that is valid only once. However, the authentication key may be a key having a finite validity period, for example, one hour, one day, one week, or the like. 
Thereby, by repeatedly using the same authentication key within the validity period when the browser terminal device100performs communication with the electronic device500, it is not necessary to generate the authentication key within the validity period. Further, after the validity period passes, the electronic device500cannot be accessed using the authentication key, and unauthorized access is suppressed. By using the authentication key having a finite validity period in this way, the communication system10can reduce the burden of the authentication key generation process while maintaining the security. Further, in the above-described embodiment, the second server device300registers the authentication key. In this case, the second server device300may register the authentication key in association with user information for the browser terminal device100. The user information is information that identifies the user, such as a user name and a user ID. For example, the first server device200acquires the user information for the browser terminal device100that transmits the processing request in S120, and transmits, to the second server device300, the acquired user information together with the authentication key. In that case, the second server device300may associate the authentication key received from the first server device200with the user information for the browser terminal device100and register it as the key information331. Then, the second server device300receives the authentication key and the user information from the browser terminal device100together with the processing request according to the first protocol. Then, the second server device300determines whether or not the received authentication key associated with the received user information is registered in the key information331. 
The second server device300authenticates the browser terminal device100when it is registered, and does not authenticate the browser terminal device100when it is not registered. Thereby, the server device20does not transmit the processing request to the electronic device500when the authentication key and the user information are not valid, which makes it possible to improve security as compared with the case where authentication is performed without using the user information. Further, in the above-described embodiment, the browser terminal device100performs indirect communication with the electronic device500according to the first protocol. However, the browser terminal device100may perform communication with the electronic device500in another mode, and may perform indirect communication according to another protocol, such as the second protocol, through the second server device300and the terminal device400, for example. The server device, the terminal device, and the electronic device may be present in different networks. Further, the processing request according to the first protocol from the server device into the network in which the terminal device and the electronic device are present may not be permitted. Further, the present disclosure can be applied as a program or a method executed by a computer. Further, the program and method as described above may be implemented as a single device or may be implemented by using parts provided in a plurality of devices, and include various aspects. In addition, the configuration can be changed as appropriate, for example, with part implemented as software and part as hardware. Further, the present disclosure is also established as a recording medium storing the program. Of course, the recording medium for the program may be a magnetic recording medium, a semiconductor memory, or the like, and any recording medium to be developed in the future can be considered in exactly the same way.
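The two variants described above — an authentication key with a finite validity period, registered together with user information — can be combined in a sketch like the following. The names and the storage layout are illustrative assumptions.

```python
import time

class ExpiringKeyStore:
    """Keys are registered per (auth_key, user) pair and expire after a
    finite validity period (e.g., one hour, one day, one week)."""
    def __init__(self):
        self._expiry = {}   # (auth_key, user_id) -> expiry timestamp

    def register(self, auth_key, user_id, ttl_seconds):
        self._expiry[(auth_key, user_id)] = time.monotonic() + ttl_seconds

    def authenticate(self, auth_key, user_id):
        # Valid only if the exact key/user pair is registered and unexpired
        expiry = self._expiry.get((auth_key, user_id))
        return expiry is not None and time.monotonic() < expiry
```

Within the validity period the same key can be reused without regenerating it; after the period passes, authentication fails and access using the key is refused.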
11943328

The appended drawings are not necessarily to scale and may present a somewhat simplified representation of various preferred features of the present disclosure as disclosed herein, including, for example, specific dimensions, orientations, locations, and shapes. Details associated with such features will be determined in part by the particular intended application and use environment.
DETAILED DESCRIPTION
The present disclosure is susceptible of embodiment in many different forms. Representative examples of the disclosure are shown in the drawings and described herein in detail as non-limiting examples of the disclosed principles. To that end, elements and limitations described in the Abstract, Introduction, Summary, and Detailed Description sections, but not explicitly set forth in the claims, should not be incorporated into the claims, singly or collectively, by implication, inference, or otherwise. For purposes of the present description, unless specifically disclaimed, use of the singular includes the plural and vice versa, the terms “and” and “or” shall be both conjunctive and disjunctive, and the words “including”, “containing”, “comprising”, “having”, and the like shall mean “including without limitation”. Moreover, words of approximation such as “about”, “almost”, “substantially”, “generally”, “approximately”, etc., may be used herein in the sense of “at, near, or nearly at”, or “within 0-5% of”, or “within acceptable manufacturing tolerances”, or logical combinations thereof. As used herein, a component that is “configured to” perform a specified function is capable of performing the specified function without alteration, rather than merely having potential to perform the specified function after further modification.
In other words, the described hardware, when expressly configured to perform the specified function, is specifically selected, created, implemented, utilized, programmed, and/or designed for the purpose of performing the specified function. Referring to the drawings, the leftmost digit of a reference number identifies the drawing in which the reference number first appears (e.g., a reference number ‘310’ indicates that the element so numbered is first labeled or first appears inFIG.3). Additionally, elements which have the same reference number, followed by a different letter of the alphabet or other distinctive marking (e.g., an apostrophe), indicate elements which may be the same in structure, operation, or form but may be identified as being in different locations in space or recurring at different points in time (e.g., reference numbers “110a” and “110b” may indicate two different input devices which may be functionally the same, but may be located at different points in a simulation arena). Disclosed herein is a vehicular communication controller apparatus that includes a microcontroller (MCU) located within a vehicle. An MCU within a vehicle is typically optimized to perform a particular function very efficiently. Further, a vehicle may contain multiple MCUs, each dedicated to a specialized function, but that are configured to communicate with each other. In the interest of efficiency, the communication protocols used within a vehicle may not be compatible with internet or cloud protocol standards. Such communication between devices, also referred to as D2D (device-to-device) communication, has prompted various vehicle manufacturers to develop their own communications protocols. Further complications arise if a vehicle needs to communicate with an outside source, such as in the use of autonomous vehicles. Sometimes referred to as Car-2-X applications, these applications require interaction between vehicles and off-board systems.
In such situations, an automotive system may need to provide secure on-board communication in addition to support of internet or cloud-based services. In addition, cloud-based services may require dedicated controls for security, such as secure cloud interaction and emergency vehicle preemption, along with the ability to support remote and distributed services, such as remote diagnostics, over the air update, repair, and exchange handling. AUTOSAR™ (AUTomotive Open System Architecture) is a worldwide development partnership of vehicle manufacturers, suppliers, service providers and companies from the automotive electronics, semiconductor and software industry that is attempting to define standards for modularity, configurability, feasibility, producibility, and standardized interfaces. AUTOSAR™ has proposed an automotive middleware solution that may be used for control messages, referred to as Scalable service-Oriented MiddlewarE over IP (SOME/IP). Unfortunately, the current SOME/IP solution, while directed to compatibility of functionality within the automotive industry, is not compatible with current Cloud computing protocols. In a representative use case, a vehicle may have multiple electronic devices controlling a wide variety of functions, for example, driving control systems, entertainment and infotainment systems, environmental controls, etc. Many of these functions depend on communications between other functional modules within the vehicle and outside of the vehicle. For example, there may be a need to dynamically deploy applications and services to vehicles using a secure in-vehicle architecture. Typically, services are of mixed criticality. For example, some services may be directed to quality management, while others may be directed to critical systems. For example, in the industry there are automotive safety integrity levels (ASIL) ranging from A to D.
D is the highest integrity level, indicating a hazardous situation, and therefore needs more stringent risk control criteria as compared to an A level. The ASIL ratings may apply to the types of available services. However, currently there is no way for an ASIL rated application or service to securely discover non-ASIL services running on other devices. FIG.1illustrates an example internet web session100of transporting data using an http protocol between a server and a web browser, according to an embodiment of the present disclosure. Internet web session100includes a web browser110and a server120. Internet web session100starts with a request130from the web browser110to the server120, where the request specifies the desired action, for example the delivery of a file, such as a video. Server120may then send the requested data150, which for example may be a streaming video file. In addition, server120sends a response140to the client, for example the web browser110, that the action it requested has been carried out. In another example, the response140may also be to inform the client that an error occurred in processing its request. FIG.2illustrates a cloud-based publish/subscribe system200, according to an embodiment of the present disclosure. The cloud-based publish/subscribe system200does not utilize the request/response algorithm as described inFIG.1. Rather, the publish/subscribe system200utilizes clients and brokers. For example, publish/subscribe system200may include a vehicle cloud client210, a server cloud client220, and a broker230. As an example, the vehicle cloud client may contain sensors and controllers, also referred to as a vehicular communication controller (not shown), that may sense and control the speed of the vehicle. In this example, vehicle cloud client210intends to send data, e.g., the speed of the vehicle, to a recipient, for example the server cloud client220.
However, in the cloud-based publish/subscribe system200there is no direct contact between clients, i.e., vehicle cloud client210and server cloud client220; rather, communications are routed through broker230. The process may be initiated where vehicle cloud client210issues a connect message240to broker230. If a connection between vehicle cloud client210and broker230is possible, broker230issues a connect message245back to vehicle cloud client210to acknowledge and establish a connection. Once the connection is established, vehicle cloud client210may then be determined to be a publisher and may send its payload data, such as its speed, to broker230, shown as publish speed250, along with other metadata identifying vehicle cloud client210as the source. Once the publish speed250message is received, the broker230checks whether any subscribers for the topic name "speed" exist. In the example, server cloud client220is a subscriber260for the topic "speed." Therefore, a message with the topic of "speed" received by the broker230is sent or published to a subscriber that has subscribed to the "speed" topic. Thus, the speed data of the vehicle cloud client210is published as a speed message265to server cloud client220. Server cloud client220may also concurrently act as a publisher. For example, given the speed message of the vehicle cloud client, the server cloud client may desire to publish a brake message to slow down the vehicle cloud client. In that example, server cloud client would publish a brake message270to broker230. Vehicle cloud client210may be a subscriber for a brake topic275, in which case vehicle cloud client would receive the published brake message280to automatically apply the brakes to slow the vehicle. FIG.3illustrates a large cloud-based publish/subscribe system300, according to an embodiment of the present disclosure.
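The broker-mediated flow described above can be reduced to a minimal sketch: clients never contact each other directly, and the broker forwards each published message to every subscriber of its topic. The class and method names below are illustrative only and are not taken from any particular messaging library.

```python
# Minimal publish/subscribe broker sketch; all names are illustrative.
class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        # Register a client callback for a topic (e.g., "speed").
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Forward the payload to every client subscribed to the topic;
        # publishers and subscribers never communicate directly.
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)


received = []
broker = Broker()
# Server cloud client subscribes to the "speed" topic.
broker.subscribe("speed", lambda topic, payload: received.append((topic, payload)))
# Vehicle cloud client publishes its speed through the broker.
broker.publish("speed", {"vehicle_id": 1, "speed_kmh": 87})
```

Because the broker is the only coupling point, either side can be replaced or scaled out without the other noticing, which is the decoupling property the disclosure relies on.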
The cloud-based publish/subscribe system300includes vehicle cloud clients310, shown as vehicle cloud clients310-1,310-2, through to310-N, and server cloud clients320, shown as server cloud clients320-1,320-2, through to320-N. In addition, the cloud-based publish/subscribe system300includes brokers330, shown as broker330-1,330-2, through to330-N. In an embodiment, vehicle cloud clients310are decoupled from server cloud clients320and communicate through brokers330. Thus, communications between clients may be very stable, reliable, and scalable as there is no requirement for direct connections between the clients. FIG.4illustrates a MCU to SoC discovery architecture400, according to an embodiment of the disclosure. Architecture400includes a MCU410, a SoC420, and a cloud430. Further, MCU410may consist of multiple MCUs, as SoC420may represent multiple SoCs. In an embodiment, MCU410may also include a software component or application. In one scenario, the software component may issue a command405, e.g., a "find service" command, to discover a desired service on a device outside of the MCU, for example, at a SoC. In such a scenario a command415may be issued from MCU410to SoC420. Command415may be a unicast command sent to a single particular SoC, or multicast to multiple SoCs. In an embodiment, command405and command415from MCU410may be in an automotive/embedded communication protocol. Once command415is received by SoC420, it may issue a series of remote procedure calls (RPC) to detect, or discover, whether the requested service is within SoC420, shown by RPC425. SoC420may also issue an RPC432request to the cloud430, which may then issue additional RPC434discovery requests to other devices connected to the cloud430. If a discovery request is answered from cloud430, then an RPC432command is returned to SoC420.
In addition, if SoC420also, or in addition to cloud430, identifies a matching available service from within SoC420, then a unicast command422is returned to the original requesting MCU410. In an embodiment, RPC425, RPC432, and RPC434may include a cloud-based protocol. As such, SoC420would convert the cloud-based protocol packets to an automotive/embedded communication protocol. Further, in an embodiment, SoC420may also include a local database of services. For example, the service may be directed to automotive body parts that may identify the types of assemblies in a vehicle, such as doors and windows. In addition, the local database may also contain the capabilities associated with a particular service. Thus, in the case of automotive body parts for a particular vehicle, the metadata associated with that particular vehicle may identify it to include four doors, one windshield, one back window, and one window within each door. In another embodiment, this same type of database may be available in the cloud for access by multiple entities. FIG.5illustrates a Service Discovery Protocol500from a MCU for discovery of one or more SoCs, according to an embodiment of the disclosure. Service Discovery Protocol500includes a Destination Device510and a MCU520. A main goal of the Service Discovery Protocol500is to communicate the availability of functional entities called services in the in-vehicle communication, including the controlling of the send behavior of event messages. Such an approach allows for the sending of event messages just to receivers requiring the service, using a Publish/Subscribe approach. MCU520may also include a software component522, a run time environment524, a discovery service526and a transformer+528. The destination device510, which in some embodiments is an SoC, may be the ultimate destination in a string of destination devices, or it may be an intermediary device.
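The local database of services described above can be sketched as a simple registry mapping service names to identifiers and capability metadata. The service names, identifier values, and metadata fields below are hypothetical examples; a lookup miss signals the caller to escalate the request to the cloud.

```python
# Sketch of a local service registry with per-service capability
# metadata; service names, IDs, and fields are hypothetical examples.
local_services = {
    "body.doors":      {"service_id": 0x1001, "count": 4},
    "body.windshield": {"service_id": 0x1002, "count": 1},
    "body.windows":    {"service_id": 0x1003, "count": 5},
}

def find_service(name):
    # Return (service_id, metadata) if the service is offered locally;
    # otherwise return None so the caller can escalate to the cloud.
    entry = local_services.get(name)
    if entry is None:
        return None
    metadata = {k: v for k, v in entry.items() if k != "service_id"}
    return entry["service_id"], metadata
```

In an embodiment where the same database is mirrored in the cloud, the cloud copy would use the same lookup shape, allowing multiple entities to query identical metadata.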
Destination device510may also include a streamer512, a bus514, and a discovery service516. Service Discovery Protocol500starts with software component522determining that communication with a service offered outside of MCU520is desired and thus generating find service request531. In an embodiment, the desired service may be available on an SoC, for example the destination device510. Find service request531may then be directed to run time environment524. Run time environment524may act as a communications bridge, here bridging find service request531to discovery service526as find service request533. Discovery service526may then integrate the find service request533into a payload of a find service message packet, generating find service message packet535that may be sent to transformer+528. Transformer+528may bind the payload of the find service message packet535to an automotive/embedded communication protocol, for example SOME/IP, at protocol binding537. The bound message is then returned to run time environment524as bound find service message packet539. Run time environment524may then send the bound find service message packet539as a multicast packet to multiple destination devices, for example multiple SoCs, or in an embodiment, as a unicast packet to a specific destination device, shown as packet541. FIG.5illustrates a destination device, which may be a device that is listening for service discovery messages. Further, the destination device may be located within or outside of the vehicle in which the MCU520exists. Destination device510then receives the packet541at streamer512. Streamer512may then deserialize a header of the packet541and generate a generic publish event545that may be published to a bus514and forwarded to discovery service516for processing.
Discovery service516processing may start with deserializing the payload of the generic publish event545, for example by using auto-generated helper libraries to retrieve the initial find service request531. Discovery service516may also contain a database, or registry, that may include a listing of service metadata with corresponding service identifiers within destination device510. Discovery service516may then identify a corresponding service identifier associated with the find service request531. Once located, the discovery service516may then generate a service directory message that includes the service metadata from the database. Discovery service516may then build a cloud event containing a header and a payload containing a serialized offer service message, which may also be referred to as a solicited response. Discovery service516may then publish the solicited response as cloud event553to bus514, which forwards the cloud event as published cloud event555to streamer512. Streamer512may then serialize and bind the header of the published cloud event555to an automotive/embedded communication protocol, for example SOME/IP, at protocol binding557, creating solicited response event559. Solicited response event559may then be sent to MCU520and received by run time environment524, which may then send it to transformer+528as message packet561to remove the header at process563. The result may then be sent to the discovery service526as message packet565, returning the solicited response567that may then be directed back to the software component522, thus completing the discovery request. FIG.6illustrates a Service Discovery Protocol600from a SoC for discovery of one or more MCUs, according to an embodiment of the disclosure. Service Discovery Protocol600includes a MCU610and a SoC620. MCU610may also include a run time environment612, a discovery service614and a transformer+616. The SoC620may also include a streamer622, a bus624, and a discovery service626.
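The protocol binding step described for FIG.5 — serializing a find-service request into a payload and prefixing a SOME/IP-style header — can be sketched as below. The field layout follows the published SOME/IP on-wire header (message ID, length, request ID, version and type bytes), but the service/method IDs and the JSON payload encoding are illustrative assumptions, not the disclosure's actual wire format.

```python
import json
import struct

# Sketch of protocol binding537: the find-service request becomes the
# packet payload, prefixed with a SOME/IP-style 16-byte header.
def bind_find_service(service_id, method_id, request):
    payload = json.dumps(request).encode()  # serialized payload (assumed JSON)
    # The SOME/IP length field covers the 8 header bytes that follow it
    # (request ID through return code) plus the payload.
    length = 8 + len(payload)
    header = struct.pack(
        ">HHIHHBBBB",
        service_id,   # message ID: service ID
        method_id,    # message ID: method ID
        length,       # length
        0x0001,       # request ID: client ID (example value)
        0x0001,       # request ID: session ID (example value)
        0x01,         # protocol version
        0x01,         # interface version
        0x00,         # message type: REQUEST
        0x00,         # return code: E_OK
    )
    return header + payload


packet = bind_find_service(0x1234, 0x0001, {"find": "body.doors"})
```

On the destination device, the streamer's deserialization step would unpack the same 16-byte header before handing the payload to the discovery service.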
Service Discovery Protocol600may also include a central discovery service630located outside the MCU610and the SoC620. Service Discovery Protocol600starts with an indication641by run time environment612of a change in service status. For example, a new service may become available or be started, or an existing service may no longer be available or may have been stopped. Indication641is received by discovery service614. Discovery service614may then generate offer service message643. Offer service message643may include either a start offer service, indicating the start of a service, or a stop offer service, indicating the stopping of a service, based on the indication641. The offer service message is then sent to transformer+616, which may bind the offer service message643at binder645to an automotive/embedded communication protocol that supports remote procedure calls, event notifications and an underlying serialization/wire format, creating a service discovery message647that may be sent to the run time environment612. Run time environment612may then send the service discovery message647to one or more SoC devices. Run time environment612may utilize unicast to send to a particular SoC, or destination, device or multicast to reach multiple SoCs that may be listening on the multicast port. The unicast or multicast message649may then be received by SoC620at streamer622, which may deserialize a header of the service discovery message at deserializer651to determine if the message is indeed a service discovery message. If it is a service discovery message, then it is published as published event653to bus624. Bus624may then forward the published event653as event655to the discovery service626. Discovery service626may then be notified of the event through the listener/callback and deserialize the payload to retrieve the service information at step657, e.g., the start offer service or the stop offer service.
Once the offer service is retrieved, discovery service626may update the service instance information in its local database at659or in a cloud-based central discovery service, CDS630, at661. Further, discovery service626may update both its local database and CDS630such that the information contained in both is synchronized. FIG.7shows an exemplary embodiment of a method for service discovery. Method700begins at step705with generating, by a software component in a MCU located within a vehicle, a find service request configured to discover an offered service from one or more systems on a chip (SoC). A MCU may need to utilize a service to obtain information, for example static automotive services data and its associated service capabilities. As discussed, discovery may take the form of a MCU discovering SoCs, while it may also take the form of SoCs discovering MCUs. The SoC may be located within the same vehicle as the MCU, or it may be located outside the vehicle. Software component522may be a piece of software code or application that is being executed, or could be executed, within a MCU, for example, MCU520. The software component may issue a find service request if that service is not currently available within the MCU. Thus, in step705a find service request may be issued to locate a desired service that may be available on a SoC. At step710, a find service message packet is generated in the MCU, where the find service request is inserted into a payload of the find service message packet. As discussed inFIG.5, the discovery service526within MCU520may integrate the find service request533into a payload of a find service message packet, thus generating a find service message packet535. At step715the payload is bound to an automotive/embedded communication protocol that supports remote procedure calls, event notifications and an underlying serialization/wire format.
As the MCU may be located within a vehicle, it may utilize an automotive/embedded communication protocol, for example SOME/IP. At step720the find service message packet is sent to one or more SoC receiving devices. The sending of the find service message packet may be accomplished using multicast, such that the find service message packet is broadcast to multiple SoCs that may be listening on a multicast port. Alternatively, the find service message packet may be sent to a particular SoC using unicast. At step725the one or more SoC receiving devices deserialize a header of the find service message packet to generate a generic publish event. At step730the payload of the find service message packet is deserialized by the one or more SoC receiving devices to retrieve the find service request and determine, based on a local database and a cloud-based database, a corresponding service identifier. As discussed inFIG.5, discovery service516may also contain a database, or registry, that may include a listing of service metadata with corresponding service identifiers within destination device510. Further, as discussed inFIG.6, a central discovery database in the cloud may also exist and contain the same information. Thus, a determination may be made as to whether the desired service is present, along with the associated capabilities of the service. Further, the databases may store static automotive services data and its associated service capabilities. They are not necessarily meant to store "real time" production data. At step735a service directory message may be generated by the one or more SoC receiving devices and, based on service metadata from the corresponding service identifier in the local database or the cloud-based database, the SoC may publish a cloud event including a solicited response.
For example, discovery service516may identify a corresponding service identifier associated with the find service request531and, once located, the discovery service516may generate a service directory message that includes the service metadata from the database. Discovery service516may then build a cloud event containing a header and a payload containing a serialized offer service message, also referred to as a solicited response. Discovery service516may then publish the solicited response as cloud event553to bus514, which forwards the cloud event as published cloud event555to streamer512. At step740the header of the published cloud event is bound to an automotive/embedded communication protocol to generate an event message. For example, inFIG.5, streamer512may serialize and bind the header of the published cloud event555to an automotive/embedded communication protocol, for example SOME/IP, at protocol binding557, creating solicited response event559. At step745the event message is sent to the MCU, wherein an automotive/embedded communication protocol transcoder in the MCU removes the header and directs the solicited response to the software component. For example, solicited response event559may be sent to MCU520and received by run time environment524, which may then send it to transformer+528as message packet561to remove the header at process563, and then to the discovery service526as message packet565, returning the solicited response567that may then be directed back to the software component522, thus completing the discovery request. Method700may then end. The description and abstract sections may set forth one or more embodiments of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims. Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof.
The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof may be appropriately performed. The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance. The breadth and scope of the present disclosure should not be limited by the above-described exemplary embodiments. Exemplary embodiments of the present disclosure have been presented. The disclosure is not limited to these examples. These examples are presented herein for purposes of illustration, and not limitation. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosure. | 26,259 |
11943329 | DETAILED DESCRIPTION Overview Parallel Redundancy Protocol (PRP) using non-overlapping Resource Unit (RU) groupings may be provided. A first computing device may associate to a first Access Point (AP) at a virtual Media Access Control (MAC) address. Next, the first computing device may associate to a second AP at the virtual MAC address. Then data from a data frame may be replicated to a first one or more RUs in a channel. The first one or more RUs may be assigned to the first AP. Data from the data frame may then be replicated to a second one or more RUs in the channel. The second one or more RUs may be assigned to the second AP and may not overlap the first one or more RUs. Both the foregoing overview and the following example embodiments are examples and explanatory only, and should not be considered to restrict the disclosure's scope, as described and claimed. Furthermore, features and/or variations may be provided in addition to those described. For example, embodiments of the disclosure may be directed to various feature combinations and sub-combinations described in the example embodiments. Example Embodiments The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. 
Parallel Redundancy Protocol (PRP) is a network protocol standard for Ethernet that may provide seamless failover against failure of any network component. PRP may be used for applications that cannot withstand packet loss such as industrial internet, smart grids, factory automation, autonomous driving, remote surgery, intelligent transportation systems, power utilities, and manufacturing. Consistent with embodiments of the disclosure, to carry out PRP, redundancy boxes (i.e., redboxes) may be used. A redbox may comprise a switch or a Work Group Bridge (WGB) that may make two copies of each incoming data frame (i.e., replicate) and then send the replicated data frames on two independent paths in a network. One of the PRP replicated packets may be discarded by another redbox at the destination if both of the replicated packets make it to the destination. As will be described in greater detail below, a redbox may make an association with two upstream APs and transmit using two non-overlapping Resource Units (RU) groups respectively corresponding to the two upstream APs. FIG.1shows an operating environment100. As shown inFIG.1, operating environment100may comprise a first computing device105, a first AP110, a second AP115, and a second computing device120. First computing device105and second computing device120may each comprise a redbox that may replicate and discard data frames as described above. First AP110and second AP115may provide wireless access for a client device connected to first computing device105and may operate using the IEEE 802.11 standard for example. First computing device105may comprise a redbox operating in a WGB mode. A WGB may comprise a small stand-alone unit that may provide a wireless infrastructure connection for Ethernet-enabled devices for example. Devices that do not have a wireless client adapter in order to connect to a wireless network may be connected to the WGB through an Ethernet port. 
The WGB may associate to first AP110and second AP115through a wireless interface. Through the WGB, client devices may obtain access to the wireless network. A client device to which the WGB may provide wireless network access may, for example, correspond to an autonomous vehicle in motion or a robot moving about in a factory. The client device may comprise, but is not limited to, a smart phone, a personal computer, a tablet device, a mobile device, a cable modem, a remote control device, a set-top box, a digital video recorder, an Internet-of-Things (IoT) device, a network computer, a mainframe, a router, or other similar microcomputer-based device. The elements described above of operating environment100(e.g., first computing device105, first AP110, second AP115, and second computing device120) may be practiced in hardware and/or in software (including firmware, resident software, micro-code, etc.) or in any other circuits or systems. The elements of operating environment100may be practiced in electrical circuits comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Furthermore, the elements of operating environment100may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to, mechanical, optical, fluidic, and quantum technologies. As described in greater detail below with respect toFIG.4, the elements of operating environment100may be practiced in a computing device400.
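The replicate-and-discard behavior of the redboxes described above can be sketched with sequence numbers. Real PRP tags each frame with a Redundancy Control Trailer and uses a bounded per-source duplicate-detection window; this simplified sketch keeps an unbounded set of sequence numbers, and all names are illustrative.

```python
# Sketch of PRP replicate-and-discard; names are illustrative only.
def replicate(seq, frame):
    # Sending redbox: make one copy of the frame per independent path.
    return [(seq, frame), (seq, frame)]


class RedboxReceiver:
    def __init__(self):
        self.seen = set()  # sequence numbers already delivered

    def receive(self, seq, frame):
        # Deliver the first copy that arrives; discard the duplicate
        # that arrived over the other network path.
        if seq in self.seen:
            return None
        self.seen.add(seq)
        return frame


rx = RedboxReceiver()
copies = replicate(7, "sensor-data")
delivered = [rx.receive(seq, frame) for seq, frame in copies]
```

If one path drops its copy, the other still delivers the frame, which is the seamless-failover property PRP provides.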
As shown inFIG.2A, the media may be divided into time slots along a time axis205and may have a channel width along a frequency axis210. When using OFDMA to provide media access, an AP may partition a channel into smaller sub-channels known as Resource Units (RUs) so that simultaneous multiple-user transmissions may occur. The channel width may comprise, for example, 20 MHz, broken into eight 2 MHz RUs. Each RU may be separated from the next by a few kHz of empty channel, so with the eight 2 MHz RUs and the empty space together, the channel may be 20 MHz. An AP may determine RU allocation for multiple stations for both downlink and uplink OFDMA. In other words, the AP may determine how RUs may be assigned to stations (i.e., user0, user1, user2, and user3) within a given channel. The stations may provide feedback to IEEE 802.11ax compatible APs using, for example, solicited or unsolicited buffer status reports; however, the AP makes the decision in regards to RU allocation for synchronized Uplink (UL)-OFDMA from multiple client devices. FIG.2Bis a diagram illustrating non-overlapping RU groupings. As shown inFIG.2B, a timeslot215may comprise a first RU grouping220and a second RU grouping225. First RU grouping220may comprise, for example, two 2 MHz RUs and second RU grouping225may comprise, for example, two 2 MHz RUs. As will be described in greater detail below, first RU grouping220may comprise a first one or more RUs in a channel assigned to first AP110and second RU grouping225may comprise a second one or more RUs in the channel assigned to second AP115. Embodiments of the disclosure may provide multiple UL RU blocks from a device (e.g., first computing device105) to multiple APs (e.g., first AP110and second AP115) while leveraging IEEE 802.11ax compliant hardware. In doing so, embodiments of the disclosure may abide by the specification and related constraints of off-the-shelf (OTS) commercial chipsets.
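The non-overlapping RU groupings of FIG.2B can be sketched as follows: a channel partitioned into eight RUs, with two disjoint two-RU groups assigned to the two APs. The allocation function below is a hypothetical illustration of the grouping constraint, not an IEEE 802.11ax API.

```python
# Sketch of the FIG. 2B RU groupings; the function is a hypothetical
# illustration of the non-overlap constraint, not a chipset API.
def allocate_ru_groups(total_rus=8, group_size=2):
    ap1_rus = list(range(0, group_size))               # first RU grouping220
    ap2_rus = list(range(group_size, 2 * group_size))  # second RU grouping225
    # PRP requires that the two groups never overlap within the channel.
    assert not set(ap1_rus) & set(ap2_rus)
    assert 2 * group_size <= total_rus
    return ap1_rus, ap2_rus


first_group, second_group = allocate_ru_groups()
```

The remaining RUs in the timeslot stay available for other stations' uplink or downlink traffic.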
As such, embodiments of the disclosure may use existing Trigger Frame (TF) and Multiuser Physical layer Protocol Data Unit (MU-PPDU) structures. Consequently, user devices may be triggered by one AP that allocates the RUs (and associated MCS or data-rate) for its UL transmission. However, one constraint of IEEE 802.11ax may be that a single client device identified by an Association ID (AID) may only be assigned a single RU (unless that client device is part of, for example, a Multicast (MCAST) group). However, embodiments of the disclosure may assign multiple AIDs to a single virtual Media Access Control (MAC) address, for example, in software. In this case, for example, the TF may contain an RU assignment (and associated MCS or data-rate) for AID0, AID1, and AIDn corresponding to the association (AID) at each Basic Service Set Identifier (BSSID). In other words, a virtual MAC address of a device may contain two or more underlying AIDs exchanged between the cooperating APs. On the TF itself, an efficient bandwidth process may be to designate a primary trigger AP and have all remaining APs operate in High Efficiency (HE) Trigger-Based (TB) Uplink (UL) MU-PPDU listen mode using a shared virtual Receiver MAC address (RA) described in more detail below. The selection of a primary AP may be based on a reasonable metric, for example, best uplink Received Signal Strength Indicator (RSSI)/Signal-to-Noise Ratio (SNR) of a frame received, and may be chosen by a PRP redbox (e.g., first computing device105) dynamically for each WGB. Each AP may still associate with the WGB on its per-BSSID AID and may send control information and even non-PRP data using distinct (un-related) RUs as well as legacy Single User (SU) (e.g., IEEE 802.11ac) operating modes.
As the PRP client device moves, the primary AP may be changed and/or the AP Transmit (TX) power and related limits (e.g., OBSS_PD Min/Max, OFDMA Power offset) may be manipulated to maximize the probability of the PRP client device receiving the trigger and being heard by multiple APs. FIG.3is a flow chart setting forth the general stages involved in a method300consistent with embodiments of the disclosure for providing PRP using non-overlapping RU groupings. Method300may be implemented using first computing device105, first AP110, second AP115, and second computing device120as described in more detail above with respect toFIG.1. Ways to implement the stages of method300will be described in greater detail below. Method300may begin at starting block305and proceed to stage310where first computing device105may associate to first AP110at a virtual MAC address. For example, first computing device105may comprise a redbox functioning in a WGB mode in order to provide a client device wireless network access by associating with first AP110. The client device may, for example, correspond to an autonomous vehicle in motion or a robot moving about in a factory. From stage310, where first computing device105associates to first AP110at the virtual MAC address, method300may advance to stage315where first computing device105may associate to second AP115at the virtual MAC address. For example, first computing device105may associate to second AP115while also maintaining its association to first AP110in order to implement PRP consistent with embodiments of the disclosure. In other words, first computing device105may make an association with two upstream APs (e.g., IEEE 802.11ax compliant APs) at the same time. The upstream association to the two APs may be accomplished using the same Service Set Identifier (SSID) on both sides. Associating a single computing device MAC address to two APs on the same SSID may not be permissible with conventional systems.
For example, with IEEE 802.11, a single station (i.e., understood as a single MAC address) may not associate to more than one BSSID. When mobile stations associate with an AP, the AP may assign an AID. The AID may be used for a variety of purposes. Consistent with embodiments of the disclosure, to accomplish dual AP association for PRP, first computing device105may use one virtual MAC address per association with a unique IEEE 802.11ax AID per BSSID. Potentially, first computing device105may associate to as many BSSIDs as there are AP radios in range. However, only two associations are needed for a minimum PRP implementation. Furthermore, transmission of a frame to an AP may require a specific Receiver MAC address (RA), in addition to a Destination MAC address (DA). Consistent with embodiments of the disclosure, first computing device105and its associated APs (e.g., first AP110and second AP115) may negotiate a virtual RA (i.e., virtual MAC address) shared among APs (in addition to each AP's native RA for the BSSID). This process may allow first computing device105to send redundant frames to a single RA on different RU blocks that are received and demodulated by at least two APs at the same time. This may allow PRP to function over a single Wi-Fi redbox radio consistent with embodiments of the disclosure. Because first computing device105may roam to different APs, first computing device105may first request the shared MAC behavior of the upstream APs, which may facilitate the aforementioned dual association. First computing device105may accomplish this by associating to a primary AP (e.g., first AP110). Based on its 11k report for example, first computing device105may request first AP110(i.e., acting as the primary AP) to negotiate virtual MAC address support with a next best or secondary AP (e.g., second AP115) in an extension of the 11k report, association frames, or other exchange processes. 
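The dual association just described — one virtual MAC address (the shared RA) carrying a distinct AID per BSSID — can be sketched as a small mapping. The BSSID strings and AID values below are hypothetical; in practice each AP assigns the AID during association.

```python
# Sketch of one virtual MAC address holding a distinct AID per BSSID;
# BSSID strings and AID values are hypothetical examples.
virtual_mac = "02:00:00:aa:bb:cc"  # negotiated shared receiver address (RA)
associations = {}                  # BSSID -> AID assigned to virtual_mac

def associate(bssid, aid):
    # Each AP assigns its own AID to the same virtual MAC address,
    # giving the device one association per BSSID.
    associations[bssid] = aid

def aid_for_trigger(bssid):
    # A trigger frame from a given AP addresses the device by the AID
    # that AP assigned; None means no association exists at that BSSID.
    return associations.get(bssid)

associate("ap1-bssid", 5)  # primary AP
associate("ap2-bssid", 9)  # secondary AP
```

Although only two associations are needed for a minimum PRP implementation, the same mapping extends to as many BSSIDs as there are AP radios in range.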
The virtual MAC may therefore be carried with first computing device 105 from AP to AP as first computing device 105 roams. In other embodiments, the primary AP may coordinate a virtual MAC address with a secondary AP, for example, through over-the-air Neighbor Discovery Protocol (NDP) messages for each supported SSID where PRP is enabled. First computing device 105 may now have the dual association to both APs (i.e., first AP 110 and second AP 115). Consistent with embodiments of the disclosure, inter-AP communication may be accomplished using, for example, the IEEE 802.11be standard. In other words, inter-AP communications between first AP 110 and second AP 115, including, for example, negotiating the virtual RA or negotiating the assignment of the first one or more RUs to first AP 110 in the channel and of the second one or more RUs to second AP 115 in the channel, may be accomplished using, for example, the IEEE 802.11be standard. Once first computing device 105 associates to second AP 115 at the virtual MAC address in stage 315, method 300 may continue to stage 320 where first computing device 105 may replicate data from a data frame to a first one or more RUs in a channel. The first one or more RUs may be assigned to first AP 110. For example, referring back to FIG. 2B, first RU grouping 220 may comprise the first one or more RUs in the channel assigned to first AP 110. After first computing device 105 replicates data from the data frame to the first one or more RUs in the channel in stage 320, method 300 may proceed to stage 325 where first computing device 105 may replicate data from the data frame to a second one or more RUs in the channel. The second one or more RUs may be assigned to second AP 115, and the first one or more RUs and the second one or more RUs may not overlap. For example, referring back to FIG. 2B, second RU grouping 225 may comprise the second one or more RUs in the channel assigned to second AP 115.
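The dual-association state described above can be sketched as a small data model. This is an illustrative sketch only (the class, field names, and MAC value are assumptions, not taken from IEEE 802.11ax): one client radio keeps a unique AID per BSSID, while both APs share a single negotiated virtual receiver address.

```python
# Hypothetical model of dual AP association for PRP: one radio, one
# shared virtual RA (virtual MAC), and one AID per associated BSSID.
class DualAssociation:
    """Tracks a single client radio associated to multiple BSSIDs at once."""

    def __init__(self, virtual_mac):
        self.virtual_mac = virtual_mac      # virtual RA shared by the APs
        self.aid_by_bssid = {}              # each AP assigns its own AID

    def associate(self, bssid, aid):
        self.aid_by_bssid[bssid] = aid      # one association per BSSID

    def associations(self):
        return len(self.aid_by_bssid)       # >= 2 enables minimum PRP


# Example: associate to a primary and a secondary AP on the same SSID.
assoc = DualAssociation(virtual_mac="02:00:00:00:aa:01")  # assumed value
assoc.associate("AP1-BSSID", aid=5)   # primary AP (first AP 110)
assoc.associate("AP2-BSSID", aid=9)   # secondary AP (second AP 115)
```

Only two associations are shown here, matching the minimum PRP implementation the text describes; the same structure extends to as many BSSIDs as there are AP radios in range.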
Acting as the primary AP, first AP 110 may coordinate with second AP 115 for non-overlapping RU groups that may be used for uplink and downlink communication on first AP 110 and second AP 115. This may be accomplished using several different processes. In one embodiment, static PRP RU allocation may be used where PRP traffic may be identified along with corresponding RU characteristics (e.g., size and occurrence frequency), and the APs may agree on a reserved PRP-specific RU allocation over the air, or through a central allocation function (e.g., at a Wireless LAN Controller (WLC)). In another embodiment, a round robin PRP RU allocation may be used where APs (e.g., during the NDP exchange or through WLC allocation) may agree on a PRP slotting scheme for each AP and each RU. "Border" PRP RU allocation may be used in another embodiment where one AP may allocate the lower RUs of a given channel to PRP traffic while the other AP allocates the upper RUs in a non-overlapping fashion. Consistent with yet another embodiment, WGB-specific PRP RU allocation may be used where one AP may be elected as the primary AP and may make the RU allocation, while the other AP only accepts the RU allocation made by the primary AP. Channel State Information (CSI)-based PRP RU allocation may comprise another embodiment where the result of explicit Multi-User, Multiple-Input, Multiple-Output (MU-MIMO) sounding or implicit measures may be used to select complementary RUs. Complementary RUs may comprise those that may be mathematically de-correlated (e.g., the 2 MHz RUs at the beginning and end of an 80 or 160 MHz channel). From stage 325, where first computing device 105 replicates data from the data frame to the second one or more RUs in the channel, method 300 may advance to stage 330 where a radio associated with first computing device 105 may transmit the first one or more RUs and the second one or more RUs in the channel to the virtual MAC address.
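Of the allocation processes above, the "border" scheme is the simplest to illustrate. The sketch below (the RU indexing and function name are assumptions for illustration) splits a channel's RUs so that the primary AP takes the lower indices and the secondary AP the upper ones, which guarantees the two groups never overlap.

```python
# Hypothetical sketch of "border" PRP RU allocation: lower RUs to one AP,
# upper RUs to the other, yielding provably disjoint groups.
def border_allocate(ru_indices, n_primary):
    """Split a channel's RU indices into two non-overlapping groups."""
    lower = ru_indices[:n_primary]       # assigned to the primary AP
    upper = ru_indices[n_primary:]       # assigned to the secondary AP
    assert not set(lower) & set(upper)   # PRP requires disjoint RU groups
    return lower, upper


# Example: nine 26-tone RUs of a 20 MHz channel, indexed 0-8 (illustrative).
primary_rus, secondary_rus = border_allocate(list(range(9)), 4)
```

The disjointness assertion captures the requirement, stated above, that the first one or more RUs and the second one or more RUs may not overlap.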
For example, first AP 110 (e.g., as the primary AP) may issue a TF allocating RUs (and an associated MCS or data rate) to both AIDs of the same size and rate (or of different sizes and rates if channel metrics indicate each AP might have a different receive RSSI). Then first computing device 105 may make copies of the data frame and send them on the different RUs (as per the TF assignment from the primary AP), in a single UL MU-PPDU frame. Consequently, embodiments of the disclosure may provide PRP using a single source radio (e.g., located with first computing device 105), rather than two radios. Once the radio associated with first computing device 105 transmits the first one or more RUs and the second one or more RUs in the channel to the virtual MAC address in stage 330, method 300 may continue to stage 335 where first AP 110 may receive the first one or more RUs and the second one or more RUs. For example, the first one or more RUs and the second one or more RUs may be received by first AP 110 on a single UL MU-PPDU frame on the channel. The association between first computing device 105 and first AP 110 may comprise a first one of two independent paths in operating environment 100. After first AP 110 receives the first one or more RUs and the second one or more RUs in stage 335, method 300 may proceed to stage 340 where first AP 110 may create a first copy of the data frame from the first one or more RUs. For example, while first AP 110 receives both the first one or more RUs and the second one or more RUs, first AP 110 may use the first one or more RUs to create the first copy of the data frame because the first one or more RUs were allocated to first AP 110. From stage 340, where first AP 110 creates the first copy of the data frame from the first one or more RUs, method 300 may advance to stage 345 where first AP 110 may send the first copy of the data frame to second computing device 120.
For example, after first AP 110 demodulates the UL MU-PPDU signal (potentially receiving the same PPDU on multiple RUs from each AID), the first copy of the data frame may be created from the demodulated signal and forwarded on a first of the two aforementioned independent paths in operating environment 100 towards second computing device 120. Once first AP 110 sends the first copy of the data frame to second computing device 120 in stage 345, method 300 may continue to stage 350 where second AP 115 may receive the first one or more RUs and the second one or more RUs. For example, the first one or more RUs and the second one or more RUs may be received by second AP 115 on a single UL MU-PPDU frame on the channel. The association between first computing device 105 and second AP 115 may comprise a second one of two independent paths in operating environment 100. After second AP 115 receives the first one or more RUs and the second one or more RUs in stage 350, method 300 may proceed to stage 355 where second AP 115 may create a second copy of the data frame from the second one or more RUs. For example, while second AP 115 receives both the first one or more RUs and the second one or more RUs, second AP 115 may use the second one or more RUs to create the second copy of the data frame because the second one or more RUs were allocated to second AP 115. From stage 355, where second AP 115 creates the second copy of the data frame from the second one or more RUs, method 300 may advance to stage 360 where second AP 115 may send the second copy of the data frame to second computing device 120. For example, after second AP 115 demodulates the UL MU-PPDU signal (potentially receiving the same PPDU on multiple RUs from each AID), the second copy of the data frame may be created from the demodulated signal and forwarded on a second one of the two aforementioned independent paths in operating environment 100 towards second computing device 120.
Once second AP115sends the second copy of the data frame to second computing device120in stage360, method300may continue to stage365where second computing device120may receive the first copy of the data frame from first AP110. For example, the first copy of the data frame may be received by second computing device120over the first one of the two aforementioned independent paths. After second computing device120receives the first copy of the data frame from first AP110in stage365, method300may proceed to stage370where second computing device120may receive the second copy of the data frame from second AP115. For example, the second copy of the data frame may be received by second computing device120over the second one of the two aforementioned independent paths. From stage370, where second computing device120receives the second copy of the data frame from second AP115, method300may advance to stage375where second computing device120may discard the first copy of the data frame or the second copy of the data frame. For example, when second computing device120receives redundant copies of the data frame, it may remove one of the redundant copies of the data frame by discarding either the first copy of the data frame or the second copy of the data frame. Accordingly, embodiments of the disclosure may provide PRP over a wireless network using dual non-overlapping RUs sent to different APs on the same SSID where the redundant frames are introduced onto their own independent paths. In some cases, however, second computing device120may fail to receive one of either the first copy of the data frame or the second copy of the data frame. In this situation, second computing device120may use whichever of the first copy of the data frame or the second copy of the data frame it received. Once second computing device120discards the first copy of the data frame or the second copy of the data frame in stage375, method300may then end at stage380. FIG.4shows computing device400. 
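The discard step at stage 375 follows the usual PRP duplicate-elimination pattern: deliver the first copy of each frame seen, and drop any later copy; if only one copy arrives, it is simply used. The sketch below is a simplified illustration (the sequence-number tracking and field names are assumptions, not taken from the PRP specification):

```python
# Hypothetical duplicate-discard at the destination (second computing
# device 120): frames arrive over two independent paths; the first copy
# of each sequence number is delivered and any redundant copy is dropped.
def prp_receive(frames):
    """Keep the first copy of each frame, discard redundant copies."""
    seen, delivered = set(), []
    for seq, payload in frames:          # frames from both paths, in order
        if seq in seen:
            continue                     # redundant copy -> discard
        seen.add(seq)
        delivered.append(payload)        # first (or only) copy -> deliver
    return delivered


# Copies of frame 1 arrive via first AP 110 ("A") and second AP 115 ("B");
# frame 2 happens to arrive over only one path and is still delivered.
out = prp_receive([(1, "data-via-A"), (1, "data-via-B"), (2, "next")])
```

Because the function keeps whichever copy arrives first, it also covers the failure case described above, where only one of the two copies reaches second computing device 120.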
As shown in FIG. 4, computing device 400 may include a processing unit 410 and a memory unit 415. Memory unit 415 may include a software module 420 and a database 425. While executing on processing unit 410, software module 420 may perform, for example, processes for providing PRP using non-overlapping RU groupings as described above with respect to FIG. 3. Computing device 400, for example, may provide an operating environment for first computing device 105, first AP 110, second AP 115, or second computing device 120. First computing device 105, first AP 110, second AP 115, or second computing device 120 may operate in other environments and are not limited to computing device 400. Computing device 400 may be implemented using a Wireless Fidelity (Wi-Fi) access point, a cellular base station, a tablet device, a mobile device, a smart phone, a telephone, a remote control device, a set-top box, a digital video recorder, a cable modem, a personal computer, a network computer, a mainframe, a router, a switch, a server cluster, a smart TV-like device, a network storage device, a network relay device, or another similar microcomputer-based device. Computing device 400 may comprise any computer operating environment, such as hand-held devices, multiprocessor systems, microprocessor-based or programmable sender electronic devices, minicomputers, mainframe computers, and the like. Computing device 400 may also be practiced in distributed computing environments where tasks are performed by remote processing devices. The aforementioned systems and devices are examples, and computing device 400 may comprise other systems or devices. Embodiments of the disclosure, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer-readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process.
The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. While certain embodiments of the disclosure have been described, other embodiments may exist.
Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, floppy disks, or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods' stages may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure. Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to, mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the disclosure may be practiced within a general purpose computer or in any other circuits or systems. Embodiments of the disclosure may be practiced via a system-on-a-chip (SOC) where each or many of the elements illustrated inFIG.1may be integrated onto a single integrated circuit. Such an SOC device may include one or more processing units, graphics units, communications units, system virtualization units and various application functionality all of which may be integrated (or “burned”) onto the chip substrate as a single integrated circuit. When operating via an SOC, the functionality described herein with respect to embodiments of the disclosure, may be performed via application-specific logic integrated with other components of computing device400on the single integrated circuit (chip). 
Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. While the specification includes examples, the disclosure's scope is indicated by the following claims. Furthermore, while the specification has been described in language specific to structural features and/or methodological acts, the claims are not limited to the features or acts described above. Rather, the specific features and acts described above are disclosed as examples of embodiments of the disclosure.
11943330

DETAILED DESCRIPTION It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter. Therefore, the claims should be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter. Although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter. FIG. 1 depicts a block diagram of an optical microtransponder (MTP) sensor system 100 ("system 100") in accordance with some embodiments of the present disclosure. The system 100 comprises an MTP reader 102 and an MTP 104. In some embodiments, the MTP 104 is bonded or adhered via adhesive to an object so as to operate as an identifier for the object. The MTP 104 may be adhered to, implanted within, or otherwise attached to an object 110, which may be any object requiring individual unique identification (ID) data, such as a microscope slide, a test animal or insect, clothing, electronic parts, and the like.
An enlargement of the MTP 104 is depicted in the breakout shown in FIG. 1 to illustrate MTP components of a substrate 160, photo elements 150, and an optical communication circuit 155. The height of the MTP 104 can be, for example, approximately 20 μm-60 μm, dependent on the number of stacked layers and sensors for a particular MTP 104. The MTP 104 may be an integrated circuit which may normally be in a persistent dormant unpowered state until powered on when illuminated with an excitation beam 132 from the MTP reader 102. Upon illumination, the MTP 104 may power on (generally instantly, e.g., in much less than 1 second) and transmit a data beam 133 via light or RF to the MTP reader 102. The data beam 133 in some embodiments may be an emission (e.g., from a light emitting diode (LED)) or, in other embodiments, a reflection/absorption mechanism (e.g., shuttering via LCD). In alternative embodiments, the MTP 104 receives a separate stimulus, such as a code modulated onto the excitation beam 132, which initiates transmission of the sensor data. Alternatively, receiving data from an internal or linked sensor triggers a transmission of the data beam 133. Some embodiments of system 100 may include an onboard power supply such as a battery and/or one or more subsystems powered by the onboard power supply. Such subsystems may include, but are not limited to, volatile memory that may be persisted by battery power, one or more sensors in addition to the photo elements 150, and/or other features. In some embodiments, the excitation beam 132 is a visible focused light or laser beam and the data beam 133 is an infrared light beam emission (e.g., from an infrared emitting diode). The data beam 133 may contain a signal to identify the specific MTP 104 to the MTP reader 102, for example using an identification number unique to the specific MTP 104. Using the unique identification information, the MTP reader 102 may transmit data to a computer (not shown) to uniquely identify the object 110.
In some embodiments, a user operates the MTP reader102to illuminate the MTP104with a light or other electromagnetic signal that causes the MTP104to transmit the data beam133via light or other electromagnetic signal. For example in some embodiments the range of electromagnetic spectrum used by MTP104for this signaling may include one or more subsets of the sub-terahertz portion of the spectrum, including infrared and longer wavelengths. The data beam133is then received by the MTP reader102. The MTP reader102then may decode the data beam133carrying identification data to unambiguously identify the object110. “Laser” shall be defined herein as coherent directional light which can be visible light. A light source includes light from a light emitting diode (LED), solid state lasers, semiconductor lasers, and the like, for communications. The excitation beam132in some embodiments may comprise visible laser light (e.g., 660 nm wavelength). In some embodiments, the excitation beam132in operation may illuminate a larger area than that occupied by the MTP104, thereby allowing a user to easily localize and read the MTP104. In some embodiments, the excitation beam132may comprise other wavelengths of light in the visible and/or invisible spectrum necessary to supply sufficient power generation using photocells of the MTP104. The data beam133may be emitted with a different wavelength than the excitation beam132. For example, the data beam133may be 1300 nm IR light while the excitation beam is 660 nm red light. However, other wavelengths, such as the near-infrared (NIR) band, may be used for optical communication and alternative embodiments may use other communication techniques such as reflective signaling methods to return a modulated data signal to the MTP reader102. 
In some alternative embodiments, the MTP 104 is a microtransponder that comprises an antenna (e.g., an integrated antenna) for communicating ID information to a corresponding reader via radio waves rather than a light-based signal. The clock recovery circuit 106 may extract a clock pulse signal from the received modulated light beam, as described in detail further below with respect to FIGS. 6-8. In one embodiment, the light of the excitation beam 132 is amplitude modulated (e.g., pulsed) at approximately 1 MHz to provide the data clock, which may be used by the MTP 104 for supplying the operation clock pulses, for example, of transmitted ID data bits. The timing of the pulse groups can be set so that the duty cycles and average power levels fall within requirements for registration as a Class 3R laser device. An example MTP, such as the p-Chip, can be a monolithic (single element) integrated circuit (e.g., 600 μm×600 μm×100 μm) that can transmit its identification code through radio frequency (RF). FIG. 2 illustrates a schematic diagram of an example MTP in accordance with some embodiments of the present disclosure. An MTP may include photocells (202a, 202b, 202c, 202d), a clock recovery circuit 206 (e.g., clock signal extraction circuits), a logical state machine 204, a loop antenna 210, and, for example, 64-bit memory (not shown) supporting over 1.1 billion possible ID codes. The photocells, when illuminated by a pulsed laser, may provide power to the electronic circuits on the chip with ˜10% efficiency. The chip may transmit its ID through modulated current in the antenna 210. The varying magnetic field around the chip may be received by a nearby coil in the reader, and the signal may be digitized, analyzed, and decoded. P-Chips may be manufactured on silicon wafers in foundries, using CMOS processes similar to those used in the manufacturing of memory chips and computer processors.
Wafers may receive post-manufacturing treatment including laser encoding, passivation, thinning, and dicing to yield individual p-Chips. The p-Chip surface may be made of silicon dioxide, which is deposited as a final passivation layer. FIG. 3 illustrates a side view representation of an illustrative MTP 104 in accordance with at least one embodiment of the invention. The MTP 104 may comprise a stack of individual integrated circuit layers 300, 302, 304, 306, and 308. The layer 302 may support a protection and passivation layer. The layer 304 may comprise logic, clock, sensor, and transmitter circuits. Layers 306 and 308 may comprise storage capacitors, and layer 300 is the substrate. Those of skill in the art will recognize that the functions of the MTP 104 can be organized into layers of other configurations. For example, the stacking may comprise layers of differing thicknesses uniformly overlaid so that they can be manufactured, for example, in a 3D IC process well known in the art. The MTP 104 may be manufactured using mixed-signal manufacturing technology that is typically used to make sensor electronics or analog-to-digital converters, which comprise both analog and digital devices together. In an example embodiment, each layer is approximately 12 μm thick and 100 μm×100 μm in dimension. In one embodiment, the dimensions of the MTP 104 are 100×100×50 μm. Alternative embodiments may use more or fewer layers, for instance depending on the sensor application. FIG. 4 illustrates a top plan view representation of an illustrative MTP 104. The view depicted in FIG. 4 is of the top layer 302 of FIG. 3. In one embodiment, the top of the layer 302 comprises a transmitting element, such as an LED array 400, that circumscribes the periphery of the MTP 104. In other embodiments, an LED array may be realized as a single LED in the middle of 410 (shown in phantom as LED 420) or another topography for directed light emission. The placement of the LED array 400 depicts an example of an embodiment emphasizing light generation.
Alternative embodiments may include varying topography layouts favoring power harvesting or capturing sensor data and the like. In some embodiments, the LEDs may include focusing lenses or other optics. Centrally located on the top layer 302 is an array 401 of photocells 402, 404, 406, and photoconductor 408. As illustrated, each photocell in array 401 can be physically sized to create power for a particular circuit within the MTP 104, and one can be dedicated to clock/carrier signal extraction as described below with respect to FIG. 6. Photocell 402, the largest in area, produces a voltage Vdd (in some embodiments, a negative voltage, Vneg) for operating the output transistor 416 to drive the electronic radiation transmitter (realized in some embodiments as an LED in the optical communication circuit 155). Photocell 404 produces a positive voltage for the logic/sensor circuits 410, and photocell 406 produces a negative voltage, Vneg, for the logic/sensor circuits block 410. Photoconductor 408 is used to extract clock pulses, e.g., for operating the logic/sensor circuits 410. As illustrated, the power cells are coupled to capacitors, for example, in layers 306 or 308, for storing the energy produced by the photocells when illuminated by laser light. In some embodiments, energy extracted from the clock photoconductor 408 is applied to a differentiator (described below with respect to FIG. 6) which extracts clock edges that are amplified and used to provide timing signals to the logical and sensing circuits. As illustrated, a plurality of identification fuses 418 is located on the surface 414. By opening select ones of these fuses, the MTP 104 is provided a unique identification code range beyond a default base page of code values that may be hard-coded into the chip logic. In an alternative embodiment, the ID values may be electronically coded using electronic antifuse technology. Further still are embodiments with electronic memory for data, signal processing, and identification storage.
FIG.5depicts a functional block diagram of an illustrative MTP104in accordance with at least one embodiment of the invention. The MTP104may comprise the photo elements150, energy storage504, clock/carrier extraction network506(i.e., clock recovery circuit106), sensors508, logic510, transmit switching circuit512, and a transmitting device155such as an IR LED. The photo elements150can include dedicated photocells such as the clock extraction photoconductor408, the energy harvesting photocell array404,406, and the transmit photocell402. The energy harvesting photocell array404and406may be coupled to energy storage504and may comprise photovoltaic cells which convert light energy from illumination into an electrical current. The clock photoconductor408, which is part of the clock recovery circuit and can be physically located in different places from the recovery circuits, may detect a clock pulse signal for the clock/carrier extraction circuit506. In some embodiments, the energy storage504is a plurality of capacitors having at least one capacitor coupled to a photocell of the photocell array404,406. The energy stored in the energy storage unit504may be coupled to the electronic circuits. Since the laser light is pulsed, the energy from the laser may be accumulated and the MTP104may operate on the stored energy. Unlike the photocell array404and406, in some embodiments the energy of photocell402is not stored and the transmitter switching circuit512via output transistor416can “dump” all of its energy into the transmit element155. As the received laser pulse energy is extracted by the clock/carrier extraction circuit506, the logical state machine (i.e., logic510) may form data packets comprising the ID bits and sensor data and provide these to the transmit data switch512for the formation of the optical transmission signal. The logic510may directly integrate the sensor and ID signal(s) into a composite data frame of the OOK (on-off keyed) emitter. 
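As a rough illustration of the composite data frame described above, the sketch below (the preamble pattern, field widths, and PWM scheme are assumptions for illustration, not the chip's actual framing) assembles a sync preamble, the ID bits, and a sensor reading encoded in the time domain as a pulse width for on-off keying:

```python
# Hypothetical OOK composite frame: preamble + ID bits + PWM-encoded
# sensor value. A 1 drives the emitter on for one clock; a 0 leaves it off.
def make_frame(chip_id, sensor_level, id_bits=16, pwm_max=8):
    preamble = [1, 0, 1, 0]                           # assumed sync pattern
    ident = [(chip_id >> i) & 1 for i in range(id_bits - 1, -1, -1)]
    width = max(1, round(sensor_level * pwm_max))     # analog -> pulse width
    pwm = [1] * width + [0] * (pwm_max - width)       # no ADC required
    return preamble + ident + pwm


# Example: a 16-bit ID with a mid-scale sensor reading (values illustrative).
frame = make_frame(chip_id=0xBEEF, sensor_level=0.5)
```

Encoding the analog quantity as a pulse width, as here, is what lets the chip skip a conventional analog-to-digital converter: the reader recovers the value by timing the on-period against the recovered clock.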
The modulation symbols may be applied to the transmitter 512 and transmitted with each pulse of energy. The sensor(s) 508, if present, can comprise one or more sensors for measuring, for example, biological cell or physiological characteristics. Any analog data from the sensor(s) 508 may be converted into a pulse width modulated signal or another binary signaling method that encodes the analog quantity in the time domain in a manner suitable for pulsing the IR emitting diode for direct transmission to the MTP reader 102, without the need for traditional, power- and area-intensive analog-to-digital conversion techniques. Example sensors include, but are not limited to, a dielectric sensor, a proportional to absolute temperature (PTAT) sensor, a pH sensor, a redox potential sensor, and/or a light sensor.

Clock Recovery Circuit

FIG. 6 is a schematic diagram of a clock recovery circuit 506 in accordance with one or more embodiments of the present invention. The clock recovery circuit 506 may comprise a photoconductor 602 (shown in detail in FIG. 7) having a resistance R1 that varies as a function of received light intensity, a reference resistor 604 having a fixed resistance R2, an amplifier 606, and an inverter 608. A source terminal of the photoconductor 602 is coupled to a first terminal of the resistor 604 at a node A. Node A is coupled to the input of the amplifier 606, and the output of the amplifier 606 is coupled to the inverter 608, which generates the recovered clock signal at its output. The series combination of the photoconductor 602 and the resistor 604 forms a voltage divider R that is coupled between a voltage VDD and ground. Specifically, in this embodiment, a drain terminal of the photoconductor 602 is coupled to the voltage VDD from the energy storage 504, which sustains the voltage when the illumination is off, and the second terminal of the resistor 604 is coupled to ground.
Since the resistance R1 of the photoconductor 602 varies as a function of received light intensity, and the voltage at node A is determined by the ratio of the resistances R1 and R2, a modulated light input incident on the photoconductor 602 produces a modulated voltage signal at the input of the amplifier 606. In some embodiments, a coupling capacitor 610 is added in front of the amplifier 606. The voltage divider R and the coupling capacitor 610 form a differentiator which may extract clock edges when the modulating frequency is as low as a few kilohertz (at approximately 1 MHz or above, this may not be necessary). The inverter 608 digitizes the analog output of the amplifier 606, resulting in an example digital waveform as shown in FIG. 8. FIG. 8 illustrates a timing diagram of the light intensity and the voltage signal at each node of the clock recovery circuit 506 with a coupling capacitor of FIG. 6. FIG. 7 illustrates a cross-section view of an example photoconductor 602 in accordance with some embodiments of the present invention. In some embodiments, the size of the photoconductor 602 can be 5 μm×5 μm or larger. As illustrated in FIG. 7, the photoconductor 602 may employ a long-channel n-MOSFET in an isolated deep n-well bucket. The n-wells and the deep n-well (D-nwell) may completely seal the p-well, in the p-substrate, and the transistor components, i.e., the source, drain, and gate, which are confined in the bucket. The gate layer, for example made from polysilicon material, may be disposed on top of an insulating layer, such as silicon dioxide (SiO2). The polysilicon material spectrum-wise absorbs shorter wavelength light, such as blue light, but passes longer wavelength light, such as red light. When using an excitation beam 132 having a longer wavelength, such as a red light beam, the polysilicon material filters and blocks the shorter wavelengths and passes the long wavelength. As such, it suppresses shorter wavelengths.
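The divider behavior described above can be checked numerically: node A sits at VDD·R2/(R1+R2), and because illumination lowers R1, a modulated beam swings node A across the digitizing threshold. The component values and threshold below are illustrative assumptions, not taken from the disclosure:

```python
# Numeric sketch of the photoconductor/resistor divider and its
# digitization: light lowers R1, raising the node-A voltage above the
# threshold; darkness raises R1, dropping node A below it.
def node_a_voltage(vdd, r1, r2):
    return vdd * r2 / (r1 + r2)             # resistive divider at node A


def recover_clock(light_samples, vdd=1.8, r_dark=1e6, r_lit=1e4,
                  r2=1e5, v_th=0.9):
    bits = []
    for lit in light_samples:               # amplitude-modulated beam
        r1 = r_lit if lit else r_dark       # illumination lowers R1
        v_a = node_a_voltage(vdd, r1, r2)
        bits.append(1 if v_a > v_th else 0) # amplifier/inverter threshold
    return bits


# Example: a 1-0-1-0-1-1 light pattern is recovered as a digital clock.
clock = recover_clock([1, 0, 1, 0, 1, 1])
```

With these illustrative values, node A swings between about 1.64 V (lit) and 0.16 V (dark), comfortably straddling the 0.9 V threshold, which is how the modulated light intensity becomes the digital waveform shown in FIG. 8.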
For example, a room light (e.g., a fluorescent lamp) that flickers at 60 Hz may produce some interference or noise having more spectrum in the shorter wavelength (blue wavelength) range, and the polysilicon material effectively blocks the flickering from the room light and only passes the desired energy beam (e.g., the red light). Further, the photoconductor602(which may also be referred to as a photoresistor) allows the clock recovery circuit506to function under both low illumination and high illumination conditions, in contrast to photodiode-based clock recovery circuits. For example, under sufficiently high illumination, excessive flooding charges in a photodiode cannot be sufficiently discharged, resulting in the malfunction of a photodiode-based clock recovery circuit. In contrast, the photoconductor602can be operated in current mode and may be less affected by the high illumination flooding phenomenon since photo charges are drained constantly by the electric field in the photoconductor602. Additionally, the deep n-well bucket of the photoconductor602is isolated such that the n-wells physically form a potential barrier that prevents charges generated outside of the bucket from entering it, ensuring that only those photons arriving inside the bucket can contribute to the conductivity of the photoconductor602. As such, the excessive photogenerated charges during high illumination, which may result in malfunctioning of photodiode-based clock recovery circuits, are suppressed in the clock recovery circuit506. Additionally, this FET device may have a very small physical footprint. The inverter608can comprise a static CMOS inverter device comprising an NMOS and a PMOS transistor and having two states, either high or low. If the inverter input is above a reference voltage, it is considered high; if it is below the reference voltage, it is considered low; the output is then the inverse of the input.
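The divider-and-threshold behavior described above can be sketched numerically. The following is a minimal model in which all component values, the light-dependent resistances, and the switching threshold are illustrative assumptions, not values from the embodiments:

```python
# Sketch of the clock recovery path: light-dependent resistance R1 and fixed
# R2 form a divider from VDD to ground; an inverter-style threshold on node A
# digitizes the modulated voltage. All values are illustrative assumptions.

VDD = 1.0          # supply sustained by the energy storage (V)
R2 = 100e3         # fixed reference resistor (ohms)
V_THRESH = 0.5     # switching threshold of the digitizing stage (V)

def photoconductor_resistance(light_on):
    """R1 drops sharply under illumination (hypothetical values)."""
    return 50e3 if light_on else 1e6

def node_a_voltage(light_on):
    r1 = photoconductor_resistance(light_on)
    return VDD * R2 / (r1 + R2)   # divider ratio sets the node A voltage

def recovered_clock(light_sequence):
    """Threshold the modulated node A voltage into one clock bit per cycle.

    Polarity is chosen for illustration; the actual sign depends on the
    amplifier and inverter stages.
    """
    return [1 if node_a_voltage(s) > V_THRESH else 0 for s in light_sequence]

print(recovered_clock([True, False, True, False]))  # -> [1, 0, 1, 0]
```

A modulated light input thus yields a matching recovered clock stream, which is the behavior the inverter's digitization provides.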
The static CMOS inverter can also act as an analog amplifier, as it has a sufficiently high gain in its narrow transition region to amplify the signal, enabling the clock recovery circuit506to have a very small footprint. In instances where the extracted clock pulse is extremely low, amplification by the amplifier606may not be sufficient to reach the threshold voltage for flipping the logic state; in these instances, the inverter608can further boost the overall amplification to reach its threshold. The clock recovery system can be applied to MTPs that signal out with RF and those that signal out with light (e.g., via an LED), such as described in U.S. Ser. No. 14/631,321, filed Feb. 25, 2015.
Reverse Antenna System
Each p-Chip may have a unique serial number or identifier (ID) programmed into it, with no duplicate IDs, and p-Chips may be read by a MTP reader (e.g., a wand). A MTP reader may be a hand-held device, connected to a standard Windows PC, laptop or tablet, that is used to read the MTP and is capable of reading the serial number or ID of individual p-Chips.
FIG.9illustrates a functional block diagram of a MTP reader in accordance with some embodiments of the present disclosure. As illustrated inFIG.9, an example MTP ID reader may be USB-powered and may include a USB 2.0 transceiver microcontroller, a field programmable gate array (FPGA), power converters and regulators, a laser diode with a programmable current driver, an optical collimation/focusing module, and a tuned air coil pickup with a high-gain, low-noise differential RF receiver. The laser emits, for example, an average of 60 mW of optical power, modulated at 1 MHz at a 658 nm wavelength, when reading a p-Chip identifier (ID). The ID is read when the p-Chip is placed within suitable proximity (e.g., <10 mm) of the reader. The p-Chip generated waveform is compared to the data clock (Laser Modulation) used for the synchronization of the transmitted ID data bits.
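The comparison of the p-Chip waveform against the laser modulation clock can be sketched as a per-cycle sampler. This is a hypothetical illustration of the synchronization idea, not the reader's actual FPGA logic; the threshold and sample counts are assumptions:

```python
# Hypothetical sketch: recover ID bits by integrating the RF response over
# each laser modulation cycle, using the known data clock for alignment.

def sample_bits(rf_samples, samples_per_cycle):
    """Average RF magnitude over each modulation cycle and threshold it."""
    bits = []
    for i in range(0, len(rf_samples), samples_per_cycle):
        cycle = rf_samples[i:i + samples_per_cycle]
        energy = sum(abs(s) for s in cycle) / len(cycle)
        bits.append(1 if energy > 0.5 else 0)  # illustrative threshold
    return bits

# Two cycles of strong RF response, then one weak cycle:
print(sample_bits([0.9, 0.8, 0.9, 0.7, 0.1, 0.2], 2))  # -> [1, 1, 0]
```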
The resulting ID readout from the p-Chip is rapid (<0.01 s) and is reported on the PC or tablet. The MTP ID reader may be able to read p-Chips under challenging conditions, such as through a sheet of white paper, blue-colored glass (˜1 mm thick), or a sheet of transparent plastic laminate. Other MTP readers have been developed (e.g., an instrument for reading IDs with the p-Chip in a fluid). Another version can be a battery operated Bluetooth reader that can be used with a PC or cell phone. Some embodiments may provide efficient means of increasing the signal strength emitted by these small MTPs. The p-Chip data may be transmitted using a data coding that results in one third to two thirds of the transmitted bits having a value of one. The average for all IDs may be half of the data having a value of one. A “1” digital signal is transmitted, for example, with the laser on, and a “0” digital signal is transmitted with the laser off (the photocell's stored energy provides a small amount of energy to be transmitted). The signal power tracks the ratio of ones to zeros in the data. Some embodiments may transmit a “1” digital signal the same as it currently is transmitted, but a “0” digital signal is transmitted with the laser ON with the current flowing in the opposite direction of the current for a “1” digital signal. This results in all IDs being transmitted with the same power. Data may be transmitted whenever the laser is on. This may result in twice the power in the transmitted signal (6 dB more signal in the receiver, on average). The method may result in easier signal processing and easier differentiation of ones and zeros. This can lead to a MTP ID reader with a greater read distance and simpler processing. For example, the p-Chip® MTP may be queried with a light flashing at 1 MHz with a 50% duty cycle. This may be accomplished with a laser or a focused LED, or the like.
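The contrast between the on-off coding and the reversed-current coding can be sketched with normalized antenna currents; the residual "0" amplitude and the exact levels are illustrative assumptions:

```python
# On-off coding vs. reverse-antenna (bi-phase) coding, normalized units.

def on_off_coding(bits):
    """Prior scheme: a '0' carries only a small residual (laser off)."""
    return [1.0 if b else 0.05 for b in bits]  # 0.05 ~ stored photocell energy

def reverse_antenna_coding(bits):
    """New scheme: a '0' is the same current driven in the opposite direction."""
    return [1.0 if b else -1.0 for b in bits]

bits = [1, 1, 0, 1]
old, new = on_off_coding(bits), reverse_antenna_coding(bits)

# Every bit now carries full amplitude, and the separation between a '1'
# and a '0' doubles (0.95 vs. 2.0 here), easing differentiation at the reader.
print(max(old) - min(old))  # -> 0.95
print(max(new) - min(new))  # -> 2.0
```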
FIG.10Aillustrates in simplified form how a string of “1101” is transmitted under an old system, andFIG.10Billustrates in simplified form how a string of “1101” is transmitted under a reverse antenna system described herein. For each off/on cycle, such as c1, c2, c3 or c4 ofFIGS.10A-10B, the MTP ID reader seeks a radio signal identifying a “1” digital signal or “0” digital signal transmission. As shown in simplified form, for the first illustrative MTP output ofFIG.10Aillustrating a prior art system, zeros are transmitted when the light source is off. However, the photo cell capacitance used to transmit the zero is limited. In fact, this limited signal denotes a “0.” The limited energy applicable to zero means that the signal-to-noise ratio (SNR) at the MTP reader is constrained by the SNR for the zero. This means that while in principle the “1”s can be read at a significantly greater distance, the MTP signal may only be read at the shorter distance applicable to the “0” components of the signal. A method is provided herein that includes reversing the direction of the current in the RF output antenna to transmit a “0” digital signal so as to use substantially the same current for the “1” digital signal and the “0” digital signal (seeFIG.10B). In some embodiments different fromFIG.10B, any given bit (“1” or “0”) or digital signal in the p-Chip® MTP may be transmitted within 8 consecutive light cycles. One means of reversing the antenna current is to use a switching circuit such as an H-bridge.FIG.11Ashows one example diagram of reversing the direction of antenna operation in accordance with some embodiments of the present disclosure. As shown inFIG.11A, an antenna10may be operated by a voltage source Vin and an H-bridge20. Selectively closing switches S1and S4may direct a current through the antenna10in the direction indicated by the arrows. Selectively closing switches S2and S3may direct a current through the antenna10in an opposite direction.
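The H-bridge drive ofFIG.11Acan be summarized as a mapping from each transmitted bit to a diagonal pair of closed switches. The mapping below follows the figure description; the signed current direction is an illustrative convention:

```python
# Hypothetical H-bridge drive: closing S1 and S4 directs current one way
# (a '1'), closing S2 and S3 reverses it (a '0'). Switches on the same leg
# must never be closed together, so each bit selects a diagonal pair.

def h_bridge_state(bit):
    """Return the closed-switch pair and a signed current direction."""
    if bit == 1:
        return ("S1", "S4"), +1
    return ("S2", "S3"), -1

for b in (1, 0):
    closed, direction = h_bridge_state(b)
    print(b, closed, direction)
```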
FIG.11Bshows another example diagram of reversing the direction of antenna operation in accordance with some embodiments of the present disclosure. Another means of reversing the antenna current is to use two switches, such as S1A and S2A inFIG.11B, and two antennas (e.g.,10A,10B). Selectively closing switch S1A may direct a current through the antenna10A in one direction indicated by the arrow. Selectively closing switch S2A may direct a current through the antenna10B in an opposite direction. If S1A is selectively closed, current moves in direction D1. If S2A is selectively closed, current moves in direction D2, opposite the direction D1. The antennas may be formed in separate metal layers, or on the same layer. Only one FET (S1A or S2A) may be closed at any given time. When either FET is turned on, a reverse current may be coupled into the other antenna. The body diode of the off FET may provide a current path for the coupled signal. In some embodiments, the antenna options described herein may be effected in a monolithic integrated circuit. In some embodiments, the monolithic integrated circuit may be sized about 2 mm×2 mm×0.2 mm or less in thickness. In some embodiments, the signal strength for a MTP incorporating the above-described bi-phase transmission is increased by about 6 dB. This will increase the reliable read distance of a MTP reader. In some embodiments, the number of cycles committed to transmitting a one bit is 8 data periods. Each laser cycle is one data period. Every time the number of data periods is doubled, there is a signal processing gain of 3 dB. Eight data periods is 3 doublings (2, 4, 8). This results in a signal processing gain of 9 dB. If the number of repeats is increased from 8 to 64 (2, 4, 8, 16, 32, 64) or 128 (2, 4, 8, 16, 32, 64, 128), the signal processing gain may increase from 9 dB to 18 dB (for 64 repeats) or 21 dB (for 128 repeats).
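The arithmetic above (3 dB of processing gain per doubling of data periods, and the resulting read-rate tradeoff) can be checked in a few lines. The read-rate formula is a simplifying assumption (64 data cells, one data period per laser cycle, no overhead):

```python
import math

def processing_gain_db(repeats):
    """3 dB per doubling of repeated data periods: 10*log10(repeats)."""
    return 10 * math.log10(repeats)

def reads_per_second(laser_hz, data_cells=64, repeats=8):
    """Idealized rate: one ID occupies data_cells * repeats laser cycles."""
    return laser_hz / (data_cells * repeats)

print(round(processing_gain_db(8)))     # -> 9 (dB), for the current 8 repeats
print(round(processing_gain_db(64)))    # -> 18 (dB)
print(round(processing_gain_db(128)))   # -> 21 (dB)
print(round(reads_per_second(1e6)))     # -> 1953 IDs/s, i.e. roughly 2,000
print(round(reads_per_second(1e6, repeats=128)))  # -> 122 IDs/s
```

Under this idealization, the 128-repeat rate comes out near the approximately 128 reads per second noted in the text; the difference reflects overheads this sketch ignores.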
The current p-Chip, using a repeat of 8 times for its 64 data cells when using a laser at 1 MHz, may transmit IDs at a rate of 2,000 per second. By increasing the repeat rate to 128, the read rate may decrease to 128 reads per second with a signal gain of 21 dB. This can result in an increased read distance. The laser rate may be increased or decreased (e.g., in a range of 500 kHz to 5 MHz). The repeat rate may be controlled by selecting one of 8 repeat rates (3 additional memory bits).
Security Inlay
MTPs may also be used to implement security features. These can be MTPs that signal out with RF, or those that signal out with light. Such security features are enhanced where a MTP cannot be removed from its secured object without its MTP function being destroyed. Example objects that may need such security features are bottles of high-end wine. Wine is used herein as an example object to facilitate illustration and explanation of the security inlay structure and function, but, as noted above, the security inlay is not limited to use with wine bottles. Provided herewith is an inlay containing a MTP that may be designed to break the MTP when a tape or foil seal is disrupted. In some embodiments, a light-triggered transponder may be utilized in a security inlay for security purposes. For example, a security inlay may provide a reliable method to authenticate wine. In the wine industry, the cork or stopper may be sealed with a capsule or foil designed to not allow the stopper to be removed without peeling the capsule. This provides a certain measure of security. However, for high end wines it can be worthwhile to the unscrupulous to acquire the equipment to replicate a capsule. There can be additional wax seals, but these have the same defect as the monetary value of counterfeiting rises.
An example security inlay may include: (a) a bottom inlay segment; (b) a top inlay segment configured to fit and be disposed on the bottom inlay segment; and (c) a light-triggered transponder with a top and a bottom side disposed between the two inlay segments, with the bottom side glued to the bottom inlay segment and the top side glued to the top inlay segment. The security inlay is configured so that a separation of the top inlay segment from the bottom inlay segment breaks the light-triggered transponder such that it cannot be read.
FIG.12illustrates the security inlay as fitted to a wine bottle. As illustrated inFIG.12, the inlay10is shown under the capsule20of a wine bottle22.FIG.13is a cross-section view of an example security inlay design in accordance with some embodiments of the present disclosure. As illustrated inFIG.13, the inlay10is composed of two parts, top10A and bottom10B, with a MTP18mounted in between. The parts can be made of transparent or partially transparent plastic by one of several technologies, such as 3D printing, molding or pressing heated-up plastic. In some embodiments, a specially prepared MTP18, easy to mechanically break, is used. For example, the MTP's structural integrity may be reduced by a notch12on the back of the MTP or by making the MTP very thin (e.g., about 10 to about 30 microns). The MTP may be glued to the inlay so as to assure breakage. The adhesion spots can be nonsymmetrical, as illustrated, to assure uneven forces when the top inlay portion is separated from the bottom inlay portion. As shown inFIG.13, one half of the MTP may be glued (glue16) to the inlay bottom10B, and the other half to the inlay top10A. Grooves can be made in both the inlay top and bottom to accommodate glue. In one embodiment, as illustrated inFIG.13, the bottom inlay segment has a bottom groove to accommodate glue to adhere to the bottom side of the light-triggered transponder.
The top inlay segment has a top groove to accommodate glue to adhere to the top side of the light-triggered transponder. The two inlay halves may be maintained in place by weaker elements, such as mechanical fit (including slight notches and corresponding bulges) or droplets of weak glue appropriately placed, such as to the perimeter of the inlay (between the halves). The inlay design assures that when the two inlay halves are pulled apart (when the capsule is removed from the bottle), the MTP breaks and no longer functions electrically. Should a would-be counterfeiter cut the capsule around the inlay, the glue26may be selected to resist solvent washout (such as by being polymerized). The glue26may also be applied in a clean pattern that can be visualized by human eyes or imaging equipment. A glue pattern may be on the top most vulnerable surface, or both the top and bottom can have glue patterns. With features such as this, attempts to recycle the inlay will be visually detectable. At the same time, the inlay and the MTP inside may be mechanically stable and easy to manipulate by hand or robotically, so long as handled appropriately. FIG.14shows a blown-up view of an example security inlay as fitted to a wine bottle in accordance with some embodiments of the present disclosure. As illustrated inFIG.14, the inlay10, which may appear similar to a thin button, may be glued to both the stopper/cork24and the capsule20with glue26. If the bottled wine is original, the MTP ID can be read with a custom ID reader (e.g., wand) or cell phone based attachment, covering, or application, for example. However, the removal of the capsule from the bottle (before the wine bottle is opened) splits the inlay into two parts and at the same time permanently damages the MTP placed inside the inlay10. The MTP may no longer be read. Thus, validation for the bottle of wine is no longer possible. The size of the inlay may be selected to cover all or most of the top surface of the stopper24. 
In some embodiments, the inlay spans the opening of the wine bottle. The would-be counterfeiter may not be able to dig the inlay out without disabling the MTP. When the bottle is properly opened, the top10A peels off with the capsule. The bottom does not materially interfere with the use of a corkscrew. In some embodiments, the bottom is made still thinner to further facilitate use of a corkscrew. A wine manufacturer may receive the inlays from a dedicated factory. The inlay may be glued by the wine manufacturer to the cork, and glued to the capsule. Gluing may be serially conducted, or the glue can be pre-placed on the top and bottom of the inlay. Glue may set by any number of mechanisms, including photo-polymerization (since the inlay in some embodiments is at least semi-transparent), chemical setting, oxidative radiation, and/or other techniques. The capsule may be pressed over the inlay to make sure that the inlay is properly glued. Alternatively, capsule makers may pre-glue the inlay to the inside of the capsule. Then the wine manufacturer may glue the inside center part of the capsule to the cork. This may be accomplished by having an inlay in the capsule pre-treated with glue (possibly protected with a removable plastic wrap). In this situation, the only thing the manufacturer would need to do to authenticate wine is to remove the wrap before placing the inlay-capsule on the wine bottle. If the capsule is transparent, the MTP can be read immediately. If a non-transparent capsule is used, an opening may be made in it to read the MTP in the inlay. The opening may be small, such that the inlay10may still be well glued to the capsule. In some embodiments, the capsule top includes a metal foil except for a small window to allow for querying by the MTP photodetector. The window may be covered with a clear plastic coating. In some embodiments, the capsule is a laminate of an opaque material and clear material, with the opaque material missing in the window. 
In some embodiments, the MTP may be larger than that sold as the p-Chip® transponder, possibly in one dimension. This size may assure good unsymmetrical adhesion to the top and bottom inlay portions. Authentication is possible over the whole chain of custody, from wine manufacturer through the distribution chain to the customer. At every step, reading of the MTP ID may validate wine authenticity. If needed, a connection to a central wine database may be made over the internet; the MTP ID is provided to the database and recorded in it with a time stamp and the identity of the MTP reader device. Thus, if proper arrangements are made, the data provider may maintain a history of the bottle of wine. If the final customer wants to check wine authenticity, several approaches may be possible. First, the fact that the vendor may read the ID in front of the customer gives a reassurance. Second, the vendor may search a database and present the history of the bottle to the customer. Third, the customer may enter the MTP ID, use an app on his/her smartphone, and obtain the history of the bottle. Fourth, if the customer has his/her own ID reader, the customer may verify the information himself/herself. Thus, provided is a reliable method to authenticate wine or other objects. The disclosed security inlays may be resilient to manipulations involving whole inlays, extremely sensitive to separation of the halves, easily installed, and not noticeable in most situations. While the invention has been exemplified with wine bottles, it can be used with any container sealed with a capsule or tape such that the inlay-containing part of the capsule or tape must be separated from the container. Such uses may include bottles containing pharmaceutical drugs, perfume bottles, or like bottles. Other uses may include labels or other elements placed on or incorporated in plastic, metal, and/or composite materials, including Consumer Packaged Goods (CPG).
For shipping boxes, the tape may be adhesive enough that it cannot be removed without marring the base material of the box, e.g., cardboard. Likewise, labels may be adhesive enough that they cannot be removed without damaging the labels and/or the underlying containers. Where a wine bottle uses a screw top closure, such as a Stelvin® closure, the security inlay may be attached to the bottle on the side, under the screw threads and under the capsule. In some embodiments, the inlay bottom may have a curved bottom shape to match the neck of the bottle. In some embodiments, the capsule may be glued to the wine neck in the region of the security inlay. A capsule may mean a tight fitting metal or plastic foil that forms part of the closure of an object such that the object may not be opened without disrupting the capsule. A laminate is a bonding, fusing, adhesion, or the like between polymer layers, or between polymer and fabric layers, such that in the range of anticipated use the laminate is a unitary structure. The disclosure described herein is of a MTP with signal transmission enhancements and methods of forming or using the same.
Monolithic Security Feature Containing MTP
Monolithic security features may be created by casting, embedding or incorporating MTPs into a substrate via additive manufacturing processes. Such security features can also be made by attaching MTPs to the substrate after they are formed. Monolithic security features may be designed to transport the MTP to or across an external feature whose structure and composition causes the MTP to crack or to be permanently disabled in some other way. As an example, a MTP may be embedded in a heat shrinkable tube that seals a twist cap. The MTP may be deposited such that, as the twist cap is unscrewed, the MTP encounters a ramp or wedge or other structure on the container.
The heat shrinkable substrate may be designed to deform while passing the structure, but not fully absorb or dissipate the increasing forces from the structure. As the MTP encounters and moves over the structure, the resulting force may break the MTP or a MTP subcomponent, thereby rendering it incapacitated.
Multiple MTP Indexed Security Feature
The present invention may use authentication of multiple microtransponders, or combinations of microtransponders and taggants (e.g., QR codes, barcodes, RFID tags, etc.), as matched pairs to establish a higher level of security. All taggants must be present and readable to validate the contents. The taggants may be placed next to one another or at different locations on the surface of the object or within the object, and/or at least two different types of security markings can be combined to form a compounded security marking. Failure of any microtransponder or other taggant to respond may indicate non-authentic contents. At least one microtransponder in the multi-level indexing sequence may be a fragile chip that may be rendered physically unable to respond when the container is initially opened. Fragile chips can be produced by post-fabrication processing, i.e., thinning of the chip substrate to ensure it breaks when it is bent or when removal from the substrate is attempted. In some embodiments, a method for ensuring chip incapacitation may be implemented by designing a fracture plane or cutting a slot into the chip to disconnect the antennae. In one embodiment, a physical object (e.g., a container) may be attached with chip A and chip B, which form a legitimate pairing when both signals respond to interrogation. In one embodiment, if a physical object is only attached with chip A and chip B is not physically present for interrogation by the reader, a reader may not authenticate this product, as the database needs a response from both chips.
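The matched-pair rule described above can be sketched as a lookup against a store of legitimate pairings. The pair store and function names below are illustrative assumptions, not an actual database schema:

```python
# Hypothetical matched-pair validation: every taggant must be present and
# readable, and the responding IDs must form a registered legitimate pairing.

LEGITIMATE_PAIRS = {frozenset({"chipA", "chipB"}), frozenset({"chipC", "chipD"})}

def authenticate(responding_ids):
    """Fail if any chip is missing/incapacitated or the pairing is unknown."""
    if len(responding_ids) != 2:
        return False          # a broken or absent chip cannot respond
    return frozenset(responding_ids) in LEGITIMATE_PAIRS

print(authenticate(["chipA", "chipB"]))  # True  -- legitimate pairing
print(authenticate(["chipA"]))           # False -- chip B absent or broken
print(authenticate(["chipA", "chipC"]))  # False -- not a registered pairing
```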
If the physical object has both chip A and chip B present, but chip B is broken on opening, the reader may not authenticate this product, as chip B is incapacitated. In one embodiment, similar to the example of the physical object with chip A and chip B, the physical object may have a different pairwise or legitimate pairing indexing via chip C and chip D. While the pairing of chip C and chip D may be legitimate, it may be unique and not equal to the pairing of chip A and chip B. If counterfeiters acquire chips A and C and add them to their packages, the reader may be unable to authenticate the chips, as chip A and chip C do not constitute a legitimate pairing.
Light-Triggered Microtransponder (MTP) with Durable Self-Destructive Super Anchors
Physical Unclonable Functions (PUFs) have been identified and may be adopted as a key element in physical and digital anti-counterfeit and authentication systems. A PUF is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict, even for an attacker with physical access to the PUF. The key element of a PUF is the use of natural and randomly occurring features or properties that can be used as unique distinguishing features of individual objects that are otherwise quite similar. PUFs depend on the uniqueness of their physical microstructure, which typically includes a random component that is already intrinsically present in the physical entity or is explicitly introduced into or generated in the physical entity during its manufacturing. The physical microstructure nature associated with the PUF is substantially uncontrollable and unpredictable. In order to evaluate a PUF, a so-called challenge-response authentication scheme is used. The “challenge” is a physical stimulus applied to the PUF and the “response” is its reaction to the stimulus.
The response is dependent on the uncontrollable and unpredictable nature of the physical microstructure and thus can be used to authenticate the PUF, and a physical object of which the PUF forms a part. A specific challenge and its corresponding response together form a so-called “challenge-response pair” (CRP). In a practical application, a PUF may be interrogated in some manner referred to as a challenge. The PUF has a response to the interrogation that clearly exposes, identifies or documents the unique random feature. The response is then compared to a digital reference. If the unique random feature of the PUF matches the digital reference, the result of the challenge is a positive authentication. If the unique random feature of the PUF is different from the digital reference, the challenge may fail, thus rendering the PUF, and the corresponding physical object it is attached to, not authentic or fake. The definition of a PUF depends on the uncontrollable and unpredictable nature of the physical microstructure and focuses on a naturally occurring random physical structure or phenomenon to obtain uniqueness, such that the degree of difficulty of replicating or cloning a chip may be exceptionally high. The challenge revealing the random feature of an on-chip PUF is typically based on ring oscillation and FPGA architecture, both of which may degrade over time and may not be durable long term. Despite the wide array of PUFs devised and in use, there are some significant problems to be solved. While the digital reference of the PUF at its inception may be locked and virtually invariant over time, the physical PUF used to generate the digital reference may begin to degrade immediately. Over time, and/or as a result of handling, environmental conditions or conditions of use, a legitimate original PUF may eventually have its unique features eroded or modified to a point where it may fail a challenge against its digital twin.
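The challenge-response comparison can be expressed compactly. Because a physical response is measured, and therefore noisy, matching is sketched here as a thresholded distance rather than strict equality; the bit-vector form and error budget are illustrative assumptions:

```python
# Hypothetical challenge-response check: compare a measured PUF response
# against the enrolled digital reference, tolerating limited bit noise.

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def verify_puf(measured_bits, reference_bits, max_errors=3):
    """Positive authentication if the response is close to the reference."""
    if len(measured_bits) != len(reference_bits):
        return False
    return hamming_distance(measured_bits, reference_bits) <= max_errors

reference = [1, 0, 1, 1, 0, 0, 1, 0]
print(verify_puf([1, 0, 1, 1, 0, 0, 1, 0], reference))  # True  -- exact match
print(verify_puf([1, 0, 1, 1, 0, 1, 1, 0], reference))  # True  -- 1 noisy bit
print(verify_puf([0, 1, 0, 0, 1, 1, 0, 1], reference))  # False -- fails
```

A PUF that degrades beyond the error budget fails its challenge even though the article is genuine, which is precisely the durability problem discussed here.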
In such a case, a genuine article may be mistakenly identified as a fake or counterfeit item. Thus, there is a need for a more durable way to guarantee the authenticity of an object. The present disclosure provides an innovative approach to assigning uniqueness by applying MTPs with unique IDs to large numbers of similar objects. In some embodiments, non-random functions may be assigned and embedded or incorporated into an object. In some embodiments, the non-random feature may be difficult to reach, and any attempt to manipulate or change the unique feature results in it being disabled or destroyed. Further, embodiments of the present disclosure may be tamper proof and/or self-destructive with high-level durability and reliable functionality. The combination of super durability with a tamper proof structure may lead to a Super Anchor (SA). The primary concept of the present disclosure may provide objects (super anchors) with unique embedded features to increase durability of the object. The super durable object may be embedded into the matrix of a chip. Other attempts to exploit a durable approach for ICs, namely on-chip devices, involve the variant microstructures of the chip itself. In the present disclosure, a super anchor may have high durability as the MTP ID number is a unique fixed feature that may be integrated into but separate from the bulk media (e.g., chip structure). The unique feature may be isolated from degradation of the bulk. The super anchor may provide features that are non-random but secure. The super anchor may be tamper proof and/or self-destructive in response to attempts to change the unique ID. Further applications of the self-destructive design may be used to ensure authentic packaging such that containers and vessels are not reused to hold counterfeit items. For example, an end use example of a self-destructive super anchor can be utilized in a security inlay.
FIG.15is a flowchart illustrating an example process configured to utilize a super anchor for physical object authentication in accordance with some embodiments of the present disclosure. The durable self-destructive super anchor may be utilized for object authentication, object tracing and tracking under the control of a digital security system including a manufacturer database. The digital security system may include one or more computing devices to facilitate object authentication, object tracing and tracking. The digital security system may include at least a security computing device in communication with a plurality of user computing devices via a network. The security computing device may include a processor, a memory and a communication interface for enabling communication over the network. The digital security system may receive MTP registration information and process MTP ID information from a MTP ID secure reader (e.g., ID reader) via the network. At step1501, a Super Anchor (SA) may be manufactured by embedding or incorporating a MTP with a unique ID onto a taggant, a taggant substrate or into a layer of a taggant. The taggant may or may not have PUFs embodied in its physical structure. A Super Anchor may be manufactured by incorporating the MTP into the taggant structure while the taggant is being made or as part of a multilayer manufacturing process. An example of co-manufacturing of a taggant and Super Anchor may be casting a thermoplastic tag or label by in-mold processing. An example of multi-layering co-manufacture may include lamination of an MTP into a credit card, label or tape, whereby the MTP becomes part of the monolithic structure of the tag or object. The tag or taggant formed may be a label, dot, laminate, tape or any physical structure.
The primary purposes of the taggant may include: (1) providing a surface to affix the Super Anchor to a physical object for tracking the physical object; and/or (2) acting as a passive or active part of the tamper evident, tamper resistant or self-destructive mechanism. For example, a Super Anchor may be indicated as a light-triggered MTP with a unique ID attached to or embedded in a chip taggant with Physical Unclonable Functions (PUFs), along with self-destructive features and high durability functionalities. At step1502, the unique ID number of the MTP may be registered in a digital security system and/or manufacturer database and be indexed to the MTP. At step1503A, the manufactured SA with the unique ID number or unique serial number may be digitally indexed to and attached to a physical object. Said Super Anchor may or may not have acceptable means of attaching it to the physical object as part of its structure and composition. The means, method and process of adhering a Super Anchor to a physical object may vary widely depending on the composition and conditions of use for the physical object receiving the Super Anchor. Super Anchors may be attached directly to a physical object with known materials and processes such as adhesives, sealants, waxes, tapes and films. Glue or other adhesive may set by any number of mechanisms, including photo-polymerization, chemical setting, oxidative radiation, and/or other techniques. Said materials may have immediate or latent action. Attaching materials may be reactive. Reactive materials may be activated by pressure, chemical, thermal, light, sound or other radiation sources. Such materials and processes are illustrative, but not limiting. Super Anchors may be sewn or injected into an object.
In some embodiments, a Super Anchor may be supplied and used as an unattached object with a reactive site or substrate that may have been modified for specific attraction and binding of chemical and/or biological species, with or without subsequent treatment, interrogation and identification of the attaching species. After identification, the binding species can be removed, thereby regenerating the Super Anchor. As such, the Super Anchors may be able to form platforms and scaffolding for random or precision growth sequencing in automated or semiautomated processes. An unattached Super Anchor with or without reactive sites or substrates may be dispersed in a continuous medium such as a fluid. Dynamic object information of the Super Anchor can be discerned by capturing its unique ID at one or more sites in a closed vessel. The dynamic object information can be used to determine flow characteristics of the continuous medium. Real time rheological and tribological data may be calculated. Algorithms and software for Computational Fluid Dynamics may be developed and used to document flow dynamics and velocity gradients in high detail. Modeling of industrial material flow and reaction conditions, documentation of mixing equipment capability and fluid handling system design can be greatly improved. At step1503B, data associated with the physical object stored in the digital security system may be updated with object index information so that the physical object can be searched and read with the unique ID number and product data in the digital security system. The product data may include product serialization or an identifier associated with the physical object, such as radio-frequency identification (RFID), QR Code, etc. At step1504, when a user receives the physical object with the manufactured SA attached, the user may securely log into the digital security system via a user computing device to initiate an authentication process for the physical object.
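The flow-characterization idea above — capturing a dispersed Super Anchor's unique ID at timestamped positions and deriving flow data — might be sketched as follows. The sampling format and the function name are assumptions for illustration only; real rheological analysis would feed such samples into a Computational Fluid Dynamics model.

```python
# Sketch: deriving flow data from time-stamped reads of a dispersed
# Super Anchor's unique ID at known sites in a closed vessel.
# The (t, x, y, z) sampling format is an illustrative assumption.
import math

# (timestamp_s, x_m, y_m, z_m) for successive captures of one SA's ID
reads = [
    (0.0, 0.00, 0.0, 0.0),
    (0.5, 0.10, 0.0, 0.0),
    (1.0, 0.25, 0.0, 0.0),
]

def velocities(samples):
    """Mean speed (m/s) between each pair of consecutive ID captures."""
    out = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        out.append(math.dist(p0, p1) / (t1 - t0))
    return out

print(velocities(reads))  # speeds between consecutive captures
```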
At step1505, a secure reader (e.g., ID reader) may be utilized to illuminate the SA attached to the physical object and receive the SA signal. At step1506, the secure reader may decode the received SA signal to obtain the unique ID number or serial number indexed to the SA. The user computing device may execute an application to communicate with the secure reader to receive the decoded ID of the SA associated with the physical object. At step1507, the user computing device may communicate with the digital security system via the network and send the decoded ID of the SA to the digital security system. The digital security system may compare the decoded unique ID associated with the physical object to the ID numbers stored in the digital security system. At step1508A, based on a comparison result, the digital security system may determine whether the decoded unique ID number is registered. At step1508B, in response to determining that the decoded unique ID is not registered, the digital security system may generate a message of “Not Authentic” for displaying on a user interface of the user computing device. At step1508C, the digital security system may update the data associated with the physical object with the user and challenge information for an object authenticity validation. At step1509A, in response to determining that the decoded unique ID number is registered in the digital security system, the digital security system may further determine whether the decoded unique ID number matches a stored ID number associated with the physical object. At step1509B, in response to determining that the decoded unique ID number does not match a stored ID number associated with the physical object, the digital security system may generate a message of “Not Authentic” for displaying on a user interface of the user computing device.
At step1509C, based on the determined authentic result of1509A, the digital security system may update the data associated with the physical object with user and challenge information for an object authenticity validation. At step1510A, in response to determining that the decoded unique ID matches a stored ID indexed to the physical object, the digital security system may generate a message of “Authentic” for displaying on a user interface of the user computing device. At step1510B, based on the determined authentic result of1510A, the digital security system may update the data associated with the physical object with user and challenge information for the object authenticity validation. Embodiments of the present disclosure may provide the MTP with super durable super anchors utilized for tagging, authentication and anti-counterfeiting of physical objects. In some embodiments, the manufactured Super Anchor (SA) may be combined with RFID or QR code technology and certain encryption technology to further enhance tracing and anti-counterfeit protection of the physical object. In some embodiments, a manufactured SA may be printed as a label on any type of surface of physical objects. In some embodiments, the manufactured SA may be printed as a label to replace RFID or QR code for special security document transfer. Embodiments of the present disclosure may provide the MTP with super durable super anchors combined or integrated into business systems, databases of digital security systems, distributed ledgers, blockchain, blockchain interoperability as well as interoperability of object and financial based blockchains. In some embodiments, storing the secured unique ID number of a manufactured SA indexed to the attached physical object may be implemented by storing the registered unique ID of the SA and related data associated with the physical object to a blockchain or a blockless distributed ledger.
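The comparison logic of steps 1507 through 1510B above can be sketched as a minimal decision routine. The registry layout, function name and audit-log shape are illustrative assumptions, not the actual system interface.

```python
# Sketch of the authentication decision (steps 1507-1510B): compare the
# decoded Super Anchor ID against the registry. Names are illustrative.

registry = {
    "MTP-0001": {"object_serial": "SN-42"},   # registered and indexed
    "MTP-0002": {"object_serial": "SN-77"},
}

def authenticate(decoded_id, claimed_object_serial, audit_log):
    # Step 1508A: is the decoded unique ID registered at all?
    record = registry.get(decoded_id)
    if record is None:
        result = "Not Authentic"            # step 1508B
    # Step 1509A: does it match the ID indexed to this physical object?
    elif record["object_serial"] != claimed_object_serial:
        result = "Not Authentic"            # step 1509B
    else:
        result = "Authentic"                # step 1510A
    # Steps 1508C/1509C/1510B: record user and challenge information.
    audit_log.append((decoded_id, claimed_object_serial, result))
    return result

log = []
print(authenticate("MTP-0001", "SN-42", log))   # Authentic
print(authenticate("MTP-0001", "SN-77", log))   # Not Authentic
print(authenticate("MTP-9999", "SN-42", log))   # Not Authentic
```

Every challenge, pass or fail, is appended to the audit log, mirroring the update steps 1508C, 1509C and 1510B.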
In this way, the registered unique ID and related data may be saved and stored in such a way that it is substantially impossible to tamper with them. Furthermore, storing the secured registered unique ID and related super anchor data to a blockchain or blockless distributed ledger may allow for object authenticity validation and tracing from a remote location, for example by an authorized receiver along a supply chain of the related physical object or group of objects. In some embodiments, the above process may be adapted for use in analyzing flow characteristics and/or other features of a continuous medium. For example, at step1503A, the SA may be dispersed in the continuous medium (e.g., rather than being physically attached to a solid medium). Then, the SA may be illuminated and may respond, as described above, a plurality of times. Each time may be recorded, and a position of the SA within the medium may also be recorded. These time-stamped SA positions may be processed to determine at least one fluid characteristic of the continuous medium, as noted above.
Microtransponder-Based Smart Paper Contracts
Authenticity of paper-based credentials may not be secure. Massive fraud may occur with authenticating paper-based credentials. For example, a diploma may be ordered online from a University anywhere in the world and may be printed and sent directly to anywhere. False credentials may be used and sent to physicians, psychologists or other professionals for a variety of nefarious purposes. Authentication of documents normally takes time and costs consumers significant amounts of money, which needs to be avoided. Further, record searches may delay home and real estate transactions by many days, impeding business flow and revenue generation. P-Chip® MTP (e.g., configured as a durable self-destructive super anchor in some cases) may be utilized to implement MTP-based smart paper contracts.
Embodiments of the present disclosure describe techniques of the MTP-based paper contracts which may provide low cost registration and authentication of processing devices while increasing traceability and security of digital or printed paper items. MTP-based smart paper contracts may eliminate multiple steps and the cost of creating secure, authentic digital records and smart contracts. MTP-based smart paper contracts may provide low cost registration and authentication of printers and marking devices and increase traceability and security of printed items. MTP-based smart paper contracts may use machine tokenization for service payment, etc. Unlike watermarks embedded in paper documents and credential substrates, or print based security features from special dyes or pigments and 2-dimensional codes such as QR and data matrix codes, P-Chip® MTPs are not easy to duplicate and provide a highly affordable option for digital authentication. Adding a document or physical record to a digital security system or similar functional database, data lake, or computer based archival and verification system requires the document be scanned and a unique ID or serial identifier added. Smart paper contracts based on P-Chip® MTPs may have a low cost, energy activated identifier attached to and/or embedded in the document substrate that confers to the document a unique and physically unalterable ID number. As used herein, the term “smart contract,” “smart paper contract,” “printed item,” or “printed object” may include all types of printable items, including, but not limited to, contracts, financial transactions, transcripts, certificates, checks, secure credentials, medical records, quality records, deed searches on homes, and title searches on automobiles, boats, agricultural equipment and recreational vehicles, etc. For example, MTP-based smart paper contracts may be utilized to create documents, such as secure credentials, contracts, certificates, quality records, etc.
The specific raw material and product characteristics may be documented by certificates of analysis, medical records, or genomic certification such as breed or certified seed. As used herein, the term “paper” is used as an easy to understand, but not limiting, embodiment of the present invention which may include all print related substrates such as synthetic paper, films, cardboard, plastic, metal, wood and composites. Furthermore, the concept of the present disclosure may encompass printing of labels and packaging as novel ways to create secure “smart labels”, secure “smart tags” and secure “smart packages”. The present invention may encompass both conventional 2D printing as well as 3D printing processes for the above mentioned substrates and printed items. FIG.16illustrates a functional diagram of implementing a smart paper contract in accordance with some embodiments of the present disclosure. As illustrated inFIG.16, a functional part16A may include databases and operations associated with sender and receiver activities. A smart contract sender (e.g., document sender) may register an achievement or event with a digital security system (at block1602) via a first computing device. The data of the sender and document may be stored as client records in database1601(e.g., DB 1). The sender may create a print purchase order (at block1603) and store the order and related financial data as client financial data in database1604(e.g., DB 4). The smart contract sender may transfer secure print data to a smart contract receiver (e.g., document recipient) (at block1605). As illustrated inFIG.16, a functional part16B may include databases and operations performed by authorized printer(s) and marking device(s) associated with the digital security system. At block1613, authorized printer(s) and marking device(s) may be registered in the digital security system with respective assigned security serial numbers. The authorized printer may receive the purchase order from the sender.
The authorized printer may convert the secure print data associated with the purchase order to machine executable instructions (at block1614). The received secure print data and purchase order may be stored in database1616(e.g., DB 2). The authorized printer may obtain security substrate (at block1612) and print the secure documents (at block1615). The security substrate (at block1612) and operations of printing the secure documents (at block1615) may be stored in database1617(e.g., DB 4). The authorized marking device may obtain security ink (at block1618) and be configured to print a 2D security mark on the secure documents (at block1619). The terms “security substrate” and “security ink” reference legacy materials and processes for creating a secure document by printing. There may be many commercially available substrates and inks. Examples of security substrates may be paper that has watermarks or embossed structures. Another example may be a paper that has been pre-printed with an “invisible ink”. Under normal solar illumination, the ink does not reflect in the visible spectrum. When subjected to UV light, the pre-printed lettering or mark would down convert the higher energy light into the visible spectrum and be viewable to an observer. Papers can be natural or synthetic based, hence the more generic term “substrate” may be used. Synthetic papers may be more expensive and can be made with specific spectral responses designed into their bulk properties and provide another level of security. Incorporation of color changing (gonio-apparent) fibers into the paper or substrate to be printed may add another layer of security as the threads exhibit a unique color reflectance that may change as the angle of observation of the document is changed. The color change is a function of the material. The material is very expensive, and in the case of official state produced documents it may be a controlled substance. 
Security inks may be specific physical structures of pigments or dyes that may yield changing reflectance (observable color) to humans and/or machines. Both the security substrate (block1612) and security inks (block1618) may be raw materials that are procured by the printer. Customers have the ability to specify the security substrate and ink, or any combination, as part of their print order to obtain a secure document. 2D security markings are current state-of-the-art printing techniques. In addition to using secure inks and combinations of secure inks, the printed design may have intentional structures that are printed in ultrahigh detail. Careful inspection or low power magnification may reveal the micro-structure that a simple counterfeiter may not be aware of or able to make. 2D security marks can also be PUFs according to the original definition by Virginia Technical University in that their micro-structures are a function of ink droplet splatter, absorbance into the print substrate and variation in drying. The 2D structure may be photographed and digitized. Digital features may be identified through edge finding algorithms for shape, combined with other image factors such as area, color and luminance. The digital file may be given a unique ID. The unique ID and file image may be archived in a database and indexed to the digital file. Further, a digital image capture may be compared to the archived image to determine authenticity as a PUF challenge response sequence. Recent developments to attach or embed RFID devices in print paper afford another level of security for printed documents whereby the RFID tag number becomes the digital identification number or part of the digital ID for the printed document. RFID enabled sheet papers are available for digital print platforms like HP Indigo printers and others. In some cases, RFID tags may be attached to documents after printing.
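The PUF challenge response sequence described above — digitize the 2D mark, derive features, archive them under a unique ID, and later compare a fresh capture — might be sketched with a toy difference hash over a grayscale grid. A real system would use the edge finding and area/color/luminance features described above; everything below, including the threshold, is an illustrative assumption.

```python
# Sketch: treating a digitized 2D security mark as a PUF by hashing its
# micro-structure and comparing a fresh capture against the archive.
# Toy difference hash over grayscale rows; illustrative only.

def dhash(pixels):
    # pixels: rows of grayscale values; one bit per horizontal neighbour pair.
    bits = []
    for row in pixels:
        for a, b in zip(row, row[1:]):
            bits.append(1 if a > b else 0)
    return bits

def hamming(h1, h2):
    # Number of differing bits between two feature hashes.
    return sum(x != y for x, y in zip(h1, h2))

archived = [[10, 50, 40], [90, 20, 30], [5, 5, 200]]
capture  = [[12, 48, 41], [88, 22, 29], [6, 4, 199]]   # same mark, new scan
forged   = [[200, 10, 90], [10, 90, 10], [90, 10, 90]]

THRESHOLD = 1  # max tolerated bit flips for a genuine re-capture
print(hamming(dhash(archived), dhash(capture)) <= THRESHOLD)  # True
print(hamming(dhash(archived), dhash(forged)) <= THRESHOLD)   # False
```

The tolerance threshold reflects that two scans of the same physical mark never match exactly, while a forgery's micro-structure diverges widely.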
The benefits of using RFID technology for authentication of printed documents are consistent with their use in other security media. The downside of this security mechanism is that it can be cloned by non-authorized entities, it is not durable in use, and it is expensive. Embodiments described herein may be used with RFID enabled sheet papers in addition to, or in place of, the 2D security marks described above. The printed secure documents with the 2D security mark and/or embedded RFID tag may be shipped to the document receiver (at block1620) and the related records may be stored in database1621(e.g., DB 5). The secure documents with the 2D security mark may be sent along with the invoice to the smart contract receiver (at block1622). The smart contract receiver may receive both digital copies of secure documents via emails or text messages over a network and printed secure documents with the 2D security mark via mail (at block1606). The smart contract receiver may receive and sign the secure documents (at block1607). A digital twin of the signed documents may be created (at block1608) and stored in database1609(e.g., DB 6). The smart contract receiver may process or pay the invoice associated with the received documents via a second computing device over a network and store the transaction records in client financial database1611(e.g., DB 7). The financial transaction record of the paid invoice may be sent via the second computing device to the digital security system (at block1623) and stored in database1624(e.g., DB 8). The MTP-based document security measures described herein may be used in place of traditional 2D security marks and/or embedded RFID tags, or in combination with them. In either case, the smart paper contracts formed using the embodiments described herein may be more durably secure than documents secured by traditional 2D security marks and/or embedded RFID tags alone.
Generating a Secure Document Smart Contract with Blockchain Integration
In some embodiments, blockchain may be used to apply a predetermined collision resistant hash function for tracing and tracking a smart contract document. As used herein, the collision resistant hash function refers to a special type of hash function, i.e. a mathematical function or algorithm that maps data of arbitrary size to a bit string of a fixed size (a hash value), which is designed to also be a one-way function, i.e. a function that is easy to compute on every input, but hard to invert given the image of a random input. Preferably, the collision resistant hash function is designed such that it is difficult to find two different data sets d1 and d2 such that hash(d1)=hash(d2). These are hash functions for which a certain sufficient security level can be mathematically proven. In the present security solution, the security of the cryptographic hash function is further improved by the fact that the MTP ID number reading of a marking comprising a smart anchor, particularly of a composite security marking, as disclosed herein, takes place at a particular location and time, where the physical object bearing the marking is actually present at such location and time. This can be used either to increase the absolute level of security that can be achieved or to allow for the use of the collision resistant hash function working with smaller data sets, e.g. shorter data strings as inputs and/or outputs, while still providing a given required security level. By utilizing blockchain technology, the MTP ID may be used along with a collision resistant hash function to generate a smart contract. Generating a smart contract may involve a multi-level indexing process for object authentication, object tracing and tracking.
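As a concrete illustration of the hash properties described above, the sketch below binds an MTP ID reading to its location and time using SHA-256, a standard collision resistant hash with a fixed-size output. The field layout and delimiter are assumptions for this sketch, not the disclosed system's format.

```python
# Sketch: binding an MTP ID reading to the place and time of the read
# with a collision resistant hash (SHA-256). Field layout is illustrative.
import hashlib

def reading_digest(mtp_id, location, timestamp_iso):
    # Fixed-size, one-way digest over the full reading record.
    payload = "|".join([mtp_id, location, timestamp_iso]).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

d1 = reading_digest("MTP-0001", "dock-7", "2024-05-01T12:00:00Z")
d2 = reading_digest("MTP-0001", "dock-7", "2024-05-01T12:00:01Z")
print(len(d1))   # 64 hex characters, fixed size for any input
print(d1 == d2)  # False: a one-second change alters the digest
```

The same-input digest is always reproducible, so a ledger entry can later be re-verified against a fresh reading record.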
For example, combining the unique MTP ID number 1 associated with each printable page in a box of Smart Paper with the unique ID number 2 associated with a paper box containing all the Smart Paper may allow the smart paper to arrive at the printer with a predetermined identifier able to be immediately integrated into a collision resistant hash function of the blockchain upon printing. Further, in the present disclosure, each authorized printer and/or marking device may have its own unique identification ID number 3. The unique MTP ID 1 from the paper may be combined with the unique ID 2 of the paper box and a unique serial ID number 3 of the authorized printer or marking device. Further, all related MTP IDs may be applied to a collision resistant hash function to create a similar blockchain enabled identification. This identification may be used as a further level of security to register the printer or marking device for machine tokenization payment. In some embodiments, the MTP unique ID 1 of a smart paper may be used to register fax machines and increase security of fax machines for data transmission. FIG.17illustrates an example system diagram of generating a secure document smart contract while integrating with blockchain. The example system1700may include a plurality of smart paper SP(Ni)1703, a smart paper container SPC(Mi)1705, an authorized print device1706, an authorized p-Chip PUF reader1707(e.g., p-chip identifier reader), and a blockchain secure archive1710. The plurality of smart paper SP(Ni)1703, the smart paper container SPC(Mi)1705, and the authorized print device1706may be embedded with respective super anchors configured with respective p-Chip MTPs. The authorized p-Chip super anchor reader1707may be registered with a serial number in a digital security system. The authorized p-Chip super anchor reader1707and a collision resistant hash function1708may be incorporated into the digital authorized print device1706.
The p-Chip unique serial number of the authorized p-Chip super anchor reader1707may be used for generating a corresponding hash value by a collision resistant hash function1708, thereby adding an additional layer of security. While it may be fully integrated with the printing workflow in some embodiments, the collision resistant hash function1708may be performed electronically off the digital authorized print device1706in real time by a printing entity. In some embodiments, the digital authorized print device1706may be configured to receive secure document content1701and print instructions1702from users over a network to generate a secure document smart contract1709. The digital authorized print device1706may be configured to load a smart paper SP(Ni)1703from the smart paper container SPC(Mi)1705to create a printed article for the secure document smart contract1709. The digital authorized print device1706may communicate with and automatically control the authorized p-Chip super anchor reader1707to read the super anchor IDs of the loaded smart paper SP(Ni)1703and the smart paper container SPC(Mi)1705. In one embodiment, the digital authorized print device1706may be embedded or incorporated with a Super Anchor including a MTP with an ID number for elevating the security status of the print device1706. The incorporation may allow the digital authorized print device1706and its output to be recognized as a verified and trusted source. In one embodiment, the digital authorized print device1706may be registered through a blockchain trust center and may allow all subsequent prints to be secure inside the blockchain, thereby eliminating a costly, time consuming step. A collision resistant hash function1708may be applied to a p-Chip MTP ID number associated with the digital authorized print device1706, the print instructions1702, and print time and print date stamps generated by the printing device1706for generating a highly secure document smart contract.
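A minimal sketch of applying the collision resistant hash function1708 across the multi-level IDs — the smart paper ID (ID 1), paper container ID (ID 2) and print device ID (ID 3) — together with the print instructions and time/date stamps might look like this. The record fields, their names and the canonicalization via sorted JSON are illustrative assumptions.

```python
# Sketch of the multi-level indexing: one digest over the paper ID (ID 1),
# container ID (ID 2), print device ID (ID 3), print instructions and a
# timestamp, suitable as a blockchain record key. Names are illustrative.
import hashlib
import json

def smart_contract_digest(paper_id, container_id, printer_id,
                          print_instructions, printed_at):
    record = {
        "paper_id": paper_id,            # unique MTP ID 1 from the paper
        "container_id": container_id,    # unique ID 2 from the paper box
        "printer_id": printer_id,        # unique serial ID 3 of the printer
        "instructions": print_instructions,
        "printed_at": printed_at,
    }
    # json.dumps with sort_keys yields a canonical byte string to hash.
    blob = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

digest = smart_contract_digest("SP-001", "SPC-9", "PRN-3",
                               "duplex;secure-ink", "2024-05-01T12:00:00Z")
print(digest)  # blockchain-ready identifier for this print job
```

Changing any single level — a different sheet, box, printer, instruction set or time — produces an unrelated digest, which is what makes the combined identification hard to forge.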
In one embodiment, the incorporation of p-Chips into the paper and paper container may provide two additional levels of security, as both are associated with unique super anchors with respective unique ID numbers. For example, a smart paper SP(Ni)1703may be generated by embedding two p-Chip MTP super anchors into a print paper, linked to two p-Chip ID numbers (e.g., a first and a second ID). A smart paper container SPC(Mi)1705may be generated by embedding a third p-Chip MTP into a paper container SPC(Mi)1705and linked to a third ID number. The digital authorized print device1706may be embedded or incorporated with a MTP with a fourth ID number. The collision resistant hash function1708may be applied to the smart paper SP(Ni)1703and smart paper container SPC(Mi)1705, and its partners or its licensees at time of manufacture, to create pre-manufactured smart contracts for printing. Thus, existing physical records scanned for digital archival purposes or newly created records may immediately become partial data of the smart contracts. The collision resistant hash function1708may be applied to other entity or document specific information and may greatly increase security at exceptionally low cost. For example, there are several reasons this approach lowers cost while increasing document security: 1) It may not be cost effective to use more than one print based super anchor. 2) Using a 2D security marking and p-Chip ID numbers from smart paper1704, one p-Chip ID number from paper container1705and one p-Chip ID number from digital authorized print device1706may provide multiple levels (e.g., 4 levels) of unique identification to a single security document. 3) Replacement of an existing 2D security marking by 1 to 3 or more p-Chips may greatly reduce operating costs for the print device and the cost of secure prints for end users while greatly improving the document security.
In some embodiments, p-Chip authentication may be applied to individual print cartridges for security grade inks that may be associated with different brands. Using a unique p-Chip ID number of the ink cartridge along with a different p-Chip ID number for the print device may be another way to greatly increase security for existing 2D print based systems. In some embodiments, the smart paper and smart paper container may be labeled with a material lot number and container number. The lot number may have unique Certificate of Analysis (CoA) information that may identify multiple physical constants for the batch of product and/or material. The respective p-Chip ID numbers indexed to or associated with the smart paper and smart paper container may be exchanged with or configured to include the material lot number and the container product number of the smart paper container. In one embodiment, any number of unique and variable physical data points for the batch may be used as PUFs. Further, a Super Anchor as described above may be added to the 3D print device for generating a secure 3D print.
Authentication of 3D Printed Object with Embedded MTPs Utilizing the Process of Converting to Smart Contract
The present disclosure provides a cost effective method and system for the identification and authentication of parts and components created by additive manufacturing. The explosion of additive manufacturing processes, equipment and techniques may hold great promise to revolutionize physical manufacturing of objects, increasing speed while reducing equipment capital cost and cost per unit of printed objects. Reduction of costs may have made it feasible to create and sell non-original, counterfeit products. The negative effects of counterfeiting are well established, including revenue and tax loss and increased warranty claims.
While these detrimental consequences have massive negative repercussions on a global scale, an even larger problem may exist related to human health and safety, with fake parts leading to substantial injury and death of humans and animals. Attaching a P-Chip® MTP to a printed object may provide the object with a unique identification number that may be protected against forgery by utilizing the above-described challenge response mechanism. As described above, the P-Chip® MTP may be used to convert the printed object to a Smart Part and/or a Smart Contract by the methods outlined for Smart Paper as described above. P-Chip® MTPs can be incorporated directly into a printed object by placement on the print stage. For example, the MTPs may have an adhesive or tape that is activated by mechanical, thermal, or radiation based methods to fuse into the object. The MTPs may be incorporated using a sacrificial medium that may be destroyed by the printing process, sub-process or post printing process. P-Chip® MTPs may be incorporated directly into a printed object by a tape, asset tag or label. The security inlay can be used to deter MTP substitution. In some embodiments, P-Chip® MTPs may be incorporated as subcomponents, with the P-Chip® being attached or embedded in the matrix by a separate process, mechanically or by additive manufacturing. For example, one manifestation may be a thin base with an embedded MTP. The base may be made of the same material as, or be compatible with the material of, the object being printed. Printing may occur on top of the thin base. Alternatively, the thin base may be attached by an adhesive, coating or polymeric material of organic, inorganic or hybrid composition. In some embodiments, similar materials and shapes such as pegs, tabs, labels, caps or any other structural elements of the finished part, component, sub-component or assembly may be used with embedded MTPs.
In some embodiments, structures may be attached or fused to the printed part as an exterior surface. MTPs and components containing MTPs may be purposefully overprinted to enable durability of the MTPs during service life as a covert security feature. MTPs may be added to specific features of the printed article that may provide mechanical protection during service or act as an overt or covert function for reading during distribution, sales and service life of the article. Existing robotics may be used to chip the object immediately after printing, as a separate station in the workflow or as a separate process by any means. As described above, MTPs may be printed or attached to objects and be combined with 2D security markings, RFIDs and other known PUF technologies for an added layer of security to the objects. The MTPs may be manufactured as labels printed on the objects. The MTPs may be embedded in paper documents as smart contracts. Various materials may be used in end use applications, such as additive manufacturing of metals, ceramics, plastics, polymeric materials, single component or plural component mixtures and combinations thereof, including medical and dental implants for humans and animals. A 3D printed object with embedded MTPs may require particular conditions of use, range of efficacy or limitations, such as temperature range, flexibility characteristics, etc. Bulk properties of print materials like flexibility, bend radius and coefficient of expansion may be carefully considered to ensure no stresses are introduced that may incapacitate subcomponents, destroy the MTP chip or cause it to be ejected from the part in service. For example, a MTP label may be printed on objects along with RFID to provide flexibility and an additional layer of security to the objects.
For example, depending on the materials and methods used in production of the transponder antenna, the chip bonding method and the orientation of the transponder on the substrate, every passive RF transponder may have a minimum (e.g., 3-inch diameter) allowed bending radius (radius of curvature). Flexing or bending the finished passive RFID transponder media to a radius smaller than this minimum radius at any point in the application process may result in RFID failure, either from antenna fracture or breaking of the chip-antenna bond. The RFID label manufacturer may provide the value for the minimum bending radius. Objects printed with an MTP label may have extra bend flexibility compared to a normal RFID label. For example, p-Chips have been successfully attached and read on ¼ inch automotive brake lines. Specific issues for additive manufacturing of ceramics and metals may apply. All materials and equipment common to additive manufacturing may be utilized for 3D printed objects with MTPs. For example, laser marking of polymeric materials may be used to make identifications and 2D security marks. Laser marking is a commercial process where laser marking pigments are embedded in a matrix (polymeric, paint, adhesive, plastic etc.). The pigments may be randomly dispersed in the composite material. The composite material may be irradiated with high energy radiation, and the pigments heat up and char the surrounding continuous phase of the part or coating as a response, thereby changing color. Controlling the radiation beam may produce symbols, structures or identification numbers that are embedded in the part or at the surface. While laser marking may be an affordable way to add a part number to an object, laser marking pigments, radiation sources and automation controls are ubiquitous. It is not a very secure marking.
If one uses laser marking and characterizes the random features as described for printing a smart contract, one may create a super anchor, which may be more secure than a simple laser mark. These methods may be widely used in carbon based materials and composites. Another method of laser marking is direct metal ablation. High power lasers may erode metal surfaces and change the surface color (anodization), leaving a permanent mark. Super anchors may replace laser marking and 2D security marks for plastics and organic based objects. They may be used for defensive security purposes. 3D printers for ceramics and metals may have a high power laser for sintering. In some embodiments, super anchors may be attached to inorganic 3D printed articles with 2D laser marks. In some embodiments, super anchors may be attached to inorganic 3D printed articles to replace 2D laser marks and increase security. In some embodiments, the light activated MTPs may include longer waveforms being developed for IC signaling, like terahertz. In some embodiments, acoustic signals may be utilized instead of light for transmitting and reading MTP chip IDs. Compatible equipment and circuit elements, including a modulation-demodulation circuit, a coding-decoding circuit and a MTP reader, may be developed via piezoelectric devices on the MTP chip to be associated with corresponding acoustic signals. Further, a mobile application may be provided to be compatible with a corresponding MTP reader for scanning the MTPs attached to the physical objects. The mobile application may be executed to communicate with a digital security system for registering physical objects attached with MTP labels or embedded with MTPs. The mobile application may be executed to communicate with a digital security system for tracking and authenticating physical objects registered in the digital security system.
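The register-then-verify flow described for the mobile application and digital security system can be sketched as follows. This is a minimal illustration assuming an in-memory index; the class and method names (DigitalSecuritySystem, register_object, verify_object) are hypothetical, not part of any actual MTP reader API.

```python
# Hypothetical sketch of the register/verify flow; an in-memory dict
# stands in for the digital security system's database.

class DigitalSecuritySystem:
    def __init__(self):
        self._index = {}  # MTP identifier -> description of tagged object

    def register_object(self, mtp_id, description):
        """Index an MTP identifier to the physical object it is attached to."""
        if mtp_id in self._index:
            raise ValueError(f"identifier {mtp_id} already registered")
        self._index[mtp_id] = description

    def verify_object(self, scanned_id):
        """Return True if a scanned identifier matches a registered object."""
        return scanned_id in self._index


dss = DigitalSecuritySystem()
dss.register_object("MTP-0001", "brake line, lot 42")
print(dss.verify_object("MTP-0001"))  # True -> registered / authentic
print(dss.verify_object("MTP-9999"))  # False -> unknown identifier
```

In practice the reader would decode the identifier from the MTP's RF response, and the index would live in a server-side database rather than local memory.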
The mobile application may be executed to read MTP IDs printed on the objects with a corresponding MTP reader and send the read IDs directly to the digital security system or a similar functional database for the object authentication processing described in FIG. 15.
Enhanced Read Distance Microtransponder (MTP)
The current generation MTP may have limited read capability when attached directly to metal substrates. Modulated light required to activate solar cells of a MTP may interact with the metallic substrate, which may generate eddy currents in the metal. The generated eddy currents may reduce the RF signal intensity response from the MTP. The ability to successfully acquire and decode the RF signal containing the unique identity number of a MTP is a function of the signal distance between the MTP and its reader. Embodiments of the present disclosure describe techniques of enhancing read distance for MTPs by eliminating the eddy currents. Signal distance for P-Chips directly attached to metallic surfaces may be reduced by up to 30% compared to non-metallic substrates. The enhanced read distance MTP may be embedded with durable self-destructive PUF functions as described. It may be possible to build a physical gap between metal substrates and objects affected by eddy currents. Such schemes may rely on tapes, shims or filled polymeric adhesives, laminates or films that are external to Integrated Circuit (IC) manufacture and structures. Given the wide range of substrates and attachment methods for end use applications of a P-Chip® MTP, a single high volume, affordable solution may not be possible for post manufacture isolation of the MTP from the metallic substrate. It may be highly advantageous to achieve the resistance to eddy currents from metal substrates as part of the on-chip structures. In some embodiments, successful elimination of eddy currents may be achieved with active or passive materials or combinations thereof.
Active materials may absorb, scatter, destroy or reflect the eddy currents away from the chip and its signals. Filler materials such as ferrite are also known to act as active materials. Passive materials may not interact at all with the eddy currents and provide a physical separation between the substrate and the IC signals. Glass, ceramics and inorganic media are known materials providing passive separation and are compatible with IC manufacturing. In some embodiments, the base or near base layer of IC design may be fabricated with a passive material, or filled with an active material. A base layer is formed post foundry by attaching a passive or active substrate to the MTP chip. Various methods or technologies may be utilized for the base layer of IC design, including, but not limited to, the following:
1) Physical build processing by vapor phase or chemical deposition. While most passivation layers are built to eliminate corrosion of the IC and components, extending the thickness of the back of the chip by deposition of a non-conducting inorganic layer acts as a physical spacer to isolate the IC and its circuitry from the metal substrate causing interference.
2) Physical layer build processing from liquid media with subsequent thermal or radiation curing in a field of polysilazane/polysiloxane chemistry. The two chemistries described are capable of making durable non-conducting films and structures with excellent adhesion to other inorganic surfaces. Such sol-gel systems can be applied as a liquid coating by casting, spraying, dip or spin based applications to precise films.
3) Attachment of an active or passive monolithic layer to the wafer by liquid, gel or solid media followed by thermal or radiation curing in a field of polysilazane/polysiloxane chemistry. The same sol-gel systems may be used as adhesives to bind other structures such as a glass sheet to the back of an IC wafer.
In some embodiments, a passive monolithic layer may be glass or a filled glass structure.
4) Hybrid organic-inorganic polymeric matrices may be considered as they have greater flexibility, and may be an organic route to lower temperature applications. One drawback of sol-gel films is that they may be brittle. Adding small amounts of organic materials into the inorganic sol-gel system may decrease brittleness. A material tradeoff of creating a hybrid sol-gel is that the high temperature resistance is degraded.
All end use applications may be directed to metal or contain metal filled layers or particles. The present disclosure may identify known or perceived conditions of use, range of efficacy or limitations. While high temperature service conditions are key features of a P-Chip® MTP, metallic objects used in low or ambient temperature applications such as asset tagging are equally important. Therefore, organic-based eddy current elimination schemes may also be utilized for low to ambient temperature applications. During the manufacturing process of a MTP with the enhanced signal distance, various materials may be used, including, but not limited to, inorganic films, coatings and adhesives, high temperature hybrid organic-inorganic matrices and materials, and high temperature organic insulating materials, etc. Certain products or technologies might be used in combination with the disclosed MTP. Various elements, devices, modules and circuits are described above in association with their respective functions. These elements, devices, modules and circuits are considered means for performing their respective functions as described herein. The invention can be described further with respect to the following Numbered Embodiments: Embodiment 1.
A transponder, comprising: (1) one or more photocells configured to receive electromagnetic radiation; and (2) a clock recovery circuit comprising a photoconductor, the photoconductor comprising a source terminal, and a drain terminal coupled to a power source, the photoconductor having a resistance configured to vary as a function of received radiation intensity, the clock recovery circuit configured to generate a recovered clock, optionally with one or more features of Embodiment 9 or 19. Embodiment 2. The transponder of a Transponder Embodiment, further comprising a reverse antenna system connected to at least one photocell and configured to transmit data. Embodiment 3. The transponder of a Transponder Embodiment, wherein the photoconductor is configured to produce a modulated voltage signal at a source terminal of the photoconductor in response to a modulated radiation signal incident on the photoconductor. Embodiment 4. The transponder of a Transponder Embodiment, wherein the clock recovery circuit comprises: (a) an amplifier coupled to the source terminal of the photoconductor via a capacitor for receiving the modulated voltage signal and outputting an analog signal generated from the modulated voltage signal; and (b) an inverter coupled to the amplifier and configured to digitize the analog signal of the amplifier to generate the recovered clock. Embodiment 5. The transponder of a Transponder Embodiment, wherein the clock recovery circuit comprises a resistor comprising a first terminal connected to the source terminal of the photoconductor and a second terminal connected to a ground, and wherein the modulated voltage signal at the source terminal of the photoconductor is determined by a ratio of the resistance of the photoconductor and the resistor. Embodiment 6. The transponder of a Transponder Embodiment, wherein the transponder has a unique identifier. Embodiment 7. 
The transponder of a Transponder Embodiment, wherein the transponder is a monolithic integrated circuit sized about 2 mm or less × 2 mm or less × 0.2 mm or less in thickness. Embodiment 8. The transponder of a Transponder Embodiment, wherein the electromagnetic radiation includes one or more subsets of the sub-terahertz portion of the electromagnetic spectrum. Embodiment 9. A transponder, comprising: (I) one or more photocells configured to receive electromagnetic radiation; and (II) a reverse antenna system connected to at least one photo cell and configured to transmit data, optionally with one or more features of Embodiment 1 or 19. Embodiment 10. The transponder of a Transponder Embodiment, wherein the transponder is configured to transmit its identifier with a modulated current through the reverse antenna system. Embodiment 11. The transponder of a Transponder Embodiment, wherein the reverse antenna system comprises one or more antennas and a plurality of electronic switches, the system configured to conduct a bi-phase transmission to direct current flows through the antennas and the plurality of electronic switches. Embodiment 12. The transponder of a Transponder Embodiment, wherein the bi-phase transmission is conducted such that a “1” bit digital signal is transmitted with a first current flow through one of the antennas in one direction and a “0” bit digital signal is transmitted with a second current flow in an opposite direction in one of the antennas. Embodiment 13. The transponder of a Transponder Embodiment, wherein the reverse antenna system comprises a forward antenna and a reverse antenna. Embodiment 14. The transponder of a Transponder Embodiment, wherein the reverse antenna system comprises a single antenna. Embodiment 15.
The transponder of a Transponder Embodiment, wherein the reversible antenna system is configured to conduct the bi-phase transmission to transmit a “1” bit digital signal and a “0” bit digital signal with substantially the same power. Embodiment 16. The transponder of a Transponder Embodiment, wherein the reverse antenna system comprises at least one loop antenna surrounding the one or more photo cells. Embodiment 17. The transponder of a Transponder Embodiment, comprising encoding such that the number of cycles committed to transmitting a one bit is 8 data periods. Embodiment 18. The transponder of a Transponder Embodiment, comprising encoding such that the number of cycles committed to transmitting a one bit is 64 data periods. Embodiment 19. A transponder comprising: (A) a monolithic integrated circuit sized about 2 mm or less × 2 mm or less × 0.2 mm (thickness) or less; and (B) a reversible antenna system comprising one or more antennas and a plurality of electronic switches, the antennas and switches configured to conduct a bi-phase transmission to direct current flows through the antennas and the plurality of electronic switches, optionally with one or more features of Embodiment 1 or 9. Embodiment 20. The transponder of a Transponder Embodiment, wherein the bi-phase transmission is conducted such that a “1” bit digital signal is transmitted with a first current flow through an antenna in one direction and a “0” bit digital signal is transmitted with a second current flow in an opposite direction in the antenna. Embodiment 21. The transponder of a Transponder Embodiment, wherein the antennas comprise a forward antenna and a reverse antenna. Embodiment 22. The transponder of a Transponder Embodiment, wherein the antenna system comprises a single antenna. Embodiment 23.
The transponder of a Transponder Embodiment, wherein the reversible antenna system is configured to conduct the bi-phase transmission to transmit a “1” bit digital signal and a “0” bit digital signal with substantially the same power. Embodiment 24. The transponder of a Transponder Embodiment, wherein the one or more antennas are loop antennas surrounding one or more photo cells. Embodiment 25. A security inlay, comprising: (1) a bottom inlay segment; (2) a top inlay segment configured to fit to the bottom inlay segment; and (3) an electromagnetic radiation triggered transponder, comprising a top side and a bottom side, seated between the two inlay segments, the bottom side being disposed on the bottom inlay segment and the top side being disposed onto the top inlay segment, wherein the security inlay is configured so that a separation of the top inlay segment from the bottom inlay segment breaks the electromagnetic radiation triggered transponder such that the transponder cannot be read, optionally with one or more features of Embodiment 32. Embodiment 26. The security inlay of a Security Inlay Embodiment, wherein the transponder includes a notch configured to direct a line of cleavage of the transponder such that operative electronics are compromised. Embodiment 27. The security inlay of a Security Inlay Embodiment, wherein the bottom inlay segment includes a bottom groove configured to accommodate glue to adhere to the bottom side of the transponder, and the top inlay segment has a top groove configured to accommodate glue to adhere to the top side of the transponder. Embodiment 28. The security inlay of a Security Inlay Embodiment, wherein the bottom groove and the top groove are located on opposite sides of the notch, respectively. Embodiment 29.
The security inlay of a Security Inlay Embodiment, wherein the top side and the bottom side of the transponder respectively comprise adhesive disposed on non-adjacent respective portions of the top side and the bottom side of the transponder. Embodiment 30. The security inlay of a Security Inlay Embodiment, wherein the two segments are configured to not be readily separable when manipulated prior to being adhered to an object in need of a security inlay. Embodiment 31. A method of securing an object comprising adhering the security inlay of a Security Inlay Embodiment via the bottom inlay segment to the object, and adhering the bottom inlay segment to a tape or capsule that provides closure for the object. Embodiment 32. A monolithic security inlay, comprising: (a) an inlay segment; and (b) an electromagnetic radiation triggered microtransponder (MTP) coupled to the inlay segment and comprising at least one monolithic security self-destructive feature, optionally with one or more features of Embodiment 25. Embodiment 33. The monolithic security inlay of a Security Inlay Embodiment 2, wherein the MTP is directly attached to or cast into a mold label configured to be attached to a physical object. Embodiment 34. The monolithic security inlay of a Security Inlay Embodiment, wherein the MTP is directly attached to or cast into a mold label with subsequent formation of a physical object on or around the mold label containing the MTP. Embodiment 35. The monolithic security inlay of a Security Inlay Embodiment, wherein at least one monolithic security self-destructive feature is configured to transport the MTP to or across an external structure feature to disable the MTP. Embodiment 36. The monolithic security inlay of a Security Inlay Embodiment, wherein at least one monolithic security self-destructive feature is configured to rotate or engage a foreign object or structure to make contact with the MTP which is then incapacitated from induced stress. Embodiment 37.
A method of securing an object, comprising: (1) embedding at least two microtransponders (MTPs) into a taggant, a plurality of taggants, a packaging, or the object, or a combination thereof to generate at least one legitimate matched pair, the MTPs each being configured with respective identifiers; (2) indexing the respective identifiers of the MTPs to the object; (3) storing indexing information associated with the MTPs and the object in a database of a digital security system; (4) reading the respective identifiers via an identifier reader; and (5) verifying, based on the reading, the indexing information to determine whether the respective identifiers are associated to the legitimate matched pair, optionally with one or more features of Embodiment 41. Embodiment 38. The method of securing the object of a Securing Embodiment, wherein at least one legitimate matched pair is associated with at least two different MTPs embedded into the taggant, the plurality of taggants, the packaging, or the object, or the combination thereof. Embodiment 39. The method of securing the object of a Securing Embodiment, wherein each MTP comprises: (a) one or more photo cells configured to receive light; (b) a clock recovery circuit comprising a photoconductor, the photoconductor comprising a source terminal, and a drain terminal coupled to a power source, the photoconductor having a resistance configured to vary as a function of received light intensity, the clock recovery circuit configured to generate a recovered clock; and (c) a reverse antenna system connected to at least one photo cell and configured to transmit data. Embodiment 40. The method of securing the object of a Securing Embodiment, wherein each MTP is embodied with at least one monolithic security self-destructive feature. Embodiment 41.
A method of securing an object, comprising: (I) embedding or attaching at least one microtransponder (MTP) and at least one taggant to an object to generate at least one legitimate matched pair, the MTP and taggant each being configured with respective identifiers; (II) indexing the respective identifiers of the MTP and taggant to the object; (III) storing indexing information associated with the MTP and taggant and the object in a database of a digital security system; (IV) reading the respective identifiers via an identifier reader; and (V) verifying, based on the reading, the indexing information to determine whether the respective identifiers are associated to the legitimate matched pair, optionally with one or more features of Embodiment 37. Embodiment 42. The method of a Securing Embodiment, wherein the at least one taggant includes a QR code, barcode, RFID tag, or combination thereof. Embodiment 43. The method of a Securing Embodiment, wherein the embedding or attaching is performed to place the at least one MTP and at least one taggant next to one another or at different locations on the surface of the object or within the object. Embodiment 44. The method of a Securing Embodiment, wherein the embedding or attaching is performed to combine the at least one MTP and at least one taggant into a single compound security marking. Embodiment 45. 
A method for authenticating a physical item, comprising: (1) configuring a super anchor with a first identifier, the super anchor comprising an electromagnetic radiation triggered microtransponder (MTP); (2) registering and storing, by a processor of a server computing device, the first identifier associated with the physical item indexed to a first item number stored in a database of a digital security system, the super anchor device being embedded in a taggant attached to the physical item, the database being configured to store a plurality of identifiers and a plurality of item numbers indexed to respective physical items; (3) illuminating the super anchor device with an identifier reader; (4) receiving and decoding, by the identifier reader, a response signal from the super anchor to obtain a second identifier associated with the physical item; and (5) determining, based on the second identifier by the processor of a server computing device, whether the physical item is authenticated, optionally with one or more features of Embodiment 49. Embodiment 46. The method of an Authenticating Embodiment, wherein determining whether the physical item is authenticated further comprising: determining whether the second identifier is registered in the database; and in response to determining that the second identifier is registered in the database, determining whether the second identifier matches a first identifier indexed to the item number of the physical item. Embodiment 47. The method of an Authenticating Embodiment, wherein determining whether the physical item is authenticated further comprising: in response to determining that the second identifier matches a first identifier indexed to the item number of the physical item, displaying an authentic message on the identifier reader. Embodiment 48. The method of an Authenticating Embodiment, wherein the MTP is embedded into a substrate of the taggant in a multilayer manufacturing process. Embodiment 49. 
A method for authenticating a physical item, comprising: (a) configuring a super anchor with at least one microtransponder (MTP) and at least one taggant to an object to generate at least one legitimate matched pair, the MTP and taggant each being configured with respective first and second identifiers; (b) registering and storing, by a processor of a server computing device, the first and second identifiers associated with the physical item indexed to a first item number stored in a database of a digital security system, the database being configured to store a plurality of identifiers and a plurality of item numbers indexed to respective physical items; (c) illuminating the super anchor device with an identifier reader; (d) receiving and decoding, by the identifier reader, a response signal from the super anchor to obtain a third identifier associated with the physical item; (e) reading the super anchor device with a taggant reader; (f) receiving and decoding, by the taggant reader, a response signal from the super anchor to obtain a fourth identifier associated with the physical item; and (g) determining, based on the third and fourth identifiers by the processor of a server computing device, whether the physical item is authenticated, optionally with one or more features of Embodiment 45. Embodiment 50. The method of an Authenticating Embodiment, wherein determining whether the physical item is authenticated further comprising: determining whether the third and fourth identifiers are registered in the database; and in response to determining that the third and fourth identifiers are registered in the database, determining whether the third and fourth identifiers match the respective first identifier and second identifier indexed to the item number of the physical item. Embodiment 51.
The method of an Authenticating Embodiment, wherein determining whether the physical item is authenticated further comprising: in response to determining that the third and fourth identifiers match the respective first and second identifiers indexed to the item number of the physical item, displaying an authentic message on the identifier reader or the taggant reader. Embodiment 52. A system of generating a secure document smart contract, comprising: (1) a plurality of super anchors, each super anchor comprising an electromagnetic radiation triggered microtransponder (MTP) with an identifier, each MTP being linked to a respective identifier and registered in a security system; (2) a smart paper embedded with at least one super anchor with a first identifier; (3) a smart paper container embedded with a second super anchor with a second identifier; (4) an authorized print device registered in a security system and embedded with a third MTP with a third identifier; and (5) an identifier reader registered in the security system with a reader identifier, wherein the identifier reader is incorporated into the authorized print device and configured to read the first super anchor to obtain the first identifier and the second super anchor to obtain the second identifier, optionally with one or more features of Embodiment 64. Embodiment 53. The system of a System Embodiment, wherein the authorized print device is configured to: (i) receive secure document content and print instructions from a user over a network; and (ii) generate a printed article for the secure document smart contract based on the secure document content and print instructions. Embodiment 54.
The system of a System Embodiment, wherein the authorized print device is in communication with a processor configured to execute a hash function to generate respective hash values associated with the secure document smart contract; and wherein the respective hash values are stored in a blockchain secure archive and linked to the secure document smart contract in a blockchain secure archive for printing article authentication. Embodiment 55. The system of a System Embodiment, wherein the respective hash values are associated with the first identifier of the smart paper and the second identifier of the smart paper container. Embodiment 56. The system of a System Embodiment, wherein the respective hash values are associated with the first identifier of the smart paper, the second identifier of the smart paper container, a third identifier of the authorized print device, and the reader identifier. Embodiment 57. The system of a System Embodiment, wherein the authorized print device is a 3D print device. Embodiment 58. The system of a System Embodiment, wherein the first identifier of the smart paper is configured to include a material lot number of the smart paper. Embodiment 59. The system of a System Embodiment, wherein the second identifier of the smart paper container is exchanged with or configured to include a container product number of the smart paper container. Embodiment 60. The system of a System Embodiment, wherein the MTP is manufactured in a process to eliminate the eddy currents for enhancing a MTP read distance. Embodiment 61. The system of a System Embodiment, wherein the process includes applying an active or passive monolithic layer to MTP wafer by liquid, gel or solid media followed by thermal or radiation curing in a field of polysilazane/polysiloxane chemistry. Embodiment 62. The system of a System Embodiment, wherein the passive monolithic layer includes glass or a filled glass structure. Embodiment 63. 
The system of a System Embodiment, wherein the smart paper further includes a 2D security mark, an RFID tag, or a combination thereof. Embodiment 64. A system of generating a secure smart contract, comprising: (a) a plurality of super anchors, each super anchor comprising an electromagnetic radiation triggered microtransponder (MTP) with an identifier, each MTP being linked to respective identifier and registered in a security system; (b) at least one super anchor with a first identifier; (c) an authorized 3D print device registered in a security system and embedded with a second MTP with a second identifier, wherein the 3D printer is configured to: (i) receive secure content and print instructions from a user over a network; and (ii) generate a 3D printed article for the secure smart contract based on the secure document content and print instructions; and (d) an identifier reader registered in the security system with a reader identifier, wherein the identifier reader is incorporated into the authorized print device and configured to read at least the first super anchor to obtain the first identifier, optionally with one or more features of Embodiment 52. Embodiment 65. The system of a System Embodiment, wherein the 3D printed article includes the first super anchor and 2D laser marks. Embodiment 66. The system of a System Embodiment, wherein the 3D printed article includes the first super anchor without 2D laser marks. Embodiment 67. 
A method for monitoring a continuous medium, comprising: (1) configuring a super anchor with a first identifier, the super anchor comprising an electromagnetic radiation triggered microtransponder (MTP); (2) dispersing the super anchor in the continuous medium; (3) illuminating the super anchor device with an identifier reader at a first time when the super anchor is at a first position within the continuous medium; (4) receiving and decoding, by the identifier reader, a first response signal from the super anchor; (5) in response to the first response signal, storing first data indicative of the first time and the first position; (6) illuminating the super anchor device with the identifier reader at a second time when the super anchor is at a second position within the continuous medium; (7) receiving and decoding, by the identifier reader, a second response signal from the super anchor; (8) in response to the second response signal, storing second data indicative of the second time and the second position; and (9) processing the first data and the second data to determine at least one fluid characteristic of the continuous medium. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. Publications and references, including but not limited to patents and patent applications, cited in this specification are herein incorporated by reference in their entirety in the entire portion cited as if each individual publication or reference were specifically and individually indicated to be incorporated by reference herein as being fully set forth. Any patent application to which this application claims priority is also incorporated by reference herein in the manner described above for publications and references. 
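The final processing step of Embodiment 67 above can be illustrated with a small sketch: given two timestamped position readings of a dispersed super anchor, one candidate fluid characteristic is the mean flow speed between the readings. The helper name and units are assumptions for illustration only.

```python
# Illustrative only: derive a mean flow speed from two timestamped
# position readings (Embodiment 67, steps (5), (8) and (9)).
import math

def mean_flow_speed(t1, p1, t2, p2):
    """Straight-line distance between readings divided by elapsed time."""
    if t2 <= t1:
        raise ValueError("second reading must be later than the first")
    return math.dist(p1, p2) / (t2 - t1)

# Anchor seen at (0, 0, 0) m at t = 0 s and at (3, 4, 0) m at t = 10 s:
print(mean_flow_speed(0.0, (0, 0, 0), 10.0, (3, 4, 0)))  # 0.5 (m/s)
```

A real implementation would aggregate many such readings across many dispersed super anchors to estimate characteristics such as flow profile or mixing rate.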
Although some embodiments have been discussed above, other implementations and applications are also within the scope of the following claims. Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the following claims. More specifically, any embodiment described herein that those of skill would recognize could advantageously have a sub-feature of another embodiment is described as having that sub-feature.
11943331
HOMOMORPHISM FOR EC KEY PAIRS
A reason for the utilisation of Elliptic Curve (EC) encryption in this application for obscuring user secret values in the earlier stage of the proposed multi-factor multi-party (MFMP) decision protocol is due to the homomorphic property of EC's private-public key relationship [6].
x1G + x2G = (x1 + x2)G
x is a private key, G is the base point of the EC and xG the corresponding public key for x. More generically, where E(x) = xG,
E(m + n) = E(m) + E(n)
There exist homomorphic hash functions (and/or encryption functions) for which H(m + n) = H(m) + H(n). These homomorphic hash functions too would accomplish some key functionality that the EC cryptography homomorphic private-public key relationship does for the MFMP protocol. Also, while addition is being utilised in this application, the homomorphism property does not have to be for addition; i.e., the contributions of this application can be achieved if the homomorphism property of the hash/encryption function is for other operators. As an example, if the operator is multiplication, consider H(mn) = H(m) × H(n), where H( ) is a hash/encryption function. More generally, if the operator is a generic operator ⊕ such that H(m ⊕ n) = H(m) ⊕ H(n), where H( ) is a hash/encryption function, then in such a case the operator could be equally applied to the design of the MFMP protocol where homomorphism comes into play.
m-of-n Multisignature Script as Data Storage
Given that Bitcoin transactions do not have fields dedicated to the storage of metadata, the success of the MFMP protocol is dependent on finding a suitable location for recording the choices made by parties for the decision factors to which they have been assigned responsibility.
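The homomorphic relationship x1G + x2G = (x1 + x2)G described above can be checked numerically. The sketch below uses a small textbook curve, y² = x³ + 2x + 2 over F17 with base point G = (5, 1), purely for illustration; a real deployment would use a standardised curve such as secp256k1, and the function names are illustrative.

```python
# Toy demonstration of the EC key homomorphism x1*G + x2*G == (x1+x2)*G
# on the illustrative curve y^2 = x^3 + 2x + 2 over F_17, G = (5, 1).

P_MOD, A = 17, 2
G = (5, 1)
INF = None  # point at infinity

def ec_add(P, Q):
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                  # P + (-P)
    if P == Q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

x1, x2 = 7, 11
lhs = ec_add(ec_mul(x1, G), ec_mul(x2, G))  # x1*G + x2*G
rhs = ec_mul(x1 + x2, G)                    # (x1 + x2)*G
print(lhs == rhs)  # True
```

The same additive structure is what lets parties combine independently obscured secret values without revealing them.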
For the proposed design of the MFMP protocol, the votes are stored using Bitcoin's script for an m-of-n multisignature (multisig) transaction. These multisig elements were initially incorporated into the Bitcoin script so that more than one key is required to authorise a Bitcoin transaction. The m-of-n multi-signature script takes the following format:

OP_0 Sig1 Sig2 . . . NumSigs Pub1 Pub2 Pub3 Pub4 . . . NumKeys OP_CHECKMULTISIG

where the content NumSigs Pub1 Pub2 Pub3 Pub4 . . . NumKeys OP_CHECKMULTISIG would be of the output script <scriptPubKey> and the content OP_0 Sig1 Sig2 is of the input script <scriptSig>. <scriptPubKey> is the ruleset that allows for a transaction output to be utilised. <scriptSig> is the content required to satisfy <scriptPubKey>. NumSigs is the number of required signatures, NumKeys is the number of possible signatures, and PubX is the public key that corresponds to the signature SigX. While this script is intended for m-of-n signatures, the PubX elements of the redeem script may be appropriated for use as a store of metadata. As an example, a 2-of-4 multi-signature script is shown where, of the 4 data elements reserved for Public Keys, two are utilised to store metadata. The script takes the following format:

OP_0 SigA SigB OP_2 meta1 meta2 PubA PubB OP_4 OP_CHECKMULTISIG

These metadata elements could be representative of the set of encrypted votes of parties responsible for deciding on specific factors of the overall decision. As an example, a 1-of-7 multi-signature script is shown where, of the seven data elements reserved for Public Keys, five are utilised to store votes and two for genuine public keys.
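As a sketch (using textual opcode names rather than serialised script bytes), the m-of-n output script above, with some key slots repurposed for metadata, could be assembled as follows; `multisig_script` and its arguments are illustrative names, not part of any Bitcoin library:

```python
def multisig_script(num_sigs: int, slots: list[str]) -> str:
    """Build a textual m-of-n multisig <scriptPubKey>; 'slots' holds
    public keys and/or metadata occupying the key positions."""
    n = len(slots)
    # Bitcoin caps the number of keys in a standard multisig output at 15.
    assert 1 <= num_sigs <= n <= 15
    return " ".join([f"OP_{num_sigs}", *slots, f"OP_{n}", "OP_CHECKMULTISIG"])

# 2-of-4 script with two of the four key slots repurposed as metadata:
script = multisig_script(2, ["meta1", "meta2", "PubA", "PubB"])
# -> "OP_2 meta1 meta2 PubA PubB OP_4 OP_CHECKMULTISIG"
```

The same helper covers the 1-of-7 vote-storage example by passing five vote values and two genuine public keys.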
The script takes the following format:

OP_0 SigB OP_1 v1 v2 v3 v4 v5 PubA PubB OP_7 OP_CHECKMULTISIG

Elliptic Curve Finite Field Arithmetic and OPCODE It has been found that if the 200 opcode limit of the Bitcoin Script is removed and disabled opcodes are re-enabled, the Bitcoin protocol could then carry out Elliptic Curve (EC) Finite Field Arithmetic. To clarify, an Elliptic Curve is the set of points described by the equation

y² ≡ x³ + ax + b (mod p)

where 4a³ + 27b² ≢ 0 (mod p) and p is prime. For the purposes of the present application, the EC arithmetic functionality required inside Bitcoin script is that of 'point multiplication by scalar'. This is the operation

nP = P + P + . . . + P (n terms)

where n is a natural number, P is a point on the Elliptic Curve, and + is the operator for addition of points on the EC. Scalar multiplication in turn requires the specific EC group operations Point Addition and Point Doubling. Point Addition P + Q: with this operation, we compute a new point on the EC as the negation of the third intersection of the curve with the line through P and Q. This can be described as R = P + Q. Point Doubling P + P: using point addition, we can compute a point double of P. This can be described as R = P + P = 2P. More specifically, given two points, P(x1, y1) and Q(x2, y2), on the EC, P + Q = (x3, y3) where

x3 = m² − x1 − x2 mod p
y3 = m(x1 − x3) − y1 mod p

and

m = (y2 − y1)/(x2 − x1) mod p, if P ≠ Q (Point Addition)
m = (3x1² + a)/(2y1) mod p, if P = Q (Point Doubling)

We employ an EC opcode which provides for the multiplication Q = kG. This opcode is titled OP_ECPMULT. In other words, OP_ECPMULT takes an encoded Elliptic Curve Point and a number and performs Elliptic Curve Multiplication by Scalar. It outputs the result as an encoded Elliptic Curve Point. Multi-Factor Multi-Party Decision Protocol Overview The MFMP protocol facilitates a decision-making system that is reminiscent of a decision tree.
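The point-addition and point-doubling formulas above, together with double-and-add scalar multiplication (the behaviour the proposed OP_ECPMULT opcode would provide), can be sketched over a deliberately tiny curve. The curve y² = x³ + 2x + 2 mod 17 with base point G = (5, 1) is a standard textbook toy example, standing in for secp256k1:

```python
P_MOD, A = 17, 2            # toy curve y^2 = x^3 + 2x + 2 (mod 17)
G = (5, 1)                  # base point on this toy curve

def ec_add(p1, p2):
    """EC group law from the formulas above; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                  # P + (-P) = infinity
    if p1 == p2:                                     # Point Doubling
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                            # Point Addition
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication: k*pt = pt + ... + pt (k terms)."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return result

# nP really is P added n times:
five_adds = None
for _ in range(5):
    five_adds = ec_add(five_adds, G)
assert ec_mul(5, G) == five_adds
```

The same two functions also exhibit the key homomorphism used earlier: ec_add(ec_mul(x1, G), ec_mul(x2, G)) equals ec_mul(x1 + x2, G).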
For the multi-factor multi-party decision tree, each level of the tree (where the root node is level 1) represents a particular 'party-and-factor' whereas each of the branches from each node represents a choice a party makes with respect to a specific factor. FIG. 1 shows a decision tree commensurate with the MFMP protocol of the present application. Each node in the figure represents a 'party-and-factor' and each line/edge between nodes is an option related to a factor. For the decision tree depicted there are n=3 factors (and n parties) and each factor offers m=2 options. For the MFMP protocol m may vary per factor if necessary or applicable. In general each 'party-and-factor' is represented by a node Ui where i represents the factor the party is deciding on; e.g. UA represents a party making a decision based on factor A. For each factor a there is a set of options, {ka,j : j ∈ [1, ma]}, from which a party may choose, where ma is the number of options. The Ok nodes represent the possible decisions or Outcomes resulting from the combination of the choices of the parties. The options selected by the respective parties form a path in the decision tree toward an Outcome Ox. The MFMP protocol caters only to the scenario where the set of options for a factor remains the same regardless of the choice made by another party; that is, {ka,j} is independent of {kb,j}. E.g. in FIG. 1, regardless of whether kA,1 or kA,2 is chosen via UA, the options available to UB remain kB,1 and kB,2. This leads to decision trees that can only be symmetric. As opposed to other voting protocols where the majority of votes determines the winner, for the MFMP protocol it is the unique combination of votes that determines which Outcome wins. A 'vote' by one party is not a vote for a specific Outcome Oi but a vote for 'a set of possible Outcomes'.
From FIG. 1: if UA votes kA,1, this represents a choice for the Outcome set {O1, O2, O3, O4}; if UB votes kB,1, this represents a choice for the Outcome set {O1, O2, O5, O6}; if UC votes kC,2, this represents a choice for the Outcome set {O2, O4, O6, O8}. The Outcome of the secret value combination (kA,1, kB,1, kC,2) of the three parties is the intersection of the three sets of Outcomes, O2 (see FIG. 2). Protocol Details Supervisor The MFMP protocol is designed to be carried out with the use of a supervising entity termed the Supervisor. This Supervisor is expected to be the entity for whom the multi-factor, multi-party decision is being made and/or the entity given the responsibility of overseeing the execution of the protocol, for example the CEO of a corporation. This Supervisor may also be the one who provides the coins to fund the final outcome of the decision-making process. This Supervisor will have the responsibility for transaction construction, signing transaction inputs as appropriate, and creating and submitting transactions in an appropriate order, in a timely fashion, to the blockchain. The Supervisor will also explain, establish, or negotiate with each voting party the set of possible choices that may be made with respect to a decision-related factor as well as the corresponding Outcomes. Initialisation and Keys All parties agree on the standardised elliptic curve used in Bitcoin, secp256k1, and the set of related parameters including: G—a base point on the elliptic curve with order q: q×G=0; and q—a large prime number. For each party assigned to a factor in the decision-making process, that party is asked to produce for themselves a set of secret ka,i values, where a represents the decision factor and i represents one of the set of options for that decision factor. There are ma secret ka,i values in this set. The ka,i values are such that 0 < ka,i < q. Each party is expected to keep its respective ka,i values secret at this point in the protocol.
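The intersection logic in the example above can be reproduced directly; the outcome labels are taken from FIG. 1 as described:

```python
# Each vote selects a set of possible Outcomes; the decided Outcome
# is the intersection of the sets chosen by all parties.
vote_sets = {
    ("U_A", "kA,1"): {"O1", "O2", "O3", "O4"},
    ("U_B", "kB,1"): {"O1", "O2", "O5", "O6"},
    ("U_C", "kC,2"): {"O2", "O4", "O6", "O8"},
}
decided = set.intersection(*vote_sets.values())
assert decided == {"O2"}   # the combination (kA,1, kB,1, kC,2) yields O2
```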
For each ka,i value, party Ua calculates the corresponding public key value Qa,i = ka,iG. Hierarchy and Summation Each party shares all of its respective public keys, {Qa,i}, with all the other parties (including the Supervisor). Each party comes to an agreement with the Supervisor on which of their public keys corresponds to which element of the set of choices related to the party's assigned factor. With the public keys from all other parties, a party then calculates (for their individual selves) all possible combinations for summing the public keys, where there are n elements in the summation and each element of the summation is a public key of a different voting party. The summation would correspond to summing the public keys along the possible paths of the decision key hierarchy (FIG. 3). Also, as discussed above under the heading "Homomorphism for EC Key Pairs", homomorphic functions and/or operators other than addition may be used. From the example shown in FIG. 3, the possible summations of the public keys (regardless of which party is making the calculations) are

O1 = QA,1 + QB,1 + QC,1
O2 = QA,1 + QB,1 + QC,2
. . .
O7 = QA,2 + QB,2 + QC,1
O8 = QA,2 + QB,2 + QC,2

Each party is expected to keep a record of how each summation is obtained. By doing this each party will know which of its secret ka,i values (via Qa,i) was utilised in obtaining specific Oi Outcome values. For example the party UA knows that if for factor A he/she chooses or had chosen '1' (represented by kA,1 via QA,1), then the Outcomes which take into consideration his/her choice kA,1 would be O1, O2, O3, and O4. Bear in mind that due to the homomorphic properties of the EC private-public key relationship described above under the heading "Homomorphism for EC Key Pairs", for an Outcome Ox where there are n factors/parties:

Ox = QA,i + QB,j + . . . + Qn,k = kA,iG + kB,jG + . . . + kn,kG = (kA,i + kB,j + . . . + kn,k)G

The summation of the n ka,i values is labelled svx, therefore svx = kA,i + kB,j + . . .
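Enumerating the summation paths above is a small combinatorial exercise. The sketch below labels Outcomes O1 to O8 in the same order as the listing (factor A varying slowest, factor C fastest) and recovers, for a given party choice, the Outcomes that depend on it; the dictionary layout is illustrative:

```python
from itertools import product

options = {"A": [1, 2], "B": [1, 2], "C": [1, 2]}   # m = 2 options per factor

# One Outcome per path; the tuple (i, j, k) stands for QA,i + QB,j + QC,k.
outcomes = {f"O{n}": combo
            for n, combo in enumerate(product(*options.values()), start=1)}

# Outcomes that take party A's choice kA,1 into consideration:
depends_on_kA1 = {o for o, (i, j, k) in outcomes.items() if i == 1}
assert depends_on_kA1 == {"O1", "O2", "O3", "O4"}
```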
kn,k, and as such Ox = svxG. A responsibility of the Supervisor is to communicate to each voting party a new public key/address sx that is to be directly associated with each ox outcome of the MFMP instance. This amounts to the set of pairs {(ox, sx)}, where the set {ox} is bijective with the set {sx}. It will be appreciated by persons skilled in the art that in mathematics, a bijection, bijective function or one-to-one correspondence is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set, such that there are no unpaired elements. The public key sx does not necessarily belong to the Supervisor himself, but may be owned by a separate individual who is tasked with carrying out the duties with respect to the Outcome ox. For a person to 'own' a public key, or for a public key to 'belong to' a person, in this context means that the person has knowledge of the private key which corresponds to the public key. Each party, knowing its set of possible Outcomes if it votes a certain way with respect to a specific factor, will inspect the commitment transaction to ensure that the options related to the Ox value in the Escrow script are paired with the correct Sx public key—such that both public keys require signatures in order for escrowed funds or tokens to be accessed. Essentially, a party may choose not to provide its vote if one of the possible outcomes Ox of the decision-making process is not tied to the correct Sx (in accordance with the earlier agreement with the Supervisor) in the commitment transaction's escrowed output. Embodiment 1—Transactions and Choice Documentation The MFMP protocol is built upon 4 core transactions: a commitment transaction TC, a payment transaction TP, a refund transaction TR, and a voting transaction TV. The interfacing of these 4 transactions is represented in FIG. 4.
In FIG. 4, each transaction is represented by a rectangle with rounded corners. Inputs and outputs of a transaction are represented by basic rectangles. Inputs are shown on the left half of a transaction whereas outputs are shown on the right half of a transaction. The labels of the input and output rectangles are the public keys of the 'source of coins' and the 'intended recipient' respectively; e.g. sa is a source of funds for the voting transaction while at the same time being a recipient of funds of the commitment transaction. In the present case, the public key of a person may be used as the label of the person himself. The Esc output/input is the exception in that the output-input is not directed at one specific address/public key, but can be accessed by different keys based on the ability to satisfy stated criteria. Commitment Transaction The commitment transaction TC (see FIGS. 4 and 5) is the main transaction of the MFMP protocol and includes as its input(s) the coins Fu to fund the one outcome that is representative of the choices of all voting parties. These funds are assumed to be from an entity/individual titled the Supervisor S. As an example this Supervisor could be the CEO or accountant/treasurer of the company who is responsible for the finances of the company. The address from which the Supervisor provides these funds is labelled Sa. The commitment transaction is expected to have at least two outputs. The first of these is of a nominal fee that is being transferred to a second address belonging to the Supervisor. This output address is utilised as an easy means for stakeholders to link the commitment transaction to the voting transaction utilising information stored in the blockchain. The address of these funds is a second address Sb owned by the Supervisor. The second output of the commitment transaction is that of escrowing a quantity of coins—in that the 'winning' Outcome of the decision tree will receive (or be funded from) these escrowed coins.
This output is described as being 'escrowed' as the funds are not immediately exclusive to a specific output address/public key but have conditions attached to the Bitcoin script of this output that allow for these funds to be transferred to one of a set of possible addresses—where the eventual address to which the coins are granted is dependent on script-stipulated criteria being satisfied. This script is such that the set of conditions only gives access to the escrowed coins if the requisite ka,i values of the participating parties are available. More specifically, the criteria in the script for the selection of an output address for the escrowed funds are that signatures be produced for the public keys Ox and Sx, where Ox = svxG, svx = kA,i + kB,j + . . . + kn,k, and Sx is the unique output address paired with the Outcome Ox. Voting Transaction The voting transaction TV (see FIGS. 4 and 6) is responsible for the recording of the choices the parties have made with respect to their assigned decision factors. This voting transaction is 'linked' to the commitment transaction by using the Sb output of the commitment transaction as an input of the voting transaction, where Sb is a second address controlled by the Supervisor. This link serves three purposes: Linking TC and TV—having the shared address between both (voting and commitment) transactions allows stakeholders to easily retrieve the other transaction if they have found one of the aforementioned two transactions in the blockchain.
Note that the commitment transaction is placed on the blockchain before the voting transaction; therefore, while a commitment transaction may exist on the blockchain without a voting transaction also being present, it does not work the other way around. Recording of Association—this link provides a documented association between the commitment transaction and its escrowed funds and the votes being cast by the parties. Supervisor Approval—an address of the Supervisor being included as input in the voting transaction gives the Supervisor an element of supervision over the votes being cast, as the Supervisor signing the input Sb (of the voting transaction) acts as a formal representation of the Supervisor's acceptance of the votes of the parties. The voting transaction itself is expected to be constructed by the Supervisor and then passed along to the parties so that each party may add their vote to the transaction. The votes are the ka,i values. The fields reserved for public keys in an m-of-n multisig script are used to store the votes within the voting transaction. This script is used as the output script, <scriptPubKey>, of the voting transaction. It will be familiar to persons skilled in the art that <scriptPubKey> is, for Bitcoin, the ruleset that allows for a transaction output to be utilised. <scriptSig> is the content required to satisfy <scriptPubKey>. The current version of the voting transaction, which includes the m-of-n script containing the ka,i values, is now returned to the Supervisor. The Supervisor, in possession of the set of n ka,i values provided by the parties, would validate the votes by determining if (kA,i + kB,j + . . . + kn,k)G is equal to one of the Ox Outcomes from the calculated hierarchy. If the vote combination is validated, the Supervisor then adds to the m-of-n multisig script of the voting transaction's output script, <scriptPubKey>, the public key Sa.
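The Supervisor's vote-validation check described above can be sketched with the same toy curve used earlier (y² = x³ + 2x + 2 mod 17, base point (5, 1), an illustrative stand-in for secp256k1, with hypothetical secret k values): sum the revealed votes, multiply the base point by the sum, and test membership in the precomputed Outcome hierarchy:

```python
P_MOD, A, G = 17, 2, (5, 1)   # toy curve parameters (illustrative only)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    m = ((3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
         if p1 == p2 else (y2 - y1) * pow(x2 - x1, -1, P_MOD)) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    r = None
    while k:
        if k & 1: r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

# Hypothetical secret votes for a two-factor example (A and B):
secrets = {("A", 1): 3, ("A", 2): 4, ("B", 1): 5, ("B", 2): 6}
# Outcome hierarchy computed from the shared public keys Qa,i = ka,i * G:
hierarchy = {ec_add(ec_mul(secrets[("A", i)], G), ec_mul(secrets[("B", j)], G))
             for i in (1, 2) for j in (1, 2)}

def validate(votes):
    """Supervisor check: does (sum of revealed k values)*G match an Ox?"""
    return ec_mul(sum(votes), G) in hierarchy

assert validate([3, 5])       # kA,1 + kB,1 maps to a valid Outcome
assert not validate([3, 2])   # a value outside the agreed set fails
```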
This is the main public key ('address') of the Supervisor, and the Supervisor is expected to produce a signature for Sa for the <scriptSig> of any transaction that desires to spend any of the coins at the output of the voting transaction. An example of the voting transaction's output script combined with the input script of a transaction that spends this output is shown below:

OP_0 Sig Sa OP_1 kA,i kB,j kC,k Pub Sa OP_4 OP_CHECKMULTISIG

This 1-of-4 multisig of the <scriptPubKey> includes the 3 votes cast by the parties responsible for factors A, B, and C; it also includes the public key Sa. The current version of the voting transaction is then resent to each party, who, on confirming the inclusion of his/her vote in the final version of the output script, signs his/her input to the transaction, representing their approval of the voting transaction; i.e. they acknowledge that their vote has been included in the voting transaction. The Supervisor signs his input to the voting transaction, then the voting transaction is submitted to the blockchain. Alternatively, each party may communicate their vote in a secure way to the Supervisor, who then adds all the votes to the output script of the voting transaction, and then sends this transaction to the various parties for their respective input signatures. It should be noted that: the funds/coins that each party utilises/contributes for their input to the voting transaction is a minimum or nominal fee (the voting transaction is meant as an immutable record of votes more so than a transfer of funds); and that, for the current version of the Bitcoin protocol, the maximum number of public keys allowable in a multisig output is 15. For this reason, the maximum number of votes (voters) that the described m-of-n multisig script may allow is 14 (bearing in mind that one of the fifteen spaces is reserved for a public key Sa of the Supervisor).
More elaborate scripts including multiple redundant m-of-n multisig (sub)scripts may be constructed to incorporate more votes, keeping in mind however that the maximum size of the script is 10 kilobytes [7] and that transaction costs are dependent on the size of the transaction. Where applicable, the Supervisor may also have a vote based on a decision-related factor. Payment Transaction The payment transaction TP (see FIGS. 4 and 7) is the transaction that may successfully access the escrowed coins at the Esc output of the commitment transaction (see FIGS. 4 and 5). This transaction will include two signatures in its input script, <scriptSig>, the first being the signature for the public key Ox where (kA,i + kB,j + . . . + kn,k)G = Ox and each ka,i is for a different decision-related factor. The unique combination of ka,i values present in the script is the determinant of who (the owner of a public-private key pair) is able to access the Esc coins. These ka,i values would have been retrieved from the voting transaction available on the blockchain. The second signature in the payment transaction's input script is that for the public key Sx, which is a public key of the individual who has been assigned responsibility for supervising Ox. This is not necessarily the main Supervisor of the protocol (owner of Sa and Sb) but could be any other approved individual. In addition, where applicable, especially for security/control purposes, a third signature could be mandated for the input script of the payment transaction, where this third signature is that of the main Supervisor. If the input of the payment transaction is successfully signed then the escrowed coins of the commitment transaction can be moved to a recipient address, Rx, related to the Outcome Ox. The payment transaction is submitted to the blockchain after the voting transaction.
Refund Transaction The refund transaction (FIGS. 4 and 8) is a transaction that returns escrowed funds to all parties (Supervisor or otherwise) who would be contributing funds to the commitment transaction. It is seen as a fail-safe measure if the participants of the protocol have not acted as they should. Importantly, this refund transaction includes an nLockTime value which prevents it from being accepted by the blockchain until after a certain point in time (Unix time or block height) has passed. The input script of the refund transaction includes data that would satisfy one of the options available in the escrowed output script of the commitment transaction. This data may be the signatures of the main Supervisor (who committed the escrowed funds) and other stakeholders. All other stakeholders must sign the refund transaction before the commitment transaction is submitted to the blockchain by the Supervisor. This ensures that the Supervisor is able to retrieve all committed escrowed funds if things go wrong. The refund transaction is an optional transaction and can only be submitted to the blockchain if no payment transaction of the MFMP protocol instance has been submitted. Moreover, the refund transaction can only be submitted after a certain point in time. With this in mind, the nLockTime value of the refund transaction must be chosen so that, after the commitment transaction is submitted at time T, enough time is given for: votes to be obtained; the voting transaction to be committed to the blockchain; the voting transaction to be found in the blockchain; and the payment transaction to be created and submitted to the blockchain. The assigned time (span) to accomplish all this is labelled s. The nLockTime value for the refund transaction would thus be at least nLockTime = T + s. Note that the nLockTime value can be defined either in seconds or in block height.
Escrow-Related Scripts Output Script: Commitment Transaction Escrow The escrowed funds of the commitment transaction are expected to be 'protected' by a stack-based script language that allows for a set of possible ways for the escrowed funds to be accessed/claimed/used. Each of these ways of accessing escrowed funds has an associated set of criteria that must be satisfied in order for the funds to be retrieved. The escrow script can essentially be seen as representative of a set of case statements where each option in the case statements is a different way of accessing escrowed funds. Assuming that there are t options, for t−1 of these options (one criterion is related to refund signature(s)) the criteria for each case are to be (at least): an ECDSA signature is to be produced for the public address of (kA,i + kB,j + . . . + kn,k)G = Ox; AND an ECDSA signature is to be produced for the public key, Sx, of someone in a supervisory role for Ox. FIG. 9 illustrates a high-level version of a Bitcoin output script that represents the escrow case statements (featuring 8 options). (cond_i) represents the criteria/condition that needs to be satisfied and [Do_i] represents the action to perform if (cond_i) evaluates as true. Being more specific as it relates to the output script, the condition (cond_i) that may be used to represent the need for two ECDSA signatures could be that of a 2-of-2 multisig (sub)script such as:

OP_2 Pub Ox Pub Sx OP_2 OP_CHECKMULTISIG

where Pub Ox = (kA,i + kB,j + . . . + kn,k)G and Pub Sx is the public key of the entity assigned to Outcome Ox. The action that each [Do_i] element of the script is to perform, assuming (cond_i) evaluates as true, is to deposit the value 1/TRUE on the top of the stack. It will be appreciated by persons skilled in the art that Bitcoin script is a stack-based language and that a value of 'TRUE' at the top of the stack after completion of script execution means that the script has executed successfully.
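The case-statement structure of FIG. 9 can be sketched as a textual nested if-else generator. The output is a pseudo-script string, not serialised Bitcoin script, and the helper names and placeholder keys are illustrative:

```python
def cond(pub_ox: str, pub_sx: str) -> str:
    """(cond_i): 2-of-2 multisig requiring signatures for Ox and Sx."""
    return f"OP_2 {pub_ox} {pub_sx} OP_2 OP_CHECKMULTISIG"

def escrow_script(option_pairs: list[tuple[str, str]]) -> str:
    """Nested if-else over t options, one (Pub Ox, Pub Sx) pair per option."""
    first, *rest = option_pairs
    body = cond(*first)
    if not rest:
        return body
    # [Do_i] simply leaves 1/TRUE on the stack when (cond_i) succeeds.
    return f"{body} OP_IF OP_1 OP_ELSE {escrow_script(rest)} OP_ENDIF"

# Two-option escrow with hypothetical placeholder keys:
script = escrow_script([("PubO1", "PubS1"), ("PubO2", "PubS2")])
```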
Input Script: Payment Transaction Successfully accessing the coins of the escrowed output of the commitment transaction requires that when the output script, <scriptPubKey>, of the escrowed output is combined with the input script, <scriptSig>, of the payment transaction, the combined script successfully executes, i.e. produces a 1/TRUE at the top of the stack. At least one data element <data_i> must be included in the input script that would result in at least one of the if-statements of the output script evaluating to true, ultimately leading the combined input and output scripts to evaluate to true. It should be noted that <data_i> may represent multiple fields of data. As an example, <data_i> may be a combination of three values, <op_0><sigOx><sigSx>, for a 2-of-2 multisig script. It should also be noted that, depending on the option being considered, some redundant data, <bd_data_i>, may also be included in the input script where applicable. <bd_data_i>, 'bad data', is meant to represent data that, when processed by (cond_i), is guaranteed to produce 0/FALSE as the output. Similar to <data_i>, <bd_data_i> may be comprised of multiple individual data elements. In fact <bd_data_i> is expected to be composed of the same number of data elements as <data_i>. MFMP Flowchart FIG. 10 shows a general overview of the Multi-Factor Multi-Party voting protocol. An arrangement of a further embodiment of the invention is shown in FIGS. 11 to 13. Embodiment 2—Transactions and Choice Documentation The present embodiment differs from the multi-factor multi-party protocol of the first embodiment in that, whereas the first embodiment records the votes of the parties as a combined set of votes included in the m-of-n multisig output script of the voting transaction, the present embodiment asks the voting parties to reveal their votes by including their votes as an argument of the input script of the voting transaction.
By doing this, the present embodiment provides the advantages of more directly connecting a party to their vote, and of allowing a party to sign a transaction independently of knowledge of the votes of the other parties. In the first embodiment, the votes of the various parties are stored in an m-of-n multisig output script of the voting transaction. To indicate confirmation that their vote is represented and/or documented in the voting transaction, each party (e.g. parties A, B, and C of FIG. 11) will sign their corresponding input of the voting transaction. While this signature can be seen as such a confirmation, it is not documented within the blockchain which vote belongs to which participant. This information may be useful in some scenarios. At the same time, a party cannot sign the voting transaction until all parties have contributed their vote to the m-of-n multisig output script. The second embodiment introduces a variation of the multi-factor multi-party decision-making protocol of the first embodiment that addresses the restrictions described above on a party's storage and confirmation of votes in the voting transaction of such a protocol. It does this by making it mandatory for parties to reveal their votes in order to access the funds used as input for the voting transaction. In order to achieve this, it is assumed that there exists an opcode in the Bitcoin script that allows for Elliptic Curve (EC) 'point multiplication by scalar'. This in turn requires that the 200 opcode limit of Bitcoin Script be removed and that disabled opcodes be re-enabled. A description of the technologies utilised in the present embodiment, including the homomorphic properties of the private-public key relationship in Elliptic Curve (EC) cryptography, as well as the proposed opcode, is provided in detail above.
The proposed protocol of the present embodiment differs from the MFMP protocol of the first embodiment in the element of the (voting) transaction where votes are stored and, subsequently, 'how and when' one can go about signing as confirmation that one's vote is represented. As is the case with the first embodiment, the second embodiment is built upon 4 core transactions: a commitment transaction TC, a payment transaction TP, a refund transaction TR, and a voting transaction TV. The interfacing of these 4 transactions is represented in FIG. 11. Commitment Transaction The commitment transaction of the present embodiment differs from that of the first embodiment in that the commitment transaction of the present embodiment is expected to have at least four outputs, as shown in FIGS. 11 and 12. The first output is of a nominal fee that is being transferred to a second address, Sb, belonging to the Supervisor. This output address is utilised as an easy means for stakeholders to link the commitment transaction to the voting transaction utilising information stored in the blockchain. More importantly it serves as a way of giving the Supervisor jurisdiction over the voting transaction, as these outputs are also inputs to the voting transaction, inputs that will require signatures. The second output of the commitment transaction, as is the case with the first embodiment, is that of escrowing a quantity of coins—in that the 'winning' Outcome of the decision tree will receive (or be funded from) these escrowed coins. The second embodiment further differs from the first embodiment in that the other outputs (of which there will be at least 2) are to public addresses belonging to the voting parties; e.g. parties A, B, and C of FIGS. 11 and 12. These outputs are designed so that, to access the funds, the parties are required to produce their respective votes (as well as a signature for public key A).
As an example, if party A was to spend the A output of the commitment transaction, he/she would have to include the vote kA,i for the public key QA,i = kA,iG in the input script of the voting transaction, thus revealing, on the blockchain, their vote kA,i. Given that a party may vote in several ways as it relates to their assigned factor, the output script <scriptPubKey> of the commitment transaction for a voting party (e.g. party A) must include several options for accessing funds, each based on one of the options for the voting party. Using the structure of the multi-option (nested if-else) high-level script described with reference to FIG. 9, the Bitcoin script of the condition (cond_i) that allows for the payment of funds from option i is:

<basepoint G> OP_ECPMULT <QA,i> OP_EQUALVERIFY <pub A> OP_CHECKSIG

It can be seen that this script utilises the proposed opcode OP_ECPMULT discussed above under the heading "Elliptic Curve Finite Field Arithmetic and OPCODE". For (cond_i) to be satisfied, the voting party A needs to include the <data_i> element

<sig A> <kA,i>

where sig A is the signature for public key A and kA,i is party A's vote. It should be noted that party A's vote is revealed in <data_i> as part of the process of accessing party A's output of the commitment transaction. Voting Transaction The voting transaction of the second embodiment (see FIG. 11) is 'linked' to the commitment transaction through the Sb output as well as the voting-party outputs of the commitment transaction. These outputs of the commitment transaction are inputs to the voting transaction. The voting transaction itself is expected to be constructed by the Supervisor and then passed along to the parties so that each party may sign their respective inputs. It should be recalled (from the commitment transaction described above) that a party signing their input to the voting transaction requires that the party's vote, ka,i, be revealed.
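Under the same toy-curve assumptions used earlier (y² = x³ + 2x + 2 mod 17, G = (5, 1), standing in for secp256k1, with a hypothetical vote value), the effect of the <basepoint G> OP_ECPMULT <QA,i> OP_EQUALVERIFY fragment, which forces a spender to reveal a k value whose public key matches the committed QA,i, can be sketched as:

```python
P_MOD, A, G = 17, 2, (5, 1)   # toy curve (illustrative only)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    m = ((3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
         if p1 == p2 else (y2 - y1) * pow(x2 - x1, -1, P_MOD)) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    r = None
    while k:
        if k & 1: r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

def cond_i(revealed_k, Q_ai):
    """Models <G> OP_ECPMULT <QA,i> OP_EQUALVERIFY: the spend only
    proceeds if revealed_k * G equals the committed public key QA,i.
    (The trailing OP_CHECKSIG signature check is omitted here.)"""
    return ec_mul(revealed_k, G) == Q_ai

k_A1 = 7                      # hypothetical vote kA,1
Q_A1 = ec_mul(k_A1, G)        # committed in the commitment output script
assert cond_i(7, Q_A1)        # revealing the vote unlocks the output
assert not cond_i(8, Q_A1)    # any other value fails
```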
The Supervisor signs his input to the voting transaction, then the voting transaction is submitted to the blockchain. It should be noted that the coins that each party utilises/contributes for their input to the voting transaction are a minimum or nominal fee. The voting transaction is meant more as an immutable record of votes than a transfer of funds. FIG. 13 shows a general overview of the Multi-Factor Multi-Party voting protocol. Turning now to FIG. 14, there is provided an illustrative, simplified block diagram of a computing device 2600 that may be used to practice at least one embodiment of the present disclosure. In various embodiments, the computing device 2600 may be used to implement any of the systems illustrated and described above. For example, the computing device 2600 may be configured for use as a data server, a web server, a portable computing device, a personal computer, or any electronic computing device. As shown in FIG. 14, the computing device 2600 may include one or more processors with one or more levels of cache memory and a memory controller (collectively labelled 2602) that can be configured to communicate with a storage subsystem 2606 that includes main memory 2608 and persistent storage 2610. The main memory 2608 can include dynamic random-access memory (DRAM) 2618 and read-only memory (ROM) 2620 as shown. The storage subsystem 2606 and the cache memory 2602 may be used for storage of information, such as details associated with transactions and blocks as described in the present disclosure. The processor(s) 2602 may be utilized to provide the steps or functionality of any embodiment as described in the present disclosure. The processor(s) 2602 can also communicate with one or more user interface input devices 2612, one or more user interface output devices 2614, and a network interface subsystem 2616. A bus subsystem 2604 may provide a mechanism for enabling the various components and subsystems of computing device 2600 to communicate with each other as intended.
Although the bus subsystem 2604 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple busses. The network interface subsystem 2616 may provide an interface to other computing devices and networks. The network interface subsystem 2616 may serve as an interface for receiving data from, and transmitting data to, other systems from the computing device 2600. For example, the network interface subsystem 2616 may enable a data technician to connect the device to a network such that the data technician may be able to transmit data to the device and receive data from the device while in a remote location, such as a data centre. The user interface input devices 2612 may include one or more user input devices such as a keyboard; pointing devices such as an integrated mouse, trackball, touchpad, or graphics tablet; a scanner; a barcode scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and mechanisms for inputting information to the computing device 2600. The one or more user interface output devices 2614 may include a display subsystem, a printer, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD) or light emitting diode (LED) display, or a projection or other display device. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from the computing device 2600. The one or more user interface output devices 2614 may be used, for example, to present user interfaces to facilitate user interaction with applications performing processes described and variations therein, when such interaction may be appropriate.
The storage subsystem 2606 may provide a computer-readable storage medium for storing the basic programming and data constructs that may provide the functionality of at least one embodiment of the present disclosure. The applications (programs, code modules, instructions), when executed by one or more processors, may provide the functionality of one or more embodiments of the present disclosure, and may be stored in the storage subsystem 2606. These application modules or instructions may be executed by the one or more processors 2602. The storage subsystem 2606 may additionally provide a repository for storing data used in accordance with the present disclosure. For example, the main memory 2608 and cache memory 2602 can provide volatile storage for programs and data. The persistent storage 2610 can provide persistent (non-volatile) storage for programs and data and may include flash memory, one or more solid state drives, one or more magnetic hard disk drives, one or more floppy disk drives with associated removable media, one or more optical drives (e.g. CD-ROM, DVD, or Blu-ray) with associated removable media, and other like storage media. Such programs and data can include programs for carrying out the steps of one or more embodiments as described in the present disclosure, as well as data associated with transactions and blocks as described in the present disclosure. The computing device 2600 may be of various types, including a portable computer device, tablet computer, a workstation, or any other device described below. Additionally, the computing device 2600 may include another device that may be connected to the computing device 2600 through one or more ports (e.g., USB, a headphone jack, Lightning connector, etc.). The device that may be connected to the computing device 2600 may include a plurality of ports configured to accept fibre-optic connectors.
Accordingly, this device may be configured to convert optical signals to electrical signals that may be transmitted through the port connecting the device to the computing device 2600 for processing. Due to the ever-changing nature of computers and networks, the description of the computing device 2600 depicted in FIG. 14 is intended only as a specific example for purposes of illustrating the preferred embodiment of the device. Many other configurations having more or fewer components than the system depicted in FIG. 14 are possible. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the invention as defined by the appended claims. In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words "comprising" and "comprises", and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. In the present specification, "comprises" means "includes or consists of" and "comprising" means "including or consisting of". The singular reference of an element does not exclude the plural reference of such elements and vice-versa. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

REFERENCES

1 - Riemann, R., & Grumbach, S. (2017). Distributed Protocols at the Rescue for Trustworthy Online Voting. arXiv preprint arXiv:1705.04480.
2 - Chaum, D. L. (1981). Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2), 84-90.
3 - Yao, A. C. (1982, November). Protocols for secure computations. In Foundations of Computer Science, 1982 (SFCS 1982), 23rd Annual Symposium on (pp. 160-164).
4 - Gambs, S., Guerraoui, R., Harkous, H., Huc, F., & Kermarrec, A. M. (2011). Scalable and secure aggregation in distributed networks. arXiv preprint arXiv:1107.5419.
5 - Zhao, Z., & Chan, T. H. H. (2015, December). How to vote privately using bitcoin. In International Conference on Information and Communications Security (pp. 82-96). Springer International Publishing.
6 - Maxwell, G. (2015). Confidential Transactions. https://people.xiph.org/~greg/confidential_values.txt
7 - Bitcoin Script Interpreter. https://github.com/bitcoin/bitcoin/blob/fcf646c9b08e7f846d6c99314f937ace50809d7a/src/script/interpreter.cpp#L256
DETAILED DESCRIPTION

The embodiments describing methods and apparatus are valid for all technologies having the features and possibilities that are discussed in this disclosure. The embodiments described herein are used as non-limiting examples. To facilitate a greater understanding of various aspects of the herein-described technology, this Description is divided into three parts. The first part ("Part A") focuses on key aspects, and provides a complete description of these. The second part ("Part B") describes these aspects and also goes beyond them to describe additional aspects of the technology. The third part ("Part C") describes additional embodiments that are consistent with the technological aspects presented in Parts A and B.

PART A

The classical SBox architecture is built using Tower Field extensions. To build GF(2^8) using the Tower Field construction, the teachings described in references [4,10] are followed, starting with the basic binary field GF(2) and building extension fields. Let us start with the irreducible polynomial f(x) = x^2 + x + 1 over GF(2). Let W be a root of f(x) such that f(W) = 0. A normal base is constructed from the conjugates of W, [W, W^2]. Now every element k in GF(2^2) can be expressed as k = k0·W + k1·W^2, where k0 and k1 are elements in GF(2); i.e., 1 or 0. Using the same technique, the field GF(2^4) can be built from GF(2^2), and from GF(2^4) the target field GF(2^8) can finally be built. The irreducible polynomials, roots, and normal bases used are summarized in Table 3.
TABLE 3. Definition of subfields used to construct GF(2^8).

Target Field | Irreducible Poly. | Root | Coefficients in Field | Normal Base
GF(2^2) | x^2 + x + 1 | W | GF(2) | [W, W^2]
GF(2^4) | x^2 + x + W^2 | Z | GF(2^2) | [Z^2, Z^8]
GF(2^8) | x^2 + x + WZ | Y | GF(2^4) | [Y, Y^16]

Let A = a0·Y + a1·Y^16 be a general element in GF(2^8) with coefficients in GF(2^4). The inverse of A can be written as

A^(-1) = (A·A^16)^(-1)·A^16
       = ((a0·Y + a1·Y^16)·(a1·Y + a0·Y^16))^(-1)·(a1·Y + a0·Y^16)
       = ((a0^2 + a1^2)·Y^17 + a0·a1·(Y^2 + Y^32))^(-1)·(a1·Y + a0·Y^16)
       = ((a0 + a1)^2·Y^17 + a0·a1·(Y + Y^16)^2)^(-1)·(a1·Y + a0·Y^16)
       = ((a0 + a1)^2·W·Z + a0·a1)^(-1)·(a1·Y + a0·Y^16).

The element inversion in GF(2^8) can therefore be done in GF(2^4) as

T1 = (a0 + a1)
T2 = (W·Z)·T1^2
T3 = a0·a1
T4 = T2 + T3
T5 = T4^(-1)
T6 = T5·a1
T7 = T5·a0,

where the result is obtained as A^(-1) = T6·Y + T7·Y^16. In these equations several operations are utilized (addition, multiplication, scaling, and squaring) but only two of them are non-linear over GF(2): multiplication and inversion. Furthermore, the standard multiplication operation also contains some linear operations. If all the linear operations are separated from the non-linear ones and bundled together with the linear equations needed to do the base change for the AES SBox input, which is represented in polynomial base using the AES SBox irreducible polynomial x^8 + x^4 + x^3 + x + 1, one ends up with a classical architecture (herein denoted "A") of the SBox 201 as shown in FIG. 2. The Top Linear layer 203 of the circuitry performs base conversion and generates the linear parts of the inversion. The Bottom Linear layer 205 performs base back-conversion and the affine transformation of the AES SBox.

Important Aspects of the Invention: "Architecture D"

The new architecture (herein referred to as "D" for Depth) is a new architecture in which the bottom matrix found in earlier designs has been removed, and as a result the depth of the circuit has been reduced as much as possible.
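As a brief aside, the bottom layer of the tower-field construction described above is small enough to sanity-check in software. The fragment below (a hypothetical helper, not part of the patented circuits) models GF(2^2) in the normal base [W, W^2], using the identities W^2 = W + 1 and W + W^2 = 1 that follow from f(W) = 0; squaring in a normal base is a coordinate rotation, and since every nonzero element satisfies k^3 = 1, the inverse is simply the square.

```python
# GF(2^2) in the normal base [W, W^2], where W is a root of x^2 + x + 1 over
# GF(2).  An element k = k0*W + k1*W^2 is represented as the bit pair (k0, k1).

ONE = (1, 1)  # the multiplicative identity: 1 = W + W^2 in this base

def gf4_add(a, b):
    """Addition is bitwise XOR of coefficients."""
    return (a[0] ^ b[0], a[1] ^ b[1])

def gf4_mul(a, b):
    """Multiply using W*W = W^2, W^2*W^2 = W, and W*W^2 = 1 = W + W^2."""
    cross = (a[0] & b[1]) ^ (a[1] & b[0])    # cross terms contribute W + W^2
    return ((a[1] & b[1]) ^ cross,            # coefficient of W
            (a[0] & b[0]) ^ cross)            # coefficient of W^2

def gf4_square(a):
    """Squaring in a normal base is just a rotation of the coordinates."""
    return (a[1], a[0])

# Every nonzero element satisfies k^3 = 1, so k^(-1) = k^2.
for k in [(1, 0), (0, 1), (1, 1)]:
    assert gf4_mul(k, gf4_square(k)) == ONE
```

The same pattern (pair representation, XOR addition, shared cross term, squaring as rotation) repeats one level up for GF(2^4) over GF(2^2) and for GF(2^8) over GF(2^4), which is what the T1..T7 sequence above exploits.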
The idea behind this is that the bottom matrix only depends on the set of multiplications of the 4-bit signal Y and some linear combinations of the 8-bit input U. Thus, the result R can be achieved as follows:

R = Y0·M0·U ⊕ ... ⊕ Y3·M3·U

where each Mi is an 8×8 matrix representing 8 linear equations on the 8-bit input U, to be scalar multiplied by the bit Yi. Those 4×8 linear circuits can be computed as a 32-bit signal L in parallel with the circuit for the 4 bits of Y. The result R is achieved by summing up four 8-bit sub-results. Therefore, in architecture D one gets the depth 3 after the inversion step (critical path: MULL and 8XOR4 blocks, see FIG. 5), instead of the depth 5-6 in the architecture A (see FIG. 4). The new architecture D requires a few more gates, since the assembling bottom circuit needs 56 gates: 32 NAND2 + 8 XOR4. The reward is the lower depth. A block diagram of one exemplary embodiment of the architecture D is depicted in FIG. 3A, and an alternative exemplary embodiment of the architecture D is depicted in FIG. 3B. The difference between the two embodiments is that, by using further techniques described later in this document, it was possible to reduce the signal Q from 22 bits down to 18 bits. In all other respects, the two embodiments are identical, and for this reason and for the sake of efficiency, the description will focus on the embodiment illustrated in FIG. 3B, with the understanding that the discussion is equally applicable to the embodiment of FIG. 3A. As shown in FIG. 3B, an SBox circuit 300 is arranged to perform an SBox computational step when comprised in cryptographic circuitry. The exemplary SBox circuit 300 comprises a first circuit part 301, a second circuit part 303, and a third circuit part 305. The first circuit part 301 comprises digital circuitry that generates a 4-bit first output signal (Y) from an 8-bit input signal (U).
The second circuit part 303 is configured to operate in parallel with the first circuit part 301 and to generate a 32-bit second output signal (L) from the 8-bit input signal (U), wherein the 32-bit second output signal (L) consists of four 8-bit sub-results. The third circuit part 305 is configured to produce four preliminary 8-bit results (K) by scalar multiplying each of the four 8-bit sub-results (L) by a respective one bit of the 4-bit first output signal (Y), and to produce an 8-bit output signal (R) by summing the four preliminary 8-bit results (K). Further in accordance with the exemplary embodiment, the first circuit part 301 is configured to generate the 4-bit first output signal (Y) from the input signal (U) by supplying the 8-bit input U to a first linear matrix circuit 307 that generates an output Q (22 bits in the embodiment of FIG. 3A, and 18 bits in the embodiment of FIG. 3B). The output Q is supplied to multiplication/summing circuitry 309 that performs a Galois Field (GF) multiplication to generate a 4-bit signal, X, which is then supplied to an inverse Galois Field circuit 311 that performs a GF inversion to generate the 4-bit signal Y. Also in accordance with the exemplary embodiment, the second circuit part 303 is configured to generate the second output signal L from the input signal U by performing a calculation that comprises a second linear matrix operation. In order to facilitate a comparison between the conventional architecture, A, and the new architecture, D, a more detailed description and implementation size of the different blocks are shown in FIG. 4 (conventional architecture, A) and FIG. 5 (new architecture, D). Looking first at FIG. 4, it is seen that the conventional SBox architecture 400 includes a top layer 401 that is alternatively configured either as a Forward SBox (FTopA), an Inverse SBox (ITopA), or as a Combined SBox (CTopA) that enables selection between forward and inverse computations.
But a notable aspect is that the conventional SBox architecture 400 also includes a bottom layer 403 that is alternatively configured either for forward operation (FBotA), inverse operation (IBotA), or a combination that enables selection between forward and inverse operation (CBotA). The inventors have recognized that the bottom layer 403 of the conventional SBox architecture A 400 can be eliminated, and depth reduced, by redistributing many functional aspects of the bottom layer into upper layers. A result is the Architecture D 500, shown in FIG. 5. The Architecture D 500 depicted in FIG. 5 has an organization that is equivalent to that shown in FIG. 3. Notably, it includes a top layer 501 that is alternatively configured either as a Forward SBox (FTopD), an Inverse SBox (ITopD), or as a Combined SBox (CTopD) that enables selection between forward and inverse computations. In the case of the Combined SBox (CTopD), selection between forward and inverse computations is in dependence on a received forward/inverse selection signal. A notable feature is that some functional aspects of the conventional bottom layer (see, e.g., bottom layer 403 in FIG. 4) have been redistributed into upper layers of the new architecture 500, so it now has a first circuit part 503 that operates in parallel with a second circuit part 505. The first circuit part 503 generates a 4-bit first output signal (Y) from the 8-bit input signal (U), and the second circuit part 505 generates a 32-bit second output signal (L) from the 8-bit input signal (U), wherein the 32-bit second output signal (L) consists of four 8-bit sub-results. The output Y from the first circuit part 503 and the output L from the second circuit part 505 are processed together by a third circuit part 509 that is configured to produce four preliminary 8-bit results (K) by scalar multiplying each of the four 8-bit sub-results by a respective one bit of the 4-bit first output signal (Y), and to produce an 8-bit output signal (R) by summing the four preliminary 8-bit results (K).
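This parallel assembly step can be mirrored in a few lines of software. In the sketch below the matrices M0..M3 are arbitrary placeholders (the real ones encode the SBox's bottom linear layer and base conversion), but the data flow is the one just described: four GF(2) matrix products form the sub-results of L, each sub-result is gated by one bit of Y, and the four preliminary results are XORed together into R.

```python
# Sketch of the architecture-D bottom assembly R = Y0*M0*U ^ ... ^ Y3*M3*U.
# The matrices below are hypothetical placeholders, not the actual SBox matrices.

def matvec_gf2(m, u):
    """Multiply an 8x8 binary matrix (list of 8 row bitmasks) by the byte u."""
    out = 0
    for i, row in enumerate(m):
        bit = bin(row & u).count("1") & 1   # GF(2) dot product = parity
        out |= bit << i
    return out

def bottom_layer(y, ms, u):
    """Gate each sub-result M_i*U by bit Y_i (the 32 NAND2 gates), then
    XOR the four preliminary 8-bit results together (the 8 XOR4 gates)."""
    r = 0
    for i, m in enumerate(ms):
        if (y >> i) & 1:                    # scalar multiply by the bit Y_i
            r ^= matvec_gf2(m, u)           # sum = XOR accumulation
    return r

# Example run with placeholder matrices.
M0 = [1 << i for i in range(8)]             # identity
M1 = [1 << (7 - i) for i in range(8)]       # bit reversal
M2 = [1 << ((i + 1) % 8) for i in range(8)] # bit rotation
M3 = [0xFF] * 8                             # every output bit = parity of U

U, Y = 0b10110010, 0b0101                   # Y0 = 1, Y1 = 0, Y2 = 1, Y3 = 0
R = bottom_layer(Y, [M0, M1, M2, M3], U)
```

Because the four matrix products are independent of Y, they can be evaluated in parallel with the inversion path, which is exactly why the critical path shrinks to the gating and final XOR stage.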
Details of an exemplary gate configuration of the new SBox Architecture 500 are presented in the following.

Preliminaries

In the listings presented below, specifications for six circuits for the forward, inverse, and combined SBoxes in two architectures A (small) and D (fast) are described. The symbols used in the following listings are as follows, and have the indicated meanings:

#comment — a comment line
@filename — means that the code from another file called 'filename' should be included, the listing of which is then given in this section as well.
a^b — is the usual XOR gate; other gates are explicitly denoted and taken from the set of {XNOR, AND, NAND, OR, NOR, MUX, NMUX, NOT}
(a op b) — where the order of execution (the order of gate connections) is important, we specify it by brackets.

The inputs to all SBoxes are the 8 signals {U0 ... U7} and the outputs are the 8 signals {R0 ... R7}. The input and output bits are represented in Big Endian bit order. For combined SBoxes the input has the additional signals ZF and ZI, where ZF = 1 if we perform the forward SBox and ZF = 0 if inverse; the signal ZI is the complement of ZF. We have tested all the proposed circuits and verified their correctness. The circuits are divided into sub-programs that correspond, respectively, to the functions/layers shown in FIG. 5. The discussion starts with a description of the common shared components, and then for each solution the components (common or specific) for the circuits are described.
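Since the listings use only this small gate set, they can be executed directly by modelling each gate on 0/1 integers; note that the MUX/NMUX select convention shown below is an assumption on our part, as the text does not define it.

```python
# Bit-level models of the gate set used in the listings:
# {XOR (written a^b), XNOR, AND, NAND, OR, NOR, MUX, NMUX, NOT}.
# All values are 0/1 integers.
# NOTE: the MUX/NMUX select convention is assumed, not stated in the text.

def XOR(a, b):  return a ^ b
def XNOR(a, b): return 1 - (a ^ b)
def AND(a, b):  return a & b
def NAND(a, b): return 1 - (a & b)
def OR(a, b):   return a | b
def NOR(a, b):  return 1 - (a | b)
def NOT(a):     return 1 - a
def MUX(s, a, b):  return a if s else b      # assumed: output a when s = 1
def NMUX(s, a, b): return NOT(MUX(s, a, b))  # complemented multiplexer

# Example: one line of the shared component inv.a, T0 = XOR(X2, X3),
# evaluated for X2 = 1, X3 = 0.
T0 = XOR(1, 0)
```

With these helpers, a listing such as inv.a can be transcribed statement by statement and exercised over all input combinations, which is a convenient way to cross-check a hand-copied netlist.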
Shared Components

Listing for: MULX/INV/S0/S1/8XOR4: Shared components.
# File: mulx.a
Y1 = XOR(T6, NOR(T1, X3))
T20 = NAND(Q6, Q12)
T21 = NAND(Q3, Q14)
T22 = NAND(Q1, Q16)
T10 = (NOR(Q3, Q14) ^ NAND(Q0, Q7))
T11 = (NOR(Q4, Q13) ^ NAND(Q10, Q11))
T12 = (NOR(Q2, Q17) ^ NAND(Q5, Q9))
T13 = (NOR(Q8, Q15) ^ NAND(Q2, Q17))
X0 = T10 ^ (T20 ^ T22)
X1 = T11 ^ (T21 ^ T20)
X2 = T12 ^ (T21 ^ T22)
X3 = T13 ^ (T21 ^ NAND(Q4, Q13))
# File: inv.a
T0 = XOR(X2, X3)
T1 = AND(X0, X2)
T2 = XOR(X0, X1)
T3 = NAND(X0, X3)
T4 = NAND(X3, T2)
T5 = NAND(X1, X2)
T6 = NAND(X1, T0)
T7 = NAND(T5, T0)
T8 = NAND(T2, T3)
Y0 = XNOR(T1, T7)
Y2 = XNOR(T1, T8)
Y02 = Y2 ^ Y0
Y3 = XOR(T4, NOR(T1, X1))
Y13 = Y3 ^ Y1
Y00 = Y01 ^ Y23
# File: s0.a
Y02 = Y2 ^ Y0
# File: 8xor4.d
Y13 = Y3 ^ Y1
R0 = (K0 ^ K1) ^ (K2 ^ K3)
Y23 = Y3 ^ Y2
R1 = (K4 ^ K5) ^ (K6 ^ K7)
Y01 = Y1 ^ Y0
R2 = (K8 ^ K9) ^ (K10 ^ K11)
Y00 = Y02 ^ Y13
R3 = (K12 ^ K13) ^ (K14 ^ K15)
R4 = (K16 ^ K17) ^ (K18 ^ K19)
# File: s1.a
R5 = (K20 ^ K21) ^ (K22 ^ K23)
Y01 = NAND(T6, NAND(X2, T3))
R6 = (K24 ^ K25) ^ (K26 ^ K27)
Y23 = NAND(T4, NAND(X0, T5))
R7 = (K28 ^ K29) ^ (K30 ^ K31)

Listing for: MULN/MULL: Shared components.
# File:
muln.a
K20 = NAND(Y0, L20)
K15 = NAND(Y3, L15)
N0 = NAND(Y01, Q11)
K19 = NAND(Y3, L19)
N1 = NAND(Y0, Q12)
K1 = NAND(Y1, L1)
K23 = NAND(Y3, L23)
N2 = NAND(Y1, Q0)
K5 = NAND(Y1, L5)
K27 = NAND(Y3, L27)
N3 = NAND(Y23, Q17)
K9 = NAND(Y1, L9)
K31 = NAND(Y3, L31)
N4 = NAND(Y2, Q5)
K13 = NAND(Y1, L13)
N5 = NAND(Y3, Q15)
K17 = NAND(Y1, L17)
# File: mull.f
N6 = NAND(Y13, Q14)
K21 = NAND(Y1, L21)
K4 = AND(Y0, L4)
N7 = NAND(Y00, Q16)
K25 = NAND(Y1, L25)
K8 = AND(Y0, L8)
N8 = NAND(Y02, Q13)
K29 = NAND(Y1, L29)
K24 = AND(Y0, L24)
N9 = NAND(Y01, Q7)
K28 = AND(Y0, L28)
N10 = NAND(Y0, Q10)
K2 = NAND(Y2, L2)
N11 = NAND(Y1, Q6)
K6 = NAND(Y2, L6)
# File: mull.i
N12 = NAND(Y23, Q2)
K10 = NAND(Y2, L10)
K4 = NAND(Y0, L4)
N13 = NAND(Y2, Q9)
K14 = NAND(Y2, L14)
K8 = NAND(Y0, L8)
N14 = NAND(Y3, Q8)
K18 = NAND(Y2, L18)
K24 = NAND(Y0, L24)
N15 = NAND(Y13, Q3)
K22 = NAND(Y2, L22)
K28 = NAND(Y0, L28)
N16 = NAND(Y00, Q1)
K26 = NAND(Y2, L26)
N17 = NAND(Y02, Q4)
K30 = NAND(Y2, L30)
# File: mull.c
K4 = NAND(Y0, L4) ^ ZF
# File: mull.d
K3 = NAND(Y3, L3)
K8 = NAND(Y0, L8) ^ ZF
K0 = NAND(Y0, L0)
K7 = NAND(Y3, L7)
K24 = NAND(Y0, L24) ^ ZF
K12 = NAND(Y0, L12)
K11 = NAND(Y3, L11)
K28 = NAND(Y0, L28) ^ ZF
K16 = NAND(Y0, L16)

Listing for: Forward SBox with the smallest delay (fast)
Forward SBox (fast)
# Forward (fast)
Q4 = Q16 ^ U4
L13 = U3 ^ [email protected] = Z18 ^ Z160
L17 = U4 ^ [email protected] = U1 ^ U3
L29 = Z96 ^ [email protected] = Z10 ^ Q2
L14 = Q11 ^ [email protected] = U0 ^ U7
L26 = Q11 ^ [email protected] = U2 ^ U5
L30 = Q11 ^ [email protected] = Z36 ^ Q5
L7 = Q12 ^ Q1
L19 = U2 ^ Z96
L11 = Q12 ^ L15
# File: ftop.d
Q9 = Z18 ^ L19
L27 = L30 ^ L10
# Exhaustive search
Q10 = Z10 ^ Q1
Q17 = U0
Q12 = U3 ^ L28
L0 = Q10
Z18 = U1 ^ U4
Q13 = U3 ^ Q2
L4 = U6
L28 = Z18 ^ U6
L10 = Z36 ^ Q7
L20 = Q0
Q0 = U2 ^ L28
Q14 = U6 ^ L10
L24 = Q16
Z96 = U5 ^ U6
Q15 = U0 ^ Q5
L1 = Q6
Q1 = U0 ^ Z96
L8 = U3 ^ Q5
L9 = U5
Z160 = U5 ^ U7
L12 = Q16 ^ Q2
L21 = Q11
Q2 = U6 ^ Z160
L16 = U2 ^ Q4
L25 = Q13
Q11 = U2 ^ U3
L15 = U1 ^ Z96
L2 = Q9
L6 = U4 ^ Z96
L31 = Q16 ^ L15
L18 = U1
Q3 = Q11 ^ L6
L5 = Q12 ^ L31
L22 = Q15
Q16 = U0 ^ Q11
L3 = Q8
L23 = U0

Listing for: Forward SBox circuit with area/depth trade-off (optimal)
Forward SBox (optimal)
# Forward (optimal)
Z66 = U1 ^ U6
H9 = N3 ^ H7
Z114 = Q11 ^ Z66
H10 = N15 ^ [email protected] = U7 ^ Z114
H11 = N9 ^ [email protected] = Q1 ^ Z114
H12 = N12 ^ [email protected] = Q7 ^ Z114
H13 = N1 ^ [email protected] = U2 ^ Q13
H14 = N5 ^ [email protected] = Z9 ^ Z66
H15 = N7 ^ [email protected] = Q16 ^ Q13
H16 = H10 ^ H11
Q15 = U0 ^ U2
H17 = N16 ^ H8
# File: ftop.a
Q17 = Z9 ^ Z114
H18 = H6 ^ H8
# Exhaustive search
Q4 = U7
H19 = H10 ^ H12
H20 = N2 ^ H3
Z6 = U1 ^ U2
# File: fbot.a
H21 = H6 ^ H14
Q12 = Z6 ^ U3
# Probabilistic heuristic
H22 = N8 ^ H12
Q11 = U4 ^ U5
H23 = H13 ^ H15
Q0 = Q12 ^ Q11
Z9 = U0 ^ U3
H0 = N3 ^ N8
H1 = N5 ^ N6
Z80 = U4 ^ U6
H2 = XNOR(H0, H1)
Q1 = Z9 ^ Z80
H3 = N1 ^ N4
R0 = XNOR(H16, H2)
Q7 = Z6 ^ U7
H4 = N9 ^ N10
R1 = H2
Q2 = Q1 ^ Q7
H5 = N13 ^ N14
R2 = XNOR(H20, H21)
Q3 = Q1 ^ U7
H6 = N15 ^ H4
R3 = XNOR(H17, H2)
Q13 = U5 ^ Z80
H7 = N0 ^ H3
R4 = XNOR(H18, H2)
Q5 = Q12 ^ Q13
H8 = N17 ^ H5
R5 = H22 ^ H23
R6 = XNOR(H19, H9)
R7 = XNOR(H9, H18)

The following bonus circuits are included to update the world record for the smallest SBox. The new record is 108 gates with depth 24.

Listing for: Forward SBox circuit with the smallest number of gates (bonus)
# Forward (bonus)
# File: fbot.b
Q0 = Z0 ^ [email protected] = N1 ^ N5
Z1 = U1 ^ [email protected] = N4 ^ H0
Q7 = Z0 ^ [email protected] = XNOR(N2, H1)
Q2 = U2 ^ Q0
Q1 = Q7 ^ Q2
H2 = N9 ^ N15
H13 = H4 ^ H12
Q3 = U0 ^ Q7
H3 = N11 ^ N17
R4 = N1 ^ H13
Q4 = U0 ^ Q2
R6 = XNOR(H2, H3)
H14 = XNOR(N0, R7)
Q5 = U1 ^ Q4
H4 = N11 ^ N14
H15 = H9 ^ H14
Q6 = U2 ^ U3
H16 = H7 ^ H15
Q10 = Q6 ^ Q7
# File: ftop.b
R1 = XNOR(N6, H16)
Q8 = U0 ^ Q10
Z0 = U3 ^ U4
H17 = N4 ^ H14
Q9 = Q8 ^ Q2
Q17 = U1 ^
U7
H18 = N3 ^ H17
Q12 = Z0 ^ Q17
Q16 = U5 ^ Q17
R0 = H13 ^ H18
Q15 = U7 ^ Q4
H5 = N9 ^ N12
Q13 = Z0 ^ Q15
R5 = H4 ^ H5
H6 = N16 ^ [email protected] = R2 ^ [email protected] = N10 ^ [email protected] = XNOR(H6, H8)
H9 = N8 ^ H1
Q14 = Q0 ^ Q15
H10 = N13 ^ H8
Q11 = U5
R3 = H5 ^ H10
H11 = H9 ^ H10
H12 = N7 ^ H11

Listing for: Inverse SBox with the smallest delay (fast)
Inverse SBox (fast)
# Inverse (fast)
Q9 = Q10 ^ Q4
L5 = L27 ^ [email protected] = U4 ^ U5
L19 = Q14 ^ [email protected] = U2 ^ U7
L26 = Q3 ^ [email protected] = L12 ^ Z132
L13 = L19 ^ [email protected] = Q0 ^ Q11
L17 = L12 ^ [email protected] = U3 ^ Z132
L21 = XNOR(U1, Q1)
@8xor4.d
Q13 = U0 ^ L27
L25 = Q5 ^ L3
Q14 = XNOR(Q10, U2)
L14 = U3 ^ Q12
# File: itop.d
Q15 = Q14 ^ Q0
L18 = U0 ^ Q1
# Exhaustive search
Q16 = XNOR(Q8, U7)
L22 = XNOR(Q5, U6)
Q17 = Q16 ^ Q11
L8 = Q11
Q8 = XNOR(U1, U3)
L23 = Q15 ^ Z132
L28 = Q7
Q0 = Q8 ^ U5
L0 = U0 ^ L23
L9 = Q12
Q1 = U6 ^ U7
L3 = Q2 ^ Q11
L29 = Q10
Q7 = U3 ^ U4
L4 = Q6 ^ L3
L2 = U5
Q2 = Q7 ^ Q1
L16 = Q3 ^ L27
L10 = Q17
Q3 = U0 ^ U4
L1 = XNOR(U2, U3)
L30 = Q2
Q4 = Q3 ^ Q1
L6 = L1 ^ Q0
L7 = U4
Q5 = XNOR(U1, Q3)
L20 = L6 ^ Q2
L11 = Q5
Q10 = XNOR(U0, U1)
L15 = XNOR(U2, Q6)
L31 = Q9
Q6 = Q10 ^ Q7
L24 = L15 ^ U0

Listing for: Inverse SBox circuit with area/depth trade-off (optimal)
Inverse SBox (optimal)
# Inverse (optimal)
Q5 = U0 ^ Q6
H6 = N4 ^ H1
Q7 = U3 ^ Q0
H7 = N0 ^ [email protected] = Z66 ^ Z132
H8 = N15 ^ [email protected] = U5 ^ Q17
H9 = N9 ^ [email protected] = U0 ^ U5
H10 = N6 ^ [email protected] = U4 ^ Z33
H11 = H3 ^ [email protected] = Q4 ^ Q10
H12 = N7 ^ [email protected] = XNOR(U4, Z129)
H13 = N8 ^ H0
Q13 = XNOR(Z20, Z40)
H14 = N3 ^ N5
# File: itop.a
Q16 = XNOR(Z66, U7)
H15 = H5 ^ H8
# Exhaustive search
Q14 = Q13 ^ Q16
H16 = N6 ^ N7
Q15 = Z33 ^ Q3
H17 = H12 ^ H13
Z20 = U2 ^ U4
Q11 = NOT(U2)
H18 = H5 ^ H16
Z129 = U0 ^ U7
H19 = H3 ^ H10
Q0 = Z20 ^ Z129
# File: ibot.a
H20 = H10 ^ H14
Q4 = U1 ^ Z20
# Probabilistic heuristic
R0 = H7 ^ H18
Z66 = U1 ^ U6
R1 = H7 ^ H19
Q3 = U3 ^ Z66
H0 = N2 ^ N14
R2 = H2 ^ H11
Q1 = Q4 ^ Q3
H1 = N1 ^ N5
R4 = H8 ^ H9
Q2 = U6 ^ Z129
H2 = N10 ^ N11
R3 = R4 ^ H20
Z40 = U3 ^ U5
H3 = N13 ^ H0
R5 = N2 ^ H6
Z132 = U2 ^ U7
H4 = N16 ^ N17
R6 = H15 ^ H17
Q6 = Z40 ^ Z132
H5 = N1 ^ H2
R7 = H4 ^ H11

Note: the above 'NOT(U2)' in the file 'itop.a' is removable by setting Q11 = U2 and accurately negating some of the gates and variables downwards where Q11 is involved. For example, the variable Y01 should be negated as well, due to N0 = NAND(Y01, Q11); consequently, all gates involving Y01 should be negated, which leads to negating other Q variables, etc.

Listing for: Inverse SBox circuit with the smallest number of gates (bonus)
Inverse SBox (bonus)
# Inverse (bonus)
Q2 = Q8 ^ Q9
R3 = H3 ^ [email protected] = Q1 ^ Q2
H7 = N9 ^ [email protected] = Z33 ^ Q7
R5 = N10 ^ [email protected] = Q17 ^ Q15
H8 = N8 ^ [email protected] = Q3 ^ Q8
H9 = N6 ^ [email protected] = XNOR(U1, Q0)
H10 = N7 ^ [email protected] = Q15 ^ Q0
H11 = N1 ^ R0
Q13 = Q16 ^ Q14
H12 = N0 ^ H11
# File: itop.b
Q11 = NOT(U1)
R2 = H9 ^ H12
Z33 = U0 ^ U5
H13 = H8 ^ H10
Z3 = U0 ^ U1
# File: ibot.b
R1 = R2 ^ H13
Q1 = XNOR(Z3, U3)
H0 = N4 ^ N5
H14 = H5 ^ H13
Q16 = XNOR(Z33, U6)
H1 = N1 ^ N2
H15 = N13 ^ H14
Q17 = XNOR(U1, Q16)
R6 = H0 ^ H1
R7 = N12 ^ H15
Q8 = U4 ^ Q17
H2 = N13 ^ N14
H16 = N4 ^ H9
Q3 = XNOR(U2, Z33)
H3 = R6 ^ H2
H17 = R5 ^ H16
Q4 = Q1 ^ Q3
H4 = N17 ^ H3
R4 = N3 ^ H17
Q15 = XNOR(U4, U7)
R0 = N16 ^ H4
Q10 = U3 ^ Q15
H5 = N15
^ H4
Q9 = Q4 ^ Q10
H6 = N10 ^ N11

Listing for: Combined SBox circuits with the smallest delay (fast/-S)
Combined SBox (fast)
# Combined (fast)
Q15 = U0 ^ Q5
Q2 = A20 ^ [email protected] or @ctop.ds
A11 = U2 ^ U3
Q6 = XNOR(A4, A22)
@mulx.a
A12 = NMUX(ZF, A2, A11)
Q8 = XNOR(A16, A22)
@inv.a
Q13 = A6 ^ A12
A23 = XNOR(Q5, Q9)
@mull.c
Q12 = Q5 ^ Q13
L10 = XNOR(Q1, A23)
@mull.d
A13 = A5 ^ A12
L4 = Q14 ^ [email protected] = Q5 ^ A13
A24 = NMUX(ZF, Q2, L4)
Q14 = U0 ^ A13
L12 = XNOR(Q16, A24)
# File: ctop.d
L25 = XNOR(U3, A24)
# Floating multiplexers
A15 = NMUX(ZF, A0, U3)
A25 = MUX(ZF, L10, A3)
A0 = XNOR(U2, U4)
A16 = XNOR(U5, A15)
L17 = U4 ^ A25
A1 = XNOR(U1, A0)
Q3 = A4 ^ A16
A26 = MUX(ZF, A10, Q4)
A2 = XNOR(U5, U7)
L6 = Q11 ^ Q3
L14 = L24 ^ A26
A3 = U0 ^ U5
A17 = U2 ^ A10
L23 = A25 ^ A26
A4 = XNOR(U3, U6)
Q7 = XNOR(A8, A17)
A27 = MUX(ZF, A1, U5)
A5 = U2 ^ U6
A18 = NMUX(ZF, A14, A2)
L30 = Q12 ^ A27
A6 = NMUX(ZF, A4, U1)
Q1 = XNOR(A4, A18)
A28 = NMUX(ZF, L10, L5)
Q11 = A5 ^ A6
Q4 = XNOR(A16, A18)
L21 = XNOR(L14, A28)
Q16 = U0 ^ Q11
L7 = Q12 ^ Q1
L27 = XNOR(L30, A28)
A7 = U3 ^ A1
L8 = Q7 ^ L7
A29 = XNOR(U5, L4)
L24 = MUX(ZF, Q16, A7)
A19 = NMUX(ZF, U1, A4)
L29 = A28 ^ A29
A8 = NMUX(ZF, A3, U6)
A20 = XNOR(U6, A19)
L15 = A19 ^ A29
L5 = A0 ^ A8
Q9 = XNOR(A16, A20)
A30 = XNOR(A3, A10)
L11 = Q16 ^ L5
Q10 = A18 ^ A20
L18 = NMUX(ZF, A19, A30)
A9 = MUX(ZF, U2, U6)
L9 = Q0 ^ Q9
A31 = XNOR(A7, A21)
A10 = XNOR(A2, A9)
A21 = U1 ^ A2
L16 = A25 ^ A31
Q5 = A1 ^ A10
A22 = NMUX(ZF, A21, A5)
L26 = L18 ^ A31
A32 = MUX(ZF, U7, A5)
A7 = XNOR(U2, U3)
L13 = A32 ^ A7
Q8 = ZF ^ A7
A33 = NMUX(ZF, A15, U0)
A8 = XNOR(A0, A2)
L19 = XNOR(L6, A33)
L25 = NMUX(ZF, A8, U1)
A34 = NOR(ZF, U6)
A9 = U2 ^ A1
L20 = A34 ^ Q0
Q2 = NMUX(ZF, A8, A9)
A35 = XNOR(A4, A8)
Q7 = Q1 ^ Q2
L28 = XNOR(L7, A35)
Q9 = Q8 ^ Q2
A36 = NMUX(ZF, Q6, L11)
A10 = XNOR(U0, A7)
L31 = A30 ^ A36
A11 = XNOR(U5, U6)
A37 = MUX(ZF, L26, A0)
Q3 = MUX(ZF, A10, A11)
L22 = Q16 ^ A37
Q17 = U0
L0 = Q10
L1 = Q6
L2 = Q9
L3 = Q8
# File: ctop.ds
# Floating multiplexers
A0 = U3 ^ U6
A1 = XNOR(U5, U7)
Q1 = XNOR(A0, A1)
A2 = U1 ^ U4
A3 = U0 ^ U5
Q17 = MUX(ZF, U5, A3)
A4 = U7 ^ A2
Q5 = U3 ^ A4
Q15 = Q17 ^ Q5
A5 = XNOR(U0, U2)
A6 = U1 ^ A5
Q4 = Q1 ^ Q3
L15 = Q8 ^ L11
Q6 = Q8 ^ Q3
A20 = XNOR(Q17, L6)
L4 = Q11 ^ A29
A12 = XNOR(U5, A6)
L31 = XNOR(A16, A20)
L3 = MUX(ZF, A29, Q6)
L9 = MUX(ZF, A12, A1)
L0 = XNOR(A17, L31)
A30 = XNOR(L25, Q7)
L19 = MUX(ZF, U1, A12)
A21 = MUX(ZF, U4, Q12)
L30 = XNOR(L14, A30)
L13 = Q6 ^ L9
L8 = Q7 ^ A21
A31 = L22 ^ A30
A13 = U2 ^ U4
L12 = Q6 ^ A21
L21 = XNOR(L27, A31)
A14 = MUX(ZF, A7, A11)
A22 = MUX(ZF, L8, U4)
L26 = XNOR(Q3, A31)
Q16 = XNOR(A13, A14)
L28 = L25 ^ A22
A32 = MUX(ZF, A4, A3)
Q11 = Q17 ^ Q16
L23 = L12 ^ A22
L16 = XNOR(A8, A32)
A15 = OR(ZF, A0)
A23 = L25 ^ A16
A33 = MUX(ZF, A3, U0)
Q13 = A6 ^ A15
L17 = L19 ^ A23
Q10
= A32 {circumflex over ( )} A33Q12 = Q5 {circumflex over ( )} Q13L29 = Q8 {circumflex over ( )} A23A34 = MUX(ZF, Q11, A7)Q14 = Q16 {circumflex over ( )} Q13A24 = OR(ZI, A11)L1 = Q10 {circumflex over ( )} A34A16 = MUX(ZF, A12, Q12)L7 = XNOR(U4, A24)A35 = MUX(ZF, U0, A4)A17 = A8 {circumflex over ( )} Q6A25 = NMUX(ZF, U6, U7)Q0 = A19 {circumflex over ( )} A35L20 = NMUX(ZF, A17, U2)L2 = A1 {circumflex over ( )} A25A36 = MUX(ZF, L14, Q7)L6 = XNOR(U4, A17)A26 = Q1 {circumflex over ( )} A6L5 = A18 {circumflex over ( )} A36L27 = Q4 {circumflex over ( )} L20L24 = NMUX(ZF, A7, A26)A18 = Q5 {circumflex over ( )} Q2A27 = MUX(ZF, U3, Q1)L10 = MUX(ZF, Q12, A18)L22 = Q15 {circumflex over ( )} A27L14 = Q9 {circumflex over ( )} L10A28 = MUX(ZF, A25, A10)A19 = NMUX(ZF, A9, A2)L18 = XNOR(L13, A28)L11 = U0 {circumflex over ( )} A19A29 = A21 {circumflex over ( )} L18 Listing for: Combined SBox circuit with area/depth trade-off (optimal)Combined SBox (optimal)# Combined (optimal)Q7 = XNOR(Q9, A10)@ctop.aQ8 = XNOR(Q1, A10)# File: [email protected] = XNOR(U0, U2)# Probabilistic [email protected] = ZF {circumflex over ( )} A11H1 = N1 {circumflex over ( )} [email protected] = U1 {circumflex over ( )} U3H3 = N15 {circumflex over ( )} [email protected] = A1 {circumflex over ( )} A12H4 = N12 {circumflex over ( )} [email protected] = MUX(ZF, A13,H5 = N0 {circumflex over ( )} H1A11)H6 = N7 {circumflex over ( )} N8# File: ctop.aQ15 = U4 {circumflex over ( )} A14H8 = N10 {circumflex over ( )} N11# Floating multiplexersA15 = NMUX(ZF, U5,H9 = H4 {circumflex over ( )} H8A0 = XNOR(U0, U6)A0)S4 = H3 {circumflex over ( )} H9Q1 =XNOR(U1, ZF)Q5 = XNOR(A14, A15)H10 = N12 {circumflex over ( )} N14A1 = U2 {circumflex over ( )} U5Q17 = XNOR(U4, A15)H11 = N16 {circumflex over ( )} H8A2 = XNOR(U3, U4)A16 = MUX(ZF, A5, A2)S14 = N17 {circumflex over ( )} H11A3 = XNOR(U3, U7)Q16 = XNOR(A13, A16)H12 = N1 {circumflex over ( )} N2A4 = MUX(ZF, A2, U2)A17 = A3 {circumflex over ( )} A8H13 = N3 {circumflex over ( )} N5A5 
= A0 {circumflex over ( )} A1Q2 = XNOR(A10, A17)H14 = N4 {circumflex over ( )} N5Q6 = A4 {circumflex over ( )} A5A18 = U4 {circumflex over ( )} U6H15 = N9 {circumflex over ( )} N11A6 = XNOR(Q1, A1)A19 = U1 {circumflex over ( )} U2H16 = N6 {circumflex over ( )} H13A7 = NMUX(ZF, U0, A3)Q11 = Q6 {circumflex over ( )} A19H17 = H6 {circumflex over ( )} H14Q4 = A5 {circumflex over ( )} A7A20 = MUX(ZF, A18,H18 = N4 {circumflex over ( )} H5Q3 = Q1 {circumflex over ( )} Q4A19)H30 = H18 {circumflex over ( )} ZFA8 = NMUX(ZF, U6, A2)Q13 = U5 {circumflex over ( )} A20S1 = H17 {circumflex over ( )} H30A9 = Q1 {circumflex over ( )} A3A21 = XNOR(U4, Q0)H19 = H3 {circumflex over ( )} H15Q9 = A8 {circumflex over ( )} A9Q14 = XNOR(A14, A21)S6 = XNOR(H18, H19)Q10 = Q4 {circumflex over ( )} Q9A22 = XNOR(A4, A6)S11 = H17 {circumflex over ( )} H19A10 = XNOR(A4, A7)Q12 = XNOR(U6, A22)H20 = H10 {circumflex over ( )} H15S0 = XNOR(S6, H20)H26 = H12 {circumflex over ( )} H14R1 = S1S5 = H17 {circumflex over ( )} H20S7 = XNOR(S4, H26)R2 = S2H21 = N7 {circumflex over ( )} H12H27 = H4 {circumflex over ( )} H23R3 = MUX(ZF, S3, S11)H22 = H16 {circumflex over ( )} H21S2 = H30 {circumflex over ( )} H27R4 = MUX(ZF, S4, S12)S12 = H20 {circumflex over ( )} H22H28 = N8 {circumflex over ( )} H16R5 = MUX(ZF, S5, S13)S13 = S4 {circumflex over ( )} H22S3 = S14 {circumflex over ( )} H28R6 = MUX(ZF, S6, S14)H23 = N15 {circumflex over ( )} N16H29 = H21 {circumflex over ( )} H25R7 = MUX(ZF, S7, S15)H24 = N9 {circumflex over ( )} N10S15 = H23 {circumflex over ( )} H29H25 = N8 {circumflex over ( )} H24R0 = S0 Listing for: Combined SBox circuit with the smallest number of gates (bonus)Combined SBox (bonus)# Combined (bonus)A8 = MUX(ZF, Q1, A4)Q9 = U6 {circumflex over ( )} [email protected] = Q8 {circumflex over ( )} [email protected] = Q4 {circumflex over ( )} [email protected] = Q6 {circumflex over ( )} [email protected] = MUX(ZF, A0, U4)@muln.aQ12 = XNOR(U7, A9)@cbot.bQ11 = Q0 {circumflex over ( )} Q12A10 = 
MUX(ZF, A6, Q12)# File: ctop.bA11 = A2 {circumflex over ( )} A10# Floating multiplexersA12 = A4 {circumflex over ( )} A11A0 = XNOR(U3, U6)Q5 = Q0 {circumflex over ( )} A12Q15 = XNOR(U1, ZF)Q13 = Q11 {circumflex over ( )} A12A1 = U5 {circumflex over ( )} Q15Q17 = Q14 {circumflex over ( )} A12A2 = U2 {circumflex over ( )} A0Q16 = Q14 {circumflex over ( )} Q13A3 = U4 {circumflex over ( )} A1A4 = U4 {circumflex over ( )} U6# File: cbot.bA5 = MUX(ZF, A2, A4)H0 = N9 {circumflex over ( )} N10Q4 = XNOR(A3, A5)H1 = N16 {circumflex over ( )} H0Q0 = U0 {circumflex over ( )} Q4H2 = N4 {circumflex over ( )} N5Q14 = Q15 {circumflex over ( )} Q0S4 = N7 {circumflex over ( )} (N8 {circumflex over ( )} H2)A6 = XNOR(U0, U2)H4 = N0 {circumflex over ( )} N2Q3 = ZF {circumflex over ( )} A6H6 = N15 {circumflex over ( )} H1Q1 = Q4 {circumflex over ( )} Q3H7 = H4 {circumflex over ( )} (N3 {circumflex over ( )} N5)A7 = MUX(ZF, U1, Q0)H20 = H6 {circumflex over ( )} ZFQ6 = XNOR(A5, A7)S2 = H20 {circumflex over ( )} H7Q8 = Q3 {circumflex over ( )} Q6S14 = S4 {circumflex over ( )} H7H8 = N13 {circumflex over ( )} H0H9 = N12 {circumflex over ( )} H8S1 = H20 {circumflex over ( )} H9H10 = N17 {circumflex over ( )} H1H12 = H2 {circumflex over ( )} (N1 {circumflex over ( )} N2)S0 = H6 {circumflex over ( )} H12H21 = N8 {circumflex over ( )}H4S5 = N6 {circumflex over ( )} (H9 {circumflex over ( )} H21)S11 = H12 {circumflex over ( )} S5S6 = S1 {circumflex over ( )} S11H15 = N14 {circumflex over ( )} H10H16 = H8 {circumflex over ( )} H15S12 = S5 {circumflex over ( )} H16H22 = N9 {circumflex over ( )} N11S7 = XNOR(S4,H10 {circumflex over ( )} H22)H19 = XNOR(H7, S7)S3 = H16 {circumflex over ( )} H19S15 = S11 {circumflex over ( )} H19S13 = S4 {circumflex over ( )} (N12 {circumflex over ( )} H15)R0 = S0R1 = S1R2 = S2R3 = MUX(ZF, S3, S11)R4 = MUX(ZF, S4, S12)R5 = MUX(ZF, S5, S13)R6 = MUX(ZF, S6, S14)R7 = MUX(ZF, S7, S15) Advantages of the Invention Having a design for a fast SBox is very important for 
certain classes of applications, for example AES hardware support in CPUs. In such a scenario it is likely that the SBox design is placed-and-routed with extreme care for the critical path. Having a very short critical path might speed up the possible clocking frequency considerably. Also in an FPGA, where it is more difficult (than in an ASIC) to reach high clocking frequencies, it is important to have as few gates in the critical path as possible. The tables from the Background section have been extended to also include results of the new Architecture D described here. Note how the critical path has been substantially reduced compared to conventional SBox circuits.

1. Forward SBox

| Reference | Area (std. gates) | Area (tech. GE) | Critical path (std. gates) | Critical path (tech. XORs) |
| Canright [4] (most used result) | 80XO + 34ND + 6NR = 120 | 226.40 | 19XO + 3ND + 1NR = 23 | 20.796 |
| Boyar et al. [8] | 94XO + 34AD = 128 | 264.24 | 13XO + 3AD = 16 | 14.932 |
| Boyar et al. [9] (record smallest) | 81XO + 32AD = 113 | 231.29 | 21XO + 6AD = 27 | 24.864 |
| Ueno et al. [5] (record fastest) | 91XO + 38AD + 16OR (+4IV) = 145 (+4) | 286.53 | 11XO + 3AD + 1OR = 15 | 13.772 |
| Reyhani [12] | 69XO + 43ND + 7NR (+4IV) = 119 (+4) | 213.45 | 16XO + 4ND (+1IV) = 20 (+1) | 18.031 |
| Reyhani [12] | 79XO + 43ND + 7NR (+4IV) = 129 (+4) | 236.75 | 11XO + 5ND (+1IV) = 16 (+1) | 13.449 |

Results of New Architecture D

| This work (new record fastest) | 81XO + 2XN + 5AD + 42ND + 6NR = 136 | 248.04 | 8XO + 1XN + 1AD + 1ND + 1NR = 12 | 10.597 |

2. Combined SBox

| Reference | Area (std. gates) | Area (tech. GE) | Critical path (std. gates) | Critical path (tech. XORs) |

Previous results:

| Canright [4] (most used result) | 94XO + 34ND + 6NR + 16MX (+2IV) = 150 (+2) | 297.64 | 20XO + 3ND + 2OR + 5NR = 30 | 25.644 |
| Reyhani [6] (previous best result) | 81XO + 32ND + 4OR + 16NR + 16MI (+8IV) = 149 (+8) | 290.13 | 17XO + 2ND + 3OR + 6NR = 28 | 23.608 |

Our results:

| This work A (internal name: fast2) | 86XO + 22XN + 1AD + 46ND + 2OR + 6NR + 17MX + 6MI = 186 | 363.26 | 10XO + 1XN + 2ND + 1NR = 14 | 12.371 |
| This work B (internal name: fast10) | 81XO + 28XN + 1AD + 46ND + 7NR + 7MX + 12MI = 182 | 356.65 | 8XO + 2XN + 2ND + 1NR + 1MI = 14 | 12.420 |

A synthesis of the results has been performed and compared with other recent academic work. The technology process is GlobalFoundries 22 nm CSC20L [Glo19], and synthesis has been performed using Design Compiler 2017 from Synopsys in topological mode with the compile_ultra command. The flag compile_timing_high_effort was also turned on in order to force the compiler to make as fast circuits as possible. In the following graphs, the X axis is the clock period (in ps) and the Y axis is the resulting topology-estimated area (in μm²). The number of available gates was not restricted in any way, so the compiler was free to use non-standard gates, e.g., a 3-input AND-OR gate. To get the graphs in the following subsections, the clock period was started at 1200 ps (~833 MHz) and was then reduced by 20 ps until the timing constraints could not be met. It is noted that the area estimates by the compiler fluctuate heavily; this is believed to be a result of the many different strategies the compiler has for minimizing the depth. One strategy might be successful for, say, a 700 ps clock period, but a different strategy (which results in a significantly larger area) could be successful for 720 ps. There is also an element of randomness involved in the compiler's strategies. The synthesis results for the forward SBox are shown in FIG. 6, and for the combined SBox in FIG. 7.
To enable comparison, FIG. 6 shows graphs of synthesis results for the architectures of:
- Ches18_fast (601)
- Ches18_small (603)
- the "fast" circuit (605), as described herein
- the "optimal" circuit (607), as described herein
- the "bonus" circuit (609), as described herein.
(The terms "Ches18_fast" and "Ches18_small" refer to the results in [12].)
In FIG. 6: the curve 605 depicting the herein-described "fast" circuit is shown ranging from just under 650 ps up to approximately 1075 ps; the curve 603 depicting "ches18_small" ranges from approximately 780 ps up to approximately 1075 ps; and the curve 601 depicting "ches18_fast" ranges from approximately 800 ps up to approximately 1075 ps.
To enable further comparison, FIG. 7 shows graphs of synthesis results for the architectures of:
- canright (701)
- reyhani (703)
- the "fast" circuit (705), as described herein
- the "fast-s" circuit (707), as described herein
- the "optimal" circuit (709), as described herein
- the "bonus" circuit (711), as described herein.
(The term "Canright" refers to the results in [4] and the term "reyhani" refers to the results in [6].)
In FIG. 7: the curve 707 depicting "fast-s" is shown closest to the x-axis, and ranges from approximately 740 ps up to 1200 ps; the curve 705 depicting "fast" is shown second-closest to the x-axis, and ranges from approximately 740 ps up to 1200 ps; the curve 703 depicting "reyhani" ranges from approximately 900 ps up to 1200 ps; and the curve 701 depicting "canright" ranges from approximately 1000 ps up to approximately 1200 ps.
In each of FIGS. 6 and 7, the closer the curve is to the axes, the better the result in terms of area/speed trade-off. Further aspects of the herein-described technology are now described with reference to FIG. 8, which is in one respect a flowchart of actions performed by embodiments consistent with the invention. In another respect, FIG. 8 can also be considered a block diagram of means 800 for performing SBox functionality, comprising the various component parts (801, 803, 805, 807, and 809) for mapping an input to an output in accordance with SBox functionality.
Actions begin with receiving an 8-bit input, U (step 801). The 8-bit input is supplied as an input to two further actions that operate in parallel with one another. A first of these uses a first linear matrix operation, a Galois Field (GF) multiplication, and a GF inversion to generate a 4-bit first output signal (Y) from the 8-bit input signal (U) (step 803). In parallel, a calculation is performed (step 805) that comprises a second linear matrix operation to generate a 32-bit second output signal (L) from the input signal (U), wherein the 32-bit second output signal (L) consists of four 8-bit sub-results. Next, four preliminary 8-bit results (K) are produced by scalar multiplying each of the four 8-bit sub-results of the 32-bit second output signal (L) by a respective one bit of the 4-bit first output signal (Y) (step 807). Then, an 8-bit output signal (R) is produced by summing the four preliminary 8-bit results (K) (step 809).
In other aspects of embodiments consistent with the invention, as illustrated in FIG. 9, the architectures of the improved SBox as described herein can also be embodied in a number of other forms, including a computer program 901 comprising a set of program instructions configured to cause one or more processors to perform a series of actions such as those depicted in any of FIGS. 3A, 3B, 5, and 8 (for example, the computer program 901 when run on a processing device causes operation in accordance with the various embodiments described herein). Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein.
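The combining portion of FIG. 8 lends itself to a short software illustration. In the sketch below, the 4-bit signal Y (from step 803) and the 32-bit signal L (from step 805) are taken as given, since the underlying matrices and the GF inversion are design-specific; only the scalar multiplications of step 807 and the GF(2) summation producing R are shown. The function name and the sub-result ordering (low byte first) are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative model of steps 807-809 of FIG. 8: scalar-multiply each 8-bit
# sub-result of the 32-bit signal L by one bit of the 4-bit signal Y, then
# XOR-sum the four preliminary 8-bit results K into the output R.
# The sub-result ordering (low byte first) is an assumption for illustration.

def combine(y4: int, l32: int) -> int:
    r = 0
    for i in range(4):
        sub = (l32 >> (8 * i)) & 0xFF   # i-th 8-bit sub-result of L
        y_bit = (y4 >> i) & 1           # respective bit of Y
        k = sub if y_bit else 0         # scalar (AND) multiplication -> K_i
        r ^= k                          # summation over GF(2) is XOR
    return r
```

With Y = 0b1111 every sub-result contributes, so R is simply the XOR of all four bytes of L; with Y = 0 the output is 0.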
The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. Some other embodiments take the form of a computer-readable storage medium 903 (or equivalently a set of media) comprising a computer program 901 as described above. A computer-readable medium 903 may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), and the like. Still other embodiments can take the form of a computer program product 905, embodied in a computer-readable medium 903, including computer-executable instructions, such as program code, executed by computers in networked environments.
Throughout this disclosure, the blocks in the various diagrams may refer to a combination of analog and digital circuits, and/or one or more controller units, configured with software and/or firmware, e.g. stored in the storage units (database), that when executed by the one or more controller units perform as described above. One or more of these controller units, as well as any other combination of analog and digital circuits, may be included in a single application-specific integrated circuit (ASIC), or several controller units and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC). The one or more controller units may be any one of, or a combination of, a central processing unit (CPU), graphical processing unit (GPU), programmable array logic (PAL), or any other similar type of circuit or logical arrangement.

Advantages

A new architecture is disclosed herein that is faster (having shorter critical depth) than previously known results.
In this new architecture, the bottom linear matrix that is present in conventional solutions (see, e.g., FIG. 2) has been removed, with most of the calculations instead being performed in the top linear matrix, in parallel with the inversion circuit (see, e.g., FIGS. 3A and 3B). In this way the depth of the circuit is reduced by roughly 25%-30%. The resulting SBoxes are the fastest known.

Abbreviations

AES: Advanced Encryption Standard
ASIC: Application Specific Integrated Circuit
FPGA: Field Programmable Gate Array

REFERENCES

[1] NIST: Specification for the ADVANCED ENCRYPTION STANDARD (AES). Technical Report FIPS PUB 197, National Institute of Standards and Technology (NIST) (2001).
[2] Vincent Rijmen. Efficient implementation of the Rijndael S-box. Available at www.esat.kuleuven.ac.be.
[3] Akashi Satoh, Sumio Morioka, Kohji Takano, and Seiji Munetoh. A compact Rijndael hardware architecture with S-box optimization. In Colin Boyd, editor, Advances in Cryptology—ASIACRYPT 2001, 7th International Conference on the Theory and Application of Cryptology and Information Security, Gold Coast, Australia, Dec. 9-13, 2001, Proceedings, volume 2248 of Lecture Notes in Computer Science, pages 239-254. Springer, 2001.
[4] D. Canright. A Very Compact S-Box for AES. In Josyula R. Rao and Berk Sunar, editors, Cryptographic Hardware and Embedded Systems—CHES 2005, pages 441-455, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg.
[5] Rei Ueno, Naofumi Homma, Yukihiro Sugawara, Yasuyuki Nogami, and Takafumi Aoki. Highly efficient GF(2^8) inversion circuit based on redundant GF arithmetic and its application to AES design. In Tim Güneysu and Helena Handschuh, editors, Cryptographic Hardware and Embedded Systems—CHES 2015, 17th International Workshop, Saint-Malo, France, Sep. 13-16, 2015, Proceedings, volume 9293 of Lecture Notes in Computer Science, pages 63-80. Springer, 2015.
[6] Arash Reyhani-Masoleh, Mostafa Taha, and Doaa Ashmawy. New Area Record for the AES Combined S-box/Inverse S-box.
In 2018 IEEE 25th Symposium on Computer Arithmetic (ARITH), pages 145-152, 2018.
[7] Samsung Electronics Co., Ltd. STD90/MDL90 0.35 μm 3.3V CMOS Standard Cell Library for Pure Logic/MDL Products Databook, 2000.
[8] Joan Boyar and René Peralta. A New Combinational Logic Minimization Technique with Applications to Cryptology. In Paola Festa, editor, Experimental Algorithms, pages 178-189, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. URL: eprint.iacr.org.
[9] Joan Boyar and René Peralta. A new combinational logic minimization technique with applications to cryptology. In Lecture Notes in Computer Science, pages 178-189. Springer, 2010.
[10] Joan Boyar and René Peralta. A small depth-16 circuit for the AES S-Box. In Dimitris Gritzalis, Steven Furnell, and Marianthi Theoharidou, editors, SEC, volume 376 of IFIP Advances in Information and Communication Technology, pages 287-298. Springer, 2012. URL: link.springer.com.
[11] J. Boyar and R. Peralta. Patent application number 61089998 filed with the U.S. Patent and Trademark Office. A new technique for combinational circuit optimization and a new circuit for the S-Box for AES, 2009.
[12] Arash Reyhani-Masoleh, Mostafa Taha, and Doaa Ashmawy. Smashing the implementation records of AES S-Box. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2018(2):298-336, May 2018.

The embodiments disclosed herein may be implemented through a combination of analog and digital circuits, and one or more controller units, together with computer program code for performing the functions and actions of the embodiments described herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the mobile communications device. One such carrier may be in the form of a CD-ROM disc. It is however feasible with other data carriers, such as a memory stick.
The computer program code may furthermore be provided as pure program code on a server and downloaded to the mobile communications device at production, and/or during software updates. When using the word "comprise" or "comprising" it shall be interpreted as non-limiting, i.e. meaning "consist at least of". The inventive concept has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, embodiments other than the ones disclosed above are equally possible within the scope of the inventive concept, as disclosed herein.
Key aspects of the technology described herein will be appreciated by those of skill in the art as comprising various combinations and sub-combinations of technical features discussed above with respect to, and depicted in, FIGS. 3A, 3B, 5, 8, and 9 ("architecture D") in Part A of this disclosure, and in the Architectural Improvements disclosed in Section 5 of Part B of this disclosure. Other key aspects of the technology described herein will be appreciated by those of skill in the art as comprising various combinations and sub-combinations of technical features discussed above with respect to the methodology for resolving linear systems with multiplexers as discussed in Section 4 of Part B of this disclosure, which follows.

PART B OF THE DISCLOSURE

Smallest, Optimal, and Fastest AES SBoxes

1 Introduction

Efficient hardware design of the AES SBox is a well-studied subject. If you want the absolute maximum speed, you'd probably use a straightforward table-lookup implementation, which naturally leads to a large area. In many practical situations the area of the cryptographic subsystem is limited, and the designer cannot afford to implement table lookup for the 16 SBoxes involved in an AES round, especially when implemented in an FPGA. For these situations, we need to study how to implement the AES SBox with logical gates only, focusing on both area and maximum clocking speed.
The maximum clocking speed of a circuit is determined by the critical path or depth of the circuit: the worst-case time it takes to get stable output signals from a change in input signals. Another aspect when implementing AES is the need for the inverse cipher. Many modes of operation for a block cipher only use the encryption functionality, and hence there is no need for the inverse cipher. In case you need both the forward and inverse SBox, it is often beneficial to combine the two circuits. This is because the main operation of the AES SBox is inverting a field element, which naturally is its own inverse, and we expect that many gates of the two circuits can be shared.
From a mathematical perspective, the forward AES SBox is defined as the composition of a non-linear function I(g) and an affine function A(g), such that SBox(g) = A(I(g)). The non-linear function I(g) = g^(-1) is the multiplicative inverse of an element g in the finite field GF(2^8) defined by the irreducible polynomial x^8 + x^4 + x^3 + x + 1. We will assume that the reader is familiar with the AES SBox, and refer to [oST01] for a more comprehensive description.
The first step towards a small-area implementation was described by Rijmen [Rij00], where results from [IT88] were used. The idea is that the inverse calculation in GF(2^8) can be reduced to a much simpler inverse calculation in the subfield GF(2^4) by doing a base change to GF((2^4)^2). In 2001, Satoh et al [SMTM01] took this idea further and reduced the inverse calculation to the subfield GF(2^2). In 2005, Canright [Can05] built on the work of Satoh et al and investigated the importance of the representation of the subfield, testing many different isomorphisms that led to the smallest area design. This construction is perhaps the most cited and most used implementation of an area-constrained combined AES SBox.
In a series of papers, Boyar, Peralta et al presented some very interesting ideas both for the subfield inverter and for new heuristics for minimizing the area of logical circuits [BP10a, BP10b, BP12, BFP18]. They derived an inverter over GF(2^4) with depth 4 and a gate count of only 17. The construction in [BP12] is the starting point for this disclosure. After Boyar, several other papers followed focusing on low-depth implementations [JKL10, NNT+10, UHS+15]. In 2018, two papers by Reyhani et al [RMTA18a, RMTA18b] presented the best known implementations (up until now) of both the forward SBox as well as the combined SBox.
As pointed out in [RMTA18a], there is misalignment between researchers in how to present and compare implementations of combinatorial circuits. One way is to simply count the total number of standard gates in the design and find the path through the circuit that contains the critical path to determine and compare the speed. In practice it is much more complicated than that. For this paper, we present both the simple measure using only the number of gates, as well as giving a Gate Equivalent (GE) number based on the typical area required for the gate compared to the NAND gate. So, for example, the NAND gate will have GE = 1, while the XOR gate will have GE = 2.33. The relative numbers for the GE are dependent on the specific ASIC process technology used, as well as the drive strength needed from the gate. We have used the GE values obtained from Samsung's STD90/MDL90 0.35 μm 3.3V CMOS technology [Sam00]. A comprehensive discussion of our choices for circuit comparison can be found in Appendix A. Additionally, we propose to count the technological depth of a circuit normalized in terms of the delay of an XOR gate, which makes it possible to compare the depths and the speed of various academic results.
In the following disclosure, various new techniques to minimize a circuit realization of a linear matrix are presented and analyzed.
Also, a novel approach to include multiplexers in the minimization is introduced, which is relevant for the combined SBox construction. Further, a new architecture is provided in which the bottom linear matrix (present in conventional circuits) is removed in order to get as small a circuit depth as possible. These new techniques result in smaller and faster AES SBoxes than previously presented.

2 Preliminaries

We will follow the notation used in both [Can05] and [BP12] when we now construct our tower field representation. The irreducible polynomials, roots, and normal bases can be found in Table 1.

TABLE 1. Definition of the subfields used to construct GF(2^8).

| Target field | Irreducible poly. | Root | Coefficients in field | Normal base |
| GF(2^2) | x^2 + x + 1 | W | GF(2) | [W, W^2] |
| GF(2^4) | x^2 + x + W^2 | Z | GF(2^2) | [Z^2, Z^8] |
| GF(2^8) | x^2 + x + WZ | Y | GF(2^4) | [Y, Y^16] |

Following [Can05] and [BP12], we can now derive the expression for inverting a general element A = a0·Y + a1·Y^16 in GF(2^8) as

A^(-1) = (A·A^16)^(-1) · A^16
= ((a0·Y + a1·Y^16)(a1·Y + a0·Y^16))^(-1) · (a1·Y + a0·Y^16)
= ((a0^2 + a1^2)·Y^17 + a0·a1·(Y^2 + Y^32))^(-1) · (a1·Y + a0·Y^16)
= ((a0 + a1)^2·Y^17 + a0·a1·(Y + Y^16)^2)^(-1) · (a1·Y + a0·Y^16)
= ((a0 + a1)^2·WZ + a0·a1)^(-1) · (a1·Y + a0·Y^16).

And the element inversion in GF(2^8) can be done in GF(2^4) according to

T1 = a0 + a1
T2 = (WZ)·T1^2
T3 = a0·a1
T4 = T2 + T3
T5 = T4^(-1)
T6 = T5·a1
T7 = T5·a0

where the result is obtained as A^(-1) = T6·Y + T7·Y^16. In these equations we utilize several operations (addition, multiplication, scaling, and squaring), but only two of them are nonlinear over GF(2): multiplication and inversion. Furthermore, the standard multiplication operation also contains some linear operations. If we separate all the linear operations from the non-linear ones and bundle them together with the linear equations needed to do the base change for the AES SBox input, which is represented in polynomial base using the AES SBox irreducible polynomial x^8 + x^4 + x^3 + x + 1, we will end up with an architecture of the SBox according to FIG. 2.
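The tower-field steps T1–T7 compute the same value as a direct inversion in GF(2^8). As a cross-check reference model (an illustration, not the circuit described in this disclosure), the inverse can be computed directly with the AES polynomial x^8 + x^4 + x^3 + x + 1, using the fact that g^(-1) = g^254 for non-zero g in GF(2^8); the SBox convention inv(0) = 0 falls out of the same loop.

```python
# Reference model for inversion in GF(2^8) with the AES polynomial 0x11B
# (x^8 + x^4 + x^3 + x + 1).  Illustrative only; the circuits in the text
# compute this via the GF(2^4) tower decomposition instead.

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication of two field elements modulo 0x11B."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_inv(g: int) -> int:
    """g^254 in GF(2^8); equals g^-1 for g != 0, and 0 for g = 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, g)
    return r
```

A quick self-check is that gf_mul(x, gf_inv(x)) == 1 for every non-zero x in the field.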
In case we are dealing with the inverse SBox, we naturally need to apply the inverse affine transform to the top linear matrix instead of the bottom. This architecture will be our starting point, and we will now provide a set of new or enhanced algorithms for minimizing both the area and the depth of the two linear (top and bottom) matrices.

3 Circuits for Binary Linear System of Equations

In this section, we will recapitulate the known techniques for linear circuit minimization and propose a few improvements. We start by clearly stating the objectives.

3.1 Basic Problem Statement

Given a binary matrix M of size m × n and the maximum allowed depth maxD, find the circuit of depth D ≤ maxD with the minimum number of XOR gates such that it computes Y = M·X. In other words, given n bits of input X = (x0, ..., xn−1), the circuit should compute m linear combinations Y = (y0, ..., ym−1). Any circuit realization that implements a given system of linear expressions is called a solution. The above problem is NP-complete, and we have seen various heuristic approaches in the literature that help to find a sub-optimal solution. In all previous work we have studied, the assumption is that all input signals arrive at the same time, and all output signals are "ready" with delays at most maxD. In this paper we extend the problem with AIR and AOR, defined as follows.
Additional Input Requirement (AIR). The problem may be extended with an additional requirement on the input signals X, such that each input bit xi arrives with its own delay di, in terms of XOR-gate delays. The resulting depth D ≤ maxD then includes input delays. For example, if some input xi has a delay di > maxD, then no solution exists. The AIR is useful while deriving the bottom matrix as described in Section 2, since after the non-linear part, the signals entering the bottom matrix will have different delays.
Additional Output Requirement (AOR). The problem may be extended by an additional requirement on the output signals.
Each output signal yi may be required to be "ready" at depth at most ei ≤ maxD. This is useful when some output signals continue to propagate in the critical path and other signals may be computed with larger delays, but still at most maxD. The AOR is used while deriving the top matrix as described in Section 2, since when we introduce multiplexers for the combined SBox, the output signals of the top matrix will be required to have different delays.

3.2 Cancellation-Free Heuristics

A cancellation-free heuristic is an algorithm that produces linear expressions z = a ⊕ b, where both a and b are Boolean linear expressions in the input variables, and a and b share no common terms. In other words, as we add a and b we will not cancel out any term. Paar [Paa97] suggested a greedy approach to solving the Basic Problem in 3.1. That solution starts with the matrix M and considers all pairs of columns (i, j) in M. Then a metric is defined (on the pairs of columns) as the number of rows where Mr,i = Mr,j = 1, i.e. where the input variables xi and xj both occur. For the column pair with the highest metric, we form a new variable xn = xi ⊕ xj and add it to the matrix (which now is of size m × (n+1)), and set positions Mr,i = Mr,j = 0 and Mr,n+1 = 1.
Canright [Can05] also used this technique, but instead of using the metric function he performed an exhaustive search over all possible column pairs. This was possible because the target matrix in his case was only the 8 × 8 base conversion matrix. As we saw in Section 2, our bottom matrix will be considerably larger, and hence we need to take another approach. We also need to consider the AIR and the AOR.
Solving the AIR. When performing the algorithm we should keep track of the depth of the newly added XOR gates. This is done by keeping a vector D = (d0, ..., dn−1) with the current depth of all inputs and newly added signals xi. When a new signal xn = xi ⊕ xj is added, the delay of xn is trivially dn = max(di, dj) + 1.
We then also restrict the algorithm such that if dn > maxD, we are not allowed to add xn as a new input signal. The AIR is thereby solved automatically.

Solving the AOR. Similarly, when adding a new input variable xn, we need to check whether a solution is still theoretically possible. We may do that using the function CircuitDepth (detailed as Algorithm 2 in Appendix B.1). If CircuitDepth returns a delay larger than ei, we know that no solution exists, and that particular xn should be avoided.

Probabilistic heuristic approach. Since we cannot perform a full exhaustive search on the bottom matrix due to its size, we need to confine the number of pairs to keep and further evaluate. We have found that keeping the K best candidates (based on the original metric by Paar) and then randomly selecting which one to pick for the next XOR gate is a good strategy. In our simulations, this probabilistic approach gave much smaller circuits than considering only the best metric candidates. Naturally, the execution time becomes too long if K is too large, and conversely, too small a K decreases the chances of deriving a good circuit. In practice we found that K = 2 . . . 6 is a reasonable number of candidates to keep and try.

3.3 Cancellation-Allowed Heuristic

Cancellation-free approaches are sub-optimal, as shown by Boyar and Peralta in [BP10a], where they also introduced a new algorithm that allows cancellations. This was later improved by Reyhani et al. in [RMTA18a]. Next, we briefly describe the basic idea of that heuristic.

3.3.1 Basic Cancellation-Allowed Algorithm [BP10a]

Every row of M is an n-bit vector of 0s and 1s. That vector can be seen as an n-bit integer value. We define that integer value as a target point. Thus, the matrix M can be seen as the column vector of m target points. The input signals {x0, . . . , xn−1} can also be represented as integer values xi = 2^i, for i = 0 . . . n−1. Let the base set S = {s0, . . . , sn−1} = {1, 2, 4, . . .
, 2^{n−1}} initially represent the input signals. The key function of the algorithm is the distance function δi(S, yi), which returns the smallest number of XOR gates needed to compute a target point yi from the set of known points S. The algorithm keeps a vector Δ = [δ0, δ1, . . . , δm−1], which is initially set to the Hamming weight minus one of each row of M, i.e., the number of XOR gates needed without any sharing. The algorithm then proceeds by combining two base points si and sj in the base set S, XORing them together to produce a candidate point c = si ⊕ sj. The selection of si and sj is performed by an exhaustive search over all distinct pairs, and for each candidate point the sum of the distance vector, Σδi for i ∈ [0, m−1], is calculated. Note that the distance functions δi are now computed over the set S ∪ {c}. The pair which gives the smallest distance sum is picked, and S is updated: S = S ∪ {c}. In case of a tie, the algorithm picks the pair that maximizes the Euclidean norm √(Σδi²), for i ∈ [0, m−1]. If there is still a tie after this step, the authors of [BP10a] investigated different strategies and concluded that all strategies tested performed similarly; hence a simple random selection can be used. The algorithm then repeats the step of picking two new base points and calculating the distance-vector sum until the distance vector is all-zeros and the targets are all found. In the original description, there is also a notion of "preemptive" choices. A preemptive choice is a candidate point c that directly fulfils a target row in the matrix M. If such a candidate is found, it is immediately used as the new point and added to S. Reyhani et al. [RMTA18a] improved the original algorithm by directly searching for preemptive candidates in each round and adding them all to the set S before the "real" candidate is added and the distance vector recalculated.
They also improved the tie-resolution strategy: they kept all the candidates that were equally good under the Euclidean norm and recursively tried them all, keeping the one that was best in the next round. In our experiments we concluded that keeping two different candidates and recursively evaluating them gave good results. Our improvement to this algorithm is a faster evaluation of the δ values for moderately sized n; it can be found in Appendix B.2.

3.3.2 When the Maximum Depth maxD is a Required Constraint

Although the AIR problem can be solved simply by keeping the vector of delays D of all known signals alongside S, the problem of finding a short circuit with a fixed maxD is still quite difficult. Even if we limit every newly added signal in S to be of depth ≤ maxD, the resulting circuit becomes very large in terms of XOR gates, in fact much larger than what we could achieve with the cancellation-free heuristic. One idea is that if there is a tie with respect to the shortest distances δi = δ(S, yi), then we should take the c with the smallest delay. But in our simulations this did not produce better results than the cancellation-free algorithm, even when we added a randomization factor to both algorithms. We must conclude that the adoption of the additional maxD constraint in the cancellation-allowed heuristic algorithm is still an open question.

3.4 Exhaustive Search Methods

In this section we present an algorithm for an efficient exhaustive search for the minimal circuit. The overall complexity is exponential in the number of input signals and linear in the number of output signals. From our experiments we conclude that this exhaustive search algorithm can be readily applied to circuits with approximately 10 input bits.
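A helper that the heuristics above rely on is the distance δ(S, y): the fewest XOR gates needed to build a target point y from the known points S. For small instances it can be computed by enumerating, level by level, what is reachable with exactly k gates (a sketch under the integer point encoding of Section 3.3.1; the function name is illustrative):

```python
# Brute-force distance computation: delta(S, y) enumerates the points
# buildable with exactly k XOR gates by combining operands built with
# l and k-1-l gates. Only viable for small instances.

def delta(S, y, max_gates=8):
    if y in S:
        return 0
    V = [set(S)]                   # V[k]: points buildable with exactly k gates
    for k in range(1, max_gates + 1):
        new = set()
        for l in range(k):         # operands built with l and k-1-l gates
            for a in V[l]:
                for b in V[k - 1 - l]:
                    new.add(a ^ b)
        V.append(new)
        if y in new:
            return k
    return None                    # not reachable within max_gates
```

With inputs S = {1, 2, 4} (i.e., x0, x1, x2), the target 7 = x0⊕x1⊕x2 has distance 2: one gate builds an intermediate pair, a second adds the remaining input.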
3.4.1 Notations and Data Representation

Using the same integer representation of the rows of M and the input signals xi as in Section 3.3.1, we can re-phrase the basic problem statement: given the set of input points xi, we want to find the sequence of XORs on those points such that we get all the m wanted target points yi, the rows of the matrix M, with maximum delay maxD. Input and output points may have individual delays di and ei, respectively. As data structures, we can store a set of 2^n points as a normal set and/or as a bit-vector. The set makes it possible to loop through the points, while the bit-vector representation is efficient for testing set membership.

3.4.2 Basic Idea

The exhaustive search algorithm is a recursive algorithm, running through the depths, starting at depth 1 and running down to maxD. At each depth D, we try to construct new points from the previous depths, thereby constructing circuits that are of exactly depth D. When all target points are found, we check the number of required XOR gates, keeping track of the smallest solution. We will need the following sets of points:
known[maxD+1]—the set of known points at a certain depth D.
ignored[maxD+1]—the set of points that will be ignored at depth D.
targets—the set of target points.
candidates—the set of candidate points to be added to the set known at the current recursion step.
The initial set of known points is xi, for i = 0 . . . n−1, and the set of target points is yi, for i = 0 . . . m−1. AIR is solved by initially placing the input point xi in the known set at depth di. AOR is solved by putting the point yi with output delay ei on the ignore list at all depth levels larger than ei. We will now explain the steps executed at each depth of the recursion, assuming that we are currently at depth D.

Step 1—Preemptive points. Check the known[D] set to see if any pair can be combined (XORed) to give a target point not yet found.
If all targets are found, or if we have reached maxD, we return from this level of the recursion.

Step 2—Collect candidates. Form all possible pairs of points from the known[0 . . . D−1] sets, where at least one of the points is from known[D−1], and XOR each pair to derive a new point. If the derived point is in the set ignored[D] then we skip it; otherwise we add it to the candidate set.

Step 3—In this step we try to add points from the candidate set to the known list and call the algorithm recursively again. We start by trying to add 1 point and do the recursive call. If that does not solve the target points, we try to add 2 points, and so on, until all combinations (or a maximum number of combinations) of the points in the candidate set have been tried.

3.4.3 Ignored Points and Other Optimizations

In Step 2, we check each candidate against the ignored[D] set, the set of ignored points at depth D. The ignored set is constructed from a set of rules:
Intersection: A candidate point p should be ignored if for all target points wi we get (wi & p) ≠ p. This means that the point p covers too many of the input variables and is not covered by any of the points in the targets set.
Forward Propagation: We can calculate all possible points on each level, starting from the top level D=0 with n known points and going down to D=maxD. Those points that can never appear on some level d are then included in the ignored[d] set. If some target point w has a desired maximum delay ei < maxD, then that point should be ignored on the following depths, i.e., we add w to ignored[ei+1 . . .
maxD].
Sum of Direct Inputs: If any pair of input signals xi, xj gives the point p = xi ⊕ xj on level d, then all consecutive levels > d must have the point p in the ignored list.
Backward Propagation: As a last check, we can go backwards level by level, starting from d=maxD and ending at level d=1, and for each allowed (not ignored) point p on level d, check whether there is still a not-ignored pair a, b on the previous levels (one of a or b must be on level d−1) such that p = a ⊕ b. If not, then the point p should be added to the ignored[d] set.
Ignore Candidates: Dynamically add a point w to the ignored[d] set if w has been one of the candidates on previous levels < d.

3.5 Conclusions

From our simulations we can conclude the following regarding the search for the minimum solution: the top matrix (with only 8 inputs) can be solved with the exhaustive cancellation-allowed search as in Section 3.4. The bottom matrix (with 18 inputs) is too large for a direct exhaustive search; there we should start with the probabilistic cancellation-free heuristic from Section 3.2, and then switch to a full exhaustive search for the ending part, when the Hamming weights of the remaining rows become small enough. This approach was shown to give the best result.

4 System of Linear Circuits with Multiplexers

Assume we want to find a circuit for the combined SBox, where the top and the bottom linear matrices need to be multiplexed based on the SBox direction. This means that the circuit for the combined linear expressions is basically doubled in size, plus the set of multiplexers. In this section we show how to deal with multiplexed systems of linear expressions. We show that the MUX and XOR gates can be resolved in a combined way in order to achieve a very compact circuit.

4.1 Floating Multiplexers

Consider that for some signal Y we have to compute two linear expressions YF and YI for the forward and the inverse SBoxes, respectively.
Then we apply a multiplexer so that only one of the signals continues as Y. Assume further that the signals YF and YI share some part of the expression. Then it may be better to push that shared part down under the multiplexer, and the resulting solution can be simplified. For example, let YF = X0 ⊕ X1 and YI = X0 ⊕ X2; then we would normally spend 2 XOR gates and 1 multiplexer, so that Y = MUX(select, X0 ⊕ X1, X0 ⊕ X2) costs 3 gates. However, we can push the common part X0 behind the multiplexer as Y = MUX(select, X1, X2) ⊕ X0, giving a circuit with only 2 gates. In general, one can pick any linear combination Δ of the input signals and make the substitution:
Y = MUX(select, YF, YI) → MUX(select, YF ⊕ Δ, YI ⊕ Δ) ⊕ Δ,
where Δ is then added to the linear matrix as an additional target signal to compute. If that substitution leads to a shorter circuit, then we keep it. We should also choose Δ such that the overall depth is not increased. Thus, the various multiplexers will be "floating" over the depth of the circuit. Signals with Δ ≠ 0 must have their maximum depth decreased by 1.

4.1.1 Metrics and Linear Expressions to Solve

We have n input signals X1 . . . Xn and m output signals Y1 . . . Ym, where each Yi is represented in its most general form as a triple (Ai, Bi, Ci) such that Yi = Ai ⊕ MUX(select, Bi, Ci), where Ai, Bi, and Ci are linear expressions in the input signals. We are allowed to modify the above expression to (Ai ⊕ Δi, Bi ⊕ Δi, Ci ⊕ Δi) for any Δi, since the Boolean function of Yi does not change. Let ABC represent the linear matrix that collects all the rows Ai, Bi, and Ci, for i = 0 . . . m−1, such that ABC × X gives the wanted linear system, to be realized using a minimal number of gates and a given maxD. By choosing favorable values of Δi, one can shrink the total number of gates, since some of the target points of ABC may become equal to each other, and hence ABC can be reduced by at least one row.
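The push-down example and the general Δ-substitution above can be checked exhaustively over all inputs (a small self-check; mux is a helper defined here, not a library call):

```python
# Exhaustive check of the floating-multiplexer substitution:
# MUX(s, YF, YI) == MUX(s, YF^D, YI^D) ^ D for any mask D, and in
# particular the 3-gate -> 2-gate example with the shared term X0.

def mux(sel, a, b):
    return a if sel else b

for s in (0, 1):
    for x0 in (0, 1):
        for x1 in (0, 1):
            for x2 in (0, 1):
                yf, yi = x0 ^ x1, x0 ^ x2
                # push the shared X0 below the multiplexer
                assert mux(s, yf, yi) == mux(s, x1, x2) ^ x0
                # general substitution with an arbitrary delta
                for d in (0, 1):
                    assert mux(s, yf, yi) == mux(s, yf ^ d, yi ^ d) ^ d
```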
Also, some of the targets may become 0 or have Hamming weight one, i.e., equal to a corresponding input signal. These targets are also removed from the linear system, as they are trivial and cost zero gates. After the above reductions we get a system of linear expressions where all rows are distinct and have Hamming weight at least 2. As before, we interpret the rows of ABC as integers; adding (XORing) a Δi to the three rows Ai, Bi, and Ci changes those three target points, but not the resulting Yi.

Metric. Searching for a good combination of Δs requires a lot of computation, and it rapidly becomes infeasible to compute a minimal solution for each selection. Thus, we need to decide on a good metric that allows us to truncate the search space down to promising sets of Δs. We propose to adopt a metric based on a lower bound of the number of gates of a fixed system (when the Δ values are selected): the metric is the number of rows of the reduced ABC matrix, plus the minimum number of extra gates needed to complete the circuit, such as multiplexers. In the following we present several heuristic approaches to finding a good set of Δs while minimizing the metric.

4.1.2 Iterative Algorithms to Find Δs: Metric → Minimize

The techniques below only work for small n, but in our case they are readily applicable to the 8-input top matrix of the AES SBox.

Algorithm-A(k)—Select k triplets (Ai, Bi, Ci) and try to find matching Δi's that minimize the metric. If some choice results in a smaller metric, we keep that choice and continue searching with the updated ABC matrix. The algorithm runs in a loop until the metric no longer decreases. Algorithm-A(1) is quite fast, and Algorithm-A(2) also has acceptable speed; for larger k it becomes infeasible. Algorithm-A(k) works fine for a very quick analysis of the given system, but the result is quite unstable, since for random initial values of the Δi's the resulting metric fluctuates heavily.
Algorithm-B—Unlike Algorithm-A, this algorithm tries to construct a linear system of expressions starting from an empty set of knowns S and then adding new points to S one by one, until all targets of ABC are included in the set S. While testing whether a new candidate c should be added to S, we loop through all (Ai, Bi, Ci) and for each one try to find a Δi that minimizes the overall metric. This heuristic algorithm is a lot more stable and gives quite good results. But the smallest possible metric does not guarantee that the final solution will have the smallest number of gates, and the number of non-target intermediates needed is unclear. Thus, it is a good idea to collect a number of promising systems whose metric is the smallest possible, and then try to find the smallest solution amongst them. We investigate this in the next section.

4.2 New Generic Heuristic Technique for Linear Systems with Floating Multiplexers

If we generalize the idea of floating multiplexers, letting them float even higher up in the circuit and also be shared more widely, we can achieve better results. In this section we propose a generic heuristic algorithm that finds a nearly optimal circuit for such systems.

4.2.1 Problem Statement

We are given an n-bit input signal X, binary matrices M^F and M^I of size m×n, binary vectors A^F, A^I (of length n) and B^F, B^I (of length m), and vectors of delays D^X (length n) and D^Y (length m). We want to find the smallest and shortest solution that computes the m-bit output signal Y:
YF = M^F · (X ⊕ A^F)
YI = M^I · (X ⊕ A^I)
Y = MUX(ZF; YF ⊕ B^F; YI ⊕ B^I),
where each input signal Xi has an input arrival delay D^X_i and each output signal Yj must have total delay at most D^Y_j. A* and B* are constant masking vectors for the input and output signals, respectively (NOT gates). ZF is the mux selector: when ZF = 1 we pick the first (YF, "forward") output, otherwise the second (YI, "inverse") output. We also assume there is a complement signal ZI = ZF ⊕ 1 available as an input control signal.
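The specification above can be captured by a bit-level reference evaluator (a sketch; function names and argument order are illustrative, and matrices are plain lists of 0/1 rows):

```python
# Reference (unoptimized) evaluator for the problem statement:
# Y = MUX(ZF; M_F(X ^ A_F) ^ B_F; M_I(X ^ A_I) ^ B_I),
# with all arithmetic over GF(2).

def matvec_gf2(M, x):
    """Matrix-vector product over GF(2)."""
    return [sum(m * v for m, v in zip(row, x)) % 2 for row in M]

def reference_output(MF, MI, AF, AI, BF, BI, X, ZF):
    if ZF:
        yf = matvec_gf2(MF, [a ^ b for a, b in zip(X, AF)])
        return [y ^ b for y, b in zip(yf, BF)]
    yi = matvec_gf2(MI, [a ^ b for a, b in zip(X, AI)])
    return [y ^ b for y, b in zip(yi, BI)]
```

Such an evaluator is useful as an oracle when testing that an optimized multiplexed circuit still computes the specified Y for every X and both selector values.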
4.2.2 Preliminaries

Similar to our previous notation, we define a "point" to be a tuple of a point value (.p) and a delay (.d):
point := {.p = [f (1 bit) | F (n bits) | i (1 bit) | I (n bits)], .d = Delay}
which is translated into the 1-bit circuit signal
signal := MUX(ZF; F·X ⊕ f; I·X ⊕ i)
with total output delay point.d. I.e., F and I are linear combinations of the n-bit input X, and f and i are negate bits applied to the result when the selector is "forward" or "inverse", respectively. The n input points are then represented as:
input point Xk := {.p = [A^F_k | 2^k | A^I_k | 2^k], .d = D^X_k}, for k = 0, . . . , n−1,
and the m target points are:
target point Yk := {.p = [B^F_k | Y^F_k | B^I_k | Y^I_k], .d = D^Y_k}, for k = 0, . . . , m−1.
We should also include the following 4 trivial points in the set of inputs:
signal ZF := {.p = [1|0|0|0], .d = 0}
signal 0 := {.p = [0|0|0|0], .d = 0}
signal ZI := {.p = [0|0|1|0], .d = 0}
signal 1 := {.p = [1|0|1|0], .d = 0}
Given any two (ordered) points v and w, there are at most 6 possible new points that can be generated based on the following gates:
MUX(v; w) := {.p = [v.f | v.F | w.i | w.I], .d = Dnew}
NMUX(v; w) := {.p = [v.f⊕1 | v.F | w.i⊕1 | w.I], .d = Dnew}
MUX(w; v) := {.p = [w.f | w.F | v.i | v.I], .d = Dnew}
NMUX(w; v) := {.p = [w.f⊕1 | w.F | v.i⊕1 | v.I], .d = Dnew}
XOR(v; w) := {.p = [w.f⊕v.f | w.F⊕v.F | w.i⊕v.i | w.I⊕v.I], .d = Dnew}
NXOR(v; w) := {.p = [w.f⊕v.f⊕1 | w.F⊕v.F | w.i⊕v.i⊕1 | w.I⊕v.I], .d = Dnew}
where Dnew = max{v.d, w.d} + 1. Note that the inclusion of the 4 trivial points is important, since it lets us limit the number of gate types considered. For example, a NOT gate is then implemented as XOR(v; 1), an AND gate with ZF can be implemented as MUX(v; 0), an OR gate with ZI is MUX(v; 1), etc.

4.2.3 The Algorithm

We start with the set S of input points (of size n+4) and place all target points into the set T. At each step, we compute the set of candidate points C generated by applying the above 6 gates to any two points from the set S. Naturally, C should only contain unique points and exclude those already in S.
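The point encoding and the gate rules above can be sanity-checked by comparing the rule applied to encoded points against the gate applied to the underlying 1-bit signals (a sketch with an illustrative tuple encoding (f, F, i, I); only the XOR and MUX rules are shown):

```python
import itertools
import random

# Check that the XOR and MUX gate rules on encoded points match the
# gates applied to the underlying 1-bit signals, for all inputs X and
# both selector values.

N = 3                                  # small n for an exhaustive check

def parity(x):
    return bin(x).count("1") & 1

def signal(p, X, ZF):
    """1-bit signal of point p = (f, F, i, I) for input X, selector ZF."""
    f, F, i, I = p
    return (parity(F & X) ^ f) if ZF else (parity(I & X) ^ i)

def xor_gate(v, w):
    return (v[0] ^ w[0], v[1] ^ w[1], v[2] ^ w[2], v[3] ^ w[3])

def mux_gate(v, w):                    # MUX(v; w): forward from v, inverse from w
    return (v[0], v[1], w[2], w[3])

random.seed(1)
points = [(f, F, i, I) for f in (0, 1) for F in range(1 << N)
          for i in (0, 1) for I in range(1 << N)]
for _ in range(200):
    v, w = random.choice(points), random.choice(points)
    for X, ZF in itertools.product(range(1 << N), (0, 1)):
        assert signal(xor_gate(v, w), X, ZF) == signal(v, X, ZF) ^ signal(w, X, ZF)
    for X in range(1 << N):
        # MUX output: v's forward branch when ZF=1, w's inverse branch when ZF=0
        assert signal(mux_gate(v, w), X, 1) == signal(v, X, 1)
        assert signal(mux_gate(v, w), X, 0) == signal(w, X, 0)
```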
We try adding one candidate point from C to S and compute the distances from S to each of the target points in T. Thereafter we compare metrics to decide which candidate point will be included in S at this step, and start over by calculating the possible candidates. The algorithm stops when the overall distance metric δ is 0. The metric consists of several values. The distance δ(S, ti) is the minimum number of basic gates (the above 6) required to get the target point ti from the points in S, such that the delay is at most D^Y_i. Subsection 4.2.5 discusses how to compute δ(S, ti). The applied metrics, in order of importance, are:
γ = (|S| − n − 4) + Σ_{i=0..m−1} δ(S, ti) → min
δ = Σ_{i=0..m−1} (δ(S, ti) − (δ(S, ti) == 1)) → max
τ = delay of the most recent candidate point from C added to S → min
ν² = Σ_{i=0..m−1} (δ(S, ti) − (δ(S, ti) == 1))² → max
The metric γ is the projected number of gates in case there will be no more shared gates; that metric we should definitely minimize. In case several candidates give the same value, we look at the second metric, δ: the sum of distances excluding distances where only 1 gate is needed. Given the smallest γ, we maximize δ; the larger δ is, the more opportunities there are to shrink γ. We exclude distances of 1 because of the preemptive step that we describe below. When we accept candidates to S one by one as described above, the metrics δ and γ behave similarly, but they become distinct when, in the next subsection, we introduce a search tree where the sizes |S| may differ. τ selects the candidate having the minimum depth in case the above two metrics show the same values for two candidates; if there are no maximum depth constraints on the target points, this metric is not needed. ν is the Euclidean norm excluding the preemptive points (similar to δ). This is the last decision metric, since it is not a very good predictor: a worse value may give a better result and vice versa.
However, if there are two candidates with equal metrics γ, δ, and τ, then the ordering of the two candidates may be done based on ν. An alternative approach in case of tied candidates is to choose one of them randomly.

Preemptive points. If some distance δ(S, ti) = 1, then we accept the point ti into S immediately, without the search through the candidates C. The inclusion of this step in the algorithm forces us to exclude such points from the metrics δ and ν. In [RMTA18a] preemptive points were included in the metric, and we believe this was not correct. E.g., when two distance vectors {1, 2, . . . } and {0, 2, . . . } have the same projected gates, they represent a totally equal situation in terms of possible shared gates, and they should result in the same δ: the point at distance 1 will be included immediately (preemptive point), so it gives no advantage over the second choice, where we have distance 0. Thus, distance 1 should not be counted in δ and ν, but it is accounted for in the projected gates γ instead.

4.2.4 Search Tree

In addition to the above algorithm, we propose to have a search tree where each node is a set S with metrics. Children of such a node are also nodes, where S′ is derived from S by adding one of the candidate points c ∈ C. Thus, every path from the root node to a leaf represents a sequence of candidate points accepted into the root set S. If, at some point, a leaf has metric δ = 0, then that leaf represents a possible solution path. We keep a number of children nodes (in our experiments we kept at least the 20-50 best children) whose metrics are the best (they may even have different projected gates γ). We also define the maximum depth TD of the search tree (in our experiments we tried TD = 1 . . . 20). When the tree at depth TD is constructed, we examine the leaves and see where we get the best metric over all leaves at all different branches. Tracking back to the root, we then choose to keep the top branch that leads to the best leaf (or leaves).
Other top branches from the root are removed. We then advance the root node to the first child of the selected branch and try to extend the tree's depth again from the remaining leaves, thus keeping the search tree at a constant depth TD. If, at every depth of the tree, each leaf were extended with an additional 20-50 sub-branches, the number of leaves would increase exponentially. However, we can apply a truncation algorithm to the leaves before extending the tree to the next depth. We simply keep no more than a certain number of promising leaves to be expanded to the next depth, and the other, less promising leaves we remove from the tree (in our experiments the truncation level was up to 400 leaves overall for the whole tree). This truncation makes it possible to select the best top branch of the root node by "looking further", basically at any depth TD. Notably, the complexity does not depend on the depth TD, but on the truncation level.

Truncation strategy. In brief, we keep the leaves with the best metrics, but try to distribute nearly equal leaves among different branches, so that we keep as many diverse solution paths as possible.

4.2.5 Computation of δ(S, ti)

The "heart", and the critical part, of the algorithm is the computation of the distances δ(S, ti) given a fresh S. There are many candidates to test at each step, and there are many branches to track, so we need to make this core algorithm as fast as possible. Note that the length of a point (.p as an integer) is 2n+2 bits, plus the delay value. We ignore the delay (.d) value when doing Boolean operations over two points. Let the number of possible points be N = 2^{2n+2}. Let Vk[ ] be a vector of N cells, where each cell Vk[p] corresponds to a (2n+2)-bit point p represented as an integer index, and the value stored in the cell is the minimum delay p.d with which that point can be derived from S using exactly k gates.
Set the initial vector V0 as: for all p, V0[p] = p.d if p ∈ S, and V0[p] = ∞ otherwise. Thereafter, the vector Vk+1 can be derived from the previously derived vectors V0 . . . Vk by applying the allowed 6 gates to points from some level 0 ≤ l ≤ k (Vl) and the level k−l (Vk−l), thus resulting in a total of l + (k−l) + 1 = k+1 gates. After a new Vk+1 is derived, we simply check whether it contains new distance values for the targets from T, and we repeat the procedure until all distances δ(S, ti) for all ti in T are found. A high-level description of the algorithm is given in Algorithm 1, and in Appendix B.3 we provide a more detailed description alongside multiple computational tricks.

Algorithm 1: Computing δ(S, ti)
1: function Distances(S, T, maxδ) → Δ = {δi}, i = 0, . . . , m−1
2:   Init δi = ∞ for i = 0, . . . , m−1
3:   Init ∀p : V0[p] = p.d if p ∈ S, otherwise ∞
4:   Init k = 0
5:   while true do
6:     while ∃i : δi = ∞ and Vk[ti] ≤ ti.d do
7:       δi = k
8:     if ∀i : δi < ∞ then return OK
9:     if k ≥ maxδ then return FAIL
10:    k ← k + 1
11:    Init ∀p : Vk[p] = ∞
12:    for all a, b do
13:      for p in {MUX(a,b), NMUX(a,b), MUX(b,a), NMUX(b,a), XOR(a,b), NXOR(a,b)} do
14:        for l ← ⌊k/2⌋ to k−1 do
15:          d ← max(Vk−l−1[a], Vl[b]) + 1
16:          Vk[p] ← min(Vk[p], d)

5 Architectural Improvements

Most known AES SBox architectures look quite similar, consisting of the Top and Bottom linear parts and the middle non-linear part, as previously described in Section 2. In this section, we take that classic design and propose a number of improvements, along with a completely new architecture that focuses on low-depth solutions.

5.1 Two SBox Architectures: Area and Depth

Referring to FIG. 2, architecture A (Area) is the classical one that implements designs based on tower and composite fields. It starts with the 8-bit input signal U to the Top linear matrix, which produces a 22-bit signal Q (as in [BP12]).
We managed to reduce the number of needed Q-signals to 18, and refactored the multiplication and linear summation block Mul-Sum to 24 gates and depth 3 (see Appendix D.2 for equations). The output from the Mul-Sum block is the 4-bit signal X, which is the input to the inversion over GF(2^4). The output from the inversion, Y, is non-linearly mixed with the Q signals derived in the top matrix, producing the 18-bit signal N. The final step is the Bottom linear matrix, which takes the 18-bit N and linearly derives the output 8-bit signal R. The top and bottom matrices incorporate the SBox's affine transformation, which depends on the direction.

The new architecture D (Depth) (shown in FIGS. 3A and 3B) is an architecture where we tried to remove the bottom matrix, thereby shrinking the depth of the circuit as much as possible. The idea behind it is that the bottom matrix only depends on the set of multiplications of the 4-bit signal Y and some linear combinations of the 8-bit input U. Thus, the result R can be achieved as follows:
R = Y0·M0·U ⊕ . . . ⊕ Y3·M3·U
where each Mi is an 8×8 matrix representing 8 linear equations on the 8-bit input U, to be scalar multiplied by the bit Yi. Those 4×8 linear circuits can be computed as a 32-bit signal L in parallel with the circuit for the 4 bits of Y. The result R is achieved by summing up four 8-bit sub-results. Therefore, in architecture D we get depth 3 after the inversion step (critical path: MULL and 8XOR4 blocks), instead of depth 5-6 in architecture A. The new architecture D requires a few more gates, since the assembling bottom circuit needs 56 gates: 32 NAND2 + 8 XOR4. The reward is the lower depth. A more detailed sketch of the two architectures is given in FIGS. 4 and 5, respectively, including the components of the designs, the delays, and the number of gates.

5.2 Six Different Scenarios for MULN

In the MULN block, where the 18-bit N-signals are computed, we need as input the 18-bit Q-signals and the inversion result Y.
But we also need the following additional linear combinations of Y: Y02 = Y0 ⊕ Y2, Y13 = Y1 ⊕ Y3, Y23 = Y2 ⊕ Y3, Y01 = Y0 ⊕ Y1, Y00 = Y01 ⊕ Y23; these correspond to the signals M41-M45 in [BP12]. Thus, the Y vector is actually extended to 9 bits, and the delays of the N bits become different, depending on which of the Yi is used in the multiplication. For example, in the worst case, the delay of Y00 is +2 compared to the delay of Y1. Thus, the resulting signals N will have different output delays. However, it is possible to compute these 5 additional Ys in parallel with the base signals Y0, . . . , Y3. This costs some extra gates, but then the +2 delay can shrink down to +1 or +0. In general, one can consider the following 6 scenarios:
S0. We compute only the base signals Y0 . . . Y3, and the remaining {Y01, Y23, Y02, Y13, Y00} we compute with XORs as above. The delay is +2, but this has the smallest number of gates.
S1. Compute {Y01, Y23} in parallel; the delay is +1.
S2. Compute {Y02, Y13} in parallel; the delay is +1.
S3. Compute {Y00} in parallel; the delay is +1.
S4. Compute {Y01, Y23, Y02, Y13} in parallel; the delay is +1.
S5. Compute {Y01, Y23, Y02, Y13, Y00} in parallel; the delay is +0, as there are no signals left to compute afterwards.
In the next subsection we show how to find Boolean expressions for the above scenarios.

5.3 INV: Inversion over GF(2^4)

The inversion formulae are as follows:
Y0 = X1X2X3 ⊕ X0X2 ⊕ X1X2 ⊕ X2 ⊕ X3
Y1 = X0X2X3 ⊕ X0X2 ⊕ X1X2 ⊕ X1X3 ⊕ X3
Y2 = X0X1X3 ⊕ X0X2 ⊕ X0X3 ⊕ X0 ⊕ X1
Y3 = X0X1X2 ⊕ X0X2 ⊕ X0X3 ⊕ X1X3 ⊕ X1
In [BP12] a circuit of depth 4 and 17 XORs was found, but we would like to shrink the depth even further by utilizing a wider range of standard gates. Thus, we have considered each expression independently, using a general depth-3 expression:
Yi = ((Xa op1 Xb) op5 (Xc op2 Xd)) op7 ((Xe op3 Xf) op6 (Xg op4 Xh)),
where Xa, . . . , Xh are terms from {0, 1, X0, X1, X2, X3} and op1, . . . , op7 are operators from the set of standard gates {AND, OR, XOR, NAND, NOR, XNOR}.
Note that the above expression does not need to contain all terms; for example, the expression AND(x, x) is simply x. The exhaustive search can be organized as follows. Let us have an object Term which consists of a truth table TT of length 16 bits, based on the 4 bits X0 . . . X3, and a Boolean function associated with the term. We start with the initial set of available terms T(0) = {0, 1, X0, . . . , X3}, and then construct an expression for a chosen Yi iteratively. Assume that at some step k we have the set of available terms T(k); then the next set of terms and associated expressions can be obtained as:
T(k+1) = {T(k), T(k) operator T(k)},
keeping only unique terms. At some step k we will get one or more terms whose TTs are equal to the target TTs (the Yi's). Since we may actually get multiple Boolean functions for each Yi, we should select only the "best" functions following these criteria: there are no NOT gates (for better sharing capabilities), the number of gates that can be shared between the 4 expressions for Y0 . . . Y3 is maximal, and the area/depth in terms of GE is small. Using this technique, we have found a depth-3, 15-gate solution for the inversion. The equations are given below, where we also provide depth-3 solutions for the additional 5 signals {Y01, Y23, Y02, Y13, Y00} such that they can also share many gates in the scenarios S0-S5 mentioned above.
Y0 = (X0 and X2) xnor ((X1 nand X2) nand (X2 xor X3))
Y1 = ((X2 xor X3) nand X1) xor ((X0 and X2) nor X3)
Y2 = (X0 and X2) xnor ((X0 xor X1) nand (X0 nand X3))
Y3 = ((X0 xor X1) nand X3) xor ((X0 and X2) nor X1)
Y01 = ((X2 xor X3) nand X1) nand ((X0 nand X3) nand X2)
Y23 = ((X0 xor X1) nand X3) nand ((X1 nand X2) nand X0)
Y13 = ((X0 and X2) nor (X1 xnor X3)) xor ((X0 nand X3) xor (X1 nand X2))
Y02 = ((X2 xor X3) nand (X1 nand X2)) xor ((X0 xor X1) nand (X0 nand X3))
Y00 = ((X0 and X2) nand (X1 xnor X3)) and ((X0 nor X2) nor (X1 and X3))
When implementing the above circuits for the scenarios S0-S5, and sharing the gates in the best possible way, we obtained the following results:

TABLE 2: INV block for the scenarios S0-S5.
                      S0     S1     S2     S3     S4     S5
Std. area (gates)     15     19     20     21     24     29
Std. depth (gates)    3      3      3      3      3      3
Tech. area (GE)       23.31  27.31  33.63  31.30  37.63  43.29
Tech. depth (XORs)    2.42   2.42   2.54   2.42   2.54   2.54

In our optimal circuits we have used scenario S1, as it showed the best results with respect to area and depth. For the fast and bonus circuits, we used S0, as it has the smallest area.
5.4 Alpha-Beta Approach for the Top and Bottom Linear Matrices
We solve the top matrices with exhaustive search and the bottom matrices with various heuristic techniques. The way those matrices look naturally influences the final number of gates in the solution. Here we present a simple method to try different top and bottom matrices for the best solution. Assume that the SBox is a black box that performs the function (excluding the final addition of the constant):
SBox(x) = x^(-1) · A8×8,
where x^(-1) is the inverse element in the Rijndael field GF(2^8), and the matrix A8×8 is the affine transformation. In any field of characteristic 2, squaring, square root, and multiplication by a constant are linear functions; thus for a non-trivial choice (α, β) we have:
Z(x) = (α · x^(2^β))^(-1), SBox(x) = α · Z(x)^(2^β) · A8×8.
If the initial Top and Bottom matrices for the forward and inverse SBoxes were TF, BF, TI, BI, respectively, then one can choose any α = 1 . . . 255 and β = 0 . . .
7, and change the matrices as follows:
T′F = TF · E · Cα · Pβ · E
B′F = E · A · Pβ^(-1) · Cα · A^(-1) · E · BF
T′I = TI · E · A · Cα · Pβ · A^(-1) · E
B′I = E · Pβ^(-1) · Cα · E · BI,
where:
E is the 8×8 matrix that switches bit endianness (in our circuits the input and output bits are in Big Endian);
A is the 8×8 matrix that performs the SBox's affine transformation;
Cα is the 8×8 matrix that multiplies a field element by the selected constant α;
Pβ is the 8×8 matrix that raises an element of the Rijndael field to the power of 2^β;
TF/TI are the original (without modifications) 18×8 matrices for the top linear transformation of the Forward/Inverse SBoxes, respectively;
BF/BI are the original (without modifications) 8×18 matrices for the bottom linear transformation of the Forward/Inverse SBoxes, respectively.
There are 2040 choices for the (α, β) pair, and each choice gives new linear matrices. It is easy to test all of them and find the best combination that gives the smallest SBox circuit. We have applied this idea to both the forward and the inverse SBox, for both architectures A and D.
5.4.1 Alpha-Beta Approach for the Combined SBox
For the combined SBoxes we can apply the alpha-beta approach to the forward and the inverse parts independently. This means that we have 2040^2 = 4,161,600 variants of linear matrices to test. We have focused on architecture D, since there is no bottom matrix and thus we can do a more extensive search. We searched through all those 4 million variants and applied the heuristic algorithm from Section 4.1 as a quick analysis method to select a set of around 4000 promising cases. We then applied the algorithm given in Section 4.2 to find a solution with floating multiplexers. In our case we have an n = 8 bit input, and thus each point is encoded with 18 bits, and the complexity of calculating the distance δ(S, ti) is quadratic over N = 2^18 points. In the search we used a search tree with maximum depth TD ≤ 20 and a truncation level of 400 leaves.
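The building blocks Cα and Pβ above can be made concrete with a short Python sketch (ours, for illustration): multiplication by a constant α and the Frobenius map x → x^(2^β) are GF(2)-linear in the Rijndael field, so each is fully described by an 8×8 bit-matrix whose columns are the images of the basis elements 2^i. The values alpha = 0x53 and beta = 3 are arbitrary example choices, not taken from the text.

```python
# Rijndael field GF(2^8), modulus x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow2k(x, beta):
    """x^(2^beta): beta repeated squarings (the Frobenius map, GF(2)-linear)."""
    for _ in range(beta):
        x = gf_mul(x, x)
    return x

def matrix_of(f):
    """8x8 bit-matrix of a GF(2)-linear map f, stored as columns f(2^i)."""
    return [f(1 << i) for i in range(8)]

def apply(M, x):
    """Multiply the bit-matrix M by the bit-vector x."""
    y = 0
    for i in range(8):
        if (x >> i) & 1:
            y ^= M[i]
    return y

alpha, beta = 0x53, 3   # one of the 255 * 8 = 2040 possible (alpha, beta) pairs
C_alpha = matrix_of(lambda v: gf_mul(alpha, v))
P_beta = matrix_of(lambda v: gf_pow2k(v, beta))
```

Folding such matrices into the top/bottom linear layers, as in the equations above, changes the matrices but not the overall SBox function, which is exactly why all 2040 variants can be enumerated and synthesized.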
5.5 Q-Zero Points for the Top Matrices in the Combined SBox
The combined SBox needs to have a number of multiplexers for both the top and bottom linear transformations. The top linear matrices of the forward and inverse SBoxes produce 18-bit signals QF and QI, respectively. This means that normally we should apply 18 multiplexers to switch between the Q-signals, based on the selected direction signal ZF. However, there is a set of Q-triples that are always zero, and these are valid for both the forward and the inverse SBoxes:
Listing 1: Q-Zero points, valid for both the Forward and the Inverse SBoxes.
0 = Q0 + Q11 + Q12
0 = Q1 + Q3 + Q4
0 = Q4 + Q9 + Q10
0 = Q6 + Q7 + Q10
0 = Q0 + Q14 + Q15
0 = Q2 + Q8 + Q9
0 = Q5 + Q12 + Q13
0 = Q11 + Q16 + Q17
0 = Q1 + Q2 + Q7
0 = Q3 + Q6 + Q8
0 = Q5 + Q15 + Q17
0 = Q13 + Q14 + Q16
We can use this knowledge to compute only a subset of the Q-signals, then multiplex them, and compute the remaining Q-signals using the above zero-points. For example, we can compute 10 bits, {Q1, Q6, Q8, Q9, Q10, Q12, Q14, Q15, Q16, Q17}, for both the forward and inverse SBoxes, then apply 10 multiplexers, and after that derive the remaining 8 signals as:
Q0 = Q14 + Q15, Q2 = Q8 + Q9, Q3 = Q6 + Q8, Q4 = Q9 + Q10, Q5 = Q15 + Q17, Q7 = Q6 + Q10, Q11 = Q16 + Q17, Q13 = Q14 + Q16.
Thus, we save 8 multiplexers, and 2×8 rows of the combined top matrix can be removed. However, we should make sure that excluding the above 8 bits from the multiplexers does not increase the depth of the top linear transformation. Note that some of the signals (Q1 and Q12 in the example above) do not participate in computing the final 8 signals, hence those two signals are allowed to have +1 extra depth. That is, before applying the circuit solver algorithm, we should carefully derive individual maximum delays for each output signal as constraints for the search algorithm. We have tested 59535 variants of utilizing Q-Zero points, in addition to the above mentioned 2040 choices of (α, β).
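The consistency of the example above can be checked mechanically: every derived signal must come from one of the listed zero-triples, and its two operands must be among the 10 multiplexed signals. A small Python check (our sketch, using only the data from Listing 1 and the example):

```python
# The 12 zero-triples from Listing 1.
zero_triples = {frozenset(t) for t in [
    (0, 11, 12), (1, 3, 4),  (4, 9, 10),  (6, 7, 10),
    (0, 14, 15), (2, 8, 9),  (5, 12, 13), (11, 16, 17),
    (1, 2, 7),   (3, 6, 8),  (5, 15, 17), (13, 14, 16),
]}
# The 10 signals that go through multiplexers, and the 8 derived ones.
computed = {1, 6, 8, 9, 10, 12, 14, 15, 16, 17}
derived = {0: (14, 15), 2: (8, 9), 3: (6, 8), 4: (9, 10),
           5: (15, 17), 7: (6, 10), 11: (16, 17), 13: (14, 16)}

for q, (a, b) in derived.items():
    # Q_q = Q_a xor Q_b must be justified by a listed zero-triple,
    # and may only use signals that were actually multiplexed.
    assert frozenset((q, a, b)) in zero_triples
    assert a in computed and b in computed

# Together, the two sets cover all 18 Q-signals exactly once.
assert computed | set(derived) == set(range(18))
print("saved multiplexers:", 18 - len(computed))
```

Running the check confirms that 8 multiplexers are saved, matching the count in the text.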
We applied this method only to architecture A.
6 Results and Comparisons
In this section we present our best solutions for the AES SBoxes, both forward and combined. The stand-alone inverse SBox is perhaps not as widely used, and those results can be found in Appendix C. We compare our area and depth using the techniques described in Appendix A, and where possible we have recalculated the corresponding GE for other academic results for easier comparison. We present three different solutions for each SBox (forward, inverse, and combined): "fast", "optimal", and "bonus". The fast one is the solution with the lowest critical path, the optimal one is a well-balanced trade-off between area and speed, and the bonus solution is given to establish a new record in terms of the smallest number of gates. Exact circuit expressions for all the derived solutions can be found in Appendix D, where we also indicate which algorithm was used in deriving each solution.
6.0.1 Synthesis Results
We have performed a synthesis of the results and compared with other recent academic work. The technology process is GlobalFoundries 22 nm CSC20L [Glo19], and we have synthesized using Design Compiler 2017 from Synopsys in topological mode with the compile_ultra command. We also turned on the flag compile_timing_high_effort to force the compiler to make the circuits as fast as possible. In the resulting graphs, the X axis is the clock period (in ps) and the Y axis is the resulting topology-estimated area (in μm^2). We have not restricted the available gates in any way, so the compiler is free to use non-standard gates, e.g., a 3-input AND-OR gate. To get the graphs in the following subsections, we started at a 1200 ps clock period (~833 MHz) and reduced the clock period by 20 ps until the timing constraints could not be met. We note that the area estimated by the compiler fluctuates heavily, and we believe that this is a result of the many different strategies the compiler has to minimize the depth.
One strategy might be successful for, say, a 700 ps clock period, but a different strategy (which results in a significantly larger area) could be successful for 720 ps. There is also an element of randomness involved in the compiler's strategies.

TABLE 3: Forward SBox: Comparison of the results.
Forward SBoxes: Previous Results
Canright [Can05] (most famous design): area 80XO + 34ND + 6NR = 120 gates (226.40 GE); critical path 19XO + 3ND + 1NR = 23 gates (20.796 XORs).
Boyar et al [BP12] (our starting point): area 94XO + 34AD = 128 gates (264.24 GE); critical path 13XO + 3AD = 16 gates (14.932 XORs).
Boyar et al [Boy] (record smallest): area 81XO + 32AD = 113 gates (231.29 GE); critical path 21XO + 6AD = 27 gates (24.864 XORs).
Ueno et al [UHS+15] (record fastest): area 91XO + 38AD + 16OR (+4IV) = 145(+4) gates (286.53 GE); critical path 11XO + 3AD + 1OR = 15 gates (13.772 XORs).
Reyhani-Light [RMTA18a] (at CHES 2018): area 69XO + 43ND + 7NR (+4IV) = 119(+4) gates (213.45 GE); critical path 16XO + 4ND (+1IV) = 20(+1) gates (18.031 XORs).
Reyhani-Fast [RMTA18a] (at CHES 2018): area 79XO + 43ND + 7NR (+4IV) = 129(+4) gates (236.75 GE); critical path 11XO + 5ND (+1IV) = 16(+1) gates (13.449 XORs).
Forward SBoxes: Our Results
Forward (fast) (fast with depth 12): area 81XO + 2XN + 5AD + 42ND + 6NR = 136 gates (248.04 GE); critical path 8XO + 1XN + 1AD + 1ND + 1NR = 12 gates (10.597 XORs).
Forward (optimal) (area/speed optimal): area 65XO + 9XN + 1AD + 36ND + 6NR = 117 gates (215.75 GE); critical path 11XO + 2ND + 1NR = 14 gates (12.378 XORs).
Forward (bonus) (new record smallest): area 62XO + 7XN + 1AD + 32ND + 6NR = 108 gates (200.10 GE); critical path 20XO + 1XN + 2ND + 1NR = 24 gates (22.371 XORs).

6.1 Forward SBoxes
We have included a number of interesting previous results for comparison in Table 3. The most famous design, by Canright, is widely used and cited. Our optimal SBox is both faster and smaller. We also included the work done by Boyar et al, as their design was the starting point for our research. The two results from CHES 2018 by Reyhani et al are the most recent; our "optimal" SBox has a similar area to their "lightweight" version in terms of GE, but is around 30% faster. The optimal SBox is both smaller and faster than their "fast" circuit. Also, our "fast" version is faster than their "fast" version by 25%, at the cost of only a modest area increase.
The currently fastest SBox, by Ueno, has 286 GE and depth 13.772 XORs, while our fast version is only 248 GE with depth 10.597 XORs, outperforming the fastest known circuit by around 23%. We also included the currently smallest circuit (in terms of standard gates), by Boyar in 2016, which has 113 gates (231.29 GE) and depth 27 gates. Our "bonus" circuit is even smaller, with only 108 gates and depth 24, reaching as low as 200.10 GE. Synthesis results are shown in FIG. 6.

TABLE 4: Combined SBox: Comparison of the results.
Combined SBoxes: Previous Results
Canright [Can05] (most famous design): area 94XO + 34ND + 6NR + 16MX (+2IV) = 150(+2) gates (297.64 GE); critical path 20XO + 3ND + 2OR + 5NR = 30 gates (25.644 XORs).
Reyhani et al [RMTA18b] (recent results): area 81XO + 32ND + 4OR + 16NR + 16MI (+8IV) = 149(+8) gates (290.13 GE); critical path 17XO + 2ND + 3OR + 6NR = 28 gates (23.608 XORs).
Combined SBoxes: Our Results
Combined (fast) (fast with depth 14): area 81XO + 28XN + 1AD + 46ND + 7NR + 7MX + 12MI = 182 gates (356.65 GE); critical path 8XO + 2XN + 2ND + 1NR + 1MI = 14 gates (12.420 XORs).
Combined-S (fast) (best synthesis results): area 86XO + 22XN + 1AD + 46ND + 2OR + 6NR + 17MX + 6MI = 186 gates (363.26 GE); critical path 10XO + 1XN + 2ND + 1NR = 14 gates (12.371 XORs).
Combined (optimal) (area/speed optimal): area 74XO + 22XN + 1AD + 36ND + 6NR + 9MX + 3MI = 151 gates (295.99 GE); critical path 9XO + 3XN + 2ND + 1NR + 1MI = 16 gates (14.413 XORs).
Combined (bonus) (new record smallest): area 74XO + 10XN + 1AD + 32ND + 6NR + 10MX = 133 gates (258.35 GE); critical path 17XO + 3XN + 2ND + 1NR + 2MX = 25 gates (22.907 XORs).

6.2 Combined SBoxes
Table 4 shows our results compared to the two previously known best results. Our optimal combined SBox has a similar size to those of [Can05] and [RMTA18b], but it is a lot faster due to a much lower circuit depth. The optimal circuit has depth 16 (in reality only 14.413 XORs) and 151 gates (296 GE), while Canright's combined SBox has 150(+2) gates (298 GE) and depth 30 (25.644 XORs). The bonus solution in this paper has a slightly smaller depth than the most recent result [RMTA18b] but is significantly smaller in size (133 vs 149(+8) standard gates).
Finally, the proposed "fast" designs using Architecture D have the best currently known depth, and we also include the one that provided the best synthesis results, shown in the comparison in FIG. 7.
CONCLUSIONS
In this Appendix B we have introduced a number of heuristic and exhaustive search methods for minimizing the circuit realization of the AES SBox. We have proposed a novel idea on how to include the multiplexers of the combined SBox in the minimization algorithms, and derived smaller and faster circuit realizations for the forward, inverse, and combined AES SBox. We also introduced a new architecture, where we remove the bottom linear matrix, in order to derive solutions that are as fast as possible.
REFERENCES
[Art01] Artisan Components, Inc. TSMC 0.18 μm Process 1.8-Volt SAGE-X Standard Cell Library Databook, 2001. URL: www.utdallas.edu.
[BFP18] Joan Boyar, Magnus Find, and René Peralta. Small low-depth circuits for cryptographic applications. Cryptography and Communications, 11, March 2018.
[BHWZ94] Michael Bussieck, Hannes Hassler, Gerhard J. Woeginger, and Uwe T. Zimmermann. Fast algorithms for the maximum convolution problem. Oper. Res. Lett., 15(3):133-141, April 1994. URL: citeseerx.ist.psu.edu.
[Boy] Joan Boyar. Circuit minimization work. URL: www.cs.yale.edu.
[BP10a] Joan Boyar and René Peralta. A New Combinational Logic Minimization Technique with Applications to Cryptology. In Paola Festa, editor, Experimental Algorithms, pages 178-189, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. URL: eprint.iacr.org.
[BP10b] Joan Boyar and René Peralta. A new combinational logic minimization technique with applications to cryptology. In Lecture Notes in Computer Science, pages 178-189. Springer, 2010.
[BP12] Joan Boyar and René Peralta. A small depth-16 circuit for the AES S-Box. In Dimitris Gritzalis, Steven Furnell, and Marianthi Theoharidou, editors, SEC, volume 376 of IFIP Advances in Information and Communication Technology, pages 287-298. Springer, 2012.
URL: link.springer.com (doi:10.1007/978-3-642-30436-1_24).
[Bus01] Business Machines Corporation. ASIC SA-27E Databook, Part I: Base Library and I/Os. Data Book, 2001. URL: people.csail.mit.edu.
[Can05] D. Canright. A Very Compact S-Box for AES. In Josyula R. Rao and Berk Sunar, editors, Cryptographic Hardware and Embedded Systems (CHES 2005), pages 441-455, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg. URL: www.iacr.org.
[FAR06] FARADAY Technology Co. FSDOA_A 90 nm Logic SP-RVT (Low-K) Process, 2006. URL: www.cl.cam.ac.uk.
[Glo19] GlobalFoundries. 22 nm FDX process, 2019. URL: www.globalfoundries.com.
[IT88] Toshiya Itoh and Shigeo Tsujii. A fast algorithm for computing multiplicative inverses in GF(2^m) using normal bases. Inf. Comput., 78(3):171-177, September 1988. URL: dx.doi.org.
[JKL10] Yong-Sung Jeon, Young-Jin Kim, and Dong-Ho Lee. A compact memory-free architecture for the AES algorithm using resource sharing methods. Journal of Circuits, Systems, and Computers, 19:1109-1130, 2010.
[MNG00] Microelectronics Group, Carl F. Nielsen, and Samuel R. Girgis. WPI 0.5 μm CMOS Standard Cell Library Databook, 2000. URL: lsm.epfl.ch.
[NNT+10] Yasuyuki Nogami, Kenta Nekado, Tetsumi Toyota, Naoto Hongo, and Yoshitaka Morikawa. Mixed bases for efficient inversion in F(((2^2)^2)^2) and conversion matrices of SubBytes of AES. Pages 234-247, August 2010.
[oST01] National Institute of Standards and Technology. Advanced Encryption Standard. NIST FIPS PUB 197, 2001.
[Paa97] Christof Paar. Optimized arithmetic for Reed-Solomon encoders. April 1997.
[Pet] Graham Petley. Internet resource: VLSI and ASIC Technology Standard Cell Library Design. URL: www.vlsitechnology.org.
[Rij00] Vincent Rijmen. Efficient implementation of the Rijndael S-Box. 2000. URL: www.researchgate.net.
[RMTA18a] Arash Reyhani-Masoleh, Mostafa Taha, and Doaa Ashmawy.
Smashing the implementation records of AES S-Box. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2018(2):298-336, May 2018.
[RMTA18b] Arash Reyhani-Masoleh, Mostafa M. I. Taha, and Doaa Ashmawy. New area record for the AES combined S-Box/inverse S-Box. 2018 IEEE 25th Symposium on Computer Arithmetic (ARITH), pages 145-152, 2018.
[Sam00] Samsung Electronics Co., Ltd. STD90/MDL90 0.35 μm 3.3V CMOS Standard Cell Library for Pure Logic/MDL Products Databook, 2000. URL: www.digchip.com.
[SMTM01] Akashi Satoh, Sumio Morioka, Kohji Takano, and Seiji Munetoh. A compact Rijndael hardware architecture with S-Box optimization. In Colin Boyd, editor, ASIACRYPT, volume 2248 of Lecture Notes in Computer Science, pages 239-254. Springer, 2001. URL: antoanthongtin.vn.
[UHS+15] Rei Ueno, Naofumi Homma, Yukihiro Sugawara, Yasuyuki Nogami, and Takafumi Aoki. Highly efficient GF(2^8) inversion circuit based on redundant GF arithmetic and its application to AES design. IACR Cryptology ePrint Archive, 2015:763, 2015. URL: eprint.iacr.org.
APPENDIXES TO PART B OF DISCLOSURE
A Area and Speed Measurement Methods
Firstly, we introduce some notation. Gate names are written in capital letters, e.g., GATE (examples: AND, OR). The notation mGATEn means m gates of type GATE, each of which has n inputs (examples: XOR4, 8XOR4, NAND3, 2AND2). When the number of inputs n is missing, the assumption is that the gate has the minimum number of inputs, usually 2 (3 for MUX). Cells that are constructed as combinations of gates can be described as GATES1-GATE2, meaning that we first perform one or more gates on the first level GATES1, then the result goes to the gate on the second level GATE2. Example: NAND2-NOR2 means that the cell has 3 inputs (a, b, c) and the corresponding Boolean function is NOR2(a, NAND2(b, c)). We present two different methods of comparing circuits: the standard method and the technology method.
A.1 Standard Method
Cells.
The basic elements that are considered in the standard method are: {XOR, XNOR, AND, NAND, OR, NOR, MUX, NMUX, NOT}.
Negotiation of NOT gates. In some places of the circuit there is a need to use the inverted version of a signal. This can be done in several ways, without the explicit use of a NOT gate. Here we list a few of them.
Method 1. One way to implement a NOT gate is to change the previous gate that generates the signal so that it instead produces an inverted signal. For example, switch XOR into XNOR, AND into NAND, etc.
Method 2. In several technologies some gates can produce both the straight signal and the inverted version. For example, XOR gates in many implementations produce both signals simultaneously, and thus the inverted value is readily available.
Method 3. We can change the gates following the inverted signal such that the resulting scheme produces the correct result given the inverted input, using e.g. De Morgan's laws.
Summarizing the above, we believe that NOT gates may be ignored while evaluating a circuit with the standard method, since a NOT can hardly be counted as a full gate. However, for completeness, we will print out the number of NOT gates in the resulting tables.
Area. For area comparisons, the number of basic elements is counted without any size distinction between them. The NOT gates are ignored.
Depth. The depth is counted in terms of the number of basic elements on the circuit path. The overall depth of a circuit is therefore the delay of the critical path. The NOT gates are ignored.
A.2 Technology Method
Cells. Some papers complement the standard cells with a few extra combinatorial cells, often available in various technologies. For example, the gates NAND2-NAND2, NOR2-NOR2, 2AND2-NOR2, and XOR4 could be highly useful to improve and speed up our SBox circuits in this paper. However, for comparison purposes with previous academic results, we stay with the set of standard cells in order to make a fairer comparison.
In this method we do count NOT gates, in both the delay and the area.
Area. There exist many ASIC technologies (90 nm, 45 nm, 14 nm, etc.) from different vendors (Intel, Samsung, GlobalFoundries, etc.), each with its own specifics. In order to develop an ASIC, one needs to get a "standard cell library" for a certain technology, and that library usually includes much more versatile cells than the basic elements listed above, so that the designer has a wider choice of building blocks. However, even if we take a standard cell, for example XOR, then for different technologies that cell has different areas and delays. This makes it harder to compare two circuits of the same logic developed by two academic groups when they chose to apply different technologies. For a fair comparison of the circuit area of various solutions in academia, we usually utilize the notion of gate equivalents (GE), where 1 GE is the size of the smallest NAND gate. The size of a circuit in GE terms is then computed as Area(Circuit)/Area(NAND). Knowing the estimated GE values for each standard or technology cell makes it possible to compute an estimated area of a circuit in terms of GE. Although various technologies have slightly different GEs for the standard cells, those GE numbers are still pretty close to each other. We have studied several technologies where data books are available, and came to the decision to utilize the GE values given in the data book for Samsung's STD90/MDL90 0.35 μm 3.3V CMOS technology [Sam00]. The cells to be used are those without the speed x-factor. We have also checked the data books of other technologies (e.g., IBM's 0.18 μm [Bus01], WPI 0.5 μm [MNG00], FARADAY's 90 nm [FAR06], TSMC 0.18 μm [Art01], the Web resource [Pet], etc.) and verified that the GE numbers given in [Sam00] are quite fair and close to reality. This makes it possible to have an approximate comparison of the effectiveness of different circuits, even though they may be developed for different technologies.
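This GE bookkeeping is simple enough to sketch in code. The following Python fragment (ours, for illustration) uses the per-cell GE values from Table 5 below and cross-checks one row of Table 3: the Forward (bonus) circuit, 62XO + 7XN + 1AD + 32ND + 6NR, which is listed as 200.10 GE.

```python
# Gate-equivalent areas taken from Table 5 ([Sam00]); 1 GE = area of NAND2.
GE = {'XO': 2.33, 'XN': 2.33, 'AD': 1.33, 'ND': 1.00, 'OR': 1.33,
      'NR': 1.00, 'MX': 2.33, 'MI': 2.67, 'IV': 0.67, 'FD': 4.33}

def circuit_ge(counts):
    """Estimated area in GE of a circuit, given its gate counts."""
    return sum(n * GE[g] for g, n in counts.items())

# Forward (bonus) from Table 3: 62XO + 7XN + 1AD + 32ND + 6NR -> 200.10 GE.
bonus = {'XO': 62, 'XN': 7, 'AD': 1, 'ND': 32, 'NR': 6}
assert abs(circuit_ge(bonus) - 200.10) < 1e-6

# Canright's forward SBox from Table 3: 80XO + 34ND + 6NR -> 226.40 GE.
assert abs(circuit_ge({'XO': 80, 'ND': 34, 'NR': 6}) - 226.40) < 1e-6
```

The same function reproduces the other GE columns of Tables 3 and 4, which is how the recalculated GE figures for prior work were obtained.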
Depth. Different cells, like XOR and NAND, differ not only in terms of GE but also in terms of the maximum delay of the gate. Normally, data books include the delays (e.g., in ns) for each gate and for each input-output combination. We propose to normalize the delays of all used gates by the delay of the XOR gate. I.e., we adopt the worst-case delay of the XOR gate as 1 unit in our measurements of the critical path. Then we look at each standard cell, pick the maximum of the switching characteristics over all in-out paths of the cell, and divide it by the maximum delay of the XOR gate, so that we get the normalized delay-units for each of the gates utilized. For multiplexers (MUX and NMUX), we ignore propagation delays for the select bit since, in most cases, the select bit is an input to the circuit. For example, in the combined SBox the select bit says whether we compute the forward or the inverse SBox, and that selection is ready as an input signal; it does not switch as the circuit signals propagate, so it is a stable signal. The method proposed above is similar to the idea of GEs, but is adapted for computing the depth of a circuit, normalized in XOR delays. The reason to choose XOR as the base element for delay counting is that these circuits often have a lot of XOR gates, and it thus becomes possible to compare the depths between the standard and the technology methods. For example, in our SBox the critical path contains 14 gates, most of which are XORs, but in reality the depth is equivalent to only 12.38 XOR-delays, since the critical path also contains faster gates.
A.3 Summary on the Technology Cells
The area and delays for the Samsung STD90/MDL90 0.35 μm gates are summarized in Table 5.

TABLE 5: Technology gates' area and delays based on [Sam00].
Std. cell        XOR    XNOR   AND    NAND   OR     NOR    MUX    NMUX   NOT    D-Flop/Q
Ref. in [Sam00]  [XO2]  [XN2]  [AD2]  [ND2]  [OR2]  [NR2]  [MX2]  [MX2I] [IV]   [FD1Q]
Our short ref.   XO     XN     AD     ND     OR     NR     MX     MI     IV     FD
Area (GE)        2.33   2.33   1.33   1.00   1.33   1.00   2.33   2.67   0.67   4.33
Delay (XORs)     1.000  0.993  0.644  0.418  0.840  0.542  0.775  1.056  0.359  1.242

B Algorithmic Details and Improvements
In this section we present some more details on various algorithms previously described in the paper.
B.1 The Shortest Circuit for a Single Output Bit
This problem is a recurring objective in many of the algorithms. The problem statement is the following. Given k input signals x0, . . . , xk−1 with corresponding input delays d0, . . . , dk−1, compute y = x0⊕x1⊕ . . . ⊕xk−1 with the minimum possible delay. A solution to this problem without considering input delays was given in [RMTA18a]. In Algorithm 2 we extend the result to include input delays, and we also remove the sorting step.

Algorithm 2: The shortest linear circuit with input delays.
Inputs:
  iDelay : an array with the input delays di.
  iMask  : a mask indicating which inputs xi should be included.
           If bit i in iMask is set, then xi should be included.
           The maximum number of variables is 64.
           The maximum resulting depth cannot be larger than MAXDELAY.
Returns:
  The total delay for the circuit, including the input delay.
 1: function CircuitDepth(int iDelay[64], uint64 iMask)
 2:   int depth[MAXDELAY] = {0};
 3:   int c = 0;
 4:   while iMask != 0 do
 5:     if (iMask & 1) then
 6:       depth[iDelay[c]]++;
 7:     iMask = iMask >> 1;
 8:     c++;
 9:   int i = 0;
10:   while i < MAXDELAY do
11:     if (depth[i] >= 2) then
12:       depth[i + 1] += depth[i] >> 1;
13:       if (depth[i] & 1 == 1) then
14:         i++;
15:         continue;
16:     for (i++; i < MAXDELAY && !depth[i]; i++) do
17:       ;
18:     if i == MAXDELAY then
19:       break;
20:     depth[i]--;
21:     depth[i + 1]++;
22:   mDep = MAXDELAY - 1;
23:   while mDep != 0 && depth[mDep] == 0 do
24:     mDep--;
25:   return mDep;

B.2 On the Computation of δ(S, yi) in 3.3.1
In each round the algorithm tests all pairs of the known points from S, and for each pair computes the distance.
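Returning briefly to Algorithm 2 above: the same answer can be obtained with a compact heap-based formulation (our sketch, not the authors' bucket-counting code) that greedily XORs the two currently-earliest signals, together with the closed form it is equivalent to, namely the smallest D such that the sum of 2^di does not exceed 2^D.

```python
import heapq

def xor_tree_depth(delays):
    """Minimum output delay of a XOR tree over signals with the given
    input delays: greedily combine the two currently-earliest signals."""
    h = list(delays)
    heapq.heapify(h)
    while len(h) > 1:
        a = heapq.heappop(h)
        b = heapq.heappop(h)
        # a 2-input XOR is ready one unit after its latest input
        heapq.heappush(h, max(a, b) + 1)
    return h[0]

def xor_tree_depth_closed(delays):
    """Equivalent closed form: the smallest D with sum(2^d_i) <= 2^D
    (a Kraft-inequality style condition on the leaf delays)."""
    s = sum(1 << d for d in delays)
    return (s - 1).bit_length()
```

For example, four inputs with zero delay need a tree of depth 2, and inputs with delays {0, 0, 3} need depth 4, matching what Algorithm 2 computes with its carry-propagation over delay buckets.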
A straightforward idea is to compute δ exhaustively, but when the input size n is not too big, there is a better approach. Let V be a vector of 2^n entries, where each entry V[t] is the minimum distance from the current S to t. Then a single round of the algorithm is simplified a lot; the shortest distance is
δ(V, yi, c) = min{V[yi], V[yi⊕c] + 1},
where c is the candidate for adding to S (a XOR of two other known points from S), and yi is the target point. Assume that we now decide to add c to S; then the vector V is simply updated as V′[t] = δ(V, t, c), for all t = 0, . . . , 2^n − 1. The time complexity of updating V is O(2^n), but this can, in certain cases, be much faster than computing δ exhaustively for each c and yi combination in a round, especially if vectorized CPU instructions such as AVX2 are utilized.
B.3 On the Computation of δ(S, ti) in Section 4.2.5
In this section we give a more detailed presentation of how the computation of δ(S, ti) can be done. A slightly re-organized set of algorithms for computing δ(S, ti) is given by Algorithms 3, 4, and 5.

Algorithm 3: Computation of all distances
 1: function Distances2(S, T, maxδ) → Δ = {δi}, i = 0, . . . , m − 1
 2:   Init δi = ∞ for i = 0, . . . , m − 1
 3:   Init ∀p : V0[p] = p.d if p ∈ S, otherwise ∞
 4:   Init k = 0
 5:   while true do
 6:     while ∃i : δi = ∞ and Vk[ti] ≤ ti.d do δi = k
 7:     if ∀i : δi < ∞ then return OK
 8:     if k ≥ maxδ then return FAIL
 9:     k ← k + 1
10:     Init ∀p : Vk[p] = ∞
11:     for l ← └k/2┘ to k − 1 do
12:       ConvolutionXOR(Vk, Vk−l−1, Vl)
13:       ConvolutionMUX(Vk, Vk−l−1, Vl)
14:       ConvolutionMUX(Vk, Vl, Vk−l−1)

Algorithm 4: Convolution of XOR gates
 1: function ConvolutionXOR(V, A, B)
 2:   for a = 0 . . . 2^(2n+2) − 1 do
 3:     for b = 0 . . . 2^(2n+2) − 1 do
 4:       d = max{A[a], B[b]} + 1
 5:       p = a ⊕ b
 6:       if V[p] > d then V[p] = d
 7:       p = a ⊕ b ⊕ (1;0..0;1;0..0)
 8:       if V[p] > d then V[p] = d

Algorithm 5: Convolution of MUX gates
 1: function ConvolutionMUX(V, A, B)
 2:   Set ∀i = 0..2^(n+1) − 1 : F[i] = I[i] = ∞
 3:   for a = 0 . . . 2^(2n+2) − 1 do
 4:     Set f = a ÷ 2^(n+1)        (high half of a, related to the F part)
 5:     Set i = a mod 2^(n+1)      (low half of a, related to the I part)
 6:     if F[f] > A[a] then F[f] = A[a]
 7:     if I[i] > B[a] then I[i] = B[a]
 8:   for f = 0 . . . 2^(n+1) − 1 do
 9:     for i = 0 . . . 2^(n+1) − 1 do
10:       d = max{F[f], I[i]} + 1
11:       p = (f · 2^(n+1) + i)
12:       if V[p] > d then V[p] = d        (MUX(ZF; f; i) gate)
13:       p = p ⊕ (1;0..0;1;0..0)
14:       if V[p] > d then V[p] = d        (NMUX(ZF; f; i) gate)

There are two convolution algorithms, one for XOR gates and one for MUX gates, and they can be performed independently. The MUX-convolution can be done in linear time O(N). We first collect the smallest distances for all possible F-values and I-values (each of which has √N possible indexes); then the MUX gate can be applied to any of the combinations, so the convolution is O((√N)^2) = O(N). The XOR-convolution is more complicated; it has quadratic complexity O(N^2) in the general case.
Algorithmic improvements. Assume that for some S we have already computed all distances δi = δ(S, ti). For each candidate c from C, we add it to S so that S′ = S ∪ {c}; then we need to compute all distances δ′i = δ(S′, ti) in order to compute the metrics and decide which c is good. Note that adding a single candidate c implies δ′i ≤ δi for every target ti. Therefore, we should modify the algorithm Distances(S′, T, maxδ) such that we set maxδ = max{δi} − 1, and check in the end that if δ′i == ∞ then δ′i = maxδ. This simple trick helps to avoid the computation of the last vector Vk and effectively speeds up the computations by up to ×20 times. Generation of the candidates C involves testing whether a candidate is already in C or in S, because those need to be ignored. To speed up this part we can use a temporary vector Z[N] of length N, where all cells are initialized to ∞, and then for each point s from S we set Z[s.p] = s.d. Then, when a new candidate c is generated, we simply update the table Z[c.p] = min{c.d, Z[c.p]}.
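As a minimal, hypothetical Python illustration of the distance machinery above (covering the B.2 update rule and the quadratic XOR-convolution of Algorithm 4, but omitting the MUX part and the negated-output step):

```python
INF = float('inf')

def update_V(V, c):
    """B.2 update: new distance vector after adding candidate point c to S,
    V'[t] = min{V[t], V[t xor c] + 1}."""
    return [min(V[t], V[t ^ c] + 1) for t in range(len(V))]

def convolution_xor(V, A, B):
    """Quadratic (min, max)-convolution: for every pair (a, b), point
    p = a xor b becomes reachable with delay max(A[a], B[b]) + 1."""
    for a in range(len(A)):
        if A[a] == INF:          # the A[a] = INF shortcut mentioned in the text
            continue
        for b in range(len(B)):
            d = max(A[a], B[b]) + 1   # = INF whenever B[b] = INF
            p = a ^ b
            if d < V[p]:
                V[p] = d
    return V

# Tiny n = 2 example: only point 0 is known at delay 0.
V = [0, INF, INF, INF]
V = update_V(V, 1)               # learn point 1
V = update_V(V, 3)               # learn point 3; point 2 follows at delay 2
```

In the real search, N = 2^18 and the inner loop is what the AVX discussion below is about; the Python version only demonstrates the semantics.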
In the end we remove the S points from Z, and generate C from Z as follows: for all i = 0 . . . N−1, if Z[i] < ∞ then add a candidate c = {.p = i, .d = Z[i]} to C. This way we construct C with unique candidates that also have the smallest depths.
Architectural improvements. MUX(a, b) and MUX(b, a) can be combined into a single MUX-convolution function. In max{d1, d2} + 1, the +1 operation can be moved outside the convolution functions and performed after the convolutions instead. The step p ⊕ {.p = [1|0|1|0], .d = 0} is done in order to include gates with negated output; it can be moved outside the convolution functions as well and be performed in the main function Distances( ) in linear time. This helps to reduce the number of operations in the critical loop of the function ConvolutionXOR( ); basically, this doubles the speed. When A = B, then in ConvolutionXOR( ) we only need to run b starting from a. When B is not equal to V0, the ConvolutionXOR can be done on only half of the values of b, since we know that all vectors Vk for k > 0 are symmetric with regard to NOT-gates. When A[a] = ∞ in ConvolutionXOR( ), we do not need to enter the inner loop over b. The same check for B[b] ≠ ∞ is not justified, since it adds an unnecessary branch in the critical loop.
Attaching AVX. It is quite clear that ConvolutionMUX( ) can easily be refactored to use AVX vectorized instructions and utilize its 128-bit registers and intrinsics. However, it is a bit tricky to attach AVX to the ConvolutionXOR( ) function. First of all, assume that the cells A[a] and B[b] are all of char type (byte); then we must start b aligned to 16 bytes, since our registers in AVX are 128 bits long. Secondly, the result of p = a ⊕ b for b = 0 . . . 15 (mod 16) will end up in a permuted location p, but that permutation only happens in the low 4 bits. With the help of _mm_shuffle_epi8( ) we can make a permutation of the destination 16-byte block, where the permutation vector only depends on the value of a mod 16 (recall that b = 0 mod 16).
Those permutation vectors can be hard-coded in a constant table. The other operations within ConvolutionXOR( ) are trivial to implement. One could also attach AVX2 with its 256-bit registers, thus speeding up the algorithms even more.
B.3.1 More on ConvolutionXOR( )
One can notice that the ConvolutionXOR may be done with the help of the following convolution:
V[p] = Σ_{a=0..N−1} A[a] · B[p⊕a],
where the operation x·y is defined as max{x, y}, and x+y is defined as min{x, y}. Thus, we have a convolution to be done in the (min, max)-algebra. One could think of applying the Hadamard transform in O(N log N), but the problem is that this algebra does not have an inverse element. In [BHWZ94] there is an algorithm, "MinConv", that can be converted into our convolution problem, and it is claimed to work in "around and on average" O(N log N) time. The idea behind MinConv is to sort the A and B vectors, so that we get the smallest delays at the beginning of the vectors A and B. Thus, we can enumerate the max{A[a], B[b]} delays starting from the smallest. We should also take care of the indexes while sorting A and B, so that we can find the destination point p = a⊕b. Every point p hit the first time receives the smallest possible delay, and thus can be skipped later on. The idea behind the algorithm is that the predicted number of hits needed to cover all N points of the result is around N log N. We have programmed this, but it did not demonstrate a speedup on our input size (n = 8, N = 2^18) and actually performed slower than our AVX-improved quadratic algorithm, at least on our input size. Also, the above algorithm cannot be parallelized.
B.3.2 ConvolutionXOR( ) in O(maxDelay^2 · N log N) Time
Usually the delay values stored in the V vectors are small. We can rely on that fact in order to develop an algorithm that may be faster than O(N^2). The idea is simple. Construct two vectors Ax[ ] and By[ ] such that Ax[p] = 1 if A[p] = x, otherwise Ax[p] = 0, and do the same for By[ ].
Then compute the classical convolution of the two Boolean vectors Ax and By through the classical Walsh-Hadamard transform in O(N log N). Let Cd[ ] be the result of that convolution, with d=max{x, y}+1. We then know that if Cd[p]≠0, the point p may have depth d. So we just make a linear loop over Cd[p]: if Cd[p]≠0 and V[p]>d, then set V[p]=d. The above is repeated for all combinations of x, y=0 . . . maxDelay, each step of which has complexity O(N log N). The value of maxDelay can be determined linearly at the beginning of the algorithm. Note also that maxDelay may be different for A and B, so that x and y may have different ranges.

B.3.3 ConvolutionXOR( ) in O(|S|^2) Time

When constructing the vector V1 from the initial V0, it is worth doing it the classical way, running through pairs of points of S, instead of doing the full-scale convolution over N points. However, the number of newly generated points grows very rapidly, so this method can only be applied to the very first V's (in our experiments we saw a "win" only for V1; for further Vk, k>1, we used our optimized convolution algorithms).

C Inverse SBoxes

The stand-alone inverse SBox is, as far as we know, not used very much, but we provide a comparison with previously known solutions in Table 6.

TABLE 6
Inverse SBox: Comparison of the results. Each entry gives the area (standard gates; gate count; technology GE) and the critical path (standard gates; depth; technology XORs).

Previous results:
  [Can05] '05, Canright (most famous design):
    area 81XO+34ND+6NR (121 gates, 228.73 GE); critical path — (depth 25, ??)
  [BP12] '12, Boyar et al. (our starting point):
    area 93XO+34AD (127 gates, 261.91 GE); critical path 13XO+3AD (depth 16, 14.932)

Our results:
  Inverse (fast), fast with depth 12:
    area 72XO+11XN+1AD+46ND+6NR (136 gates, 246.72 GE); critical path 9XO+2ND+1NR (depth 12, 10.378)
  Inverse (optimal), area/speed optimal:
    area 58XO+5XN+1AD+36ND+6NR (+1IV) (116(+1) gates, 214.09 GE); critical path 11XO+2ND+1NR (depth 14, 12.378)
  Inverse (bonus), smallest (new record):
    area 60XO+8XN+1AD+32ND+6NR (+1IV) (107(+1) gates, 198.44 GE); critical path 21XO+1XN+2ND+1NR (depth 25, 23.371)

D Circuits

D.1 Preliminaries

In the listings below we present 6 circuits for the forward, inverse, and combined SBoxes in the two architectures A (small) and D (fast). The symbols used are:

#comment — a comment line
@filename — means that we should include the code from another file 'filename', the listing of which is then given in this section as well.
a^b — is the usual XOR gate; other gates are explicitly denoted and taken from the set of {XNOR, AND, NAND, OR, NOR, MUX, NMUX, NOT}
(a op b) — where the order of execution (the order of gate connections) is important, we specify it by brackets.

The input to all SBoxes is the 8 signals {U0 . . . U7} and the output is the 8 signals {R0 . . . R7}. The input and output bits are represented in Big Endian bit order. For combined SBoxes the input has the additional signals ZF and ZI, where ZF=1 if we perform the forward SBox and ZF=0 for the inverse; the signal ZI is the complement of ZF. We have tested all the proposed circuits and verified their correctness. The circuits are divided into sub-programs, according to FIG. 5. In Section D.2 we describe the common shared components, and then for each solution we give the components (common or specific) for the circuits.
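As an illustration of the listing notation, the 'inv.a' block from Listing 2 (the GF(2^4) inversion inside the SBox) can be transcribed line by line into ordinary code. The following Python helper is such a transcription (an illustration, not part of the original listings); since inversion in GF(2^4) maps 0 to 0 and is an involution, applying the block twice must give the identity:

```python
# Transcription of 'inv.a' from Listing 2; each line mirrors one gate.
def inv4(x0, x1, x2, x3):
    t0 = x2 ^ x3                 # T0 = XOR(X2, X3)
    t1 = x0 & x2                 # T1 = AND(X0, X2)
    t2 = x0 ^ x1                 # T2 = XOR(X0, X1)
    t3 = 1 ^ (x0 & x3)           # T3 = NAND(X0, X3)
    t4 = 1 ^ (x3 & t2)           # T4 = NAND(X3, T2)
    t5 = 1 ^ (x1 & x2)           # T5 = NAND(X1, X2)
    t6 = 1 ^ (x1 & t0)           # T6 = NAND(X1, T0)
    t7 = 1 ^ (t5 & t0)           # T7 = NAND(T5, T0)
    t8 = 1 ^ (t2 & t3)           # T8 = NAND(T2, T3)
    y0 = 1 ^ t1 ^ t7             # Y0 = XNOR(T1, T7)
    y1 = t6 ^ (1 ^ (t1 | x3))    # Y1 = XOR(T6, NOR(T1, X3))
    y2 = 1 ^ t1 ^ t8             # Y2 = XNOR(T1, T8)
    y3 = t4 ^ (1 ^ (t1 | x1))    # Y3 = XOR(T4, NOR(T1, X1))
    return y0, y1, y2, y3

# Inversion is an involution with 0 -> 0: applying it twice is the identity.
for v in range(16):
    x = ((v >> 0) & 1, (v >> 1) & 1, (v >> 2) & 1, (v >> 3) & 1)
    assert inv4(*inv4(*x)) == x
```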
D.2 Shared Components Listing 2: MLTLX/INV/S0/S1/8XOR4: Shared components.# File: mulx.aT20 = NAND(Q6, Q12)# File: s0.aT21 = NAND(Q3, Q14)Y02 = Y2 {circumflex over ( )} Y0T22 = NAND(Q1, Q16)Y13 = Y3 {circumflex over ( )} Y1Y23 = Y3 {circumflex over ( )} Y2T10 = (NOR(Q3, Q14) {circumflex over ( )} NAND(Q0, Q7))Y01 = Y1 {circumflex over ( )} Y0T11 = (NOR(Q4, Q13) {circumflex over ( )} NAND(Q10, Q11))Y00 = Y02 {circumflex over ( )} Y13T12 = (NOR(Q2, Q17) {circumflex over ( )} NAND(Q5, Q9))T13 = (NOR(Q8, Q15) {circumflex over ( )} NAND(Q2, Q17))# File: s1.aY01 = NAND(T6, NAND(X2, T3))X0 = T10 {circumflex over ( )} (T20 {circumflex over ( )} T22)Y23 = NAND(T4, NAND(X0, T5))X1 = T11 {circumflex over ( )} (T21 {circumflex over ( )} T20)Y02 = Y2 {circumflex over ( )} Y0X2 = T12 {circumflex over ( )} (T21 {circumflex over ( )} T22)Y13 = Y3 {circumflex over ( )} Y1X3 = T13 {circumflex over ( )} (T21 {circumflex over ( )} NAND(Q4, Q13))Y00 = Y01 {circumflex over ( )} Y23# File: inv.a# File: 8xor4.dT0 = XOR(X2, X3)R0 = (K0 {circumflex over ( )} K1) {circumflex over ( )} (K2 {circumflex over ( )} K3)T1 = AND(X0, X2)R1 = (K4 {circumflex over ( )} K5) {circumflex over ( )} (K6 {circumflex over ( )} K7)T2 = XOR(X0, X1)R2 = (K8 {circumflex over ( )} K9) {circumflex over ( )} (K10 {circumflex over ( )} K11)T3 = NAND(X0, X3)R3 = (K12 {circumflex over ( )} K13) {circumflex over ( )} (K14 {circumflex over ( )} K15)T4 = NAND(X3, T2)R4 = (K16 {circumflex over ( )} K17) {circumflex over ( )} (K18 {circumflex over ( )} K19)T5 = NAND(X1, X2)R5 = (K20 {circumflex over ( )} K21) {circumflex over ( )} (K22 {circumflex over ( )} K23)T6 = NAND(X1, T0)R6 = (K24 {circumflex over ( )} K25) {circumflex over ( )} (K26 {circumflex over ( )} K27)T7 = NAND(T5, T0)R7 = (K28 {circumflex over ( )} K29) {circumflex over ( )} (K30 {circumflex over ( )} K31)T8 = NAND(T2, T3)Y0 = XNOR(T1, T7)Y1 = XOR(T6, NOR(T1, X3))Y2 = XNOR(T1, T8)Y3 = XOR(T4, NOR(T1, X1)) Listing 3: MULN/MULL: Shared components.# File: 
muln.aK20 = NAND(Y0, L20)K15 = NAND(Y3, L15)N0 = NAND(Y01, Q11)K19 = NAND(Y3, L19)N1 = NAND(Y0 , Q12)K1 = NAND(Y1, L1)K23 = NAND(Y3, L23)N2 = NAND(Y1 , Q0)K5 = NAND(Y1, L5)K27 = NAND(Y3, L27)N3 = NAND(Y23, Q17)K9 = NAND(Y1, L9)K31 = NAND(Y3, L31)N4 = NAND(Y2, Q5)K13 = NAND(Y1, L13)N5 = NAND(Y3 , Q15)K17 = NAND(Y1, L17)# File: mull.fN6 = NAND(Y13, Q14)K21 = NAND(Y1, L21)K4 = AND(Y0, L4)N7 = NAND(Y00, Q16)K25 = NAND(Y1, L25)K8 = AND(Y0, L8)N8 = NAND(Y02, Q13)K29 = NAND(Y1, L29)K24 = AND(Y0, L24)N9 = NAND(Y01, Q7)K28 = AND(Y0, L28)N10 = NAND(Y0 , Q10)K2 = NAND(Y2, L2)N11 = NAND(Y1 ,Q6)K6 = NAND(Y2, L6)# File: mull.iN12 = NAND(Y23, Q2)K10 = NAND(Y2, L10)K4 = NAND(Y0, L4)N13 = NAND(Y2, Q9)K14 = NAND(Y2, L14)K8 = NAND(Y0, L8)N14 = NAND(Y3, Q8)K18 = NAND(Y2, L18)K24 = NAND(Y0, L24)N15 = NAND(Y13, Q3)K22 = NAND(Y2, L22)K28 = NAND(Y0, L28)N16 = NAND(Y00, Q1)K26 = NAND(Y2, L26)N17 = NAND(Y02, Q4)K30 = NAND(Y2, L30)# File: mull.cK4 = NAND(Y0, L4) {circumflex over ( )} ZF# File: mull.dK3 = NAND(Y3, L3)K8 = NAND(Y0, L8) {circumflex over ( )} ZFK0 = NAND(Y0, L0)K7 = NAND(Y3, L7)K24 = NAND(Y0, L24) {circumflex over ( )} ZFK12 = NAND(Y0, L12)K11 = NAND(Y3, L11)K28 = NAND(Y0, L28) {circumflex over ( )} ZFK16 = NAND(Y0, L16) D.3 Forward SBox (Fast) Listing 4: Forward SBox with the smallest delay (fast)# Forward (fast)Q4 = Q16 {circumflex over ( )} U4L13 = U3 {circumflex over ( )} [email protected] = Z18 {circumflex over ( )} Z160L17 = U4 {circumflex over ( )} [email protected] = U1 {circumflex over ( )} U3L29 = Z96 {circumflex over ( )} [email protected] = Z10 {circumflex over ( )} Q2L14 = Q11 {circumflex over ( )} [email protected] = U0 {circumflex over ( )} U7L26 = Q11 {circumflex over ( )} [email protected] = U2 {circumflex over ( )} U5L30 = Q11 {circumflex over ( )} [email protected] = Z36 {circumflex over ( )} Q5L7 = Q12 {circumflex over ( )} Q1L19 = U2 {circumflex over ( )} Z96L11 = Q12 {circumflex over ( )} L15# File: ftop.dQ9 = Z18 {circumflex over ( )} L19L27 = L30 
{circumflex over ( )} L10# ExhaustiveQ10 = Z10 {circumflex over ( )} Q1Q17 = U0→ searchQ12 = U3 {circumflex over ( )} L28L0 = Q10Z18 = U1 {circumflex over ( )} U4Q13 = U3 {circumflex over ( )} Q2L4 = U6L28 = Z18 {circumflex over ( )} U6L10 = Z36 {circumflex over ( )} Q7L20 = Q0Q0 = U2 {circumflex over ( )} L28Q14 = U6 {circumflex over ( )} L10L24 = Q16Z96 = U5 {circumflex over ( )} U6Q15 = U0 {circumflex over ( )} Q5L1 = Q6Q1 = U0 {circumflex over ( )} Z96L8 = U3 {circumflex over ( )} Q5L9 = U5Z160 = U5 {circumflex over ( )} U7L12 = Q16 {circumflex over ( )} Q2L21 = Q11Q2 = U6 {circumflex over ( )} Z160L16 = U2 {circumflex over ( )} Q4L25 = Q13Q11 = U2 {circumflex over ( )} U3L15 = U1 {circumflex over ( )} Z96L2 = Q9L6 = U4 {circumflex over ( )} Z96L31 = Q16 {circumflex over ( )} L15L18 = U1Q3 = Q11 {circumflex over ( )} L6L5 = Q12 {circumflex over ( )} L31L22 = Q15Q16 = U0 {circumflex over ( )} Q11L3 = Q8L23 = U0 D.4 Forward SBox (Optimal) Listing 5: Forward SBox circuit with area/depth trade-off (optimal)# ForwardZ66 = U1 {circumflex over ( )} U6H9 = N3 {circumflex over ( )} H7→ (optimal)Z114 = Q11 {circumflex over ( )} Z66H10 = N15 {circumflex over ( )} [email protected] = U7 {circumflex over ( )} Z114H11 = N9 {circumflex over ( )} [email protected] = Q1 {circumflex over ( )} Z114H12 = N12 {circumflex over ( )} [email protected] = Q7 {circumflex over ( )} Z114H13 = N1 {circumflex over ( )} [email protected] = U2 {circumflex over ( )} Q13H14 = N5 {circumflex over ( )} [email protected] = Z9 {circumflex over ( )} Z66H15 = N7 {circumflex over ( )} [email protected] = Q16 {circumflex over ( )} Q13H16 = H10 {circumflex over ( )} H11Q15 = U0 {circumflex over ( )} U2H17 = N16 {circumflex over ( )} H8# File: ftop.aQ17 = Z9 {circumflex over ( )} Z114H18 = H6 {circumflex over ( )} H8# ExhaustiveQ4 = U7H19 = H10 {circumflex over ( )} H12→ searchH20 = N2 {circumflex over ( )} H3Z6 = U1 {circumflex over ( )} U2# File: fbot.aH21 = H6 {circumflex over ( )} H14Q12 = Z6 
{circumflex over ( )} U3# ProbabilisticH22 = N8 {circumflex over ( )} H12Q11 = U4 {circumflex over ( )} U5→ heuristicH23 = H13 {circumflex over ( )} H15Q0 = Q12 {circumflex over ( )} Q11H0 = N3 {circumflex over ( )} N8Z9 = U0 {circumflex over ( )} U3H1 = N5 {circumflex over ( )} N6R0 = XNOR(H16, H2)Z80 = U4 {circumflex over ( )} U6H2 = XNOR(H0, H1)R1 = H2Q1 = Z9 {circumflex over ( )} Z80H3 = N1 {circumflex over ( )} N4R2 = XNOR(H20, H21)Q7 = Z6 {circumflex over ( )} U7H4 = N9 {circumflex over ( )} N10R3 = XNOR(H17, H2)Q2 = Q1 {circumflex over ( )} Q7H5 = N13 {circumflex over ( )} N14R4 = XNOR(H18, H2)Q3 = Q1 {circumflex over ( )} U7H6 = N15 {circumflex over ( )} H4R5 = H22 {circumflex over ( )} H23Q13 = U5 {circumflex over ( )} Z80H7 = N0 {circumflex over ( )} H3R6 = XNOR(H19, H9)Q5 = Q12 {circumflex over ( )} Q13H8 = N17 {circumflex over ( )} H5R7 = XNOR(H9, H18) D.5 Forward SBox (Bonus) We include these bonus circuits just to update the world record for the smallest SBox. The new record is 108 gates with depth 24. 
Listing 6: Forward SBox circuit withthe smallest number of gates (bonus)# Forward (bonus)# File: fbot.bH11 = H9 {circumflex over ( )} [email protected] = N1 {circumflex over ( )} N5H12 = N7 {circumflex over ( )} [email protected] = N4 {circumflex over ( )} H0Q0 = Z0 {circumflex over ( )} [email protected] = XNOR(N2, H1)Z1 = U1 {circumflex over ( )} U6Q1 = Q7 {circumflex over ( )} Q2H2 = N9 {circumflex over ( )} N15Q7 = Z0 {circumflex over ( )} Z1Q3 = U0 {circumflex over ( )} Q7H3 = N11 {circumflex over ( )} N17Q2 = U2 {circumflex over ( )} Q0Q4 = U0 {circumflex over ( )} Q2R6 = XNOR(H2, H3)H13 = H4 {circumflex over ( )} H12Q5 = U1 {circumflex over ( )} Q4H4 = N11 {circumflex over ( )} N14R4 = N1 {circumflex over ( )} H13Q6 = U2 {circumflex over ( )} U3H14 = XNOR(N0, R7)Q10 = Q6 {circumflex over ( )} Q7# File: ftop.bH15 = H9 {circumflex over ( )} H14Q8 = U0 {circumflex over ( )} Q10Z0 = U3 {circumflex over ( )} U4H16 = H7 {circumflex over ( )} H15Q9 = Q8 {circumflex over ( )} Q2Q17 = U1 {circumflex over ( )} U7R1 = XNOR(N6, H16)Q12 = Z0 {circumflex over ( )} Q17Q16 = U5 {circumflex over ( )} Q17H17 = N4 {circumflex over ( )} H14Q15 = U7 {circumflex over ( )} Q4H5 = N9 {circumflex over ( )} N12H18 = N3 {circumflex over ( )} H17Q13 = Z0 {circumflex over ( )} Q15R5 = H4 {circumflex over ( )} H5R0 = H13 {circumflex over ( )} H18H6 = N16 {circumflex over ( )} [email protected] = R2 {circumflex over ( )} [email protected] = N10 {circumflex over ( )} [email protected] = XNOR(H6, H8)H9 = N8 {circumflex over ( )} H1Q14 = Q0 {circumflex over ( )} Q15H10 = N13 {circumflex over ( )} H8Q11 = U5R3 = H5 {circumflex over ( )} H10 D.6 Inverse SBox (Fast) Listing 7: Inverse SBox with the smallest delay (fast)# Inverse (fast)Q9 = Q10 {circumflex over ( )} Q4L5 = L27 {circumflex over ( )} [email protected] = U4 {circumflex over ( )} U5L19 = Q14 {circumflex over ( )} [email protected]= U2 {circumflex over ( )} U7L26 = Q3 {circumflex over ( )} [email protected] = L12 {circumflex over ( 
)} Z132L13 = L19 {circumflex over ( )} [email protected] = Q0 {circumflex over ( )} Q11L17 = L12 {circumflex over ( )} [email protected] = U3 {circumflex over ( )} Z132L21 = XNOR(U1, Q1)@8xor4.dQ13 = U0 {circumflex over ( )} L27L25 = Q5 {circumflex over ( )} L3Q14 = XNOR(Q10, U2)L14 = U3 {circumflex over ( )} Q12# File: itop.dQ15 = Q14 {circumflex over ( )} Q0L18 = U0 {circumflex over ( )} Q1# ExhaustiveQ16 = XNOR(Q8, U7)L22 = XNOR(Q5, U6)→ searchQ17 = Q16 {circumflex over ( )} Q11L8 = Q11Q8 = XNOR(U1, U3)L23 = Q15 {circumflex over ( )} Z132L28 = Q7Q0 = Q8 {circumflex over ( )} U5L0 = U0 {circumflex over ( )} L23L9 = Q12Q1 = U6 {circumflex over ( )} U7L3 = Q2 {circumflex over ( )} Q11L29 = Q10Q7 = U3 {circumflex over ( )} U4L4 = Q6 {circumflex over ( )} L3L2 = U5Q2 = Q7 {circumflex over ( )} Q1L16 = Q3 {circumflex over ( )} L27L10 = Q17Q3 = U0 {circumflex over ( )} U4L1 = XNOR(U2, U3)L30 = Q2Q4 = Q3 {circumflex over ( )} Q1L6 = L1 {circumflex over ( )} Q0L7 = U4Q5 = XNOR(U1, Q3)L20 = L6 {circumflex over ( )} Q2L11 = Q5Q10 = XNOR(U0, U1)L15 = XNOR(U2, Q6)L31 = Q9Q6 = Q10 {circumflex over ( )} Q7L24 = L15 {circumflex over ( )} U0 D.7 Inverse SBox (Optimal) Listing 8: Inverse SBox circuit with area/depth trade-off (optimal)# InverseQ5 = U0 {circumflex over ( )} Q6H6 = N4 {circumflex over ( )} H1→ (optimal)Q7 = U3 {circumflex over ( )} Q0H7 = N0 {circumflex over ( )} [email protected] = Z66 {circumflex over ( )} Z132H8 = N15 {circumflex over ( )} [email protected] = U5 {circumflex over ( )} Q17H9 = N9 {circumflex over ( )} [email protected] = U0 {circumflex over ( )} U5H10 = N6 {circumflex over ( )} [email protected] = U4 {circumflex over ( )} Z33H11 = H3 {circumflex over ( )} [email protected] = Q4 {circumflex over ( )} Q10H12 = N7 {circumflex over ( )} [email protected] = XNOR(U4, Z129)H13 = N8 {circumflex over ( )} H0Q13 = XNOR(Z20, Z40)H14 = N3 {circumflex over ( )} N5# File: itop.aQ16 = XNOR(Z66, U7)H15 = H5 {circumflex over ( )} H8# ExhaustiveQ14 = Q13 
{circumflex over ( )} Q16H16 = N6 {circumflex over ( )} N7→ searchQ15 = Z33 {circumflex over ( )} Q3H17 = H12 {circumflex over ( )} H13Z20 = U2 {circumflex over ( )} U4Q11 = NOT(U2)H18 = H5 {circumflex over ( )} H16Z129 = U0 {circumflex over ( )} U7H19 = H3 {circumflex over ( )} H10Q0 = Z20 {circumflex over ( )} Z129# File: ibot.aH20 = H10 {circumflex over ( )} H14Q4 = U1 {circumflex over ( )} Z20# ProbabilisticR0 = H7 {circumflex over ( )} H18Z66 = U1 {circumflex over ( )} U6→ heuristicR1 = H7 {circumflex over ( )} H19Q3 = U3 {circumflex over ( )} Z66H0 = N2 {circumflex over ( )} N14R2 = H2 {circumflex over ( )} H11Q1 = Q4 {circumflex over ( )} Q3H1 = N1 {circumflex over ( )} N5R4 = H8 {circumflex over ( )} H9Q2 = U6 {circumflex over ( )} Z129H2 = N10 {circumflex over ( )} N11R3 = R4 {circumflex over ( )} H20Z40 = U3 {circumflex over ( )} U5H3 = N13 {circumflex over ( )} H0R5 = N2 {circumflex over ( )} H6Z132 = U2 {circumflex over ( )} U7H4 = N16 {circumflex over ( )} N17R6 = H15 {circumflex over ( )} H17Q6 = Z40 {circumflex over ( )} Z132H5 = N1 {circumflex over ( )} H2R7 = H4 {circumflex over ( )} H11Note:the above ‘NOT(U2)’ in the file ‘itop.a’ is removable by setting Q11 = U2 and accurately negating some of the gates and variables downwards where Q11 is involved. For example, the variable Y01 should be negated as well due to: N0 = NAND(Y01, Q11) consequently, all gates involving Y01 s.b. negated which leads to negate other Q variables, etc . . . 
D.8 Inverse SBox (Bonus) Listing 9: Inverse SBox circuit withthe smallest number of gates (bonus)# Inverse (bonus)Q2 = Q8 {circumflex over ( )} Q9R3 = H3 {circumflex over ( )} [email protected] = Q1 {circumflex over ( )} Q2H7 = N9 {circumflex over ( )} [email protected] = Z33 {circumflex over ( )} Q7R5 = N10 {circumflex over ( )} [email protected] = Q17 {circumflex over ( )} Q15H8 = N8 {circumflex over ( )} [email protected] = Q3 {circumflex over ( )} Q8H9 = N6 {circumflex over ( )} [email protected] = XNOR(U1, Q0)H10 = N7 {circumflex over ( )} [email protected] = Q15 {circumflex over ( )} Q0H11 = N1 {circumflex over ( )} R0Q13 = Q16 {circumflex over ( )} Q14H12 = N0 {circumflex over ( )} H11# File: itop.bQ11 = NOT(U1)R2 = H9 {circumflex over ( )} H12Z33 = U0 {circumflex over ( )} U5H13 = H8 {circumflex over ( )} H10Z3 = U0 {circumflex over ( )} U1# File: ibot.bR1 = R2 {circumflex over ( )} H13Q1 = XNOR(Z3, U3)H0 = N4 {circumflex over ( )} N5H14 = H5 {circumflex over ( )} H13Q16= XNOR(Z33, U6)H1 = N1 {circumflex over ( )} N2H15 = N13 {circumflex over ( )} H14Q17= XNOR(U1, Q16)R6 = H0 {circumflex over ( )} H1R7 = N12 {circumflex over ( )} H15Q8 = U4 {circumflex over ( )} Q17H2 = N13 {circumflex over ( )} N14H16 = N4 {circumflex over ( )} H9Q3 = XNOR(U2, Z33)H3 = R6 {circumflex over ( )} H2H17 = R5 {circumflex over ( )} H16Q4 = Q1 {circumflex over ( )} Q3H4 = N17 {circumflex over ( )} H3R4 = N3 {circumflex over ( )} H17Q15= XNOR(U4, U7)R0 = N16 {circumflex over ( )} H4Q10 = U3 {circumflex over ( )} Q15H5 = N15 {circumflex over ( )} H4Q9 = Q4 {circumflex over ( )} Q10H6 = N10 {circumflex over ( )} N11 D.9 Combined SBox (Fast) Listing 10: Combined SBox circuits with the smallest delay (fast/-S)# Combined (fast)L5 = A0 {circumflex over ( )} A8L7 = Q12 {circumflex over ( )} [email protected] or @ctop.dsL11 = Q16 {circumflex over ( )} L5L8 = Q7 {circumflex over ( )} [email protected] = MUX(ZF, U2, U6)A19 = NMUX(ZF, U1,@inv.aA10 = XNOR(A2, A9)A4)@mull.cQ5 = A1 {circumflex 
over ( )} A10A20 = XNOR(U6, A19)@mull.dQ15 = U0 {circumflex over ( )} Q5Q9 = XNOR(A16, A20)@8xor4.dA11 = U2 {circumflex over ( )} U3Q10 = A18 {circumflex over ( )} A20A12 = NMUX(ZF, A2,L9 = Q0 {circumflex over ( )} Q9# File: ctop.dA11)A21 = U1 {circumflex over ( )} A2# FloatingQ13 = A6 {circumflex over ( )} A12A22 = NMUX(ZF, A21,multiplexersQ12 = Q5 {circumflex over ( )} Q13A5)A0 = XNOR(U2, U4)A13 = A5 {circumflex over ( )} A12Q2 = A20 {circumflex over ( )} A22A1 = XNOR(U1, A0)Q0 = Q5 {circumflex over ( )} A13Q6 = XNOR(A4, A22)A2 = XNOR(U5, U7)Q14 = U0 {circumflex over ( )} A13Q8 = XNOR(A16, A22)A3 = U0 {circumflex over ( )} U5A14 = XNOR(U3, A3)A23 = XNOR(Q5, Q9)A4 = XNOR(U3, U6)A15 = NMUX(ZF, A0,L10 = XNOR(Q1, A23)A5 = U2 {circumflex over ( )} U6U3)L4 = Q14 {circumflex over ( )} L10A6 = NMUX(ZF, A4,A16 = XNOR(U5, A15)A24 = NMUX(ZF, Q2,U1)Q3 = A4 {circumflex over ( )} A16L4)Q11 = A5 {circumflex over ( )} A6L6 = Q11 {circumflex over ( )} Q3L12 = XNOR(Q16,Q16 = U0 {circumflex over ( )} Q11A17 = U2 {circumflex over ( )} A10A24)A7 = U3 {circumflex over ( )} A1Q7 = XNOR(A8, A17)L25 = XNOR(U3, A24)L24 = MUX(ZF, Q16,A18 = NMUX(ZF, A14,A25 = MUX(ZF, L10,A7)A2)A3)A8 = NMUX(ZF, A3,Q1 = XNOR(A4, A18)L17 = U4 {circumflex over ( )} A25U6)Q4 = XNOR(A16, A18)A26 = MUX(ZF, A10,L28 = XNOR(L7, A35)Q8 = ZF {circumflex over ( )} A7Q4)A36 = NMUX(ZF, Q6,A8 = XNOR(A0, A2)L14 = L24 {circumflex over ( )} A26L11)L25 = NMUX(ZF, A8,L23 = A25 {circumflex over ( )} A26L31 = A30 {circumflex over ( )} A36U1)A27 = MUX(ZF, A1,A37 = MUX(ZF, L26,A9 = U2 {circumflex over ( )} A1U5)A0)Q2 = NMUX(ZF, A8,L30 = Q12 {circumflex over ( )} A27L22 = Q16 {circumflex over ( )} A37A9)A28 = NMUX(ZF, L10,Q17 = U0Q7 = Q1 {circumflex over ( )} Q2L5)L0 = Q10Q9 = Q8 {circumflex over ( )} Q2L21 = XNOR(L14, A28)L1 = Q6A10 = XNOR(U0, A7)L27 = XNOR(L30, A28)L2 = Q9A11 = XNOR(U5, U6)A29 = XNOR(U5, L4)L3 = Q8Q3 = MUX(ZF, A10,L29 = A28 {circumflex over ( )} A29A11)L15 = A19 {circumflex over ( )} A29# File: ctop.dsA30 = 
XNOR(A3, A10)# FloatingL18 = NMUX(ZF, A19,multiplexersA30)A0 = U3 {circumflex over ( )} U6A31 = XNOR(A7, A21)A1 = XNOR(U5, U7)L16 = A25 {circumflex over ( )} A31Q1 = XNOR(A0, A1)L26 = L18 {circumflex over ( )} A31A2 = U1 {circumflex over ( )} U4A32 = MUX(ZF, U7,A3 = U0 {circumflex over ( )} U5A5)Q17 = MUX(ZF, U5,L13 = A32 {circumflex over ( )} A7A3)A33 = NMUX(ZF, A15,A4 = U7 {circumflex over ( )} A2U0)Q5 = U3 {circumflex over ( )} A4L19 = XNOR(L6, A33)Q15 = Q17 {circumflex over ( )} Q5A34 = NOR(ZF, U6)A5 = XNOR(U0, U2)L20 = A34 {circumflex over ( )} Q0A6 = U1 {circumflex over ( )} A5A35 = XNOR(A4, A8)A7 = XNOR(U2, U3)Q4 = Q1 {circumflex over ( )} Q3A19 = NMUX(ZF, A9,L15 = Q8 {circumflex over ( )} L11Q6 = Q8 {circumflex over ( )} Q3A2)A20 = XNOR(Q17, L6)A12 = XNOR(U5, A6)L11 = U0 {circumflex over ( )} A19L31 = XNOR(A16, A20)L9 = MUX(ZF, A12, A1)L0 = XNOR(A17, L31)L19 = MUX(ZF, U1,A21 = MUX(ZF, U4,A12)Q12)L13 = Q6 {circumflex over ( )} L9L8 = Q7 {circumflex over ( )} A21A13 = U2 {circumflex over ( )} U4L12 = Q6 {circumflex over ( )} A21A14 = MUX(ZF, A7,A22 = MUX(ZF, L8, U4)A11)L28 = L25 {circumflex over ( )} A22Q16 = XNOR(A13,L23 = L12 {circumflex over ( )} A22A14)A23 = L25 {circumflex over ( )} A16Q11 = Q17 {circumflex over ( )} Q16L17 = L19 {circumflex over ( )} A23A15 = OR(ZF, A0)L29 = Q8 {circumflex over ( )} A23Q13 = A6 {circumflex over ( )} A15A24 = OR(ZI, A11)Q12 = Q5 {circumflex over ( )} Q13L7 = XNOR(U4, A24)Q14 = Q16 {circumflex over ( )} Q13A25 = NMUX(ZF, U6,A16 = MUX(ZF, A12,U7)Q12)L2 = A1 {circumflex over ( )} A25A17 = A8 {circumflex over ( )} Q6A26 = Q1 {circumflex over ( )} A6L20 = NMUX(ZF, A17,L24 = NMUX(ZF, A7,U2)A26)L6 = XNOR(U4, A17)A27 = MUX(ZF, U3,L27 = Q4 {circumflex over ( )} L20Q1)A18 = Q5 {circumflex over ( )} Q2L22 = Q15 {circumflex over ( )} A27L10 = MUX(ZF, Q12,A28 = MUX(ZF, A25,A18)A10)L14 = Q9 {circumflex over ( )} L10L18 = XNOR(L13, A28)A29 = A21 {circumflex over ( )} L18L4 = Q11 {circumflex over ( )} A29L3 = MUX(ZF, A29,Q6)A30 = 
XNOR(L25, Q7)L30 = XNOR(L14, A30)A31 = L22 {circumflex over ( )} A30L21 = XNOR(L27, A31)L26 = XNOR(Q3, A31)A32 = MUX(ZF, A4,A3)L16 = XNOR(A8, A32)A33 = MUX(ZF, A3,U0)Q10 = A32 {circumflex over ( )} A33A34 = MUX(ZF, Q11,A7)L1 = Q10 {circumflex over ( )} A34A35 = MUX(ZF, U0,A4)Q0 = A19 {circumflex over ( )} A35A36 = MUX(ZF, L14,Q7)L5 = A18 {circumflex over ( )} A36 D.10 Combined SBox (Optimal) Listing 11: Combined SBox circuit with area/depth trade-off (optimal)# Combined (optimal)Q8 = XNOR(Q1, A10)# Probabilistic [email protected] = XNOR(U0, U2)H1 = N1 {circumflex over ( )} [email protected] = ZF {circumflex over ( )} A11H3 = N15 {circumflex over ( )} [email protected] = U1 {circumflex over ( )} U3H4 = N12 {circumflex over ( )} [email protected] = A1 {circumflex over ( )} A12H5 = N0 {circumflex over ( )} [email protected] = MUX(ZF, A13,H6 = N7 {circumflex over ( )} [email protected])H8 = N10 {circumflex over ( )} N11Q15 = U4 {circumflex over ( )} A14H9 = H4 {circumflex over ( )} H8# File: ctop.aA15 = NMUX(ZF, U5,S4 = H3 {circumflex over ( )} H9# Floating multiplexersA0)H10 = N12 {circumflex over ( )} N14A0 = XNOR(U0, U6)Q5 = XNOR(A14, A15)H11 = N16 {circumflex over ( )} H8Q1 = XNOR(U1, ZF)Q17 = XNOR(U4, A15)S14 = N17 {circumflex over ( )} H11A1 = U2 {circumflex over ( )} U5A16 = MUX(ZF, A5, A2)H12 = N1 {circumflex over ( )} N2A2 = XNOR(U3, U4)Q16 = XNOR(A13, A16)H13 = N3 {circumflex over ( )} N5A3 = XNOR(U3, U7)A17 = A3 {circumflex over ( )} A8H14 = N4 {circumflex over ( )} N5A4 = MUX(ZF, A2, U2)Q2 = XNOR(A10, A17)H15 = N9 {circumflex over ( )} N11A5 = A0 {circumflex over ( )} A1A18 = U4 {circumflex over ( )} U6H16 = N6 {circumflex over ( )} H13Q6 = A4 {circumflex over ( )} A5A19 = U1 {circumflex over ( )} U2H17 = H6 {circumflex over ( )} H14A6 = XNOR(Q1, A1)Q11 = Q6 {circumflex over ( )} A19H18 = N4 {circumflex over ( )} H5A7 = NMUX(ZF, U0, A3)A20 = MUX(ZF, A18,H30 = H18 {circumflex over ( )} ZFQ4 = A5 {circumflex over ( )} A7A19)S1 = H17 {circumflex over ( )} 
H30Q3 = Q1 {circumflex over ( )} Q4Q13 = U5 {circumflex over ( )} A20H19 = H3 {circumflex over ( )} H15A8 = NMUX(ZF, U6, A2)A21 = XNOR(U4, Q0)S6 = XNOR(H18, H19)A9 = Q1 {circumflex over ( )} A3Q14 = XNOR(A14, A21)S11 = H17 {circumflex over ( )} H19Q9 = A8 {circumflex over ( )} A9A22 = XNOR(A4, A6)H20 = H10 {circumflex over ( )} H15Q10 = Q4 {circumflex over ( )} Q9Q12 = XNOR(U6, A22)S0 = XNOR(S6, H20)A10 = XNOR(A4, A7)S5 = H17 {circumflex over ( )} H20Q7 = XNOR(Q9, A10)# File: cbot.aH21 = N7 {circumflex over ( )} H12H22 = H16 {circumflex over ( )} H21S12 = H20 {circumflex over ( )} H22S13 = S4 {circumflex over ( )} H22H23 = N15 {circumflex over ( )} N16H24 = N9 {circumflex over ( )} N10H25 = N8 {circumflex over ( )} H24H26 = H12 {circumflex over ( )} H14S7 = XNOR(S4, H26)H27 = H4 {circumflex over ( )} H23S2 = H30 {circumflex over ( )} H27H28 = N8 {circumflex over ( )} H16S3 = S14 {circumflex over ( )} H28H29 = H21 {circumflex over ( )} H25S15 = H23 {circumflex over ( )} H29R0 = S0R1 = S1R2 = S2R3 = MUX(ZF, S3, S11)R4 = MUX(ZF, S4, S12)R5 = MUX(ZF, S5, S13)R6 = MUX(ZF, S6, S14)R7 = MUX(ZF, S7, S15) D.11 Combined SBox (Bonus) Listing 12: Combined SBox circuit with the smallest number of gates (bonus)# Combined (bonus)Q8 = Q3 {circumflex over ( )} [email protected] = MUX(ZF, Q1, A4)@mulx.aQ9 = U6 {circumflex over ( )} [email protected] = Q8 {circumflex over ( )} [email protected] = Q4 {circumflex over ( )} [email protected] = Q6 {circumflex over ( )} [email protected] = MUX(ZF, A0, U4)Q12 = XNOR(U7, A9)# File: ctop.bQ11 = Q0 {circumflex over ( )} Q12# FloatingA10 = MUX(ZF, A6,multiplexersQ12)A0 = XNOR(U3, U6)A11 = A2 {circumflex over ( )} A10Q15 = XNOR(U1, ZF)A12 = A4 {circumflex over ( )} A11A1 = U5 {circumflex over ( )} Q15Q5 = Q0 {circumflex over ( )} A12A2 = U2 {circumflex over ( )} A0Q13 = Q11 {circumflex over ( )} A12A3 = U4 {circumflex over ( )} A1Q17 = Q14 {circumflex over ( )} A12A4 = U4 {circumflex over ( )} U6Q16 = Q14 {circumflex over ( )} Q13A5 = MUX(ZF, 
A2, A4)Q4 = XNOR(A3, A5)# File: cbot.bQ0 = U0 {circumflex over ( )} Q4H0 = N9 {circumflex over ( )} N10Q14 = Q15 {circumflex over ( )} Q0H1 = N16 {circumflex over ( )} H0A6 = XNOR(U0, U2)H2 = N4 {circumflex over ( )} N5Q3 = ZF {circumflex over ( )} A6S4 = N7 {circumflex over ( )} (N8 {circumflex over ( )} H2)Q1 = Q4 {circumflex over ( )} Q3H4 = N0 {circumflex over ( )} N2A7 = MUX(ZF, U1, Q0)H6 = N15 {circumflex over ( )} H1Q6 = XNOR(A5, A7)H7 = H4 {circumflex over ( )}(N3 {circumflex over ( )}N5)H20 = H6 {circumflex over ( )} ZFR5 = MUX(ZF, S5, S13)S2 = H20 {circumflex over ( )} H7R6 = MUX(ZF, S6, S14)S14 = S4 {circumflex over ( )} H7R7 = MUX(ZF, S7, S15)H8 = N13 {circumflex over ( )} H0H9 = N12 {circumflex over ( )} H8S1 = H20 {circumflex over ( )} H9H10 = N17 {circumflex over ( )} H1H12 = H2 {circumflex over ( )} (N1 {circumflex over ( )} N2)S0 = H6 {circumflex over ( )} H12H21 = N8 {circumflex over ( )} H4S5 = N6 {circumflex over ( )} (H9 {circumflex over ( )} H21)S11 = H12 {circumflex over ( )} S5S6 = S1 {circumflex over ( )} S11H15 = N14 {circumflex over ( )} H10H16 = H8 {circumflex over ( )} H15S12 = S5 {circumflex over ( )} H16H22 = N9 {circumflex over ( )} N11S7 = XNOR(S4, H10 {circumflex over ( )}H22)H19 = XNOR(H7, S7)S3 = H16 {circumflex over ( )} H19S15 = S11 {circumflex over ( )} H19S13 = S4 {circumflex over ( )} (N12 {circumflex over ( )}H15)R0 = S0R1 = S1R2 = S2R3 = MUX(ZF, S3, S11)R4 = MUX(ZF, S4, S12)

PART C OF THE DISCLOSURE

The results from Part A and Part B can be further extended by a closer investigation of the inversion circuit. The resulting embodiments of the SBox are circuits having an even shorter critical path.

Inversion over GF(2^4)

The inversion formulae are as follows:

Y0 = X1X2X3 ⊕ X0X2 ⊕ X1X2 ⊕ X2 ⊕ X3,
Y1 = X0X2X3 ⊕ X0X2 ⊕ X1X2 ⊕ X1X3 ⊕ X3,
Y2 = X0X1X3 ⊕ X0X2 ⊕ X0X3 ⊕ X0 ⊕ X1,
Y3 = X0X1X2 ⊕ X0X2 ⊕ X0X3 ⊕ X1X3 ⊕ X1.

In [BP12] a circuit of depth 4 and 17 XORs was found, but it is desired to shrink the depth even further by utilizing a wider range of standard gates.
Accordingly, the algorithm from Part B, section 4.2 has been adapted to also find a small solution for the INV block. The idea is simple; each Yi is a truth table of length 16 bits, based on the 4-bit input X0, . . . , X3. We define our "point" to be a 16-bit value. All standard gates, AND, OR, XOR, MUX, NOT, including their negated versions, can be applied to any combination of "known" points (S), and distances to the target points T can be computed in a similar manner as before. Using this slightly modified algorithm for floating multiplexers, a solution was found having only 9 gates and depth 3. The results are shown in Equation 2 and the improved circuits are given in Appendix E.

T0 = AND(X0, X2)     T3 = MUX(X1, X2, 1)     Y1 = MUX(T2, X3, T3)
T1 = NOR(X1, X3)     T4 = MUX(X3, X0, 1)     Y2 = MUX(X0, T2, X1)     (2)
T2 = XNOR(T0, T1)    Y0 = MUX(X2, T2, X3)    Y3 = MUX(T2, X1, T4)

In case it is desired to avoid multiplexers in the INV block, there is an alternative set of equations that are also presented in this section. Each expression has been considered independently, using a general depth-3 expression:

Yi = ((Xa op1 Xb) op5 (Xc op2 Xd)) op7 ((Xe op3 Xf) op6 (Xg op4 Xh)),

where Xa, . . . , Xh are terms from {0, 1, X0, X1, X2, X3} and op1, . . . , op7 are operators from the set of standard gates {AND, OR, XOR, NAND, NOR, XNOR}. Note that the above does not need to use all terms; for example, the expression AND(x, x) is simply x. The exhaustive search can be organized as follows. Let there be an object Term which consists of a truth table TT of length 16 bits, based on the 4 bits X0, . . . , X3, and the Boolean function associated with the term. Starting with the initial set of available terms T(0)={0, 1, X0, . . . , X3}, an expression for a chosen Yi is constructed iteratively. Assume at some step k one has the set of available terms T(k); then the next set of terms and associated expressions can be obtained as:

T(k+1) = {T(k), T(k) operator T(k)},

taking care of unique terms.
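The iterative term construction above can be sketched in a few lines of Python. This is an assumed re-implementation, not the authors' program: a Term is a 16-bit truth table over X0 . . . X3 paired with the expression string that produced it, and each round computes T(k+1) = {T(k), T(k) op T(k)}, keeping one expression per unique truth table.

```python
from itertools import product

N, FULL = 16, (1 << 16) - 1

def tt_of(f):
    """Truth table of f(X0..X3) packed into a 16-bit int (bit v = f on input v)."""
    return sum(f((v >> 0) & 1, (v >> 1) & 1, (v >> 2) & 1, (v >> 3) & 1) << v
               for v in range(N))

OPS = {
    'AND':  lambda a, b: a & b,
    'OR':   lambda a, b: a | b,
    'XOR':  lambda a, b: a ^ b,
    'NAND': lambda a, b: FULL ^ (a & b),
    'NOR':  lambda a, b: FULL ^ (a | b),
    'XNOR': lambda a, b: FULL ^ a ^ b,
}

def expand(terms):
    """One round: combine every pair of known terms with every operator."""
    new = dict(terms)
    for (ta, ea), (tb, eb) in product(terms.items(), repeat=2):
        for name, op in OPS.items():
            new.setdefault(op(ta, tb), f'{name}({ea},{eb})')
    return new

# T(0) = {0, 1, X0, X1, X2, X3}
T = {0: '0', FULL: '1'}
for i in range(4):
    T[tt_of(lambda x0, x1, x2, x3, i=i: (x0, x1, x2, x3)[i])] = f'X{i}'

T = expand(expand(T))   # two rounds: all functions of gate-depth <= 2

# e.g. the shared signal T2 = XNOR(AND(X0,X2), NOR(X1,X3)) of Equation 2 is a
# depth-2 function, so its truth table must already be present:
t2 = tt_of(lambda x0, x1, x2, x3: (x0 & x2) ^ (x1 | x3))
assert t2 in T
```

A third round would cover the full depth-3 template, at correspondingly higher cost; the real search also tracks gate counts to pick the "best" expression per target, which this sketch omits.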
At some step k one will get one or more terms whose TTs are equal to the target TTs (the Yi's). Since it is possible to get multiple Boolean functions for each Yi, one should select only the "best" functions following these criteria: there are no NOT gates (due to better sharing capabilities), a maximum number of gates can be shared between the 4 expressions for Y0, . . . , Y3, and the area/depth in terms of GE is small. Using this technique, we have found a depth-3, 15-gate solution for the inversion. The equations are given below, where depth-3 solutions are also provided for the additional 5 signals {Y01, Y23, Y02, Y13, Y00} such that they can share a lot of gates in the scenarios S0-S5 mentioned in Part B.

Y0 = xnor(and(X0, X2), nand(nand(X1, X2), xor(X2, X3)))
Y1 = xor(nand(xor(X2, X3), X1), nor(and(X0, X2), X3))
Y2 = xnor(and(X0, X2), nand(xor(X0, X1), nand(X0, X3)))
Y3 = xor(nand(xor(X0, X1), X3), nor(and(X0, X2), X1))
Y01 = nand(nand(xor(X2, X3), X1), nand(nand(X0, X3), X2))
Y23 = nand(nand(xor(X0, X1), X3), nand(nand(X1, X2), X0))
Y13 = xor(nor(and(X0, X2), xnor(X1, X3)), xor(nand(X0, X3), nand(X1, X2)))
Y02 = xor(nand(xor(X2, X3), nand(X1, X2)), nand(xor(X0, X1), nand(X0, X3)))
Y00 = and(nand(and(X0, X2), xnor(X1, X3)), nor(nor(X0, X2), and(X1, X3)))

APPENDIX E

In this section, circuits using the improved inversion formulae presented in Part C are described.

Preliminaries

In the listings presented below, specifications are given for three circuits for the forward, inverse, and combined SBoxes in the novel architecture D (fast).
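The depth-3 equations can be checked mechanically against the polynomial formulae from the start of Part C. The following Python snippet is an illustrative cross-check, not part of the original text: it compares, e.g., the Y0, Y1 and Y3 expressions with their ANF polynomials over all 16 inputs, and verifies that the resulting 4-bit map is an involution, as inversion in GF(2^4) must be.

```python
# Boolean gate helpers over single bits.
AND  = lambda a, b: a & b
NAND = lambda a, b: 1 ^ (a & b)
NOR  = lambda a, b: 1 ^ (a | b)
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: 1 ^ a ^ b

def inv_gates(x0, x1, x2, x3):
    """The depth-3, 15-gate expressions for Y0..Y3 given above."""
    y0 = XNOR(AND(x0, x2), NAND(NAND(x1, x2), XOR(x2, x3)))
    y1 = XOR(NAND(XOR(x2, x3), x1), NOR(AND(x0, x2), x3))
    y2 = XNOR(AND(x0, x2), NAND(XOR(x0, x1), NAND(x0, x3)))
    y3 = XOR(NAND(XOR(x0, x1), x3), NOR(AND(x0, x2), x1))
    return y0, y1, y2, y3

for v in range(16):
    x0, x1, x2, x3 = ((v >> i) & 1 for i in range(4))
    y = inv_gates(x0, x1, x2, x3)
    # ANF polynomials for Y0, Y1 and Y3 from the start of Part C:
    assert y[0] == (x1 & x2 & x3) ^ (x0 & x2) ^ (x1 & x2) ^ x2 ^ x3
    assert y[1] == (x0 & x2 & x3) ^ (x0 & x2) ^ (x1 & x2) ^ (x1 & x3) ^ x3
    assert y[3] == (x0 & x1 & x2) ^ (x0 & x2) ^ (x0 & x3) ^ (x1 & x3) ^ x1
    assert inv_gates(*y) == (x0, x1, x2, x3)   # inversion is an involution
```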
The symbols used in the following listings are as follows, and have the indicated meanings:

#comment — a comment line
@filename — means that the code from another file called 'filename' should be included, the listing of which is then given in this section as well.
a^b — is the usual XOR gate; other gates are explicitly denoted and taken from the set of {XNOR, AND, NAND, OR, NOR, MUX, NMUX, NOT}
(a op b) — where the order of execution (the order of gate connections) is important, we specify it by brackets.

The inputs to all SBoxes are the 8 signals {U0 . . . U7} and the outputs are the 8 signals {R0 . . . R7}. The input and output bits are represented in Big Endian bit order. For combined SBoxes the input has the additional signals ZF and ZI, where ZF=1 if one performs the forward SBox and ZF=0 for the inverse; the signal ZI is the complement of ZF. All the proposed circuits have been tested and their correctness verified. The circuits are divided into sub-programs that correspond, respectively, to the functions/layers shown in FIG. 5. The discussion starts with a description of the common shared components; then, for each solution, the components (common or specific) for the circuits are described.

Shared Components

The shared components are used in several of the implementations that follow and are thus described only once here.
Listing: MULX/8XOR4/INV: Shared components

# File: mulx.a
T20 = NAND(Q6, Q12)
T21 = NAND(Q3, Q14)
T22 = NAND(Q1, Q16)
T10 = (NOR(Q3, Q14) ^ NAND(Q0, Q7))
T11 = (NOR(Q4, Q13) ^ NAND(Q10, Q11))
T12 = (NOR(Q2, Q17) ^ NAND(Q5, Q9))
T13 = (NOR(Q8, Q15) ^ NAND(Q2, Q17))
X0 = T10 ^ (T20 ^ T22)
X1 = T11 ^ (T21 ^ T20)
X2 = T12 ^ (T21 ^ T22)
X3 = T13 ^ (T21 ^ NAND(Q4, Q13))

# File: inv.a
T0 = NAND(X0, X2)
T1 = NOR(X1, X3)
T2 = XNOR(T0, T1)
Y0 = MUX(X2, T2, X3)
Y2 = MUX(X0, T2, X1)
T3 = MUX(X1, X2, 1)
Y1 = MUX(T2, X3, T3)
T4 = MUX(X3, X0, 1)
Y3 = MUX(T2, X1, T4)

# File: 8xor4.d
R0 = (K0 ^ K1) ^ (K2 ^ K3)
R1 = (K4 ^ K5) ^ (K6 ^ K7)
R2 = (K8 ^ K9) ^ (K10 ^ K11)
R3 = (K12 ^ K13) ^ (K14 ^ K15)
R4 = (K16 ^ K17) ^ (K18 ^ K19)
R5 = (K20 ^ K21) ^ (K22 ^ K23)
R6 = (K24 ^ K25) ^ (K26 ^ K27)
R7 = (K28 ^ K29) ^ (K30 ^ K31)

Listing: MULN/MULL: Shared components.

# File: muln.a
N0 = NAND(Y01, Q11)
N1 = NAND(Y0, Q12)
N2 = NAND(Y1, Q0)
N3 = NAND(Y23, Q17)
N4 = NAND(Y2, Q5)
N5 = NAND(Y3, Q15)
N6 = NAND(Y13, Q14)
N7 = NAND(Y00, Q16)
N8 = NAND(Y02, Q13)
N9 = NAND(Y01, Q7)
N10 = NAND(Y0, Q10)
N11 = NAND(Y1, Q6)
N12 = NAND(Y23, Q2)
N13 = NAND(Y2, Q9)
N14 = NAND(Y3, Q8)
N15 = NAND(Y13, Q3)
N16 = NAND(Y00, Q1)
N17 = NAND(Y02, Q4)

# File: mull.d
K0 = NAND(Y0, L0)
K12 = NAND(Y0, L12)
K16 = NAND(Y0, L16)
K20 = NAND(Y0, L20)
K1 = NAND(Y1, L1)
K5 = NAND(Y1, L5)
K9 = NAND(Y1, L9)
K13 = NAND(Y1, L13)
K17 = NAND(Y1, L17)
K21 = NAND(Y1, L21)
K25 = NAND(Y1, L25)
K29 = NAND(Y1, L29)
K2 = NAND(Y2, L2)
K6 = NAND(Y2, L6)
K10 = NAND(Y2, L10)
K14 = NAND(Y2, L14)
K18 = NAND(Y2, L18)
K22 = NAND(Y2, L22)
K26 = NAND(Y2, L26)
K30 = NAND(Y2, L30)
K3 = NAND(Y3, L3)
K7 = NAND(Y3, L7)
K11 = NAND(Y3, L11)
K15 = NAND(Y3, L15)
K19 = NAND(Y3, L19)
K23 = NAND(Y3, L23)
K27 = NAND(Y3, L27)
K31 = NAND(Y3, L31)

# File: mull.f
K4 = AND(Y0, L4)
K8 = AND(Y0, L8)
K24 = AND(Y0, L24)
K28 = AND(Y0, L28)

# File: mull.i
K4 = NAND(Y0, L4)
K8 = NAND(Y0, L8)
K24 = NAND(Y0, L24)
K28 = NAND(Y0, L28)

# File: mull.c
K4 = NAND(Y0, L4) ^ ZF
K8 = NAND(Y0, L8) ^ ZF
K24 = NAND(Y0, L24) ^ ZF
K28 = NAND(Y0, L28) ^ ZF

Forward SBox (Fast)

Listing: Forward SBox with the smallest delay (fast)

# Forward (fast)
@ftop.d
@mulx.a
@inv.a
@mull.d
@mull.f
@8xor4.d

# File: ftop.d
# Exhaustive search
Z18 = U1 ^ U4
L28 = Z18 ^ U6
Q0 = U2 ^ L28
Z96 = U5 ^ U6
Q1 = U0 ^ Z96
Z160 = U5 ^ U7
Q2 = U6 ^ Z160
Q11 = U2 ^ U3
L6 = U4 ^ Z96
Q3 = Q11 ^ L6
Q16 = U0 ^ Q11
Q4 = Q16 ^ U4
Q5 = Z18 ^ Z160
Z10 = U1 ^ U3
Q6 = Z10 ^ Q2
Q7 = U0 ^ U7
Z36 = U2 ^ U5
Q8 = Z36 ^ Q5
L19 = U2 ^ Z96
Q9 = Z18 ^ L19
Q10 = Z10 ^ Q1
Q12 = U3 ^ L28
Q13 = U3 ^ Q2
L10 = Z36 ^ Q7
Q14 = U6 ^ L10
Q15 = U0 ^ Q5
Q17 = U0
L8 = U3 ^ Q5
L12 = Q16 ^ Q2
L16 = U2 ^ Q4
L15 = U1 ^ Z96
L31 = Q16 ^ L15
L5 = Q12 ^ L31
L13 = U3 ^ Q8
L17 = U4 ^ L10
L29 = Z96 ^ L10
L14 = Q11 ^ L10
L26 = Q11 ^ Q5
L30 = Q11 ^ U6
L7 = Q12 ^ Q1
L11 = Q12 ^ L15
L27 = L30 ^ L10
L0 = Q10
L1 = Q6
L2 = Q9
L3 = Q8
L4 = U6
L9 = U5
L18 = U1
L20 = Q0
L21 = Q11
L22 = Q15
L23 = U0
L24 = Q16
L25 = Q13

Combined SBox (Fast)

Listing: Combined SBox with the smallest delay

# Combined (fast)
@ctop.d
@mulx.a
@inv.a
@mull.d
@mull.c
@8xor4.d

# File: ctop.d
# Floating multiplexers
# '?' marks a token lost in the source text
A11 = U2 ^ ?
A12 = NMUX(ZF, A2, A11)
Q13 = A6 ^ A12
Q12 = Q5 ^ Q13
Q14 = A5 ^ ?
A13 = Q5 ^ ?
Q0 = U0 ^ A13
A14 = XNOR(U3, A3)
A15 = NMUX(ZF, A0, U3)
A16 = XNOR(U5, A15)
A0 = XNOR(U2, U4)
Q3 = A4 ^ A16
A1 = XNOR(U1, A0)
L6 = Q11 ^ Q3
A2 = XNOR(U5, U7)
A17 = U2 ^ A10
A3 = U0 ^ U5
Q7 = XNOR(A8, A17)
A4 = XNOR(U3, U6)
A18 = NMUX(ZF, A14, A2)
A5 = U2 ^ U6
Q1 = XNOR(A4, A18)
A6 = NMUX(ZF, A4, U1)
Q4 = XNOR(A16, A18)
Q11 = A5 ^ A6
L7 = Q12 ^ Q1
Q16 = U0 ^ Q11
L8 = Q7 ^ L7
A7 = U3 ^ A1
A19 = NMUX(ZF, U1, A4)
L24 = MUX(ZF, Q16, A7)
A20 = XNOR(U6, A19)
A8 = NMUX(ZF, A3, U6)
Q9 = XNOR(A16, A20)
L5 = A0 ^ A8
Q10 = A18 ^ A20
L11 = Q16 ^ L5
L9 = Q0 ^ Q9
A9 = MUX(ZF, U2, U6)
A21 = U1 ^ A2
A10 = XNOR(A2, A9)
A22 = NMUX(ZF, A21, A5)
Q5 = A1 ^ A10
Q2 = A20 ^ A22
Q15 = U0 ^ Q5
Q6 = XNOR(A4, A22)
Q8 = XNOR(A16, A22)
L18 = NMUX(ZF, A19, A30)
A23 = XNOR(Q5, Q9)
A31 = XNOR(A7, A21)
L10 = XNOR(Q1, A23)
L16 = A25 ^ A31
L4 = Q14 ^ L10
L26 = L18 ^ A31
A24 = NMUX(ZF, Q2, L4)
A32 = MUX(ZF, U7, A5)
L12 = XNOR(Q16, A24)
L13 = A7 ^ A32
L25 = XNOR(U3, A24)
A33 = NMUX(ZF, A15, U0)
A25 = MUX(ZF, L10, A3)
L19 = XNOR(L6, A33)
L17 = U4 ^ A25
A34 = NOR(ZF, U6)
A26 = MUX(ZF, A10, Q4)
L20 = Q0 ^ A34
L14 = L24 ^ A26
A35 = XNOR(A4, A8)
L23 = A25 ^ A26
L28 = XNOR(L7, A35)
A27 = MUX(ZF, A1, U5)
A36 = NMUX(ZF, Q6, L11)
L30 = Q12 ^ A27
L31 = A30 ^ A36
A28 = NMUX(ZF, L10, L5)
A37 = MUX(ZF, L26, A0)
L21 = XNOR(L14, A28)
L22 = Q16 ^ A37
L27 = XNOR(L30, A28)
Q17 = U0
A29 = XNOR(U5, L4)
L0 = Q10
L29 = A28 ^ A29
L1 = Q6
L15 = A19 ^ A29
L2 = Q9
A30 = XNOR(A3, A10)
L3 = Q8

Inverse SBox (Fast)

Listing: Inverse SBox circuits with the smallest delay (fast)

# Inverse (fast)
@itop.d
@mulx.a
@inv.a
@mull.d
@mull.i
@8xor4.d

# File: itop.d
# Exhaustive search
# '?' marks a token lost in the source text; the lost left-hand
# names below are L0, L3 and L23 (the only L signals not defined elsewhere)
Q14 = XNOR(Q10, U2)
Q15 = Q14 ^ ?
Q16 = XNOR(Q8, U7)
Q17 = Q16 ^ ?
? = Q15 ^ ?
? = U0 ^ ?
? = Q11 ^ Q2
L4 = Q6 ^ L3
L16 = Q3 ^ L27
L1 = XNOR(U2, U3)
Q8 = XNOR(U1, U3)
L6 = L1 ^ Q0
Q0 = Q8 ^ U5
L20 = L6 ^ Q2
Q1 = U6 ^ U7
L15 = XNOR(U2, Q6)
Q7 = U3 ^ U4
L24 = U0 ^ L15
Q2 = Q7 ^ Q1
L5 = L27 ^ Q2
Q3 = U0 ^ U4
L19 = Q14 ^ U5
Q4 = Q3 ^ Q1
L26 = Q3 ^ L3
Q5 = XNOR(U1, Q3)
L13 = L19 ^ L26
Q10 = XNOR(U0, U1)
L17 = U0 ^ L12
Q6 = Q10 ^ Q7
L21 = XNOR(U1, Q1)
Q9 = Q10 ^ Q4
L25 = Q5 ^ L3
L12 = U4 ^ U5
L14 = U3 ^ Q12
Z132 = U2 ^ U7
L18 = U0 ^ Q1
Q11 = L12 ^ Z132
L22 = XNOR(Q5, U6)
Q12 = Q0 ^ Q11
L8 = Q11
L27 = U3 ^ Z132
L28 = Q7
Q13 = U0 ^ L27
L9 = Q12
L2 = U5
L29 = Q10
L10 = Q17
L30 = Q2
L7 = U4
L11 = Q5
L31 = Q9
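The shared inv.a netlist above can be exercised in the same way as the closed-form Y equations given earlier. The sketch below transcribes it directly; the MUX convention (MUX(s, a, b) returns a when s = 1) is our assumption. Under that reading, the netlist is an involution on 4-bit values, as GF(2^4) inversion must be.

```python
# Assumed gate semantics (MUX selector convention is our assumption).
def MUX(s, a, b):  return a if s else b
def XNOR(a, b):    return 1 - (a ^ b)
def NAND(a, b):    return 1 - (a & b)
def NOR(a, b):     return 1 - (a | b)

def inv_a(X0, X1, X2, X3):
    # Direct transcription of File: inv.a
    T0 = NAND(X0, X2)
    T1 = NOR(X1, X3)
    T2 = XNOR(T0, T1)
    Y0 = MUX(X2, T2, X3)
    Y2 = MUX(X0, T2, X1)
    T3 = MUX(X1, X2, 1)
    Y1 = MUX(T2, X3, T3)
    T4 = MUX(X3, X0, 1)
    Y3 = MUX(T2, X1, T4)
    return (Y0, Y1, Y2, Y3)

# Applying the netlist twice should return the input for all 16 values.
double = [inv_a(*inv_a(*((x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1)))
          for x in range(16)]
```

If the involution check holds, the MUX convention assumed here is at least consistent with the circuit computing a field inversion.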
11943333

Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description of exemplary embodiments is intended for illustration purposes only and is, therefore, not intended to necessarily limit the scope of the disclosure.

DETAILED DESCRIPTION

Glossary of Terms

Blockchain—A public ledger of all transactions of a blockchain-based currency. One or more computing devices may comprise a blockchain network, which may be configured to process and record transactions as part of a block in the blockchain. Once a block is completed, the block is added to the blockchain and the transaction record is thereby updated. In many instances, the blockchain may be a ledger of transactions in chronological order, or may be presented in any other order that may be suitable for use by the blockchain network. In some configurations, transactions recorded in the blockchain may include a destination address and a currency amount, such that the blockchain records how much currency is attributable to a specific address. In some instances, some transactions are financial and others are not financial, or transactions might include additional or different information, such as a source address, timestamp, etc. In some embodiments, a blockchain may also or alternatively include nearly any type of data as a form of transaction that is or needs to be placed in a distributed database that maintains a continuously growing list of data records hardened against tampering and revision, even by its operators, and such data may be confirmed and validated by the blockchain network through proof of work and/or any other suitable verification techniques associated therewith. In some cases, data regarding a given transaction may further include additional data that is not directly part of the transaction appended to transaction data.
In some instances, the inclusion of such data in a blockchain may constitute a transaction. In such instances, a blockchain may not be directly associated with a specific digital, virtual, fiat, or other type of currency.

System for Improved Confirmation of Blockchain Transactions

FIG. 1 illustrates a system 100 for the confirmation of a newly submitted blockchain transaction that utilizes output of an earlier transaction still awaiting inclusion in a blockchain. The system 100 may include a blockchain node 102, which may be one of a plurality of nodes (e.g., the blockchain node 102 and a plurality of additional nodes 106) that comprise a blockchain network 104. Each blockchain node 102 and additional node 106 may be a computing system, such as illustrated in FIG. 2 and FIG. 5, discussed in more detail below, that is configured to perform functions related to the processing and management of the blockchain, including the generation of blockchain data values, verification of proposed blockchain transactions, verification of digital signatures, generation of new blocks, validation of new blocks, and maintenance of a copy of the blockchain. The blockchain may be a distributed ledger that is comprised of at least a plurality of blocks. Each block may include at least a block header and one or more data values. Each block header may include at least a timestamp, a block reference value, and a data reference value. The timestamp may be a time at which the block header was generated, and may be represented using any suitable method (e.g., UNIX timestamp, DateTime, etc.). The block reference value may be a value that references an earlier block (e.g., based on timestamp) in the blockchain. In some embodiments, a block reference value in a block header may be a reference to the block header of the most recently added block prior to the respective block.
In an exemplary embodiment, the block reference value may be a hash value generated via the hashing of the block header of the most recently added block. The data reference value may similarly be a reference to the one or more data values stored in the block that includes the block header. In an exemplary embodiment, the data reference value may be a hash value generated via the hashing of the one or more data values. For instance, the data reference value may be the root of a Merkle tree generated using the one or more data values. The use of the block reference value and data reference value in each block header may result in the blockchain being immutable. Any attempted modification to a data value would require the generation of a new data reference value for that block, which would thereby require the subsequent block's block reference value to be newly generated, further requiring the generation of a new block reference value in every subsequent block. This would have to be performed and updated in every single node in the blockchain network 104 prior to the generation and addition of a new block to the blockchain in order for the change to be made permanent. Computational and communication limitations may make such a modification exceedingly difficult, if not impossible, thus rendering the blockchain immutable. In some embodiments, the blockchain may be used to store information regarding blockchain transactions conducted between two different blockchain wallets. A blockchain wallet may include a private key of a cryptographic key pair that is used to generate digital signatures that serve as authorization by a payer for a blockchain transaction, where the digital signature can be verified by the blockchain network 104 using the public key of the cryptographic key pair. In some cases, the term "blockchain wallet" may refer specifically to the private key.
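The header scheme described above can be sketched briefly: hashing the previous header yields the block reference value, and a Merkle root over the data values yields the data reference value, so any change to a stored value ripples into every later header. The field names and the use of SHA-256 and JSON serialization below are illustrative assumptions, not details fixed by the text.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(values):
    """Data reference value: root of a Merkle tree over the block's data values."""
    level = [sha256(v.encode()) for v in values]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_header(prev_header: dict, data_values) -> dict:
    """New block header: timestamp, block reference value, data reference value."""
    return {
        "timestamp": int(time.time()),     # e.g. a UNIX timestamp
        "block_reference": sha256(json.dumps(prev_header, sort_keys=True).encode()),
        "data_reference": merkle_root(data_values),
    }
```

Changing any single data value changes the Merkle root, which changes this header's hash, which in turn invalidates every subsequent block reference value, which is the immutability argument made above.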
In other cases, the term "blockchain wallet" may refer to a computing device (e.g., first computing device 108, second computing device 110) that stores the private key for use thereof in blockchain transactions. For instance, each computing device may have its own private key for a respective cryptographic key pair, and may each be a blockchain wallet for use in transactions with the blockchain associated with the blockchain network. Computing devices may be any type of device suitable to store and utilize a blockchain wallet, such as a desktop computer, laptop computer, notebook computer, tablet computer, cellular phone, smart phone, smart watch, smart television, wearable computing device, implantable computing device, etc. Each blockchain data value stored in the blockchain may correspond to a blockchain transaction or other storage of data, as applicable. A blockchain transaction may consist of at least: a digital signature of the sender of currency (e.g., the first computing device 108) that is generated using the sender's private key, a blockchain address of the recipient of currency (e.g., the second computing device 110) generated using the recipient's public key, and a blockchain currency amount that is transferred, or other data being stored. In some blockchain transactions, the transaction may also include one or more blockchain addresses of the sender where blockchain currency is currently stored (e.g., where the digital signature proves their access to such currency), as well as an address generated using the sender's public key for any change that is to be retained by the sender. Addresses to which cryptographic currency has been sent that can be used in future transactions are referred to as "output" addresses, as each address was previously used to capture output of a prior blockchain transaction; they are also referred to as "unspent transactions," due to there being currency sent to the address in a prior transaction where that currency is still unspent.
In some cases, a blockchain transaction may also include the sender's public key, for use by an entity in validating the transaction. For the traditional processing of a blockchain transaction, such data may be provided to a blockchain node 102 in the blockchain network 104, either by the sender or the recipient. The node may verify the digital signature using the public key in the cryptographic key pair of the sender's wallet and also verify the sender's access to the funds (e.g., that the unspent transactions have not yet been spent and were sent to an address associated with the sender's wallet), and then include the blockchain transaction in a new block. The new block may be validated by other nodes in the blockchain network 104 before being added to the blockchain and distributed to all of the blockchain nodes 102 in the blockchain network 104 in traditional blockchain implementations. In cases where a blockchain data value may not be related to a blockchain transaction, but instead to the storage of other types of data, blockchain data values may still include or otherwise involve the validation of a digital signature. In the system 100, the first computing device 108 may submit a new blockchain transaction to the blockchain node 102, where the transaction is for the transfer of cryptographic currency to the blockchain wallet of the second computing device 110. In an example, the first computing device 108 may be submitting to transmit 100 units to the second computing device 110. The blockchain node 102 may receive the new transaction, referred to herein as the "first" transaction or "initial" transaction. The new transaction may include at least one prior transaction output, a digital signature generated by the first computing device's private key, a recipient address generated using the second computing device's public key, and the amount to be transferred.
The initial transaction may be stored in a pool of pending transactions that are awaiting confirmation and inclusion in a new block. The initial transaction may wait in the pool until a blockchain node 102 confirms the transaction and includes it in a new block that is distributed to the additional nodes 106 for confirmation and inclusion in the blockchain. The second computing device 110 may be aware of the payment that is attempted by the first computing device 108 in the initial transaction, and may have a desire to use that payment. Traditionally, the second computing device 110 would have to wait until the initial transaction is confirmed and included in a new block that is confirmed and added to the blockchain before a new transaction, referred to herein as the "second" or "subsequent" transaction, could be submitted and confirmed. In the system 100, the second computing device 110 may submit the second transaction to the blockchain node 102 for confirmation prior to inclusion of the first transaction in the blockchain. The second blockchain transaction may include the digital signature generated using the second computing device's private key, a payment amount, a recipient address for a blockchain wallet to receive the payment amount, and at least one transaction output that includes an output of the initial blockchain transaction. The second computing device 110 may be able to identify and/or generate the output of the initial blockchain transaction because the second computing device 110 has knowledge of all of the data included in the initial blockchain transaction (as a recipient thereof, the second computing device 110 will know its recipient address and the amount being sent thereto), without the initial blockchain transaction being included in the blockchain. This second blockchain transaction may be submitted to the blockchain node 102 using any traditional communication network and method.
The blockchain node 102 may receive the second blockchain transaction and place the transaction in the pool of pending transactions for mining and confirmation. When a blockchain node 102 selects the second blockchain transaction for confirmation, the blockchain node 102 may identify that the second blockchain transaction is relying on the output of a prior transaction that has not yet been included in the blockchain (e.g., by searching the transaction outputs in blockchain data values currently in the blockchain). The blockchain node 102 may then query the pool of pending transactions to identify the initial blockchain transaction therein, where its transaction output matches the input for the subsequent blockchain transaction. Once the initial blockchain transaction is identified, the blockchain node 102 may validate the initial blockchain transaction. The validation of the initial blockchain transaction may include confirmation of the initial blockchain transaction by the blockchain node 102, or validation that the initial blockchain transaction has been previously confirmed by the blockchain node 102 or an additional node 106. For instance, the initial blockchain transaction may have been previously confirmed for inclusion in a block that was not added to the blockchain (e.g., an orphaned block), in which case the initial blockchain transaction does not need to be confirmed again. In cases where confirmation must still occur, the blockchain node 102 may confirm the initial blockchain transaction. Confirmation of the transaction may include validation of the digital signature using the appropriate public key, verification of the input(s) for the transaction, and verification that the payment amount(s) being paid are covered by the transaction input(s). Once the initial blockchain transaction is confirmed and validated, the blockchain node 102 may confirm the subsequent blockchain transaction.
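The lookup-then-validate ordering described above can be pictured with a toy pending pool. Everything here (the dictionary layout and the `valid` flag standing in for signature and amount checks) is a hypothetical illustration of the control flow, not the patent's data model.

```python
# Outputs already recorded on the blockchain (illustrative names).
posted_outputs = {"out-0"}

# Pool of pending transactions awaiting confirmation.
pending_pool = {
    "tx-initial":    {"inputs": ["out-0"], "outputs": ["out-1"], "valid": True},
    "tx-subsequent": {"inputs": ["out-1"], "outputs": ["out-2"], "valid": True},
}

def confirm(tx_id, pool, posted):
    """Confirm tx_id, first confirming any pending parent whose output it spends."""
    tx = pool[tx_id]
    for ref in tx["inputs"]:
        if ref in posted:
            continue                                   # input already on chain
        parent = next((p for p, t in pool.items() if ref in t["outputs"]), None)
        if parent is None or not confirm(parent, pool, posted):
            return False                               # unknown or invalid parent
    if not tx["valid"]:                                # stand-in for signature/amount checks
        return False
    posted.update(tx["outputs"])                       # parent and child land together
    return True
```

Confirming `"tx-subsequent"` first forces confirmation of `"tx-initial"` out of the pool, mirroring the validate-the-parent-first order the text describes before both are placed in the same new block.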
Confirmation of the subsequent blockchain transaction may be performed in the same manner as confirmation of the initial blockchain transaction, where part of the confirmation is verification of the input of the subsequent blockchain transaction that matches the output of the initial blockchain transaction. If the second blockchain transaction is also confirmed successfully, then both pending transactions may be included in a new block that is generated. The new block may include a new block header generated by the blockchain node 102 and a plurality of blockchain data values that includes at least the first and second blockchain transactions. The new block header may include at least a timestamp, a block reference value that is a hash of the block header of the most recent block that was already added to the blockchain, and a data reference value that refers to the blockchain data values included in the new block. In some cases, the data reference value may be the root of a Merkle tree generated using the blockchain data values. The new block may be transmitted to the additional nodes 106 in the blockchain network 104 for confirmation thereof, such as by ensuring that the block reference value and data reference value are correct. A majority of the additional nodes 106 may transmit a confirmation message back to the blockchain node 102, which may indicate successful confirmation of the new block. The new block may then be distributed to all nodes in the blockchain network 104, effectively adding the block to the blockchain. The first and second blockchain transactions may thereby be added to the blockchain, where the second computing device 110 was able to spend the payment received in the first blockchain transaction prior to the first blockchain transaction even being included in the blockchain. The result is that the second computing device 110 does not need to wait for confirmation of the first transaction before proceeding with its own transaction.
This allows users to avoid being at the mercy of the blockchain nodes 102 acting as miners, and of mining fees paid by the first computing device 108 that the miners may not find attractive. Inclusion of both pending transactions in a single new block that is added prevents the possibility of one of the transactions ending up in an orphaned block, which could cause an error in future attempted confirmations in the blockchain. As a result, the methods and systems discussed herein provide for continued operation of the blockchain as an immutable record, where a new transaction does not have to wait on the inclusion of an earlier transaction used as input thereto before being submitted for addition to the blockchain.

Blockchain Node

FIG. 2 illustrates an embodiment of a blockchain node 102 in the system 100. It will be apparent to persons having skill in the relevant art that the embodiment of the blockchain node 102 illustrated in FIG. 2 is provided as illustration only and may not be exhaustive of all possible configurations of the blockchain node 102 suitable for performing the functions as discussed herein. For example, the computer system 500 illustrated in FIG. 5 and discussed in more detail below may be a suitable configuration of the blockchain node 102. The additional nodes 106 in the system 100 and illustrated in FIG. 1 may be implemented as the blockchain node 102 illustrated in FIG. 2 and discussed herein. The blockchain node 102 may include a receiving device 202. The receiving device 202 may be configured to receive data over one or more networks via one or more network protocols. In some instances, the receiving device 202 may be configured to receive data from additional nodes 106, first computing devices 108, second computing devices 110, and other systems and entities via one or more communication methods, such as radio frequency, local area networks, wireless area networks, cellular communication networks, Bluetooth, the Internet, etc.
In some embodiments, the receiving device 202 may be comprised of multiple devices, such as different receiving devices for receiving data over different networks, such as a first receiving device for receiving data over a local area network and a second receiving device for receiving data via the Internet. The receiving device 202 may receive electronically transmitted data signals, where data may be superimposed or otherwise encoded on the data signal and decoded, parsed, read, or otherwise obtained via receipt of the data signal by the receiving device 202. In some instances, the receiving device 202 may include a parsing module for parsing the received data signal to obtain the data superimposed thereon. For example, the receiving device 202 may include a parser program configured to receive and transform the received data signal into usable input for the functions performed by the processing device to carry out the methods and systems described herein. The receiving device 202 may be configured to receive data signals electronically transmitted by additional nodes 106 as other blockchain nodes 102 in the blockchain network 104, which may be superimposed or otherwise encoded with blockchain data values for confirmation, confirmation of blockchain data values, new blocks for confirmation, confirmation messages of potential blocks, new blocks for addition to the blockchain, etc. The receiving device 202 may also be configured to receive data signals electronically transmitted by first computing devices 108 and second computing devices 110 that may be superimposed or otherwise encoded with new blockchain transactions for confirmation and inclusion in the blockchain. The blockchain node 102 may also include a communication module 204. The communication module 204 may be configured to transmit data between modules, engines, databases, memories, and other components of the blockchain node 102 for use in performing the functions discussed herein.
The communication module 204 may be comprised of one or more communication types and utilize various communication methods for communications within a computing device. For example, the communication module 204 may be comprised of a bus, contact pin connectors, wires, etc. In some embodiments, the communication module 204 may also be configured to communicate between internal components of the blockchain node 102 and external components of the blockchain node 102, such as externally connected databases, display devices, input devices, etc. The blockchain node 102 may also include a processing device. The processing device may be configured to perform the functions of the blockchain node 102 discussed herein, as will be apparent to persons having skill in the relevant art. In some embodiments, the processing device may include and/or be comprised of a plurality of engines and/or modules specially configured to perform one or more functions of the processing device, such as a querying module 214, generation module 216, validation module 218, etc. As used herein, the term "module" may be software or hardware particularly programmed to receive an input, perform one or more processes using the input, and provide an output. The input, output, and processes performed by various modules will be apparent to one skilled in the art based upon the present disclosure. The blockchain node 102 may also include a memory 206. The memory 206 may be configured to store data for use by the blockchain node 102 in performing the functions discussed herein, such as public and private keys, symmetric keys, etc. The memory 206 may be configured to store data using suitable data formatting methods and schema and may be any suitable type of memory, such as read-only memory, random access memory, etc.
The memory 206 may include, for example, encryption keys and algorithms, communication protocols and standards, data formatting standards and protocols, program code for modules and application programs of the processing device, and other data that may be suitable for use by the blockchain node 102 in the performance of the functions disclosed herein, as will be apparent to persons having skill in the relevant art. In some embodiments, the memory 206 may be comprised of or may otherwise include a relational database that utilizes structured query language for the storage, identification, modifying, updating, accessing, etc. of structured data sets stored therein. The memory 206 may be configured to store, for example, cryptographic keys, salts, nonces, communication information for blockchain nodes 102 and blockchain networks 104, address generation and validation algorithms, digital signature generation and validation algorithms, hashing algorithms for generating reference values, rules regarding generation of new blocks and block headers, a pool of pending transactions, etc. The blockchain node 102 may include a querying module 214. The querying module 214 may be configured to execute queries on databases to identify information. The querying module 214 may receive one or more data values or query strings, and may execute a query string based thereon on an indicated database, such as the memory 206 of the blockchain node 102, to identify information stored therein. The querying module 214 may then output the identified information to an appropriate engine or module of the blockchain node 102 as necessary. The querying module 214 may, for example, execute a query on the memory 206 to identify a pending transaction as an initial transaction to be confirmed prior to confirmation of a subsequent blockchain transaction that is itself awaiting confirmation and inclusion in the blockchain. The blockchain node 102 may also include a generation module 216.
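Where the memory 206 is backed by a relational database, the querying module's pending-transaction lookup can be pictured as a parameterized SQL query. The table name and columns below are invented for illustration only.

```python
import sqlite3

# Illustrative only: an in-memory stand-in for the memory 206, holding a
# pool of pending transactions keyed by the outputs they produce.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pending (tx_id TEXT, output_ref TEXT)")
conn.execute("INSERT INTO pending VALUES ('tx-initial', 'out-1')")
conn.execute("INSERT INTO pending VALUES ('tx-other',   'out-9')")

# The querying module's lookup: find the pending transaction whose output
# matches the input of a newly submitted transaction.
row = conn.execute(
    "SELECT tx_id FROM pending WHERE output_ref = ?", ("out-1",)
).fetchone()
```

A parameterized query (the `?` placeholder) is the idiomatic way to pass the input reference without string concatenation.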
The generation module 216 may be configured to generate data for use by the blockchain node 102 in performing the functions discussed herein. The generation module 216 may receive instructions as input, may generate data based on the instructions, and may output the generated data to one or more modules of the blockchain node 102. For example, the generation module 216 may be configured to generate block reference values, data reference values, new block headers, new blocks, etc. The blockchain node 102 may also include a validation module 218. The validation module 218 may be configured to perform validations for the blockchain node 102 as part of the functions discussed herein. The validation module 218 may receive instructions as input, which may also include data to be used in performing a validation, may perform a validation as requested, and may output a result of the validation to another module or engine of the blockchain node 102. The validation module 218 may, for example, be configured to validate digital signatures, inputs for new blockchain transactions, payment amounts in new blockchain transactions, confirmations of initial blockchain transactions, etc. The blockchain node 102 may also include a transmitting device 220. The transmitting device 220 may be configured to transmit data over one or more networks via one or more network protocols. In some instances, the transmitting device 220 may be configured to transmit data to additional nodes 106, first computing devices 108, second computing devices 110, and other entities via one or more communication methods, such as local area networks, wireless area networks, cellular communication, Bluetooth, radio frequency, the Internet, etc.
In some embodiments, the transmitting device 220 may be comprised of multiple devices, such as different transmitting devices for transmitting data over different networks, such as a first transmitting device for transmitting data over a local area network and a second transmitting device for transmitting data via the Internet. The transmitting device 220 may electronically transmit data signals that have data superimposed that may be parsed by a receiving computing device. In some instances, the transmitting device 220 may include one or more modules for superimposing, encoding, or otherwise formatting data into data signals suitable for transmission. The transmitting device 220 may be configured to electronically transmit data signals to additional nodes 106 as other blockchain nodes 102 in the blockchain network 104, which may be superimposed or otherwise encoded with new blockchain data values for confirmation, confirmations of blockchain data values, new blocks for confirmation, confirmation messages for new blocks, and confirmed blocks for inclusion in the blockchain. The transmitting device 220 may also be configured to electronically transmit data signals to first computing devices 108 and second computing devices 110 that may be superimposed or otherwise encoded with notification messages regarding submitted blockchain transactions, such as confirmations thereof, error messages (e.g., invalid signatures, failed confirmations, etc.), or any other notifications that may be suitable as a result of the methods and systems discussed herein.

Process for Confirming a Blockchain Transaction Utilizing a Pending Transaction

FIG. 3 illustrates a process 300 for the confirmation of a newly submitted blockchain transaction that utilizes an output of a pending transaction as input thereof, while the pending transaction is still awaiting inclusion in a blockchain.
In step 302, the receiving device 202 of the blockchain node 102 may receive a new blockchain transaction for confirmation and inclusion in the blockchain, such as may be submitted by the first computing device 108 or the second computing device 110. The new blockchain transaction may include a digital signature, one or more transaction inputs, one or more output addresses, and a payment amount for each output address. In step 304, the blockchain node 102 may determine if each transaction input, also referred to as an "unspent" transaction, has been posted to the blockchain. If, in step 304, at least one input transaction has not been posted to the blockchain, then, in step 306, the validation module 218 of the blockchain node 102 may validate the unspent transaction(s), which may include confirmation thereof, such as of the digital signature, payment amounts, and transaction inputs included therein. In step 308, the blockchain node 102 may determine if the validation of the unspent transaction(s) was successful. If the validation was unsuccessful, then confirmation of the new blockchain transaction fails. Then, in step 310, the transmitting device 220 of the blockchain node 102 may electronically transmit a notification message to the submitting computing device, which may include an indication that validation of one of the transaction inputs has failed. If the unspent transaction(s) are determined to be valid, in step 308, or have already been posted to the blockchain, in step 304, then, in step 312, confirmation of the new blockchain transaction may be attempted by the validation module 218. In step 314, the blockchain node 102 may determine if confirmation of the new blockchain transaction was successful. If the confirmation failed, then the process 300 may return to step 310, where a notification message may be transmitted to the submitting computing device by the transmitting device 220 of the blockchain node 102.
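The control flow of process 300 — checking inputs against the blockchain, validating pending ("unspent") inputs, attempting confirmation, and then building a block — can be sketched as below. The dictionary shapes and the `validate`/`confirm` callables are illustrative assumptions, not the patent's actual data structures.

```python
def confirm_transaction(new_tx, posted_ids, pending_pool, validate, confirm):
    """Hedged sketch of process 300 (steps 302-318) at a single node."""
    # Step 304: find transaction inputs not yet posted to the blockchain.
    unconfirmed = [i for i in new_tx["inputs"] if i not in posted_ids]
    # Steps 306-310: validate each pending input transaction first.
    for tx_id in unconfirmed:
        if not validate(pending_pool[tx_id]):
            return {"ok": False, "error": f"input {tx_id} failed validation"}
    # Steps 312-314: attempt confirmation of the new transaction itself.
    if not confirm(new_tx):
        return {"ok": False, "error": "confirmation failed"}
    # Step 316: the new block includes the new transaction plus any pending
    # inputs confirmed along the way (step 318 would broadcast the block).
    block = [pending_pool[i] for i in unconfirmed] + [new_tx]
    return {"ok": True, "block": block}

pool = {"a": {"id": "a"}}
tx = {"inputs": ["a"], "amount": 1}
result = confirm_transaction(tx, set(), pool, lambda t: True, lambda t: True)
```

Note how a failed validation of any input short-circuits before the new transaction is even considered, matching the branch from step 308 back to step 310.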
If, in step 314, the blockchain node 102 finds that confirmation of the new blockchain transaction was successful, then, in step 316, the generation module 216 of the blockchain node 102 may generate a new block for the blockchain that includes the new blockchain transaction and, if applicable, any unspent transactions that were confirmed in step 306. In step 318, the transmitting device 220 of the blockchain node 102 may distribute the newly generated block to additional nodes 106 in the blockchain network 104 for confirmation and inclusion in the blockchain.

Exemplary Method for Confirmation of a Blockchain Transaction

FIG. 4 illustrates a method 400 for confirmation of a blockchain transaction that utilizes an output from a prior blockchain transaction that is still waiting for inclusion in a blockchain. In step 402, a plurality of waiting blockchain transactions may be stored in a memory (e.g., memory 206) of a node (e.g., blockchain node 102, additional node 106) in a blockchain network (e.g., blockchain network 104), where each of the plurality of waiting blockchain transactions is not included in a blockchain associated with the blockchain network. In step 404, a new blockchain transaction may be received by a receiver (e.g., receiving device 202) of the node, the new blockchain transaction including at least a transaction amount, destination address, digital signature, and an unspent transaction output, where the unspent transaction output is a reference to one of the plurality of waiting blockchain transactions. In step 406, the new blockchain transaction may be validated by a processor (e.g., validation module 218) of the node, where validation includes confirmation of the one of the plurality of waiting blockchain transactions.
In step408, a new block may be generated by the processor (e.g., generation module216) of the node, the new block including at least a block header and a plurality of blockchain data entries, the blockchain data entries including at least the new blockchain transaction and the one of the plurality of waiting blockchain transactions. In step410, the generated new block may be transmitted by a transmitter (e.g., transmitting device220) to a plurality of additional nodes (e.g., additional nodes106) in the blockchain network for confirmation. In one embodiment, the method400may further include receiving, by the receiver of the node, a confirmation message from the plurality of additional nodes confirming the new block for inclusion in the blockchain. In some embodiments, validating the new blockchain transaction may further include validating the digital signature included in the new blockchain transaction. In a further embodiment, the digital signature may be validated using a public key transmitted with the new blockchain transaction. In one embodiment, the method400may also include validating, by the processor of the node, the one of the plurality of waiting blockchain transactions prior to validating the new blockchain transaction. In some embodiments, the one of the plurality of waiting blockchain transactions may have been previously confirmed by the node or one of the plurality of additional nodes in the blockchain network. In one embodiment, the method400may further include generating, by the processor (e.g., generation module216) of the node, the block header included in the new block prior to generation of the new block, where the block header includes at least a timestamp, a block reference value, and a transaction reference value. 
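The block header fields recited above — a timestamp, a block reference value, and a transaction reference value — can be sketched as below. SHA-256 and JSON serialization are assumptions chosen for illustration; the method itself does not mandate a particular hashing algorithm or encoding.

```python
import hashlib
import json
import time

def reference_value(data) -> str:
    # Apply a hashing algorithm (SHA-256 here, by assumption) to the data
    # after deterministic serialization.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def generate_block(previous_header, data_entries):
    """Sketch of steps 408 and the header generation described above."""
    header = {
        "timestamp": int(time.time()),
        # Hash of the header of the most recent block in the blockchain.
        "block_reference_value": reference_value(previous_header),
        # Hash of the blockchain data entries included in this block.
        "transaction_reference_value": reference_value(data_entries),
    }
    return {"header": header, "entries": data_entries}
```

Chaining each new header to a hash of the previous header is what makes later tampering with an earlier block detectable by every node.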
In a further embodiment, the method 400 may even further include: generating, by the processor of the node, the transaction reference value by applying a hashing algorithm to the blockchain data entries included in the new block, and generating, by the processor of the node, the block reference value by applying the hashing algorithm to a header in a most recent block in the blockchain.

Computer System Architecture

FIG. 5 illustrates a computer system 500 in which embodiments of the present disclosure, or portions thereof, may be implemented as computer-readable code. For example, the blockchain node 102 and additional nodes 106 of FIG. 1 may be implemented in the computer system 500 using hardware, software, firmware, non-transitory computer readable media having instructions stored thereon, or a combination thereof, and may be implemented in one or more computer systems or other processing systems. Hardware, software, or any combination thereof may embody modules and components used to implement the methods of FIGS. 3 and 4. If programmable logic is used, such logic may execute on a commercially available processing platform configured by executable software code to become a specific purpose computer or a special purpose device (e.g., programmable logic array, application-specific integrated circuit, etc.). A person having ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computers linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device. For instance, at least one processor device and a memory may be used to implement the above described embodiments. A processor unit or device as discussed herein may be a single processor, a plurality of processors, or combinations thereof.
Processor devices may have one or more processor “cores.” The terms “computer program medium,” “non-transitory computer readable medium,” and “computer usable medium” as discussed herein are used to generally refer to tangible media such as a removable storage unit518, a removable storage unit522, and a hard disk installed in hard disk drive512. Various embodiments of the present disclosure are described in terms of this example computer system500. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the present disclosure using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Processor device504may be a special purpose or a general purpose processor device specifically configured to perform the functions discussed herein. The processor device504may be connected to a communications infrastructure506, such as a bus, message queue, network, multi-core message-passing scheme, etc. The network may be any network suitable for performing the functions as disclosed herein and may include a local area network (LAN), a wide area network (WAN), a wireless network (e.g., WiFi), a mobile communication network, a satellite network, the Internet, fiber optic, coaxial cable, infrared, radio frequency (RF), or any combination thereof. Other suitable network types and configurations will be apparent to persons having skill in the relevant art. The computer system500may also include a main memory508(e.g., random access memory, read-only memory, etc.), and may also include a secondary memory510. 
The secondary memory510may include the hard disk drive512and a removable storage drive514, such as a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, etc. The removable storage drive514may read from and/or write to the removable storage unit518in a well-known manner. The removable storage unit518may include a removable storage media that may be read by and written to by the removable storage drive514. For example, if the removable storage drive514is a floppy disk drive or universal serial bus port, the removable storage unit518may be a floppy disk or portable flash drive, respectively. In one embodiment, the removable storage unit518may be non-transitory computer readable recording media. In some embodiments, the secondary memory510may include alternative means for allowing computer programs or other instructions to be loaded into the computer system500, for example, the removable storage unit522and an interface520. Examples of such means may include a program cartridge and cartridge interface (e.g., as found in video game systems), a removable memory chip (e.g., EEPROM, PROM, etc.) and associated socket, and other removable storage units522and interfaces520as will be apparent to persons having skill in the relevant art. Data stored in the computer system500(e.g., in the main memory508and/or the secondary memory510) may be stored on any type of suitable computer readable media, such as optical storage (e.g., a compact disc, digital versatile disc, Blu-ray disc, etc.) or magnetic tape storage (e.g., a hard disk drive). The data may be configured in any type of suitable database configuration, such as a relational database, a structured query language (SQL) database, a distributed database, an object database, etc. Suitable configurations and storage types will be apparent to persons having skill in the relevant art. The computer system500may also include a communications interface524. 
The communications interface524may be configured to allow software and data to be transferred between the computer system500and external devices. Exemplary communications interfaces524may include a modem, a network interface (e.g., an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via the communications interface524may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals as will be apparent to persons having skill in the relevant art. The signals may travel via a communications path526, which may be configured to carry the signals and may be implemented using wire, cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, etc. The computer system500may further include a display interface502. The display interface502may be configured to allow data to be transferred between the computer system500and external display530. Exemplary display interfaces502may include high-definition multimedia interface (HDMI), digital visual interface (DVI), video graphics array (VGA), etc. The display530may be any suitable type of display for displaying data transmitted via the display interface502of the computer system500, including a cathode ray tube (CRT) display, liquid crystal display (LCD), light-emitting diode (LED) display, capacitive touch display, thin-film transistor (TFT) display, etc. Computer program medium and computer usable medium may refer to memories, such as the main memory508and secondary memory510, which may be memory semiconductors (e.g., DRAMs, etc.). These computer program products may be means for providing software to the computer system500. Computer programs (e.g., computer control logic) may be stored in the main memory508and/or the secondary memory510. Computer programs may also be received via the communications interface524. 
Such computer programs, when executed, may enable computer system500to implement the present methods as discussed herein. In particular, the computer programs, when executed, may enable processor device504to implement the methods illustrated byFIGS.3and4, as discussed herein. Accordingly, such computer programs may represent controllers of the computer system500. Where the present disclosure is implemented using software, the software may be stored in a computer program product and loaded into the computer system500using the removable storage drive514, interface520, and hard disk drive512, or communications interface524. The processor device504may comprise one or more modules or engines configured to perform the functions of the computer system500. Each of the modules or engines may be implemented using hardware and, in some instances, may also utilize software, such as corresponding to program code and/or programs stored in the main memory508or secondary memory510. In such instances, program code may be compiled by the processor device504(e.g., by a compiling module or engine) prior to execution by the hardware of the computer system500. For example, the program code may be source code written in a programming language that is translated into a lower level language, such as assembly language or machine code, for execution by the processor device504and/or any additional hardware components of the computer system500. The process of compiling may include the use of lexical analysis, preprocessing, parsing, semantic analysis, syntax-directed translation, code generation, code optimization, and any other techniques that may be suitable for translation of program code into a lower level language suitable for controlling the computer system500to perform the functions disclosed herein. 
It will be apparent to persons having skill in the relevant art that such processes result in the computer system 500 being a specially configured computer system 500 uniquely programmed to perform the functions discussed above. Techniques consistent with the present disclosure provide, among other features, systems and methods for confirming a blockchain transaction utilizing output from a transaction still awaiting inclusion in a blockchain. While various exemplary embodiments of the disclosed system and method have been described above, it should be understood that they have been presented for purposes of example only, not limitation. This description is not exhaustive and does not limit the disclosure to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practicing the disclosure, without departing from its breadth or scope.
11943334

DETAILED DESCRIPTION

The exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings. The exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the exemplary embodiments to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure). Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating the exemplary embodiments. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless expressly stated otherwise.
It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device without departing from the teachings of the disclosure. FIGS.1-19are simplified illustrations of a blockchain environment20, according to exemplary embodiments. A miner system22receives one or more inputs24via a communications network26from a blockchain network server28. While the inputs24may be any electronic data30, in the blockchain environment20, the inputs24are blockchain transactions32(such as financial transactions, inventory/shipping data, and/or healthcare medical data). The actual form or content represented by the electronic data30and the blockchain transactions32may be unimportant. The blockchain network server28sends, distributes, or broadcasts the inputs24to some or all of the authorized mining participants (such as the miner system22). 
The blockchain network server28may also specify a proof-of-work (“PoW”) target scheme34, which may accompany the inputs24or be separately sent from the inputs24. The miner system22may mine the inputs24. When the miner system22receives the inputs24, the miner system22has a hardware processor (such as CPU36) and a solid-state memory device38that collects the inputs24(such as the blockchain transactions32) into a block40of data. The miner system22then finds a difficult proof-of-work (“PoW”) result42based on the block40of data. The miner system22performs, executes, or calls/requests a proof-of-work (“PoW”) mechanism44. The proof-of-work mechanism44is a computer program, instruction(s), or code that instruct or cause the miner system22to call, request, and/or execute an encryption algorithm46. The proof-of-work mechanism44may instruct or cause the miner system22to call, request, and/or execute a difficulty algorithm48that generates or creates a difficulty50. The proof-of-work mechanism44may also instruct or cause the miner system22to call, request, and/or execute a proof-of-work (“PoW”) algorithm52. The proof-of-work mechanism44may thus be one or more software applications or programming schemes that separate the encryption algorithm46from the difficulty algorithm48and/or from the proof-of-work algorithm52. Because the encryption algorithm46may be separately executed/called from the difficulty algorithm48and/or from the proof-of-work algorithm52, encryption of the electronic data30(representing the inputs24) is separately performed from the difficulty50of solving the proof-of-work. In other words, any encryption algorithm46may be used, along with any difficulty algorithm48, and/or along with any proof-of-work algorithm52. FIG.2further illustrates the proof-of-work mechanism44. 
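The separation described above — any encryption algorithm 46 paired with any difficulty algorithm 48 and any proof-of-work algorithm 52 — can be sketched as three independently swappable callables. The function names, the fixed difficulty, and the leading-zeros check are illustrative assumptions, not the patent's required implementations.

```python
import hashlib

def pow_mechanism(inputs: bytes, encrypt, difficulty, proof_of_work):
    """Sketch of mechanism 44: the three algorithms execute separately,
    so any one of them can be swapped without touching the others."""
    output = encrypt(inputs)              # encryption algorithm (46)
    target = difficulty()                 # difficulty algorithm (48)
    return proof_of_work(output, target)  # proof-of-work algorithm (52)

# One possible mix-and-match of the three pieces:
sha256_hex = lambda data: hashlib.sha256(data).hexdigest()
two_zeros = lambda: 2                     # e.g., require 2 leading zero digits
leading_zeros = lambda digest, n: digest.startswith("0" * n)

satisfied = pow_mechanism(b"block 40 of data", sha256_hex, two_zeros, leading_zeros)
```

Replacing `sha256_hex` with a different hash, or `leading_zeros` with a different puzzle, changes nothing else in the mechanism, which is the point of keeping the three pieces decoupled.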
While the encryption algorithm46may utilize any encryption scheme, process, and/or function, many readers may be familiar with a cryptographic hashing algorithm54(such as the SHA-256 used by BITCOIN®). The cryptographic hashing algorithm54may thus generate an output56(sometimes called a digest58) by implementing or executing the cryptographic hashing algorithm54using the inputs24(such as the blockchain transactions32). So, whatever the arbitrary bit values of the inputs24, and whatever the arbitrary bit length of the inputs24, the cryptographic hashing algorithm54may generate the output56as one or more hash values60, perhaps having a fixed length (or n-bit). The miner system22may thus receive the inputs24from the blockchain network server28, call and/or execute the encryption algorithm46(such as the cryptographic hashing algorithm54), and generate the hash value(s)60. AsFIG.3illustrates, the miner system22may separately perform or call the proof-of-work algorithm52. After the encryption algorithm46creates the output(s)56, the miner system22may read/retrieve the output(s)56and send the output(s)56to the proof-of-work algorithm52. The miner system22may thus generate the proof-of-work result42by calling and/or by executing the proof-of-work algorithm52using the output(s)56. The miner system22, for example, may send the hash value(s)60(generated by the cryptographic hashing algorithm54) to the proof-of-work algorithm52, and the proof-of-work algorithm52generates the proof-of-work result42using the hash value(s)60. The proof-of-work algorithm52may also compare the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. The proof-of-work algorithm52may, in general, have to satisfy or solve a mathematical puzzle62, perhaps defined or specified by the proof-of-work target scheme34. The proof-of-work target scheme34may also specify, or relate to, the difficulty50of solving the mathematical puzzle62. 
That is, the more stringent or precise the proof-of-work target scheme 34 (e.g., a minimum/maximum value of the hash value 60), the more difficult the mathematical puzzle 62 is to solve. In other words, the difficulty 50 is a measure of how difficult it is to mine the block 40 of data, given the solution requirements of the proof-of-work target scheme 34. The miner system 22 may own the block 40 of data. If the miner system 22 is the first to satisfy the proof-of-work target scheme 34 (e.g., the proof-of-work result 42 satisfies the mathematical puzzle 62), the miner system 22 may timestamp the block 40 of data and broadcast the block 40 of data, the timestamp, the proof-of-work result 42, and/or the mathematical puzzle 62 to other miners in the blockchain environment 20. The miner system 22, for example, may broadcast a hash value representing the block 40 of data, and the other miners begin working on a next block in the blockchain 64. Today's BITCOIN® difficulty is increasing. On or about Jun. 16, 2020, BITCOIN's network adjusted its difficulty level (the measure of how hard it is for miners to compete for block rewards on the blockchain) to 15.78 trillion, which was nearly a 15% increase in the difficulty 50. As the difficulty 50 increases, older, less capable, and less power-efficient miners are unable to compete. As a result, today's BITCOIN® miners must have the latest, fastest hardware (such as an ASIC) to profitably solve the mathematical puzzle 62 according to the proof-of-work target scheme 34. Indeed, Satoshi envisioned that increasing hardware speed would allow miners to solve the proof-of-work more easily. Satoshi thus explained that the difficulty would be a moving target to slow down generation of the blocks 40 of data. Conventional mining schemes are integrated. When a conventional blockchain miner attempts to solve the mathematical puzzle 62, the conventional blockchain miner executes a conventional scheme that integrates hashing, difficulty, and proof-of-work.
That is, conventional proof-of-work schemes require the miners to execute a combined software offering or pre-set combination of encryption and proof. These conventional proof-of-work schemes, in other words, integrate a predetermined encryption/hashing algorithm into or with a predetermined difficulty and a predetermined proof-of-work algorithm. These conventional proof-of-work schemes thus force the miners to execute a predetermined or predefined scheme that functionally marries or bundles encryption, difficulty, and proof-of-work. The conventional schemes specify a difficulty mechanism. BITCOIN's difficulty mechanism, for example, is a measure of how difficult it is to mine a BITCOIN® block of data. BITCOIN® miners are required to find a hash value below a given target (e.g., SHA256 (nonce+input) has n leading zeros, where n determines the mining difficulty). The difficulty adjustment is directly related to the total estimated mining power (sometimes estimated in Total Hash Rate per second). BITCOIN's difficulty mechanism is adjusted to basically ensure that ten (10) minutes of computation are required before a miner may solve the mathematical puzzle 62. The conventional schemes force the use of specialized hardware. When blockchain mining first appeared, home/desktop computers and laptops (and their conventional processors or CPUs) were adequate. However, as blockchain mining became more difficult and competitive, miners gained an advantage by repurposing a dedicated graphics processing unit (or GPU) for blockchain mining. As an example, the RADEON® HD 5970 GPU has a clocked processing speed of executing about 3,200 32-bit instructions per clock, which is about 800 times more than the speed of a CPU that executes only four (4) 32-bit instructions per clock. This increased processor clock speed allowed GPUs to perform far more calculations and made GPUs more desirable for cryptocurrency/blockchain mining.
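The leading-zeros requirement described above can be illustrated with a toy mining loop. For simplicity this sketch counts leading zero hex digits rather than bits, and uses a tiny difficulty so it finishes instantly; real mining differs in encoding and scale.

```python
import hashlib

def mine(block_data: bytes, n: int) -> int:
    """Find a nonce so that SHA256(nonce + input) has n leading zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(str(nonce).encode() + block_data).hexdigest()
        if digest.startswith("0" * n):
            return nonce
        nonce += 1

# Each additional required zero multiplies the expected work by 16, which is
# why adjusting n (the difficulty) controls how long mining takes on average.
nonce = mine(b"example transactions", 3)
```

This also shows why faster hardware (GPUs, FPGAs, ASICs) dominates: the loop is nothing but repeated hashing, so raw hash rate is the only advantage.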
Later, field programmable gate arrays (FPGAs) were also re-modeled for cryptocurrency/blockchain mining. FPGAs were able to compute the mathematical operations required to mine the block40of data twice as fast as the GPU. However, FPGA devices were more labor-intensive to build and still require customized configurations (both software programming and hardware). Today's BITCOIN® miners have pushed the hardware requirements even further by using a specialized application-specific integrated circuit (ASIC) that is exclusively designed for blockchain mining. These ASICs may be 100 billion times faster than mere CPUs. These ASICs have made BITCOIN® mining undemocratic and only possible by a relatively few, well capitalized entities running mining farms. Today's BITCOIN® miners thus consume great quantities of electrical power and pose concerns for the electrical grid. Today's conventional mining hardware has further specialized. Some ASICs have also been further designed for particular blockchains to achieve additional optimizations. For example, a hardware implementation of the SHA-256 hash is much faster than a version coded in software. Today, nearly all BITCOIN® mining is performed using hardware ASICs. Specialized hardware has even been developed for particular hashing functions. The RAVENCOIN® scheme, as an example, uses several different hashing algorithms, and a particular hashing algorithm is picked for one block based off of a hash of a previous block (the RAVENCOIN® scheme resembles a random selection of the hashing algorithm). However, because fifteen (15) of the sixteen (16) algorithms sit on the sidelines unused at any given time, the RAVENCOIN® scheme makes it very expensive for a miner to buy sixteen (16) different hardware rigs in order to mine according to the RAVENCOIN® scheme. Even if a miner decides to only mine the blocks that match a particular hardware requirement, the hardware still sits idle 14-15 cycles on average. 
Some blockchains may also alter or modify the mining scheme. For example, the MONERO® mining scheme uses a specialized hashing function that implements a random change. That is, the MONERO® mining scheme uses a hash algorithm that unpredictably rewrites itself. The MONERO® mining network introduced a RandomX mining algorithm that was designed to deter ASICs and to improve the efficiency of conventional CPUs. MONERO's RandomX mining algorithm uses random code execution and memory-intensive techniques, rendering ASICs too expensive and ineffective to develop. The conventional mining schemes thus have many disadvantages. Conventional mining schemes have become so specialized and so expensive that only a small number of large miners have the resources to compete. Blockchain mining, in other words, has become centralized and undemocratic. Some conventional schemes try to find new hashing algorithms, new proof-of-work schemes, or modify existing schemes to de-centralize and to democratize mining participants. Some conventional mining schemes (such as ETHERIUM®) require very large memory spaces in bytes, which disadvantages its hardware. LITECOIN® also disadvantages hardware by copying large byte amounts of data. AsFIGS.4-6illustrate, though, exemplary embodiments may mix-and-match the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52. The inventor has observed that there is no mining law or scheme that requires a preset or predefined difficulty scheme (such as BITCOIN'S counting zeroes on the hash to decide its difficulty). Instead, exemplary embodiments may use any encryption algorithm46that a cryptographic coin, network, or scheme desires or specifies. Exemplary embodiments may use any difficulty algorithm48that the cryptographic coin, network, or scheme desires or specifies. 
Exemplary embodiments may use any proof-of-work algorithm52that the cryptographic coin, network, or scheme desires or specifies.FIG.4illustrates the encryption algorithm46, the difficulty algorithm48, and proof-of-work algorithm52as separate software mechanisms.FIG.5illustrates an alternative software mechanism in which the difficulty algorithm48and proof-of-work algorithm52may be functionally intertwined, but the encryption algorithm46is a separate, stand-alone program, file, or service.FIG.6illustrates the inputs and outputs for the encryption algorithm46, the difficulty algorithm48, and proof-of-work algorithm52. FIG.7illustrates agnostic hashing. Exemplary embodiments may use any encryption algorithm46that a cryptographic coin, blockchain network, or scheme desires or specifies. Because most blockchain mining schemes use hashing,FIG.7illustrates the cryptographic hashing algorithm54. The proof-of-work (“PoW”) target scheme34may thus use any cryptographic hashing algorithm54, as exemplary embodiments are agnostic to hashing/encryption. The encryption algorithm46may be any cryptographic hashing algorithm54(e.g., the SHA-2 family (SHA-256 and SHA-512) and/or the SHA-3 family). The miner system22need only request, call, and/or execute the particular cryptographic hashing algorithm54specified by the proof-of-work target scheme34.FIG.7thus illustrates an electronic database70of encryption algorithms accessible to the miner system22. While the database70of encryption algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database70of encryption algorithms may be remotely stored and accessed/queried at any networked location.
Even though the database70of encryption algorithms may have any logical structure, a relational database is perhaps easiest to understand.FIG.7thus illustrates the database70of encryption algorithms as an electronic table72that maps, converts, or translates different proof-of-work target schemes34to their corresponding or associated encryption algorithm46(such as the particular cryptographic hashing algorithm54). The miner system22may thus identify the encryption algorithm46by querying the electronic database70of encryption algorithms for the proof-of-work target scheme34specified for use by the blockchain environment20. So, once the particular cryptographic hashing algorithm54is identified, the miner system22may acquire or retrieve any inputs24(such as the blockchain transactions32) and execute the cryptographic hashing algorithm54specified by the proof-of-work target scheme34. The miner system22may optionally send the inputs24via the Internet or other network (e.g., the communications network26illustrated inFIGS.1-3) to a remote destination for service execution (as later paragraphs will explain). The encryption algorithm46(e.g., the cryptographic hashing algorithm54specified by the proof-of-work target scheme34) may thus generate the output56/digest58represented as the hash value(s)60. FIG.8illustrates agnostic difficulty. Exemplary embodiments may use any difficulty algorithm48that a cryptographic coin, blockchain network, or scheme desires or specifies. For example, when or even after the encryption algorithm46(e.g., the cryptographic hashing algorithm54) generates the output56(such as the hash value(s)60), the miner system22may request, call, and/or execute the particular difficulty algorithm48selected by, or specified by, the proof-of-work target scheme34and/or the blockchain environment20. 
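The table 72 lookup described above (scheme to hashing algorithm, then execution) may be sketched as a simple mapping; the scheme identifiers and table contents below are illustrative assumptions, not part of the disclosure:

```python
import hashlib

# Illustrative stand-in for the electronic database 70 / table 72: each
# proof-of-work target scheme maps to its cryptographic hashing algorithm.
# The scheme identifiers here are hypothetical.
ENCRYPTION_ALGORITHMS = {
    "scheme-sha256": lambda data: hashlib.sha256(data).digest(),
    "scheme-sha512": lambda data: hashlib.sha512(data).digest(),
    "scheme-sha3": lambda data: hashlib.sha3_256(data).digest(),
}

def hash_for_scheme(scheme_id: str, transactions: bytes) -> bytes:
    """Query the table for the scheme's hashing algorithm, then execute it."""
    hashing_algorithm = ENCRYPTION_ALGORITHMS[scheme_id]
    return hashing_algorithm(transactions)
```

Changing schemes then changes only the table row that is consulted; the mining code around the lookup is unchanged, which is the sense in which the miner is agnostic to hashing.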
The proof-of-work target scheme34may thus use any difficulty algorithm48, as the miner system22is agnostic to difficulty.FIG.8, for example, illustrates an electronic database74of difficulty algorithms that is accessible to the miner system22. While the database74of difficulty algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database74of difficulty algorithms may be remotely stored and accessed/queried at any networked location. Even though the database74of difficulty algorithms may have any logical structure, a relational database is again perhaps easiest to understand.FIG.8thus illustrates the database74of difficulty algorithms as an electronic table76that maps, converts, or translates different proof-of-work target schemes34to their corresponding or associated difficulty algorithm48. The miner system22may thus identify the difficulty algorithm48by querying the electronic database74of difficulty algorithms. So, once the particular difficulty algorithm48is identified, the miner system22may acquire or retrieve any inputs that are required by the difficulty algorithm48(such as the output hash value(s)60generated by the cryptographic hashing algorithm54). The miner system22may execute the difficulty algorithm48specified by the proof-of-work target scheme34. The miner system22may optionally send the hash value(s)60via the Internet or other network (e.g., the communications network26illustrated inFIGS.1-3) to a remote destination for service execution (as later paragraphs will explain). The difficulty algorithm48creates or generates the difficulty50based on the hash value(s)60. FIG.9illustrates agnostic proof-of-work. Exemplary embodiments may use any proof-of-work algorithm52that a cryptographic coin, blockchain network, or scheme desires or specifies.
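The difficulty-algorithm lookup (database 74 / table 76) described above may be sketched the same way. The two difficulty measures below are illustrative stand-ins: one counts leading zero bits of the hash, the other treats a numerically smaller hash as more difficult.

```python
# Two illustrative difficulty algorithms 48; each reduces a hash value 60
# to a comparable numeric difficulty 50. Scheme identifiers are hypothetical.
def leading_zero_bits(hash_value: bytes) -> int:
    """BITCOIN-style measure: more leading zero bits, more difficult."""
    as_integer = int.from_bytes(hash_value, "big")
    return len(hash_value) * 8 - as_integer.bit_length()

def numeric_inverse(hash_value: bytes) -> int:
    """Alternative measure: the smaller the hash value, the more difficult."""
    max_value = (1 << (len(hash_value) * 8)) - 1
    return max_value - int.from_bytes(hash_value, "big")

# Stand-in for the electronic database 74 / table 76.
DIFFICULTY_ALGORITHMS = {
    "scheme-zero-count": leading_zero_bits,
    "scheme-inverse": numeric_inverse,
}

def difficulty_for_scheme(scheme_id: str, hash_value: bytes) -> int:
    """Query the table for the scheme's difficulty algorithm and run it."""
    return DIFFICULTY_ALGORITHMS[scheme_id](hash_value)
```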
The proof-of-work target scheme34may thus use any proof-of-work algorithm52, as the miner system22is agnostic to encryption, difficulty, and/or proof-of-work.FIG.9, for example, illustrates an electronic database78of proof-of-work algorithms that is accessible to the miner system22. While the database78of proof-of-work algorithms is illustrated as being locally stored in the memory device38of the miner system22, the database78of proof-of-work algorithms may be remotely stored and accessed/queried at any networked location. Even though the database78of proof-of-work algorithms may have any logical structure, a relational database is again perhaps easiest to understand.FIG.9thus illustrates the database78of proof-of-work algorithms as an electronic table80that maps, converts, or translates different proof-of-work target schemes34to their corresponding proof-of-work algorithm52. The miner system22may thus identify the proof-of-work algorithm52by querying the electronic database78of proof-of-work algorithms. After the hash value(s)60are generated, and perhaps after the difficulty50is generated, the miner system22may execute the proof-of-work algorithm52(specified by the proof-of-work target scheme34) using the hash value(s)60and/or the difficulty50as inputs. The miner system22may optionally send the hash value(s)60and/or the difficulty50via the Internet or other network to a remote destination for service execution (as later paragraphs will explain). The proof-of-work algorithm52generates the proof-of-work result42using the hash value(s)60and/or the difficulty50. The proof-of-work algorithm52may also compare the proof-of-work result42to the proof-of-work (“PoW”) target scheme34to ensure or to prove a solution to the mathematical puzzle62. Exemplary embodiments may thus use any encryption algorithm46, any difficulty algorithm48, and/or any proof-of-work algorithm52. Exemplary embodiments may implement any cryptographic security. 
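The mix-and-match idea running through this passage, namely that the scheme 34 independently names an encryption algorithm 46, a difficulty algorithm 48, and a proof-of-work algorithm 52, and the miner simply chains whichever three are named, may be sketched as follows. All table contents and the scheme record are illustrative assumptions:

```python
import hashlib

# Illustrative lookup tables: each stage of the pipeline is resolved
# independently, so any hash, any difficulty measure, and any PoW check
# may be combined.
HASHERS = {"sha256": lambda data: hashlib.sha256(data).digest()}
DIFFICULTIES = {
    "zero-bits": lambda h: len(h) * 8 - int.from_bytes(h, "big").bit_length(),
}
POW_CHECKS = {"min-difficulty": lambda difficulty, target: difficulty >= target}

def prove(data: bytes, scheme: dict) -> bool:
    """Run the scheme's hash, measure its difficulty, check its target."""
    hash_value = HASHERS[scheme["hash"]](data)
    difficulty = DIFFICULTIES[scheme["difficulty"]](hash_value)
    return POW_CHECKS[scheme["pow"]](difficulty, scheme["target"])

# A hypothetical proof-of-work target scheme 34, expressed as a record.
scheme = {"hash": "sha256", "difficulty": "zero-bits",
          "pow": "min-difficulty", "target": 0}
```

Swapping any one entry of the record swaps that stage alone, which is the separation of hashing, difficulty, and proof-of-work that the passage describes.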
Instead of merely counting zeroes (as specified by BITCOIN®), exemplary embodiments may run the resulting hash value60through the difficulty algorithm48to calculate the difficulty50in order to determine whether it's more or less difficult than other hashes. AsFIG.10illustrates, exemplary embodiments may use any PoW target scheme34. There are many different target schemes, some of which use or specify random number/nonce values, addresses, starting points, and other security schemes. The proof-of-work algorithm52, for example, may have to compare the hash value(s)60to a target hash value82. The target hash value82may be any minimum or maximum hash value that must be satisfied. If the hash value60is less than or perhaps equal to the target hash value82, then the proof-of-work algorithm52has perhaps solved the mathematical puzzle62. However, if the hash value60is greater than the target hash value82, then perhaps the proof-of-work algorithm52has failed to solve the mathematical puzzle62. Likewise, the hash value60may need to be equal to or greater than the target hash value82to be satisfactory. Regardless, should the hash value60fail to satisfy the target hash value82, exemplary embodiments may modify any data or input (e.g., the electronic data30, a random number/nonce value, address, starting points, etc.) according to the proof-of-work target scheme34, again call or request the cryptographic hashing algorithm54to generate the corresponding hash value(s)60, and compare the hash value(s)60to the target hash value82. Exemplary embodiments may repeatedly modify the electronic data30and/or any other parameters until the corresponding hash value(s)60satisfy the target hash value82. Exemplary embodiments may also use any difficulty scheme. The inventor envisions that there will be many different difficulty schemes. The difficulty algorithm48, for example, may have to compare the difficulty50to a target difficulty84. 
The target difficulty84has a bit or numeric value that represents a satisfactory difficulty of the corresponding cryptographic hashing algorithm54and/or the hash value60. For example, suppose the target difficulty84is a minimum value that represents a minimum permissible difficulty associated with the corresponding cryptographic hashing algorithm54. If the difficulty50is less than or perhaps equal to the target difficulty84, then perhaps the corresponding cryptographic hashing algorithm54and/or the hash value60is adequately difficult. However, if the difficulty50is greater than the target difficulty84, then perhaps the corresponding cryptographic hashing algorithm54and/or the hash value60is too difficult. Likewise, the difficulty50may need to be equal to or greater than the target difficulty84to be adequately difficult. Regardless, should the difficulty50fail to satisfy the target difficulty84, exemplary embodiments may modify any data or input (e.g., the electronic data30, a random number/nonce value, address, starting points, etc.) and recompute the corresponding hash value(s)60. Moreover, exemplary embodiments may additionally or alternatively change the cryptographic hashing algorithm54and/or the difficulty algorithm48and recompute. Exemplary embodiments may thus functionally separate hashing, difficulty, and proof-of-work. The conventional proof-of-work target scheme34functionally combines or performs both hashing and difficulty. The conventional proof-of-work target scheme34integrates or combines the difficulty in the hash. The conventional proof-of-work target scheme34integrates or combines the difficulty in the hash, thus greatly complicating the hash determination. Exemplary embodiments, instead, may separate the hashing algorithm54from the difficulty algorithm48. Exemplary embodiments put the difficulty50in the measurement of the difficulty50. Exemplary embodiments remove the difficulty50from the hashing algorithm54. 
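The modify-and-recompute loop described above (vary an input, re-hash, re-measure, and compare against the target) may be sketched as follows. The nonce placement and the leading-zero-bit difficulty measure are illustrative choices, not requirements of the disclosure:

```python
import hashlib

def leading_zero_bits(hash_value: bytes) -> int:
    """An illustrative difficulty measure: count leading zero bits."""
    as_integer = int.from_bytes(hash_value, "big")
    return len(hash_value) * 8 - as_integer.bit_length()

def mine(data: bytes, target_difficulty: int, max_nonce: int = 1_000_000):
    """Vary a nonce, re-hash, and re-measure until the target is satisfied."""
    for nonce in range(max_nonce):
        hash_value = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(hash_value) >= target_difficulty:
            return nonce, hash_value  # the mathematical puzzle is solved
    return None  # give up within the bound

result = mine(b"block header", target_difficulty=12)
```

The same loop serves either comparison direction: a minimum-difficulty check as shown, or a maximum-hash-value check by comparing the hash against a target hash value instead.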
The hashing algorithm54is not complicated by also having to integrate/calculate the difficulty algorithm48. The difficulty algorithm48may thus be a separate, stand-alone function or service that determines or calculates which hash is more difficult. The hashing algorithm54is much simpler to code and much faster to execute, as the hashing algorithm54requires less programming code and less storage space/usage in bytes. The hashing algorithm54need not be complicated to deter ASIC mining. Exemplary embodiments need not rely on the hashing algorithm54to also determine the difficulty50and/or the proof-of-work. The difficulty algorithm48is, instead, a separate functional mechanism, perhaps performed or executed by a service provider. Exemplary embodiments thus need not use an electrical power-hungry mechanism that is inherent in the conventional proof-of-work scheme. FIG.11illustrates a randomized database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may use or consult the database table90when conducting any proof-of-work (e.g.,34and/or44). While exemplary embodiments may use any encryption scheme, most blockchain mining uses some form of hashing.FIG.11thus illustrates the proof-of-work target scheme34utilizing the separate cryptographic hashing algorithm54, but the difficulty algorithm48and/or the proof-of-work algorithm52implements a further randomization of the resulting hash value(s)60. The proof-of-work target scheme34or mechanism44may generate, store, and/or use the database table90when performing any proof-of-work. Exemplary embodiments may implement a bit shuffle operation92on the hash value(s)60. Exemplary embodiments may use entries in the database table90to perform the bit shuffle operation92(as later paragraphs will explain). Each entry94in the database table90may contain a random selection of bits/bytes96.
The difficulty algorithm48and/or the proof-of-work algorithm52may select any bit values representing the hash value(s)60and swap any one or more of the bit values with any one or more entries94specified by the database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may read or select a bit portion of the bit values representing the hash value(s)60and exchange or replace the bit portion with an entry94contained in, or referenced by, the database table90. Each entry94in the database table90represents or is associated with random bits or bytes. Exemplary embodiments may thus randomly shuffle the hash value(s)60generated by the cryptographic hashing algorithm54. Exemplary embodiments randomize byte or memory block access. FIG.12illustrates RAM binding. Exemplary embodiments may discourage or deter the use of specialized hardware (such as GPUs and ASICs) in blockchain mining. The proof-of-work target scheme34, for example, may take advantage of, or target, memory size restrictions and cache latency of any on-board processor cache memory100. As the reader may understand, any hardware processing element (whether a GPU, an ASIC, or the CPU36) may have integrated/embedded L1, L2, and L3 SRAM/DRAM cache memory. The processor cache memory100is generally much smaller than a system/main memory (such as the memory device38), so the hardware processing element may store frequently-needed data and instructions. Because the processor cache memory100is physically much closer to the processing core, any hardware processing element is able to quickly fetch or hit needed information. If the processor cache memory100does not store the needed information, then a cache miss has occurred and the hardware processing element must request and write blocks of data via a much-slower bus from the system/main memory38. A cache miss implies a cache latency in time and/or cycles to fetch the needed information from the system/main memory38. 
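One way to realize the bit shuffle operation 92 described above is sketched below. The table dimensions, the shared seed (standing in for distributing the same table 90 to every authorized party), and the rule for picking which entry replaces which bytes are all illustrative assumptions:

```python
import random

# Illustrative randomized table 90: 1024 entries 94 of four random bytes
# each. The fixed seed makes the table reproducible by every party that
# holds it; without the table the shuffle cannot be reproduced.
_rng = random.Random(42)
TABLE = [bytes(_rng.randrange(256) for _ in range(4)) for _ in range(1024)]

def bit_shuffle(hash_value: bytes) -> bytes:
    """Replace each 4-byte chunk of the hash with a table entry that the
    hash value itself selects, further randomizing the digest."""
    shuffled = bytearray(hash_value)
    for i in range(0, len(shuffled) - 3, 4):
        # The chunk's own leading bytes pick the replacement entry.
        entry = TABLE[(shuffled[i] * 4 + shuffled[i + 1]) % len(TABLE)]
        shuffled[i:i + 4] = entry
    return bytes(shuffled)
```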
Any hardware processing element (again, whether a GPU, an ASIC, or the CPU36) may sit idle, or stall, while awaiting fetches from the system/main memory38. Exemplary embodiments may thus force latency, cache misses, and stalls. Exemplary embodiments may target cache latency and processor stalls by generating, storing, and/or using the database table90when determining the hash value(s)60(as later paragraphs will explain). The database table90, however, may be sized to overload the processor cache memory100. The database table90, in other words, may have a table byte size102(in bits/bytes) that exceeds a storage capacity or cache byte size104of the processor cache memory100. The database table90, for example, may exceed one gigabyte (1 GB). Today's L1, L2, and L3 processor cache memory is typically hundreds of megabits in size. Because the database table90may exceed one gigabyte (1 GB), any caching operation will miss or invalidate. That is, the L1, L2, and L3 processor cache memory100lacks the storage capacity or byte size104to store the entire database table90. Perhaps only a portion (or perhaps none) of the database table90may be stored in the processor cache memory100. Indeed, exemplary embodiments thus force some, most, or even all of the database table90to be written or stored to the main/host memory device38(or accessed/retrieved from a remote source, as later paragraphs will explain). Because any hardware processing element (again, whether a GPU, an ASIC, or the CPU36) is unable to cache the entire database table90, exemplary embodiments force a cache miss and further force the hardware processing element to repeatedly use the processor cache memory100to fetch and load a portion of the database table90. 
The main/system memory38thus provides perhaps a particular portion of the database table90via the bus to the processor cache memory100, and the processor cache memory100then provides that particular portion of the database table90to the hardware processing element. The hardware processing element may then purge or delete that particular portion of the database table90from the processor cache memory100and request/fetch/load another portion of the database table90. Because exemplary embodiments may force repeated cache misses, the hardware processing element may continuously repeat this cycle for loading/retrieving most or all portions of the database table90. The hardware processing element, in other words, repeatedly queries the processor cache memory100and/or the main/host memory device38and awaits data retrieval. The hardware processing element must therefore sit, perhaps mostly idle, while the processor cache memory100and/or the main/host memory device38processes, retrieves, and sends different segments/portions/blocks of the database table90. The processor cache memory100and/or the main/host memory device38have the cache latency (perhaps measured in clock cycles, data transfer rate, or time) that limits blockchain computations. A faster processor/GPU/ASIC, in other words, will not improve memory access times/speeds, so any computational speed/performance is limited by the latency of repeatedly accessing the processor cache memory100and/or the main/host memory device38. The database table90thus deters GPU/ASIC usage when processing the blockchain transactions32. The database table90may thus be purposefully designed to be non-cacheable by intensively using the processor cache memory100and/or the main/host memory device38as an ASIC-deterrence mechanism. Byte or memory block access may be randomized. Whatever the hashing algorithm54, exemplary embodiments may implement the bit shuffle operation92on the hash value(s)60. 
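The cache-defeating access pattern described above can be sketched as a chain of dependent lookups: each table offset is derived from the previous digest, so accesses jump pseudo-randomly across the table and cannot be predicted or prefetched. The table here is shrunk to 1 MB so the sketch runs; the disclosure contemplates a table 90 that exceeds the processor cache (e.g., over 1 GB):

```python
import hashlib

TABLE_SIZE = 1 << 20  # 1 MB for the sketch; the real table 90 may exceed 1 GB
TABLE = bytes(hashlib.sha256(b"table-seed").digest() * (TABLE_SIZE // 32))

def memory_bound_mix(hash_value: bytes, rounds: int = 64) -> bytes:
    """Chain of dependent, pseudo-random table reads: each offset depends
    on the previous digest, so a faster processor still waits on memory."""
    state = hash_value
    for _ in range(rounds):
        offset = int.from_bytes(state[:8], "big") % (TABLE_SIZE - 32)
        chunk = TABLE[offset:offset + 32]
        state = hashlib.sha256(state + chunk).digest()
    return state
```

Because round N+1 cannot begin until round N's lookup returns, throughput is bounded by memory latency rather than by hashing speed, which is the ASIC-deterrence mechanism the passage describes.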
Exemplary embodiments may use the entries94in the database table90to perform the bit shuffle operation92(as later paragraphs will further explain). The proof-of-work target scheme34may use bit values representing the hash value(s)60, but the proof-of-work target scheme34may swap any one or more of the bit values with any one or more entries94specified by the database table90. Each entry94in the database table90may contain a random selection of bits/bytes. The proof-of-work target scheme34may cause the proof-of-work algorithm52to read or to select a bit portion of the bit values representing the hash value(s)60and exchange or replace the bit portion with an entry94contained in, or referenced by, the database table90. Each entry94in the database table90represents or is associated with random bits or bytes. The proof-of-work target scheme34may thus randomly shuffle the hash value(s)60generated by the cryptographic hashing algorithm54. Exemplary embodiments may discourage or deter specialized hardware in blockchain mining. The miner system22must have access to the database table90in order to execute the bit shuffle operation92, difficulty algorithm48, and/or the proof-of-work algorithm52. Because any processing component (e.g., ASIC, GPU, or the CPU36) is unable to cache the entire database table90, exemplary embodiments force the processing component to query the processor cache memory100and/or the main/host memory device38and to await data retrieval. The hardware processing component must therefore sit, perhaps mostly idle, while the processor cache memory100and/or the main/host memory device38processes, retrieves, and sends different segments/portions/blocks of the database table90. A faster GPU/ASIC will thus not improve memory access times/speeds. Exemplary embodiments thus force miners to choose the CPU36, as a faster GPU/ASIC provides no performance/speed gain. 
Moreover, because a faster GPU/ASIC is ineffective, the extra capital expense of a faster GPU/ASIC offers little or no benefit and cannot be justified. Exemplary embodiments thus bind miners to the CPU36for blockchain processing/mining. Exemplary embodiments thus include RAM hashing. The electronic database table90may have a random number of columns and/or a random number of rows. The electronic database table90may have a random number of database entries94. Moreover, each columnar/row database entry94may also have a random sequence or selection of bits/bytes (1's and 0's). So, whatever the hash values60generated by the hashing algorithm54, the separate difficulty algorithm48and/or proof-of-work algorithm52may use the electronic database table90to further randomize the hash values60for additional cryptographic security. Indeed, because at most a portion of the electronic database table90may be stored in the processor cache memory100, exemplary embodiments effectively confine hashing operations to the main/host memory device38(such as a subsystem RAM). Regardless of what device or service provider executes the hashing algorithm54, the electronic database table90, which is mostly or entirely stored in the main/host memory device38, provides the randomized inputs to the separate difficulty algorithm48and/or proof-of-work algorithm52. Operationally and functionally, then, exemplary embodiments divorce or functionally separate any hardware processing element from the hashing operation. Simply put, no matter what the performance/speed/capability of the ASIC, GPU, or the CPU36, the database table90may be randomly sized to always exceed the storage capacity or cache byte size104of the processor cache memory100. Hashing operations are thus reliant on cache latency, cache misses, and processor stalls when using the database table90.
The hashing operations are thus largely confined to, and performed by, the off-board or off-processor main/host memory device38(such as a subsystem RAM). Because the main/host memory device38performs most or all of the cryptographic security, the hardware processing component (ASIC, GPU, or the CPU36) may play little or no role in the hashing operations (perhaps only performing database lookup queries). Again, a better/faster ASIC or GPU provides little to no advantage in the hashing operations. Moreover, the main/host memory device38consumes much less electrical power, thus further providing reduced energy costs that deter/resist ASIC/GPU usage. Exemplary embodiments may also add cryptographic security. Exemplary embodiments may force the miner/network to possess, or have authorized access to, the database table90. In simple words, the proof-of-work target scheme34swaps random bytes in the hash value60with other random bytes specified by the database table90. Any party that provides or determines a proof-of-work must possess (or have access to) the database table90. If the difficulty algorithm48and/or the proof-of-work algorithm52lacks authorized access to the database table90, then the difficulty algorithm48and/or the proof-of-work algorithm52cannot query the database table90nor perform database lookup operations. Difficulty and/or proof-of-work will fail without having access to the database table90. Exemplary embodiments may also separately specify the difficulty algorithm48. The proof-of-work target scheme34may cause the miner system22to apply the bit shuffle operation92to the hash value60. The proof-of-work target scheme34may also specify the difficulty algorithm48and the target difficulty84, perhaps having a high number or value. Because these byte accesses to the processor cache memory100are random and over a gigabyte of the memory space, the byte accesses blow or exceed the retrieval and/or byte size storage capabilities of the processor cache memory100. 
The proof-of-work target scheme34thus forces the miner system22to wait on the slower main/host memory device38(rather than waiting on the speed of the hardware processing component). A faster/better hardware processing element (such as an ASIC), in other words, does not alleviate the bottleneck of accessing the main/host memory device38. Moreover, because exemplary embodiments may heavily rely on the main/host memory device38(rather than the hardware processing component) to do proof of work, the miner system22consumes significantly less electrical power (supplied by a power supply110). Because the proof-of-work algorithm52and the difficulty algorithm48may be separate from the cryptographic hashing algorithm54, exemplary embodiments utilize the security of a well-tested hashing function, but exemplary embodiments also require the proof-of-work scheme to use the main/host memory device38, which makes it unreasonable to build ASICs. Exemplary embodiments may thus force usage of a particular physical memory. Exemplary embodiments, for example, may overload the processor cache memory100by enlarging the byte size of the database table90with additional database entries. Even as L1, L2, and L3 processor cache memory100increases in storage capacity or byte size104, exemplary embodiments may concomitantly increase the table byte size102(in bits/bytes) to ensure the database table90continues to exceed the storage capacity or byte size104of the processor cache memory100. Exemplary embodiments may thus bind the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52to the main/host memory device38to deter GPU/ASIC usage. Exemplary embodiments may also unbind the hashing algorithm54from the difficulty algorithm48. Exemplary embodiments easily validate the proof-of-work by changing how proof-of-work is calculated without changing the hashing algorithm54.
Because the hashing algorithm54is disassociated or disconnected from the difficulty algorithm48, the cryptographic security of the hashing algorithm54is increased or improved. Moreover, the separate difficulty algorithm48and/or proof-of-work algorithm52may have other/different objectives, without compromising the cryptographic security of the hashing algorithm54. The difficulty algorithm48and/or proof-of-work algorithm52, for example, may be designed for less consumption of electrical power. The difficulty algorithm48and/or proof-of-work algorithm52may additionally or alternatively be designed to deter/resist ASIC/GPU usage, such as increased usage of the processor cache memory100and/or the main/host memory device38. The difficulty algorithm48and/or proof-of-work algorithm52need not be cryptographically secure. Because the hashing algorithm54ensures the cryptographic security, the difficulty algorithm48and/or proof-of-work algorithm52need not be burdened with providing the cryptographic security. The difficulty algorithm48and/or proof-of-work algorithm52each require less programming code and less storage space/usage in bytes, so each is much simpler to code and much faster to execute. FIG.13illustrates network binding. Because the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52may be separate software modules, routines, or clients, network communications may be used to deter specialized hardware. AsFIG.13illustrates, the miner system22communicates with the blockchain network server28via the communications network26. Because the miner system22may be authorized to perform blockchain mining (perhaps according to the proof-of-work target scheme34specified or used by the blockchain network server28), the miner system22may receive the inputs24from the blockchain network server28. The miner system22, in other words, must use the communications network26to receive the inputs24and to subsequently mine the inputs24.
The miner system22uses the inputs24to determine the hash value60and/or the difficulty50(as this disclosure above explains). However, suppose the blockchain network server28stores the database table90that is required for the difficulty algorithm48and/or the proof-of-work algorithm52. Even though the miner system22may execute the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52, the miner system22may be forced to send one or more database queries to the blockchain network server28. The blockchain network server28may have a hardware processing element and a memory device (not shown for simplicity) that stores the database table90. The blockchain network server28may also store and execute a query handler software application (also not shown for simplicity) that receives queries from clients, identifies or looks up entries94in the database table90, and sends query responses to the clients. So, when the miner system22is instructed to perform, or requires, the bit shuffle operation92, the miner system22may thus be forced to retrieve any entry94(specified by the database table90) via the communications network26from the blockchain network server28. The miner system22may thus send the database query to the network address assigned to or associated with the blockchain network server28. The miner system22then awaits a query response sent via the communications network26from the blockchain network server28, and the query response includes or specifies the random selection of bits/bytes retrieved from the particular entry94in the database table90. The miner system22may then perform the bit shuffle operation92on the hash value(s)60(as this disclosure above explains). Exemplary embodiments may use the network latency112to discourage or deter specialized hardware. Because the blockchain network server28may store the database table90, the miner system22is performance bound by the network latency112in the communications network26.
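A minimal sketch of the query/response round trip described above follows. The JSON wire format and the in-process "server" function are illustrative stand-ins for the query handler on the blockchain network server 28; in deployment the round trip would cross the communications network 26:

```python
import hashlib
import json

# Server side: a stand-in for the query handler holding table 90.
# Entry contents here are derived from a hash purely for illustration.
SERVER_TABLE = {i: hashlib.sha256(i.to_bytes(4, "big")).digest()[:4]
                for i in range(256)}

def handle_query(request: str) -> str:
    """Look up the requested entry 94 and return its random bytes."""
    index = json.loads(request)["entry"]
    return json.dumps({"entry": index, "bits": SERVER_TABLE[index].hex()})

def fetch_entry(index: int) -> bytes:
    """Miner side: send the query, await the response, decode the bytes.
    In deployment each call crosses the network, so mining throughput is
    bound by network latency 112 rather than by processor speed."""
    response = handle_query(json.dumps({"entry": index}))
    return bytes.fromhex(json.loads(response)["bits"])
```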
Packet communications between the blockchain network server28and the destination miner system22require time, and the network latency112is affected by network routing, network segment travel distances, network traffic, and many other factors. Exemplary embodiments may thus additionally or alternatively force the miner system22to wait on the communications network26to obtain any entry94in the database table90. A faster/better hardware processing component (such as an ASIC) does not overcome bottleneck(s) due to the network latency112in the communications network26. Moreover, because the electrical power required by a network interface114is likely less than that required by the hardware processing component, the miner system22consumes significantly less electrical power. FIG.14illustrates party binding. Here the miner system22may utilize an authorized proof-of-work (“PoW”) service provider120that provides a PoW service122. The miner system22may communicate with a PoW server124via the communications network26, and the PoW server124is operated by, or on behalf of, the PoW service provider120. Perhaps only the PoW service provider120may be authorized to execute the difficulty algorithm48and/or the proof-of-work algorithm52as a provable party. The PoW server124may have a hardware processing element and a memory device (not shown for simplicity) that stores the difficulty algorithm48and/or the proof-of-work algorithm52. If an incorrect or unauthorized party attempts the proof-of-work, the proof-of-work is designed to fail. As an example,FIG.14illustrates a party identifier126as one of the inputs24to the difficulty algorithm48and to the proof-of-work algorithm52. While the party identifier126may be supplied or sent from any network location (such as the blockchain network server28and/or the miner system22), the party identifier126may be locally retrieved from the memory device of the PoW server124.
The miner system22may send a PoW request128to a network address (e.g., IP address) associated with the PoW server124. The PoW request128may include or specify one or more of the inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. Suppose, for example, that the PoW request128includes or specifies the hash value(s)60(determined by the hashing algorithm54, as above explained). The PoW server124may generate the difficulty50(by calling or executing the difficulty algorithm48) and/or the proof-of-work result42(by calling and/or by executing the proof-of-work algorithm52) using the hash value(s)60and the party identifier126. The PoW server124may then send the difficulty50and/or the proof-of-work result42as a PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28. Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfies the proof-of-work (“PoW”) target scheme34, then the correct, authorized party has solved the mathematical puzzle62associated with the mining scheme. Exemplary embodiments may thus be socially bound. Because the party identifier126may be an input to the difficulty algorithm48and/or to the proof-of-work algorithm52, the party identifier126must specify the correct name, code, alphanumeric combination, binary value, or any other representation of the PoW service provider120. If the wrong, incorrect, or unauthorized value is input, the difficulty algorithm48and/or the proof-of-work algorithm52will generate incorrect results that cannot satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized party has been used to conduct the proof-of-work. FIG.15illustrates machine binding. 
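The party binding just described can be sketched as follows. This is a minimal assumption-laden sketch: the mixing function and all identifiers are hypothetical; the disclosure only requires that the party identifier 126 be an input whose wrong value yields a result failing the PoW target scheme 34.

```python
import hashlib

AUTHORIZED_PARTY = "PoW-Service-Provider-120"  # hypothetical party identifier 126

def party_bound_pow(hash_value: bytes, party_id: str) -> bytes:
    """Sketch of a party-bound proof-of-work algorithm 52: the party
    identifier 126 is folded into the result, so any other party produces
    a result that cannot satisfy the PoW target scheme 34."""
    return hashlib.sha256(hash_value + party_id.encode()).digest()

h = hashlib.sha256(b"hash value 60").digest()
authorized = party_bound_pow(h, AUTHORIZED_PARTY)
impostor = party_bound_pow(h, "wrong-party")
```

The two results differ, so only the result computed with the correct identifier can match whatever the target scheme expects.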
Here the miner system22may utilize a particular machine, device, or other computer to provide the PoW service122. The miner system22, for example, must use the PoW server124to execute the difficulty algorithm48and/or the proof-of-work algorithm52as a provable party. That is, perhaps only the PoW server124is authorized to execute the difficulty algorithm48and/or the proof-of-work algorithm52. A different computer or server, even if also operated by, or on behalf of, the PoW service provider120, is ineligible or unauthorized. FIG.15thus illustrates a machine identifier132as one of the inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. The machine identifier132is any value, number, or alphanumeric combination that uniquely identifies the PoW server124executing the difficulty algorithm48and/or the proof-of-work algorithm52. The machine identifier132, for example, may be a chassis or manufacturer's serial number, MAC address, or IP address that is assigned to or associated with the PoW server124. When the PoW server124receives the input(s)24from the miner system22(perhaps via the PoW request128, as above explained), the PoW server124may generate the difficulty50and/or the proof-of-work result42using the hash value(s)60and the machine identifier132as inputs. The PoW server124may then send the difficulty50and/or the proof-of-work result42as a PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28. Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfy the proof-of-work (“PoW”) target scheme34, then the correct, authorized machine or device has solved the mathematical puzzle62associated with the mining scheme. Exemplary embodiments may thus be machine bound. 
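A machine-bound variant might look like the sketch below. It assumes the machine identifier 132 is the MAC address (one of the examples the disclosure gives) and that it is simply concatenated into the hash input; the actual mixing scheme is unspecified.

```python
import hashlib
import uuid

def machine_identifier() -> str:
    """Machine identifier 132: here, the MAC address reported by uuid.getnode()."""
    return format(uuid.getnode(), "012x")

def machine_bound_pow(hash_value: bytes, machine_id: str) -> bytes:
    """The identifier of the executing machine is an input alongside the hash
    value 60, so a different computer produces a different (failing) result."""
    return hashlib.sha256(hash_value + machine_id.encode()).digest()

h = hashlib.sha256(b"inputs 24").digest()
result = machine_bound_pow(h, machine_identifier())
```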
If the wrong, incorrect, or unauthorized machine identifier132is input, the difficulty algorithm48and/or the proof-of-work algorithm52will generate incorrect results that cannot satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized computer has been used to conduct the proof-of-work. FIG.16further illustrates network binding. Here a predetermined network addressing scheme must be used to conduct the difficulty50and/or the proof-of-work result42. Suppose, for example, that the proof-of-work (“PoW”) target scheme34requires one or more predetermined network addresses134when executing the difficulty algorithm48and/or the proof-of-work algorithm52. The inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52, for example, may include one or more source addresses136and/or one or more destination addresses138when routing packetized data via the communications network26from the miner system22to the PoW service provider120(e.g., the PoW server124). The hash values60, in other words, must traverse or travel a predetermined network routing140in order to satisfy the proof-of-work (“PoW”) target scheme34. The predetermined network routing140may even specify a chronological list or order of networked gateways, routers, switches, servers, and other nodal addresses that pass or route the inputs24from the miner system22to the PoW server124. The source addresses136, the destination addresses138, and/or the predetermined network routing140may thus be additional data inputs24to the difficulty algorithm48and/or to the proof-of-work algorithm52. The PoW server124may perform network packet inspection to read/retrieve the source addresses136, the destination addresses138, and/or the predetermined network routing140associated with, or specified by, a data packet. 
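Network binding can be sketched in the same spirit. All names and the join format are assumptions; the point is only that the source address 136, destination address 138, and predetermined network routing 140 become proof-of-work inputs, so a packet that traveled a different route cannot satisfy the target scheme.

```python
import hashlib

def route_bound_pow(hash_value: bytes, source: str, destination: str,
                    route: list) -> bytes:
    """Sketch: fold the source address 136, destination address 138, and the
    predetermined network routing 140 (an ordered list of nodal addresses)
    into the proof-of-work input."""
    path = "|".join([source, *route, destination]).encode()
    return hashlib.sha256(hash_value + path).digest()

h = hashlib.sha256(b"hash values 60").digest()
good = route_bound_pow(h, "10.0.0.2", "10.0.9.9", ["10.0.1.1", "10.0.2.1"])
bad = route_bound_pow(h, "10.0.0.2", "10.0.9.9", ["192.168.1.1"])
```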
When the PoW server124receives the input(s)24from the miner system22(perhaps via the PoW request128, as above explained), the PoW server124may generate the difficulty50and/or the proof-of-work result42using the hash value(s)60, the source addresses136, the destination addresses138, and/or the predetermined network routing140. The PoW server124may then send the difficulty50and/or the proof-of-work result42as the PoW service response130back to the IP address associated with the miner system22and/or back to the IP address associated with the blockchain network server28. Either or both of the PoW server124and/or the blockchain network server28may compare the difficulty50and/or the proof-of-work result42to the proof-of-work (“PoW”) target scheme34. If the difficulty50and/or the proof-of-work result42satisfy the proof-of-work (“PoW”) target scheme34, then the correct, authorized networked devices were used to solve the mathematical puzzle62associated with the mining scheme. If a wrong, incorrect, or unauthorized routing was used, the difficulty algorithm48and/or the proof-of-work algorithm52will fail to satisfy the proof-of-work (“PoW”) target scheme34. An unauthorized network of computers has been used to conduct the proof-of-work. FIG.17illustrates vendor processing. The miner system22may communicate with one or more service providers via the communications network26. The miner system22may enlist or request that any of the service providers provide or perform a processing service. An encryption service provider150, for example, may provide an encryption service152by instructing an encryption server154to execute the encryption algorithm46chosen or specified by the miner system22and/or the blockchain network server28. A difficulty service provider156may provide a difficulty service158by instructing a difficulty server160to execute the difficulty algorithm48chosen or specified by the miner system22and/or the blockchain network server28. 
The proof-of-work (PoW) service provider120(e.g., the PoW server124) may provide the PoW service122by executing the proof-of-work algorithm52chosen or specified by the miner system22and/or the blockchain network server28. The miner system22may thus outsource or subcontract any of the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52to the service provider(s). Because the encryption algorithm46, the difficulty algorithm48, and/or the proof-of-work algorithm52may be separate software mechanisms or packages, the service providers150,156, and120may specialize in their respective algorithms46,48, and52and/or services152,158, and122. The encryption service provider150, for example, may offer a selection of different encryption services152and/or encryption algorithms46, with each encryption service152and/or encryption algorithm46tailored to a specific encryption need or feature. The difficulty service provider156may offer a selection of different difficulty services158and/or difficulty algorithms48that are tailored to a specific difficulty need or feature. The PoW service provider120may offer a selection of different PoW services122and/or PoW algorithms52that are tailored to a specific proof-of-work need or feature. The blockchain network server28, the miner system22, and/or the proof-of-work (“PoW”) target scheme34may thus mix-and-match encryption, difficulty, and proof-of-work options. Exemplary embodiments may thus decouple encryption, difficulty, and proof-of-work efforts. Because the encryption algorithm46may be a stand-alone software offering or module, exemplary embodiments greatly improve encryption security. The encryption algorithm46(such as the hashing algorithm54) need not intertwine with the difficulty algorithm48and/or the proof-of-work algorithm52. 
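The mix-and-match decoupling described above can be sketched as three independent, pluggable stages. The particular functions below are illustrative assumptions; the disclosure requires only that encryption 46, difficulty 48, and proof-of-work 52 be separable modules.

```python
import hashlib

def sha256_encrypt(data: bytes) -> bytes:
    """Stand-in for the encryption algorithm 46 (here, the hashing algorithm 54)."""
    return hashlib.sha256(data).digest()

def leading_zero_difficulty(h: bytes) -> int:
    """Stand-in for the difficulty algorithm 48: count leading zero bits."""
    bits = bin(int.from_bytes(h, "big"))[2:].zfill(256)
    return len(bits) - len(bits.lstrip("0"))

def simple_pow(h: bytes) -> bytes:
    """Stand-in for the proof-of-work algorithm 52."""
    return hashlib.sha256(h).digest()

def mine(transactions: bytes, encrypt, difficulty, pow_fn):
    """Each stage is a swappable callable, so any one may be replaced
    without touching the other two."""
    h = encrypt(transactions)
    return h, difficulty(h), pow_fn(h)

h, d, r = mine(b"blockchain transactions 32",
               sha256_encrypt, leading_zero_difficulty, simple_pow)
```

Swapping in a different difficulty function means passing a different callable to `mine`; the encryption stage is untouched, which is the security benefit the passage describes.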
Because the hashing algorithm54may be functionally divorced from difficulty and proof-of-work calculations, the hashing algorithm54remains a safe, secure, and proven cryptology scheme without exposure to software bugs and errors introduced by difficulty and proof-of-work needs. The difficulty algorithm48may also be severed or isolated from encryption and proof-of-work, thus allowing a blockchain scheme to dynamically alter or vary different difficulty calculations without affecting encryption and/or proof-of-work. The proof-of-work algorithm52may also be partitioned, split off, or disconnected from encryption and difficulty, thus allowing any blockchain scheme to dynamically alter or vary different proof-of-work calculations or schemes without affecting encryption and/or difficulty. FIG.18illustrates democratic mining. Exemplary embodiments reduce or even eliminate the need for graphics processors and specialized application-specific integrated circuits. The miner system22may thus rely on a conventional central processing unit (such as the CPU36) to process the blockchain transactions32. The miner system22may thus be a conventional home or business server/desktop160or laptop computer162that is much cheaper to purchase, use, and maintain. Moreover, the miner system22may even be a smartphone164, tablet computer166, or smartwatch168, as these devices also have adequate processing and memory capabilities to realistically mine and win the block40of data (illustrated inFIGS.1-10). Indeed, the miner system22may be any network-connected device, as exemplary embodiments reduce or even eliminate the need for specialized hardware processors. The miner system22thus opens up blockchain mining to any network-connected appliance (e.g., refrigerator, washer, dryer), smart television, camera, smart thermostat, or other Internet of Things device. FIG.19also illustrates democratic mining. 
Because exemplary embodiments reduce or even eliminate the need for graphics processors and specialized application-specific integrated circuits, the miner system22may even be a car, truck, or other vehicle170. As the reader may realize, the vehicle170may have many electronic systems controlling many components and systems. For example, the engine may have an engine electronic control unit or “ECU”172, the transmission may have a powertrain electronic control unit or “PCU”174, the braking system may have a brake electronic control unit or “BCU”176, and the chassis system may have a chassis electronic control unit or “CUC”178. There may be many more electronic control units throughout the vehicle170. A controller area network180thus allows all the various electronic control units to communicate with each other (via messages sent/received via a CAN bus). All these controllers may also interface with the communications network26via a wireless vehicle transceiver182(illustrated as “TX/RX”). The vehicle170may thus communicate with the blockchain network server28to receive the inputs24(such as the blockchain transactions32). The vehicle170may then use the various controllers172-178to mine the blockchain transactions32using the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52(as this disclosure above explains). The reader may immediately see that the vehicle170is a powerful processing platform for blockchain mining. The vehicle170may mine the blockchain transactions32when moving or stationary, as long as electrical power is available to the various controllers172-178and to the vehicle transceiver182. Indeed, even when parked with the ignition/battery/systems on or off, exemplary embodiments may maintain the electrical power to mine the blockchain transactions32. 
So, a driver/user may configure the vehicle170to mine the blockchain transactions32, even when the vehicle sits during work hours, sleep hours, shopping hours, and other times of idle use. The reader may also immediately see that vehicular mining opens up countless additional possibilities to win the block40of data (i.e., solve the puzzle62) without additional investment in mining rigs. Thousands, millions, or even billions of vehicles170(e.g., cars, trucks, boats, planes, buses, trains, motorcycles) may mine the blockchain transactions32, thus providing a potential windfall to offset the purchasing and operational expenses. Exemplary embodiments reduce energy consumption. Because a conventional, general purpose central processing unit (e.g., the CPU36) is adequate for mining the blockchain transactions32, exemplary embodiments consume much less electrical power. Moreover, because a conventional central processing unit consumes much less electrical power, the CPU operates at much cooler temperatures, generates less waste heat/energy, and therefore requires less cooling, air conditioning, and refrigerant machinery. Exemplary embodiments are thus much cheaper to operate than GPUs and ASICs. Exemplary embodiments thus democratize blockchain mining. Because encryption, difficulty, and proof-of-work efforts may be functionally divided, general-purpose computer equipment has the processing and memory capability to compete as blockchain miners. For example, because the function(s) that calculate(s) the magnitude of the proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52) may be detached or isolated from the function that performs cryptography (such as the hashing algorithm54), encryption need not be modified in order to improve security (such as the MONERO® mining scheme). The well-tested SHA-256 hashing function, for example, remains stable and unaffected by difficulty and/or proof-of-work. 
The difficulty algorithm48, in other words, need not be determined by or with the hashing algorithm54. The difficulty algorithm48, instead, may be separately determined as a true, independent measure of the difficulty50. The inventor has realized that most or all proof of work schemes generally may have two functions (i.e., one function to do a cryptographic hash and another function to determine the level of difficulty of a given hash). Exemplary embodiments may separate, or take away, what makes proof of work hard from the cryptographic hash and, perhaps instead, put it in the difficulty algorithm48that calculates which hash is more difficult. The difficulty algorithm48, for example, may be functionally combined with the proof-of-work algorithm52that calculates the magnitude of the proof of work instead of using the hashing algorithm54(asFIG.5illustrates). Exemplary embodiments need not try to design, develop, or modify hashing functions that deter ASIC mining. Encryption may thus be independent from proof-of-work determinations. The proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52) may be a different or separate software mechanism from the hashing mechanism. The difficulty50of the proof-of-work, for example, may be a separate component from staking in a blockchain. The difficulty algorithm48and/or the proof-of-work algorithm52may require communications networking between provably different parties. The difficulty algorithm48and/or the proof-of-work algorithm52may require network delays and/or memory bandwidth limitations. The difficulty algorithm48and/or the proof-of-work algorithm52may have a random component (such as incorporating a random function), such that the difficulty algorithm48and/or the proof-of-work algorithm52may randomly determine the difficulty50and/or the proof-of-work result42. 
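One way the difficulty algorithm 48 can measure difficulty independently of the hashing algorithm 54 is to treat the hash as a uniform random number and score how rare it is. The inverse-value convention below is a common one, used here as an assumption; the disclosure does not fix a particular formula.

```python
import hashlib

def difficulty_from_hash(h: bytes) -> float:
    """Sketch of an independent difficulty algorithm 48: smaller hash values
    are rarer and hence score as 'more difficult' (a common convention)."""
    value = int.from_bytes(h, "big")
    return (2 ** 256) / (value + 1)

h = hashlib.sha256(b"hash value 60").digest()
d = difficulty_from_hash(h)
```

Nothing here depends on how `h` was produced, which is the point: the hashing function supplies unpredictability, and the difficulty function merely measures it.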
Exemplary embodiments thus reduce or even eliminate the power intensive mechanism that is inherent in today's proof of work schemes by changing how the proof of work is calculated. Exemplary embodiments need not change the hashing algorithm54, and exemplary embodiments allow a more easily validated proof of work. The hashing algorithm54is not bound or required to determine the proof of work. The proof of work need not be cryptographically secure. The liberated, autonomous hashing algorithm54generates and guarantees an input (e.g., the hash values60) that cannot be predicted by some other faster algorithm. The disassociated hashing algorithm54effectively generates the hash values60as random numbers. The hashing algorithm54, in other words, provides cryptographic security, so neither the difficulty algorithm48nor the proof-of-work algorithm52need be cryptographically secure. The difficulty algorithm48and/or the proof-of-work algorithm52need not be folded into the hashing algorithm54. Exemplary embodiments provide great value to blockchains. Exemplary embodiments may functionally separate encryption (e.g., the hashing algorithm54) from proof of work (such as the difficulty algorithm48and/or the proof-of-work algorithm52). Exemplary embodiments may thus bind proof-of-work to a conventional central processing unit. Deploying a different cryptographic hash is hugely dangerous for blockchains, but deploying another difficulty or proof of work mechanism is not so dangerous. Exemplary embodiments allow blockchains to experiment with different difficulty functions (the difficulty algorithms48) and/or different proof-of-work algorithms52without changing the hashing algorithm54. Exemplary embodiments thus mitigate risk and reduce problems with cryptographic security. Many blockchain environments would prefer to make their technology CPU mineable for lower power, lower costs, and more democratic participation. 
The barrier, though, is that conventionally these goals would require changing their hash function. Exemplary embodiments, instead, reduce costs and increase the pool of miner systems without changing the hash function. The difficulty algorithm48and/or the proof-of-work algorithm52may be refined, modified, or even replaced with little or no impact on the hashing algorithm54. Exemplary embodiments reduce electrical power consumption. Blockchain mining is very competitive, as the first miner that solves the mathematical puzzle62owns the block40of data and is financially rewarded. Large “farms” have thus overtaken blockchain mining, with each miner installation using hundreds or even thousands of ASIC-based computers to improve their chances of first solving the calculations specified by the mathematical puzzle62. ASIC-based blockchain mining requires tremendous energy resources, though, with some studies estimating that each BITCOIN® transaction consumes more daily electricity than an average American home. Moreover, because ASIC-based blockchain mining operates 24/7/365 at full processing power, the ASIC-based machines quickly wear out or fail and need periodic (perhaps yearly) replacement. Exemplary embodiments, instead, retarget blockchain mining back to CPU-based machines that consume far less electrical power and that cost far less money to purchase. Because the capital costs and expenses are greatly reduced, more miners and more CPU-based machines may effectively participate and compete. The CPU-based machines, in other words, have a realistic and profitable chance of first solving the calculations specified by the mathematical puzzle62. Democratic participation is greatly increased. FIGS.20-21are more detailed illustrations of an operating environment, according to exemplary embodiments.FIG.20illustrates the blockchain network server28communicating with the miner system22via the communications network26. 
The blockchain network server28and the miner system22operate in the blockchain environment20. The blockchain network server28has a hardware processing component190(e.g., “P”) that executes a server-side blockchain software application192stored in a local memory device194. The blockchain network server28has a network interface to the communications network26, thus allowing two-way, bidirectional communication with the miner system22. The server-side blockchain software application192includes instructions, code, and/or programs that cause the blockchain network server28to perform operations, such as sending the inputs24(such as the blockchain transactions32) and/or the proof-of-work (“PoW”) target scheme34via the communications network26to the network address (e.g., Internet protocol address) associated with or assigned to the miner system22. The inputs24may be any electronic data30that is shared among miners participating in the blockchain environment20. The miner system22operates as a mining node in the blockchain environment20. The miner system22has the central processing unit (e.g., “CPU”)36that executes a client-side blockchain mining software application196stored in the local memory device38. The miner system22has a network interface to the communications network26, thus allowing two-way, bidirectional communication with the blockchain network server28. The client-side blockchain mining software application196includes instructions, code, and/or programs that cause the miner system22to perform operations, such as receiving the inputs24, the electronic data30, and/or the proof-of-work (“PoW”) target scheme34. The client-side blockchain mining software application196may then cause the miner system22to execute the proof-of-work (“PoW”) mechanism44based on the electronic data30representing the inputs24. 
The client-side blockchain mining software application196may instruct the CPU36to call and/or to execute the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52. The CPU36calls or executes any or all of the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52using the electronic data30. The miner system22mines blockchain transactional records. Whatever the electronic data30represents, the miner system22applies the electronic data30according to the proof-of-work target scheme34. While the proof-of-work target scheme34may specify any encryption algorithm46, most blockchains specify the hashing algorithm54. The miner system22may thus generate the hash values60by hashing the electronic data30(e.g., the blockchain transactions32) using the hashing algorithm54. The miner system22may generate the difficulty50by executing the difficulty algorithm48using the hash values60. The miner system22may generate the proof-of-work result42using the hash value(s)60as inputs to the proof-of-work algorithm52. If the proof-of-work result42satisfies the mathematical puzzle62, according to the rules/regulations specified by the blockchain network server28and/or the proof-of-work target scheme34, then perhaps the miner system22earns or owns the right or ability to write/record blockchain transaction(s) to the block40of data. The miner system22may also earn or be rewarded with a compensation (such as a cryptographic coin, points, other currency/coin/money, or other value). The miner system22may own the block40of data. If the miner system22is the first to satisfy the proof-of-work target scheme34(e.g., the proof-of-work result42satisfies the mathematical puzzle62), the miner system22earns the sole right or ability to write the blockchain transactions32to the block40of data. 
The miner system22may timestamp the block40of data and broadcast the block40of data, the timestamp, the proof-of-work result42, and/or the mathematical puzzle62to other miners in the blockchain environment20. The miner system22may broadcast a hash value representing the block40of data. The miner system22thus adds or chains the block40of data (and perhaps its hash value) to the blockchain64, and the other miners begin working on a next block in the blockchain64. The proof-of-work target scheme34and/or the mathematical puzzle62may vary. Satoshi's BITCOIN® proof-of-work scanned for a value that, when hashed, yields a hash value beginning with a number of zero bits. The average work required is exponential in the number of zero bits required and can be verified by executing a single hash. BITCOIN's miners may increment a nonce in the block40of data until a value is found that gives the block's hash the required zero bits. FIG.21further illustrates the operating environment. The miner system22may optionally utilize vendors for any of the hashing algorithm54, the difficulty algorithm48, and the proof-of-work algorithm52. The miner system22may enlist or request that a service provider provide or perform a processing service. The encryption server154, for example, may communicate with the blockchain network server28and the miner system22via the communications network26. The encryption server154has a hardware processing element (“P”) that executes the encryption algorithm46stored in a local memory device. The encryption server154is operated on behalf of the encryption service provider150and provides the encryption service152. The miner system22and/or the blockchain network server28may send an encryption service request to the encryption server154, and the encryption service request may specify the inputs24(such as the blockchain transactions32). The encryption server154executes the encryption algorithm46using the inputs24to generate the hash value(s)60. 
The encryption server154sends a service response to the miner system22, and the service response includes or specifies the hash value(s)60. Other suppliers may be used. The difficulty server160may communicate with the blockchain network server28and the miner system22via the communications network26. The difficulty server160has a hardware processing element (“P”) that executes the difficulty algorithm48stored in a local memory device. The difficulty service provider156may provide the difficulty service158by instructing the difficulty server160to execute the difficulty algorithm48chosen or specified by the miner system22and/or the blockchain network server28. The miner system22and/or the blockchain network server28may send a difficulty service request to the difficulty server160, and the difficulty service request may specify the hash value(s)60. The difficulty server160executes the difficulty algorithm48using the hash value(s)60to generate the difficulty50. The difficulty server160sends the service response to the miner system22, and the service response includes or specifies the difficulty50. The PoW server124may communicate with the blockchain network server28and the miner system22via the communications network26. The PoW server124has a hardware processing element (“P”) that executes the proof-of-work algorithm52stored in a local memory device. The PoW service provider120(e.g., the PoW server124) may provide the PoW service122by executing the proof-of-work algorithm52chosen or specified by the miner system22and/or the blockchain network server28. The PoW server124sends the service response to the miner system22, and the service response includes or specifies the PoW result42. The miner system22may compare any of the hash value(s)60, the difficulty50, and/or the PoW result42to the proof-of-work target scheme34. If the proof-of-work target scheme34is satisfied, perhaps the miner system22is the first miner to have solved the puzzle62. 
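The BITCOIN®-style search described above (incrementing a nonce until the block's hash has the required leading zero bits) can be sketched as follows. The zero-bit count and byte layout are illustrative assumptions.

```python
import hashlib

def satisfies_target(h: bytes, zero_bits: int) -> bool:
    """Toy PoW target scheme 34: the hash must begin with `zero_bits` zero bits.
    Verification costs a single hash, as the passage above notes."""
    return int.from_bytes(h, "big") >> (256 - zero_bits) == 0

def mine_block(transactions: bytes, zero_bits: int = 12):
    """Increment a nonce until the block's hash has the required zero bits;
    expected work grows exponentially with `zero_bits`."""
    nonce = 0
    while True:
        h = hashlib.sha256(transactions + nonce.to_bytes(8, "big")).digest()
        if satisfies_target(h, zero_bits):
            return nonce, h
        nonce += 1

nonce, h = mine_block(b"block 40 of data", zero_bits=12)
```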
Exemplary embodiments may be applied regardless of networking environment. Exemplary embodiments may be easily adapted to stationary or mobile devices having wide-area networking (e.g., 4G/LTE/5G cellular), wireless local area networking (WI-FI®), near field, and/or BLUETOOTH® capability. Exemplary embodiments may be applied to stationary or mobile devices utilizing any portion of the electromagnetic spectrum and any signaling standard (such as the IEEE 802 family of standards, GSM/CDMA/TDMA or any cellular standard, and/or the ISM band). Exemplary embodiments, however, may be applied to any processor-controlled device operating in the radio-frequency domain and/or the Internet Protocol (IP) domain. Exemplary embodiments may be applied to any processor-controlled device utilizing a distributed computing network, such as the Internet (sometimes alternatively known as the “World Wide Web”), an intranet, a local-area network (LAN), and/or a wide-area network (WAN). Exemplary embodiments may be applied to any processor-controlled device utilizing power line technologies, in which signals are communicated via electrical wiring. Indeed, exemplary embodiments may be applied regardless of physical componentry, physical configuration, or communications standard(s). Exemplary embodiments may utilize any processing component, configuration, or system. For example, the miner system22may utilize any desktop, mobile, or server central processing unit or chipset offered by INTEL®, ADVANCED MICRO DEVICES®, ARM®, TAIWAN SEMICONDUCTOR MANUFACTURING®, QUALCOMM®, or any other manufacturer. The miner system22may even use multiple central processing units or chipsets, which could include distributed processors or parallel processors in a single machine or multiple machines. The central processing unit or chipset can be used in supporting a virtual processing environment. The central processing unit or chipset could include a state machine or logic controller. 
When any of the central processing units or chipsets execute instructions to perform “operations,” this could include the central processing unit or chipset performing the operations directly and/or facilitating, directing, or cooperating with another device or component to perform the operations. Exemplary embodiments may packetize. When the blockchain network server28and the miner system22communicate via the communications network26, the blockchain network server28and the miner system22may collect, send, and retrieve information. The information may be formatted or generated as packets of data according to a packet protocol (such as the Internet Protocol). The packets of data contain bits or bytes of data describing the contents, or payload, of a message. A header of each packet of data may be read or inspected and contain routing information identifying an origination address and/or a destination address. Exemplary embodiments may use any encryption or hashing function. There are many encryption algorithms and schemes, and exemplary embodiments may be adapted to execute or to conform to any encryption algorithm and/or scheme. In the blockchain environment20, though, many readers may be familiar with the various hashing algorithms, especially the well-known SHA-256 hashing algorithm. The SHA-256 hashing algorithm acts on any electronic data or information to generate a 256-bit hash value as a cryptographic key. The key is thus a unique digital signature. However, there are many different hashing algorithms, and exemplary embodiments may be adapted to execute or to conform to any hashing algorithm, hashing family, and/or hashing scheme (e.g., Blake family, MD family, RIPE family, SHA family, CRC family). The miner system22may store or request different software packages. 
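The SHA-256 behavior noted above (any input yields a 256-bit digest that acts as a unique digital signature) can be seen directly:

```python
import hashlib

# 256 bits = 64 hexadecimal characters; any change to the input,
# however small, yields a completely different digest.
digest = hashlib.sha256(b"any electronic data or information").hexdigest()
other = hashlib.sha256(b"any electronic data or information!").hexdigest()
```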
The hashing algorithm54may be a software file, executable program, routine, module, programming code, or third-party service that hashes the blockchain transactions32to generate the hash value(s)60. The difficulty algorithm48may be a software file, executable program, routine, module, programming code, or third-party service that uses the hash value(s)60to generate the difficulty50. The proof-of-work (“PoW”) algorithm52may be a software file, executable program, routine, module, programming code, or third-party service that uses the hash value(s)60to generate the PoW result42. The miner system22may download or otherwise acquire the hashing algorithm54, the difficulty algorithm48, and/or the PoW algorithm52to provide mining operations for the blockchain transactions32. The blockchain environment20may flexibly switch or interchange encryption, difficulty, and proof-of-work. Because the hashing algorithm54, the difficulty algorithm48, and the proof-of-work algorithm52may be separate software packages, the proof-of-work (“PoW”) target scheme34and/or the blockchain environment20may mix-and-match the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52. The blockchain environment20may thus easily evaluate different combinations of the encryption algorithm46, the difficulty algorithm48, and the proof-of-work algorithm52with little or no intra-algorithm or intra-application effect. The blockchain environment20may mix-and-match encryption, difficulty, and proof-of-work. FIGS.22-31illustrate mining specifications, according to exemplary embodiments. When the miner system22communicates with the blockchain network server28, the blockchain network server28may specify the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20.
That is, when the miner system22participates as a miner and mines or processes blockchain records/transactions, the miner system22may be required or instructed to use the particular hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52specified by the blockchain network. For example, in order for the miner system22to be authorized or recognized as a mining participant, the miner system22may be required to download the client-side blockchain mining software application196that specifies or includes the hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52. The client-side blockchain mining software application196may thus comprise any software apps or modules, files, programming code, or instructions representing the hashing algorithm54, the difficulty algorithm48, and/or the proof-of-work algorithm52. FIGS.23-25illustrate an encryption identifier mechanism.FIG.23illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20. In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify an encryption identifier (encryption “ID”)200associated with the blockchain network's chosen or required encryption scheme. The encryption identifier200may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the encryption algorithm46used by the blockchain environment20. AsFIG.23illustrates, the miner system22may receive the encryption identifier200as a specification or parameter associated with the PoW target scheme34and/or the encryption algorithm46. 
AsFIG.24illustrates, though, the miner system22may receive a packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the encryption identifier200as a data field, specification, or parameter. Again, because many or most blockchain networks use hashing as an encryption mechanism, the encryption identifier200may specify, be assigned to, or be associated with the hashing algorithm54. The blockchain network server28may thus send the encryption identifier200(via the communications network26) to the miner system22. The encryption identifier200may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the encryption identifier200may additionally or alternatively be sent to the miner system22at any time via the message202. Because the encryption identifier200may be separately sent from the client-side blockchain mining software application196, the encryption identifier200may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.25illustrates, exemplary embodiments may consult the electronic database70of encryption algorithms. Once the miner system22receives or determines the encryption identifier200, the miner system22may implement the encryption scheme represented by the encryption identifier200. The miner system22may obtain, read, or retrieve the encryption identifier200specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the encryption identifier200is determined, the miner system22may identify the corresponding blockchain encryption scheme by querying the electronic database70of encryption algorithms for the encryption identifier200.FIG.25illustrates the electronic database70of encryption algorithms locally stored in the memory device38of the miner system22. 
The electronic database70of encryption algorithms may store, reference, or associate the encryption identifier200to its corresponding proof-of-work target scheme34and/or encryption algorithm46. The miner system22may thus perform or execute a database lookup for the encryption identifier200to identify which proof-of-work target scheme34and/or encryption algorithm46is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the encryption algorithm46using the inputs24(such as the blockchain transactions32), as this disclosure above explained (with reference toFIG.7). Exemplary embodiments may outsource encryption operations. When the miner system22determines the encryption identifier200, the corresponding blockchain encryption scheme may require or specify the encryption service provider150that provides the encryption service152. AsFIG.25also illustrates, the electronic database70of encryption algorithms may map or relate the encryption identifier200to its corresponding encryption service provider150that provides the encryption service152. The miner system22may thus identify an encryption service resource204that provides the encryption service152. The encryption service resource204, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the encryption service provider150and/or the encryption service152. The miner system22may outsource or subcontract the inputs24(such as the blockchain transactions32) to the encryption service resource204(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to hashing. The miner system22may call, request, and/or execute any encryption scheme specified by any client, cryptographic coin, or blockchain network. 
The miner system22may dynamically switch or mix-and-match different encryption schemes. Once the miner system22determines the proof-of-work target scheme34, the encryption algorithm46, the encryption service provider150, the encryption service152, the encryption identifier200, and/or the encryption service resource204, the miner system22may perform any encryption scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the encryption scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different encryption strategies, perhaps with little or no impact or effect on difficulty and proof-of-work operations. Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the encryption identifier200, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the hashing algorithm54used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the encryption algorithm46, the encryption service provider150, the encryption service152, the encryption identifier200, and/or the encryption service resource204. The blockchain environment20need not be burdened with conveying the hashing algorithm54to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. 
Moreover, especially if the miner system22outsources the hashing operation, the miner system22is relieved from processing/executing the hashing algorithm54and consumes less of the electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the hashing operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIGS.26-28illustrate a difficulty identifier mechanism.FIG.26illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20. In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify a difficulty identifier (difficulty “ID”)210associated with the blockchain network's chosen or required difficulty scheme. The difficulty identifier210may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the difficulty algorithm48used by the blockchain environment20. AsFIG.26illustrates, the miner system22may receive the difficulty identifier210as a specification or parameter associated with the PoW target scheme34and/or the difficulty algorithm48. AsFIG.27illustrates, though, the miner system22may receive the packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the difficulty identifier210as a data field, specification, or parameter. The blockchain network server28may thus send the difficulty identifier210(via the communications network26) to the miner system22. The difficulty identifier210may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196.
However, the difficulty identifier210may additionally or alternatively be sent to the miner system22at any time via the message202. Because the difficulty identifier210may be separately sent from the client-side blockchain mining software application196, the difficulty identifier210may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.28illustrates, exemplary embodiments may consult the electronic database74of difficulty algorithms. Once the miner system22receives or determines the difficulty identifier210, the miner system22may implement the difficulty scheme represented by the difficulty identifier210. The miner system22may obtain, read, or retrieve the difficulty identifier210specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the difficulty identifier210is determined, the miner system22may identify the corresponding blockchain difficulty scheme by querying the electronic database74of difficulty algorithms for any query parameter (such as the difficulty identifier210).FIG.28illustrates the electronic database74of difficulty algorithms locally stored in the memory device38of the miner system22. The electronic database74of difficulty algorithms may store, reference, or associate the difficulty identifier210to its corresponding proof-of-work target scheme34and/or difficulty algorithm48. The miner system22may thus perform or execute a database lookup for the difficulty identifier210to identify which proof-of-work target scheme34and/or difficulty algorithm48is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the difficulty algorithm48using the hash value(s)60, as this disclosure above explained (with reference toFIG.8). Exemplary embodiments may outsource difficulty operations. 
When the miner system22determines the difficulty identifier210, the corresponding blockchain difficulty scheme may require or specify the difficulty service provider156that provides the difficulty service158. AsFIG.28also illustrates, the electronic database74of difficulty algorithms may map or relate the difficulty identifier210to its corresponding difficulty service provider156that provides the difficulty service158. The miner system22may thus identify a difficulty service resource212that provides the difficulty service158. The difficulty service resource212, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the difficulty service provider156and/or the difficulty service158. The miner system22may outsource or subcontract the hash value(s)60to the difficulty service resource212(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to difficulty. The miner system22may call, request, and/or execute any difficulty scheme specified by any client, cryptographic coin, or blockchain network. The miner system22may dynamically switch or mix-and-match different difficulty schemes. Once the miner system22determines the proof-of-work target scheme34, the difficulty algorithm48, the difficulty service provider156, the difficulty service158, the difficulty identifier210, and/or the difficulty service resource212, the miner system22may perform any difficulty scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the difficulty scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different difficulty strategies, perhaps with little or no impact or effect on hashing and proof-of-work operations. 
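The outsourcing exchange with the difficulty service resource212can be sketched as a request/response message pair. The JSON wire format below is an assumption for illustration; the source does not specify a message format:

```python
import json

def build_difficulty_request(difficulty_id: str, hash_value: bytes) -> str:
    # Service request: the miner subcontracts its hash value(s) to the
    # difficulty service resource identified by the difficulty identifier.
    return json.dumps({"difficulty_id": difficulty_id,
                       "hash_value": hash_value.hex()})

def parse_difficulty_response(response: str) -> int:
    # Service response: the difficulty generated from the hash value.
    return int(json.loads(response)["difficulty"])
```

The analogous exchanges with the encryption service resource204and the PoW service resource216would differ only in the identifier and payload fields.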
Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the difficulty identifier210, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the difficulty algorithm48used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the difficulty algorithm48, the difficulty service provider156, the difficulty service158, the difficulty identifier210, and/or the difficulty service resource212. The blockchain environment20need not be burdened with conveying the difficulty algorithm48to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources the difficulty operation, the miner system22is relieved from processing/executing the difficulty algorithm48and consumes less of the electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the difficulty operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIGS.29-31illustrate a proof-of-work (“PoW”) identifier mechanism.FIG.29illustrates the miner system22receiving the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20.
In order to reduce a memory byte size and/or programming line size of the PoW target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may specify a PoW identifier214associated with the blockchain network's chosen or required PoW scheme. The PoW identifier214may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the PoW target scheme34and/or the PoW algorithm52used by the blockchain environment20. AsFIG.29illustrates, the miner system22may receive the PoW identifier214as a specification or parameter associated with the PoW target scheme34and/or the PoW algorithm52. AsFIG.30illustrates, though, the miner system22may receive the packetized message202from the blockchain network server28, and a packet header and/or payload may specify or include the PoW identifier214as a data field, specification, or parameter. The blockchain network server28may thus send the PoW identifier214(via the communications network26) to the miner system22. The PoW identifier214may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the PoW identifier214may additionally or alternatively be sent to the miner system22at any time via the message202. Because the PoW identifier214may be separately sent from the client-side blockchain mining software application196, the PoW identifier214may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. AsFIG.31illustrates, exemplary embodiments may consult the electronic database78of PoW algorithms. Once the miner system22receives or determines the PoW identifier214, the miner system22may implement the proof-of-work scheme represented by the PoW identifier214. 
The miner system22may obtain, read, or retrieve the PoW identifier214specified by the client-side blockchain mining software application196and/or packet inspect the message202from the blockchain network server28. Once the PoW identifier214is determined, the miner system22may identify the corresponding blockchain proof-of-work scheme by querying the electronic database78of PoW algorithms for any query parameter (such as the PoW identifier214).FIG.31illustrates the database78of PoW algorithms locally stored in the memory device38of the miner system22. The electronic database78of PoW algorithms may store, reference, or associate the PoW identifier214to its corresponding proof-of-work target scheme34and/or PoW algorithm52. The miner system22may thus perform or execute a database lookup for the PoW identifier214to identify which proof-of-work target scheme34and/or PoW algorithm52is required for miners operating in the blockchain environment20. The miner system22may then retrieve, call, and/or execute the PoW algorithm52using the hash value(s)60, as this disclosure above explained (with reference toFIG.9). Exemplary embodiments may outsource proof-of-work operations. When the miner system22determines the PoW identifier214, the corresponding blockchain proof-of-work scheme may require or specify the PoW service provider120that provides the PoW service122. AsFIG.31also illustrates, the electronic database78of PoW algorithms may map or relate the PoW identifier214to its corresponding PoW service provider120and PoW service122. The miner system22may thus identify a PoW service resource216that provides the PoW service122. The PoW service resource216, for example, may be an Internet protocol address, website/webpage, and/or uniform resource locator (URL) that is assigned to, or associated with, the PoW service provider120and/or PoW service122.
The miner system22may outsource or subcontract the hash value(s)60to the PoW service resource216(perhaps using the service request and service response mechanism explained with reference toFIG.21). Exemplary embodiments may thus be agnostic to proof-of-work. The miner system22may call, request, and/or execute any proof-of-work scheme specified by any client, cryptographic coin, or blockchain network. The miner system22may dynamically switch or mix-and-match different proof-of-work schemes. Once the miner system22determines the proof-of-work target scheme34, the PoW algorithm52, the PoW service provider120, the PoW service122, the PoW identifier214, and/or the PoW service resource216, the miner system22may perform any proof-of-work scheme specified for the blockchain environment20. The blockchain environment20may dynamically change the proof-of-work scheme at any time. The blockchain environment20may flexibly switch, change, and evaluate different proof-of-work strategies, perhaps with little or no impact or effect on hashing and difficulty operations. Moreover, the miner system22may operate within or mine different blockchain environments20without specialized hardware rigs. Exemplary embodiments improve computer functioning. Because exemplary embodiments may only specify the PoW identifier214, the memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the PoW algorithm52used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify much smaller byte-sized data or information representing the PoW algorithm52, the PoW service provider120, the PoW service122, the PoW identifier214, and/or the PoW service resource216. 
The blockchain environment20need not be burdened with conveying the PoW algorithm52to the miner system22and other mining nodes. The blockchain environment20and the communications network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources the proof-of-work operation, the miner system22is relieved from processing/executing the PoW algorithm52and consumes less of the electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the proof-of-work operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. FIG.32illustrates remote retrieval, according to exemplary embodiments. After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the encryption algorithm46, the difficulty algorithm48, and/or the PoW algorithm52. For example, the miner system22may determine the encryption identifier200(as this disclosure above explains) and send a query to the encryption server154. The query specifies the encryption identifier200. When the encryption server154receives the query, the encryption server154may query the database70of encryption algorithms for the encryption identifier200. The encryption server154may locally store the database70of encryption algorithms and function as a networked encryption resource for clients. The encryption server154identifies and/or retrieves the corresponding encryption algorithm46. The encryption server154sends a query response to the miner system22, and the query response specifies or includes the corresponding encryption algorithm46. The miner system22may then execute the encryption algorithm46, as above explained. The miner system22may remotely retrieve the difficulty algorithm48.
After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the difficulty algorithm48. For example, the miner system22may determine the difficulty identifier210(as this disclosure above explains) and send a query to the difficulty server160. The query specifies the difficulty identifier210. When the difficulty server160receives the query, the difficulty server160may query the database74of difficulty algorithms for the difficulty identifier210. The difficulty server160may locally store the database74of difficulty algorithms and function as a networked difficulty resource for clients. The difficulty server160identifies and/or retrieves the corresponding difficulty algorithm48. The difficulty server160sends a query response to the miner system22, and the query response specifies or includes the corresponding difficulty algorithm48. The miner system22may then execute the difficulty algorithm48, as above explained. The miner system22may remotely retrieve the PoW algorithm52. After the miner system22determines the proof-of-work (“PoW”) target scheme34that is required by the blockchain environment20, the miner system22may acquire or download the PoW algorithm52. For example, the miner system22may determine the PoW identifier214(as this disclosure above explains) and send a query to the PoW server124. The query specifies the PoW identifier214. When the PoW server124receives the query, the PoW server124may query the database78of PoW algorithms for the PoW identifier214. The PoW server124may locally store the database78of PoW algorithms and function as a networked proof-of-work resource for clients. The PoW server124identifies and/or retrieves the corresponding PoW algorithm52. The PoW server124sends a query response to the miner system22, and the query response specifies or includes the corresponding PoW algorithm52. 
The miner system22may then execute the PoW algorithm52, as above explained. FIGS.33-34further illustrate the bit shuffle operation92, according to exemplary embodiments. The difficulty algorithm48and/or the proof-of-work algorithm52may perform the bit shuffle operation92to conduct any difficulty and/or proof-of-work. After the hashing algorithm54generates the hash value(s)60(as this disclosure above explains), exemplary embodiments may use the database table90to further deter GPU/ASIC usage. The difficulty algorithm48and/or the proof-of-work algorithm52may implement the bit shuffle operation92on the hash value(s)60. AsFIG.34illustrates, suppose the hash value60is represented by a sequence or series of 256 bit values. The difficulty algorithm48and/or the proof-of-work algorithm52may select an arbitrary portion or number220of the bit values. The difficulty algorithm48and/or the proof-of-work algorithm52, for example, may call, use, or execute a random number generator (RNG)222to generate one or more random numbers224. As an example, a first random number224may be used to select a random entry94in the database table90. The difficulty algorithm48and/or the proof-of-work algorithm52may then query the database table90for the random entry94and identify/retrieve the corresponding random bits96. The difficulty algorithm48and/or the proof-of-work algorithm52may then select and replace the arbitrary portion or number220of the bit values in the hash value60with the random bits retrieved from the entry94in the database table90. The bit shuffle operation92thus converts the hash value60and generates a resulting randomized hash value226. The difficulty algorithm48and/or the proof-of-work algorithm52may instruct or cause the miner system to repeat the bit shuffle operation92as many times as desired. The randomized hash value226may, or may not, have the same number of 256 bit values. The randomized hash value226may have less than, or more than, 256 bit values. 
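The bit shuffle operation92described above can be sketched as follows. The table contents, the 4-byte entry width, and the default round count are illustrative assumptions; for simplicity this sketch keeps the output at the input length, though as noted the randomized hash value may have fewer or more bits:

```python
import secrets

# Hypothetical database table of random bits: each entry 94 maps an index
# to a short bytes value 96 (a real table may have millions of entries).
DATABASE_TABLE = [secrets.token_bytes(4) for _ in range(256)]

def bit_shuffle(hash_value: bytes, rounds: int = 8) -> bytes:
    # Replace arbitrary portions of the hash value with random bits
    # retrieved from randomly selected table entries, repeating the
    # operation as many times as desired.
    value = bytearray(hash_value)
    for _ in range(rounds):
        random_bits = DATABASE_TABLE[secrets.randbelow(len(DATABASE_TABLE))]
        offset = secrets.randbelow(len(value) - len(random_bits) + 1)
        value[offset:offset + len(random_bits)] = random_bits
    return bytes(value)  # randomized hash value 226
```

Each round's table lookup is a random memory access, which is what defeats cache-friendly GPU/ASIC pipelines when the table is large.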
The randomized hash value226may have an arbitrary number of bit values. Once the specified or required number of bit shuffle operations92is complete, the difficulty algorithm48and/or the proof-of-work algorithm52may instruct or cause the miner system to determine the difficulty50and/or the PoW result42(as this disclosure above explains). FIGS.35-36further illustrate the database table90, according to exemplary embodiments. Exemplary embodiments may autonomously or automatically adjust the table byte size102(in bits/bytes) of the database table90to exceed the storage capacity or cache byte size104of the on-board processor cache memory100. The client-side blockchain mining application196, for example, may query the CPU36to determine the storage capacity or cache byte size104of the processor cache memory100. If the table byte size102consumed by the database table90exceeds the storage capacity or cache byte size104of the processor cache memory100, then perhaps no action or resolution is required. That is, the database table90requires more bytes or space than allocated to, or available from, the processor cache memory100(integrated/embedded L1, L2, and L3 SRAM/DRAM cache memory). Any cache read/write operation230will invalidate, thus forcing the processing component (whether a GPU, ASIC, or the CPU36) to incur a cache miss232and endure the cache latency234of requesting and writing blocks of data via the much-slower bus from the system/main memory38. The processing component (whether a GPU, ASIC, or the CPU36) stalls, thus negating the use of a faster GPU or ASIC. Exemplary embodiments may auto-size the database table90. When the client-side blockchain mining application196determines the storage capacity or cache byte size104of the processor cache memory100, the client-side blockchain mining application196may compare the storage capacity or cache byte size104to the table byte size102of the database table90. 
The storage capacity or cache byte size104of the processor cache memory100, for example, may be subtracted from the table byte size102of the database table90. If the resulting value (in bits/bytes) is positive (greater than zero), then the database table90exceeds the storage capacity or cache byte size104of the processor cache memory100. The client-side blockchain mining application196may thus determine a cache deficit236, ensuring the cache miss232and the cache latency234. Exemplary embodiments, however, may determine a cache surplus238. If the resulting value (in bits/bytes) is zero or negative, then the database table90is less than the storage capacity or cache byte size104of the processor cache memory100. Whatever the processing component (whether a GPU, ASIC, or the CPU36), some or even all of the database table90could be stored and retrieved from the processor cache memory100, thus giving an advantage to a faster processing component. The client-side blockchain mining application196may thus increase the table byte size102of the database table90. The client-side blockchain mining application196, for example, may add one (1) or more additional database rows240and/or one (1) or more additional database columns242. The client-side blockchain mining application196may increase the table byte size102of the database table90by adding additional entries94, with each added entry94specifying more random bits96. As an example, the client-side blockchain mining application196may call, use, or execute the random number generator222to generate the random number224and then add the additional database row(s)240and/or additional database column(s)242according to the random number224. Exemplary embodiments may thus continually or periodically monitor the storage capacity or cache byte size104of the processor cache memory100and the table byte size102of the database table90. 
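The cache comparison and resizing logic above can be sketched as follows. The 64-byte entry size and the example cache capacity are assumptions; a real implementation would add entries of random bits rather than zero bytes:

```python
# Sketch of the auto-sizing check: keep the table byte size 102 above the
# processor cache byte size 104 so lookups always miss the cache.
def resize_table(table: list, cache_byte_size: int, entry_bytes: int = 64) -> list:
    table_byte_size = len(table) * entry_bytes
    if table_byte_size > cache_byte_size:
        return table                     # cache deficit 236: no action required
    # Cache surplus 238: add entries until the table exceeds the cache.
    needed = (cache_byte_size - table_byte_size) // entry_bytes + 1
    return table + [bytes(entry_bytes)] * needed  # random bits in practice

table = resize_table([], cache_byte_size=32 * 1024 * 1024)  # e.g. a 32 MB L3
```

Run periodically, this check guarantees the cache deficit236persists even as processors ship with larger caches.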
The cache surplus238may trigger a resizing operation to ensure the database table90always exceeds the processor cache memory100. The database table90may be large. The above examples only illustrated a simple configuration of a few database entries94. In actual practice, though, the database table90may have hundreds, thousands, or even millions of rows and columns, perhaps producing hundreds, thousands, millions, or even billions of database entries94. Exemplary embodiments may repeatedly perform the bit shuffle operation92to suit any difficulty or proof-of-work strategy or scheme. The proof-of-work target scheme34, the difficulty algorithm48, and/or the proof-of-work algorithm52may each specify a minimum and/or a maximum number of bit shuffle operations that are performed. Exemplary embodiments may use the XOR/Shift random number generator (RNG)222coupled with the lookup database table90of randomized sets of bytes. The database table90may have any number of 256-byte tables combined and shuffled into one large byte lookup table. Exemplary embodiments may then index into this large table to translate the state built up while hashing into deterministic but random byte values. Using a 1 GB lookup table results in a RAM Hash PoW algorithm that spends over 90% of its execution time waiting on memory (RAM) rather than computing the hash. This means far less power consumption, and ASIC and GPU resistance. The ideal platform for PoW using a RAM Hash is a single-board computer like a Raspberry Pi 4 with 2 GB of memory. Any or all parameters may be specified. The size of the database table90may be specified in bits for the index, the seed used to shuffle the lookup table, the number of rounds to shuffle the table, and the size of the resulting hash. Because the LXRHash is parameterized in this way, as computers get faster and memory caches get larger, the LXRHash can be set to use 2 GB or 16 GB or more.
The memory bottleneck to computation is much easier to manage than attempts to find computational algorithms that cannot be executed faster and cheaper with custom hardware, or specialty hardware like GPUs. Very large lookup tables will blow the memory caches on pretty much any processor or computer architecture. The size of the database table90can be increased to counter improvements in memory caching. The number of bytes in the resulting hash can be increased for more security (greater hash space), without significantly more processing time. LXRHash may even be made fast by using small lookup tables. ASIC implementations for small tables would be very easy and very fast. LXRHash only uses iterators (for indexing), shifts, binary ANDs and XORs, and random byte lookups. The use case for LXRHash is Proof of Work (PoW), not cryptographic hashing. The database table90may have equal numbers of every byte value, shuffled deterministically. When hashing, the bytes from the source data are used to build offsets and state that are in turn used to map the next byte of source. In developing this hash, the goal was to produce very randomized hashes as outputs, with a strong avalanche response to any change to any source byte. This is the prime requirement of PoW. Because of the limited time to perform hashing in a blockchain, collision avoidance is important but not critical. More critical is ensuring that engineering the output of the hash isn't possible. Exemplary embodiments yield some interesting qualities. For example, the database table90may be any size, so making a version that is ASIC resistant is possible by using very big lookup tables. Such tables blow the processor caches on CPUs and GPUs, making the speed of the hash dependent on random access of memory, not processor power. Using a 1 GB lookup table, a very fast ASIC can only improve the roughly 10% of hashing time that is spent on computation.
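A toy sketch of such a byte-lookup hash helps make the mechanism concrete: a deterministically shuffled table holding equal counts of every byte value, and a mixing loop that uses only indexing, shifts, ANDs, XORs, and table lookups. The names, table size, and mixing constants below are illustrative assumptions, not the actual LXRHash.

```python
import random

def make_table(size_bits: int = 10, seed: int = 1) -> bytes:
    """Build a lookup table with equal counts of every byte value,
    shuffled deterministically by the seed (toy stand-in for table 90)."""
    size = 1 << size_bits
    table = bytearray(i & 0xFF for i in range(size))  # equal byte distribution
    random.Random(seed).shuffle(table)                # deterministic shuffle
    return bytes(table)

def byte_lookup_hash(data: bytes, table: bytes, hash_len: int = 8) -> bytes:
    """Toy hash using only shifts, XORs, ANDs, and random byte lookups:
    source bytes build up state that maps the next byte through the table."""
    mask = len(table) - 1                 # table size is a power of two
    state = 0
    out = bytearray(hash_len)
    for i, b in enumerate(data):
        state = ((state << 5) ^ (state >> 3) ^ table[(state ^ b) & mask]) & 0xFFFFFFFF
        out[i % hash_len] ^= table[state & mask]
    return bytes(out)
```

Making size_bits large enough that the table exceeds the processor cache is what turns a function like this from compute bound into memory bound.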
90% of the time hashing isn't spent on computation but is spent waiting for memory access. At smaller lookup table sizes, where processor caches work, LXRHash can be modified to be very fast. LXRHash would be an easy ASIC design as it only uses counters, decrements, XORs, and shifts. The hash may be altered by changing the size of the lookup table, the seed, and the size of the hash produced. Change any parameter and you change the space from which hashes are produced. The microprocessor in most computer systems accounts for 10× the power requirements of memory. If we consider PoW on a device over time, then LXRHash is estimated to reduce power requirements by about a factor of 10. Testing has revealed some optimizations. LXRHash is comparatively slow by design (to make PoW memory bound), but quite a number of use cases don't need PoW and really just need to validate that data matches the hash. So using LXRHash as a hashing function isn't as desirable as simply using it as a PoW function. The somewhat obvious conclusion is that in fact we can use Sha256 as the hash function for applications, and only use the LXR approach as a PoW measure. So in this case, what we do is change how we compute the PoW of a hash. So instead of simply looking at the high-order bits and saying that the greater the value the greater the difficulty (or the lower the value the lower the difficulty), we instead define an expensive function to calculate the PoW. Exemplary embodiments may break out PoW measures from cryptographic hashes. The advantage here is that what exactly it means to weigh PoW between miners can be determined apart from the hash that secures a blockchain. Also, a good cryptographic hash provides a much better base from which to randomize PoW even if we are going to use a 1 GB byte map to bound performance by DRAM access. And we could also use past mining, reputation, staking, or other factors to add to PoW at this point. PoW may be represented as a nice standard-sized value.
Because exemplary embodiments may use a function to compute the PoW, we can also easily standardize the size of the difficulty. Since bytes that are all 0xFF or all 0x00 are pretty much wasted, we can simply count them and combine that count with the following bytes. This encoding is compact and easily compared to other difficulties in a standard size with plenty of resolution. So, with PoW represented as a large number (the bigger, the more difficult), the following rules may be followed, where bit 0 is most significant and bit 63 is least significant: bits 0-3 hold the count of leading 0xFF bytes; and bits 4-63 hold bits of the following bytes. For example, given the hash ffffff7312334c442bf42625f7856fe0d50e4aa45c98d7a391c016b89e242d94, the difficulty is 37312334c442bf42. The computation counts the leading bytes with a value of 0xFF, then calculates the uint64 value of the next 8 bytes. The count is combined with the following bytes by shifting the 8 bytes right by 4, and adding the count shifted left by 60. As computing power grows, more significant bits of the hash can be used to represent the difficulty. At a minimum, difficulty is represented by 4 bits 0x0 plus the following 0+60 bits => 60 bits of accuracy. At the maximum, difficulty is represented by 4 bits 0xF plus the following 60 bits => 120+60=180 bits of accuracy. Sha256 is very well tested as a cryptographic function, with excellent waterfall properties (meaning odds are very close to 50% that any change in the input will flip any particular bit in the resulting hash). Hashing the data being mined by the miners is pretty fast. If an application chooses to use a different hashing function, that's okay as well. FIGS.37-40illustrate a table identifier mechanism, according to exemplary embodiments. When the miner system22communicates with the blockchain network server28, the blockchain network server28may specify the proof-of-work (“PoW”) target scheme34and/or the database table90that is required by the blockchain environment20.
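The encoding above can be checked directly; the function name is illustrative, but the arithmetic follows the text: count the leading 0xFF bytes, take the uint64 value of the next 8 bytes, shift it right by 4, and add the count shifted left by 60.

```python
def pow_difficulty(hash_hex: str) -> int:
    """Encode PoW difficulty: bits 0-3 (most significant) hold the count of
    leading 0xFF bytes, bits 4-63 hold bits of the following bytes."""
    h = bytes.fromhex(hash_hex)
    count = 0
    while count < 15 and h[count] == 0xFF:     # count must fit in 4 bits
        count += 1
    following = int.from_bytes(h[count:count + 8], "big")  # next 8 bytes as uint64
    return (following >> 4) | (count << 60)

# The worked example from the text:
d = pow_difficulty("ffffff7312334c442bf42625f7856fe0d50e4aa45c98d7a391c016b89e242d94")
assert d == 0x37312334c442bf42  # 3 leading 0xFF bytes, then 0x7312334c442bf426 >> 4
```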
For example, in order to reduce a memory byte size and/or programming line size of the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196, exemplary embodiments may only specify a table identifier250associated with the blockchain network's chosen or required difficulty and proof-of-work scheme. The table identifier250may be any alphanumeric combination, hash value, network address, website, or other data/information that uniquely identifies the database table90used by the blockchain environment20. The blockchain network server28may thus send the table identifier250(via the communications network26) to the miner system22. The table identifier250may be packaged as a downloadable component, parameter, or value with the client-side blockchain mining software application196. However, the table identifier250may additionally or alternatively be sent to the miner system22, such as the packetized message202that includes or specifies the table identifier250(explained with reference toFIGS.22-31). Because the table identifier250may be separately sent from the client-side blockchain mining software application196, the table identifier250may be dynamically updated or changed without downloading a new or updated client-side blockchain mining software application196. Exemplary embodiments may consult an electronic database252of tables. When the miner system22receives the table identifier250, the miner system22may use, call, and/or implement the database table90represented by the table identifier250. The miner system22may obtain, read, or retrieve the table identifier250specified by the client-side blockchain mining software application196. The miner system22may additionally or alternatively inspect, read, or retrieve the table identifier250from the message202. 
Once the table identifier250is determined, the miner system22may identify the corresponding database table90by querying the database252of tables for the table identifier250.FIG.37illustrates the electronic database252of tables locally stored in the memory device38of the miner system22. The database252of tables stores, references, or associates the table identifier250and/or the proof-of-work target scheme34to the corresponding database table90. The miner system22may thus identify and/or retrieve the database table90. The miner system22may then execute the difficulty algorithm48and/or the proof-of-work algorithm using the entries specified by the database table90(as this disclosure above explains). FIG.38illustrates remote retrieval.FIG.38illustrates the database252of tables remotely stored by a table server254and accessed via the communications network26. The table server254may be the only authorized source for the database table90. The table server254may thus operate within the blockchain environment20and provide the latest/current database table90for all miners in the blockchain network. The table server254, however, may be operated on behalf of an authorized third-party vendor or supplier that provides the database table90for all miners in the blockchain network. Once the miner system22determines the table identifier250, the miner system22may send a query to the network address associated with or assigned to the table server254. The query specifies the table identifier250. When the table server254receives the query, the table server254queries the electronic database252of tables for the table identifier250specified by the query. The table server254has a hardware processor and memory device (not shown for simplicity) that stores and executes a query handler software application. The query handler software application causes the table server254to perform a database lookup operation. 
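The lookup against the database252of tables amounts to resolving an identifier to stored table bytes. The dictionary, identifier strings, and placeholder contents below are hypothetical stand-ins for whatever store a host or table server actually uses.

```python
# Hypothetical stand-in for the electronic database 252 of tables: it maps a
# table identifier to the database table required by a blockchain's scheme.
TABLES = {
    "table-id-v1": b"\x00" * 16,   # placeholder table contents
    "table-id-v2": b"\x01" * 16,
}

def lookup_table(table_id: str) -> bytes:
    """Resolve a table identifier to its database table, as the miner or the
    remote table server would when handling a query."""
    try:
        return TABLES[table_id]
    except KeyError:
        raise LookupError(f"no database table registered for id {table_id!r}")
```

Because miners only ever exchange the identifier, swapping the registered table contents changes the difficulty and proof-of-work behavior without redistributing the mining application.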
The table server254identifies the corresponding database table90by querying the database252of tables for the table identifier250. The table server254generates and sends a query response to the network address associated with or assigned to the miner system22, and the query response includes or specifies the database table90that is associated with the table identifier250. The miner system22may thus identify, download, and/or retrieve the database table90. Because the database252of tables may store or reference many different database tables, exemplary embodiments may dynamically switch or change the database table90to suit any objective or performance criterion. Exemplary embodiments may thus need only specify the table identifier250, and the table identifier250may be dynamically changed at any time. The blockchain environment20may flexibly switch, change, and evaluate different database tables, merely by changing or modifying the table identifier250. The blockchain network may thus experiment with different database tables, different difficulty algorithms48, and/or different proof-of-work algorithms52with little or no impact or effect on hashing. Should an experimental scheme prove or become undesirable, for whatever reason(s), the blockchain environment20(such as the blockchain network server28) may distribute, assign, or restore a new/different table identifier250(perhaps by updating the client-side blockchain mining software application196and/or distributing/broadcasting the message202, as this disclosure above explains). The blockchain environment20may thus dynamically change the database table90, which may concomitantly change the difficulty algorithm48and/or the proof-of-work algorithm52, for quick evaluation and/or problem resolution. FIG.39further illustrates table services. Here the table server254may serve different blockchain environments20. For example, the table server254may serve miners22aoperating in blockchain environment20a.
The table server254may also serve miners22boperating in blockchain environment20b. The table server254may thus be operated on behalf of a table service provider256that provides a table service258to clients and blockchain networks. The table service provider256may receive, generate, and/or store different database tables90, perhaps according to a client's or a blockchain's specification. Each different table90may have its corresponding unique table identifier250. So, whatever the proof-of-work (“PoW”) target scheme (e.g.,34aand34b) and/or the blockchain environment20a-b, the table server254may offer and provide the corresponding database table90. The table service provider256and/or the table server254may thus be an authorized provider or participant in the blockchain environments20a-b. A first miner system22a, for example, operating in the blockchain environment20a, may request and retrieve the database table90athat corresponds to the proof-of-work (“PoW”) target scheme34a. A different, second system22b, operating in the blockchain environment20b, may request and retrieve the database table90bthat corresponds to the proof-of-work (“PoW”) target scheme34b. Miners may query the table server254(perhaps by specifying the corresponding table ID250) and retrieve the corresponding database table90. The table service provider256may thus specialize in randomized/cryptographic database tables, and the table server254may serve different blockchain networks. FIG.40further illustrates table services. The blockchain environment20and/or the miner system22may outsource the bit shuffle operation92to the table service provider256. Once the miner system22determines or receives the hash value(s)60(generated by the hashing algorithm54), the miner system22may outsource or subcontract the bit shuffle operation92to the table server254.
The client-side blockchain mining software application196may thus cause or instruct the miner system22to generate a bit shuffle service request that is sent to the table service provider256(such as the IP address assigned to the table server254). The bit shuffle service request may specify or include the hash values60. The bit shuffle service request may additionally or alternatively specify or include the table identifier250. The bit shuffle service request may additionally or alternatively specify or include a website, webpage, network address location, or server from which the hash values60may be downloaded, retrieved, or obtained to perform the bit shuffle operation92. While the table service provider256may utilize any mechanism to provide the bit shuffle operation92,FIG.40illustrates a vendor's server/client relationship. The miner system22sends the bit shuffle service request to the table server254that is operated on behalf of the table service provider256. When the table server254receives the bit shuffle service request, the table server254may query the database252of tables for the table identifier250specified by the bit shuffle service request. The table server254identifies the corresponding database table90. The table server254performs the bit shuffle operation92using the hash value(s)60specified by, or referenced by, the bit shuffle service request. The table server254generates and sends a service result to the network address associated with or assigned to the miner system22, and the service result includes or specifies data or information representing the randomized hash value(s)226. The miner system22may then execute, or outsource, the difficulty algorithm48and/or the proof-of-work algorithm52using the randomized hash value(s)226(as this disclosure above explained). Exemplary embodiments improve computer functioning. The database table90adds cryptographic security by further randomizing the hash value(s)60generated by the hashing algorithm54. 
Moreover, because the database table90may be remotely located and accessed, exemplary embodiments may only specify the table identifier250. The memory byte size consumed by the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196is reduced. That is, the blockchain network server28need not send the entire software program, code, or instructions representing the database table90used by the blockchain environment20. The blockchain environment20, the blockchain network server28, and/or the proof-of-work (“PoW”) target scheme34need only specify the much smaller byte-sized table identifier250. The blockchain environment20need not be burdened with conveying the database table90to the miner system22and to other mining nodes. The blockchain environment20and the communication network26convey less packet traffic, so packet travel times and network latency are reduced. Moreover, especially if the miner system22outsources table operations, the miner system22is relieved from processing/executing the bit shuffle operation92and consumes less electrical power. Again, then, a faster and more expensive graphics processor or even ASIC will not speed up the proof-of-work operation. The conventional central processing unit36is adequate, reduces costs, and promotes democratic mining. Exemplary embodiments improve cryptographic security. If the blockchain environment20, the proof-of-work (“PoW”) target scheme34and/or the client-side blockchain mining software application196specifies use of the database table90, only authorized miners may have access to the actual entries referenced by the database table90. That is, if the miner system22is required to perform, implement, or even execute the bit shuffle operation92, the miner system22must have access to the correct database table90. An unauthorized or rogue entity, in other words, likely could not perform the bit shuffle operation92without access to the correct database table90.
Moreover, if the bit shuffle operation92is remotely performed from the miner system22(such as by the table server254, as above explained), perhaps not even the authorized miner system22need have access to the database table90. So, even if the miner system22is authorized to mine or process blockchain transactions32in the blockchain environment20, the authorized miner system22may still be blind to the database table90. The authorized miner system22, in other words, is operationally reliant on the table server254to perform the bit shuffle operation92that may be required for the difficulty algorithm48and/or for the proof-of-work algorithm52. The miner system22simply cannot solve the mathematical puzzle62without the table service258provided by the table server254. The database table90may thus be proprietary to the blockchain environment20, but unknown and unavailable to even the authorized miner system22for added cryptographic security. FIG.41illustrates agnostic blockchain mining, according to exemplary embodiments. As the reader may now realize, the miner system22may be agnostic to the blockchain environment20. Because the miner system22may be agnostic to encryption, difficulty, and proof-of-work operations, the miner system22may process or mine the blockchain transactions32in multiple blockchain environments20. That is, because the conventional CPU36is adequate for mining blockchain transactions32, no specialized ASIC is required for any particular blockchain environment20. The miner system22may thus participate in multiple blockchain environments20and potentially earn multiple rewards. The miner system22, for example, may participate in the blockchain environment20aand mine the blockchain transactions32asent from the blockchain network server28ato authorized miners in blockchain network260a.
The miner system22may thus mine the blockchain transactions32aaccording to the proof-of-work (“PoW”) target scheme34athat is specified by the blockchain environment20a, the blockchain network server28a, and/or the blockchain network260a. The miner system22, however, may also participate in the blockchain environment20band mine the blockchain transactions32bsent from the blockchain network server28bto authorized miners in blockchain network260b. The miner system22may thus mine the blockchain transactions32baccording to the proof-of-work (“PoW”) target scheme34bthat is specified by the blockchain environment20b, the blockchain network server28b, and/or the blockchain network260b. Because exemplary embodiments require no specialized GPU or ASIC, the miner's conventional CPU36may be adequate for mining operations in both blockchain environments20aand20b. The miner system22may thus download, store, and execute the client-side blockchain mining software application196athat is required to mine the blockchain transactions32ain the blockchain environment20a. The miner system22may also download, store, and execute the client-side blockchain mining software application196bthat is required to mine the blockchain transactions32bin the blockchain environment20b. The miner system22may thus call, execute, coordinate, or manage the encryption algorithm46a, the difficulty algorithm48a, and/or the proof-of-work (“PoW”) algorithm52aaccording to the proof-of-work (“PoW”) target scheme34aspecified by the blockchain environment20a. The miner system22may also call, execute, coordinate, or manage the encryption algorithm46b, the difficulty algorithm48b, and/or the proof-of-work (“PoW”) algorithm52baccording to the proof-of-work (“PoW”) target scheme34bspecified by the blockchain environment20b.
Because exemplary embodiments require no specialized GPU or ASIC, the miner system22has the hardware processor capability and performance (e.g., clock speed, processor core(s)/thread(s) count, cycles, the on-board cache memory100, thermal profile, electrical power consumption, and/or chipset) to mine in both blockchain environments20aand20b. The miner system22may participate in multiple blockchain environments20, thus having the capability to earn additional rewards, while also being less expensive to purchase and to operate. FIGS.42-43illustrate virtual blockchain mining, according to exemplary embodiments. Because the miner system22may be agnostic to the blockchain environment20, the miner system22may outsource or subcontract mining operations to a virtual machine (or “VM”)262. For example, the miner system22may implement different virtual machines262, with each virtual machine262dedicated to a particular blockchain environment20. The miner system22, for example, may assign the virtual machine262ato mining the blockchain transactions32asent from the blockchain network server28a. The miner system22may assign the virtual machine262bto mining the blockchain transactions32bsent from the blockchain network server28b. The miner system22may thus be a server computer that participates in multiple blockchain environments20and potentially earns multiple rewards. The miner system22may provide virtual mining resources to multiple blockchain environments20, thus lending or sharing its hardware, computing, and programming resources. WhileFIG.42only illustrates two (2) virtual machines262aand262b, in practice the miner system22may implement any number or instantiations of different virtual machines262, with each virtual machine262serving or mining one or multiple blockchain environments20. 
So, when the miner system22receives the blockchain transactions32, the miner system22may inspect the blockchain transactions32for the proof-of-work (“PoW”) target scheme34that identifies the corresponding encryption, difficulty, and PoW scheme (such as by consulting the databases70,74, and78, as above explained). The miner system22may additionally or alternatively inspect the blockchain transactions32for the identifiers200,210,214, and250(as this disclosure above explains). Once the blockchain environment20is determined, the miner system22may then assign the blockchain transactions32to the corresponding virtual machine262. FIG.43illustrates a database lookup. When the miner system22determines the PoW scheme34and/or any of the identifiers200,210,214, and250, the miner system22may identify the corresponding virtual machine262. For example, the miner system22may consult an electronic database264of virtual machines. While the database264of virtual machines may have any structure,FIG.43illustrates a relational table266having entries that map or associate the PoW scheme34and/or any of the identifiers200,210,214,250to the corresponding virtual machine262. The miner system22may thus query the electronic database264of virtual machines for any of the PoW scheme34and/or any of the identifiers200,210,214,250and determine the corresponding virtual machine262. Once the virtual machine262is identified (e.g., a memory address or pointer, processor core, identifier, network address and/or service provider, or other indicator), the miner system22may assign the blockchain transactions32to the virtual machine262for mining. The miner system22may thus serve many blockchains. The miner system22, for example, may mine BITCOIN® and other cryptographic coin transactional records. However, the miner system22may also nearly simultaneously mine financial records sent from or associated with a financial institution, inventory/sales/shipping records sent from a retailer, and transactional records sent from an online website.
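The relational table266reduces to a mapping from a scheme or identifier to a virtual machine, and the assignment step to a lookup. All names below are hypothetical.

```python
# Hypothetical mapping (relational table 266) from a PoW scheme or message
# identifier to the virtual machine dedicated to that blockchain environment.
VM_BY_SCHEME = {
    "pow-scheme-a": "vm-a",
    "pow-scheme-b": "vm-b",
}

def route_to_vm(scheme_id: str) -> str:
    """Return the virtual machine assigned to the environment's PoW scheme,
    so incoming blockchain transactions can be handed to it for mining."""
    vm = VM_BY_SCHEME.get(scheme_id)
    if vm is None:
        raise LookupError(f"no virtual machine assigned to {scheme_id!r}")
    return vm
```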
The miner system22may participate in multiple blockchain environments20, thus having the capability to earn additional rewards, while also being less expensive to purchase and to operate. FIG.44is a flowchart illustrating a method or algorithm for mining the blockchain transactions32, according to exemplary embodiments. The inputs24(such as the blockchain transactions32) may be received (Block300). The proof-of-work (“PoW”) target scheme34may be received (Block302). The message202may be received (Block304). The identifiers200,210,214, and/or250may be received (Block306). The block40of data may be generated (Block308). The encryption algorithm46(such as the hashing algorithm54) may be identified (Block310) and the output56(such as the hash values60) may be generated by encrypting/hashing the blockchain transactions32and/or the block40of data (Block312). The encryption/hashing service provider150may be identified and the blockchain transactions32and/or the block40of data outsourced (Block314). The output56(such as the hash values60) may be received from the encryption/hashing service provider150(Block316). The difficulty algorithm48may be identified (Block318), the database table90may be generated or identified, and the difficulty50may be generated by executing the difficulty algorithm48(Block320). The difficulty service provider156may be identified and the difficulty calculation outsourced (Block322). The difficulty50may be received from the difficulty service provider156(Block324). The PoW algorithm52may be identified (Block326), the database table90may be generated or identified, and the PoW result42determined by executing the PoW algorithm52(Block328). The PoW service provider120may be identified and the PoW calculation outsourced (Block330). The PoW result42may be received from the PoW service provider120(Block332). The output56(such as the hash values60), the difficulty50, and/or the PoW result42may be compared to the PoW target scheme34(Block334). 
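The overall flow of FIG.44 (hash, evaluate against the target scheme, change, and repeat) can be compressed into a toy loop. This sketch collapses the separate difficulty and PoW algorithms into a single numeric comparison and uses SHA-256 purely for brevity; it is not the patented scheme.

```python
import hashlib
import itertools

def mine(block_data: bytes, target: int) -> tuple:
    """Toy mining loop: hash the block with a changing nonce, compare the
    result against the target, and repeat until the target is satisfied."""
    for nonce in itertools.count():
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest[:8], "big") < target:  # PoW target satisfied
            return nonce, digest
        # otherwise implement a change (a new nonce) and repeat

nonce, digest = mine(b"blockchain transactions", target=2**60)
assert int.from_bytes(digest[:8], "big") < 2**60
```

The first miner whose loop terminates would then submit the result for verification, as the surrounding text describes.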
Exemplary embodiments may win the block40of data. If the output56, the difficulty50, and/or the PoW result42satisfy the PoW target scheme34, then the miner system22may submit the output56, the difficulty50, and/or the PoW result42to the blockchain network server28. The miner system22may itself determine if the miner system22is the first to satisfy the PoW target scheme34, or the miner system22may rely on the blockchain network server28to determine the first solution. When the miner system22is the first solver, the miner system22earns the right to add the block40of data to the blockchain64. However, if the PoW target scheme34is not satisfied, the miner system22implements a change or modification and repeats. FIG.45is a schematic illustrating still more exemplary embodiments.FIG.45is a more detailed diagram illustrating a processor-controlled device350. As earlier paragraphs explained, the miner system22may be any home or business server/desktop160, laptop computer162, smartphone164, tablet computer166, or smartwatch168, as exemplary embodiments allow these devices to have adequate processing and memory capabilities to realistically mine and win the block40of data (as explained with reference toFIG.18). Moreover, exemplary embodiments allow any CPU-controlled device to realistically, and profitably, process the blockchain transactions32, thus allowing networked appliances, radios/stereos, clocks, tools (such as OBDII diagnostic analyzers and multimeters), HVAC thermostats and equipment, network switches/routers/modems, and electric/battery/ICE engine cars, trucks, airplanes, construction equipment, scooters, and other vehicles170. Exemplary embodiments may be applied to any signaling standard. Most readers are familiar with the smartphone164and mobile computing.
Exemplary embodiments may be applied to any communications device using the Global System for Mobile (GSM) communications signaling standard, the Time Division Multiple Access (TDMA) signaling standard, the Code Division Multiple Access (CDMA) signaling standard, the “dual-mode” GSM-ANSI Interoperability Team (GAIT) signaling standard, or any variant of the GSM/CDMA/TDMA signaling standard. Exemplary embodiments may also be applied to other standards, such as the I.E.E.E. 802 family of standards, the Industrial, Scientific, and Medical band of the electromagnetic spectrum, BLUETOOTH®, low-power or near-field, and any other standard or value. Exemplary embodiments may be physically embodied on or in a computer-readable storage medium. This computer-readable medium, for example, may include CD-ROM, DVD, tape, cassette, floppy disk, optical disk, memory card, memory drive, and large-capacity disks. This computer-readable medium, or media, could be distributed to end-subscribers, licensees, and assignees. A computer program product comprises processor-executable instructions for processing or mining the blockchain transactions32, as the above paragraphs explain. While the exemplary embodiments have been described with respect to various features, aspects, and embodiments, those skilled and unskilled in the art will recognize the exemplary embodiments are not so limited. Other variations, modifications, and alternative embodiments may be made without departing from the spirit and scope of the exemplary embodiments.
DETAILED DESCRIPTION An improved reference linkage data element, the “space-time” link, or STLink, fulfills these functions: a reference, externalized from the host system and sent to parties, for a processed event's data plus any related context; a mechanism (e.g., means) to locate one or more keys for the system to look up the corresponding blockchain state, which provides the immutable proof of the event data recorded; and the attribution of an STLink, and therefore of the event data recorded, to one or more parties, i.e., ownership. The STLink owner has the right to claim payments, goods, or services implicated or indicated by the data recorded. An incoming event processed by the system can contain an arbitrary amount of data. This data is interpreted and processed by an application executing on the host system. At event processing, an arbitrary amount of context may be bound at the application's discretion. Examples are the determined identities of the parties involved and data from prior event processing contexts, i.e., other STLink reference data elements. This data is stored and organized by the data structure method in various embodiments. FIG. 1 is a block diagram of an example system. It comprises, but is not limited to, a host system 100, one or more blockchain subsystems (e.g., Enterprise Blockchain subsystem 150), and an external public blockchain system 160, here illustrated as the public Ethereum blockchain 160. The host system 100 comprises one or more host databases 104, one or more host main servers 102, and one or more Web/API servers 106. The blockchain subsystem 150 comprises one or more blockchain processing nodes 152, 154. These nodes provide the host system 100 with integration access to the functions of the external public blockchain 160 and its processing nodes 166, 168, mining nodes 162, 164, and the blockchain state data (e.g., stored on Ethereum Database 170), which is a logical representation of the blockchain's immutable data store.
Users 110 access the host system 100 from a browser 112 running on a desktop computer, from a mobile Android app or a browser running on an Android device 114, or from a mobile iOS app or a browser on an iOS device 116. In use, the host system 100, the blockchain subsystem 150, and the public blockchain 160 interoperate to continuously perform steps of a method including, but not limited to, raising, detecting, receiving, and processing events triggered by the user interactions and by the operation of the system. The system 100 is configured to extract data from the events to create one or more statements, each optionally assigned a unique number, and to create one or more assertions, also each assigned a unique number. An assertion binds the fresh set of statements with assertions or statements from earlier events' processing and any involved parties' identities determined. The system 100 assigns one or more parties as an assertion's owner. The system 100 records the assertion(s) on the blockchain by submitting and completing one or more successful blockchain transactions. The transaction receipt is stored by the system 100 along with its corresponding assertion in the host database 104. The main server 102 obtains a receipt after a transaction is mined successfully. The identity of a party is an STLink created at a prior time. A party is typically a person but can be an intelligent software entity capable of interacting with users or other entities in the online ecosystem. It may reside in the host system 100, in the blockchain subsystem 150, or in an external system that communicates with the system via an API server 106. To satisfy proof-of-origin, it is important to be able to prove that the unique numbers assigned were created by the host system 100. The unique numbers therefore are subject to the same irrevocability and auditability requirements, i.e., they also need to be mined onto a blockchain. This can be done in-situ at event processing time, or the numbers can be pre-mined in pools by the system.
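The event-to-assertion cycle described above can be sketched as follows. This is a minimal illustration under stated assumptions: `submit_transaction` stands in for the blockchain subsystem and returns a fabricated receipt, and all helper names are hypothetical, not from the specification:

```python
import hashlib
import json
import time

def new_unique_number() -> str:
    # Exemplary unique number: a SHA256 hash of the Unix epoch,
    # matching the style of the unique numbers in the examples below.
    return hashlib.sha256(str(time.time_ns()).encode()).hexdigest()

def submit_transaction(payload: str) -> dict:
    # Stand-in for mining the payload onto a blockchain; returns a fake receipt.
    return {"txHash": "0x" + hashlib.sha256(payload.encode()).hexdigest()}

def process_event(event_data: dict, prior_refs: list) -> dict:
    # 1. Extract the event data into a statement and assign it a unique number.
    statement = {"uid": new_unique_number(), "data": event_data}
    # 2. Create an assertion binding the new statement with prior statements/identities.
    assertion = {"uid": new_unique_number(),
                 "data": {"assertion": [statement] + prior_refs}}
    # 3. Record the assertion on-chain and store the receipt alongside it.
    receipt = submit_transaction(json.dumps(assertion, sort_keys=True))
    assertion["receipt"] = receipt
    return assertion

recorded = process_event({"first": "John", "last": "Doe"}, prior_refs=[])
```

A real deployment would persist both the assertion and its receipt to the host database and raise a separate event for each unique-number assignment, as the flowcharts of FIG. 10 and FIG. 11 describe.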
By allocating numbers from a pre-mined pool for use as identities, the latency of the overall event processing cycle can be reduced significantly. In a variation, the unique-number allocation event is mined on a blockchain individually, or it may be done in bulk based on criteria including, but not limited to, the interval elapsed since allocation, the count of allocated numbers, the SLA tier associated with the particular events' processing, and one or more custom-defined criteria. In another variation, more than one blockchain provider may be concurrently integrated to store the statements and assertions, or a subset selected by various criteria. An online marketing ecosystem utilizes this invention where one class of users, vendors, who make products or offer services, are aided by another class of users, marketers, to promote the products and services to general consumers. A vendor agrees to compensate a marketer for successful achievement of milestones defined in the offer. An STLink records a vendor's publishing of an offer, another STLink records a marketer accepting a published offer to promote, and yet another STLink records a consumer's action on a promoted offer, for example, the purchase of a product. While these are the key target events recorded, an STLink can be used to record any and all events that occur in the system from any use case of interest. Another use case for an STLink is to record one or more measures, determined dynamically by the system, of a marketer's promotional reach and ability to market select product or service categories. This measure can be based on information extracted from bulk database scrapes/retrievals, including but not limited to accessing data sets held by third parties through various application programming interfaces. In another use case, an STLink records a published offer's credibility rating.
This measure can also change dynamically based on information extracted by the system from interactions of users and consumers on offers, including but not limited to the number of views, discussions/comments added, and number of conversions, among others. In yet one more use case, the STLink records the compensation rate paid out to a marketer upon achieving offer milestones. The rate can be based on criteria including but not limited to: the number of clicks, number of conversions, and number of user-to-user or consumer-to-consumer referrals, among others. This rate formula may be individualized to an individual marketer and, as permitted by the offer creator, can be changed dynamically based on the marketer's performance as determined by the system or at the discretion of the offer creator. Tracking of these elements can be conducted on a webpage or a web browser, for example, through web beacons (e.g., cookies, pixel tags, page tags, JavaScript tags, among others). A challenge with prior approaches (e.g., non-blockchain approaches) is that disputes in respect of the counting methodology and counting evidence are common. Content publishers are incentivized to undercount and undercompensate content creators/referral originators, and the content creators/referral originators encounter difficulties in establishing evidence that the referral events for counting indeed occurred. This problem is exacerbated where the content publishers/ad networks store the event data on their own local storage, which is then subject to potential tampering and repudiation. As noted above, blockchain solutions such as Ethereum may be helpful to provide a technical solution to the challenges noted above.
As described in various embodiments, a specific technical approach is proposed that integrates with the blockchain solutions in a technically robust way to efficiently (computationally) store certain information on the blockchains to establish time-stamped evidence that can be used to improve trust between both parties. As blockchain transactions are computationally expensive, an improved hybrid off-chain/on-chain solution is described that utilizes m-ary trees to provide a cross-referenceable data structure that has specific technical improvements to aid in ease of potential traversal. As described in variant embodiments, specific approaches are also described that aid in the reduction of overall data storage/payload sizes for the on-chain storage. FIG. 2 is a block diagram of an example system, according to some embodiments. A registered user 210, who is a vendor or a marketer, logs into the web server 240 from a desktop browser 220, from an Android mobile app 222, or from an iOS mobile app 224 via the public internet 230. The web server 240 retrieves and presents data items on the client 220, 222, or 224 from the database 246 via the main server 242. The user selects and invokes one or more actions on a data item presented in the client 220, 222, or 224. The action is sent to and arrives at the web server 240 as an incoming message event. The event is sent to and processed by one or more applications executing in the main server 242. Interfaces 274, 276, 278, and 280 are utilized for communications between computing components and devices. One or more STLinks are created as a result by the main server 242, and corresponding records are mined onto a blockchain 250 via the blockchain subsystem 244 and the public internet 230. FIG. 3 is a block diagram of the STLink 300, Assertion 320, and Statement 330, 340, 350, illustrative of their relationships, according to some embodiments. When an STLink 300 is externalized, it may optionally be assigned its own unique number 380 and is a reference 360 to an assertion 390.
An STLink is carried in a URL/URI and may be shared with multiple receiving parties. STLink ownership is not applicable when externalized. Internally, an STLink may be associated with zero or more owners. Any direct owners of an STLink can be associated with an identity embodied by an STLink's unique number, for instance. Associating an STLink with owner identities internal to the system may be achieved, for example, with a relational database schema and table data. If used only internally, the STLink's unique number 372 is optional, as its identity can be adopted from that 322 of the assertion 390, and therefore statement 320, it points 360 to. An owner in the system, when externalized, can also be represented by an STLink 400, see FIG. 4, in the system as illustrated in various embodiments. An STLink 300 is a reference 360 to an assertion 390 or a statement 320 in the system. A statement comprises a unique number 322, which is its non-externalized identity, and the data it asserts 324. This data 324 is a collection of items, each of which is one of: event data contained in-situ, data extracted and put into one or more statements 330, or an assertion (or a statement) 340 from earlier events' processing. An assertion 390 is a statement that has been successfully mined onto a blockchain; it has an associated transaction receipt 310. A mined statement results in an assertion. A statement 340 may comprise both a unique number 342 and a data items collection 344, or at least one of the two. If only one is present, one can use 330, 340 for identity reference usage, and 350 to collect data not requiring identification in the statement scope. If both are present, the unique number associates the data collection with an identity recognized by the system. In short, an STLink 300 refers to a logical m-ary tree structure 360, 368, 369, 370 where each of its nodes 320, 330, 340, or 350 is an assertion or a statement, each having an optional identification.
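The statement/assertion/STLink relationships described above can be sketched as a recursive data structure. This is a minimal illustration assuming JSON-like dicts for in-situ data items; the class names mirror the figure labels, but the field choices are assumptions, and the uids are truncated for brevity:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Statement:
    uid: Optional[str] = None   # unique number (322/342); optional
    # Data collection (324/344): in-situ items or child statements.
    data: List[Union[dict, "Statement"]] = field(default_factory=list)

@dataclass
class Assertion(Statement):
    tx_receipt: Optional[str] = None   # transaction receipt (310) from successful mining

@dataclass
class STLink:
    reference: Statement        # reference (360) to an assertion or statement
    uid: Optional[str] = None   # the STLink's own unique number (380); discretionary

def traverse(node: Statement):
    """Yield every node of the logical m-ary tree rooted at `node`;
    the complete traversal is the total statement of claims."""
    yield node
    for item in node.data:
        if isinstance(item, Statement):
            yield from traverse(item)

# A two-statement assertion represented as a 3-node tree (cf. FIG. 5).
john = Statement(uid="4c2f62...", data=[{"first": "John"}, {"last": "Doe"}])
enterprise = Statement(uid="1fe63d...", data=[{"name": "Weevr Inc."}])
registration = Assertion(uid="aa9507...", data=[enterprise, john], tx_receipt="0x8c80...")
link = STLink(reference=registration)
```

Because a child statement may itself contain statements or references, the tree depth is unbounded; `traverse` recurses exactly as the text describes for recovering the full set of claims behind an STLink.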
Where a node is an assertion or a statement that contains a data collection, the logical structure continues by recursing into it. The complete traversal of this tree is the total statement of claims attributable to the STLink. The following are exemplary statements and an assertion illustrated in JSON format. In these examples, the unique numbers assigned to each statement are exemplary, here each a SHA256 hash of the Unix epoch. These unique numbers have been previously mined onto a blockchain in pools by the system. The reason for the unique numbers' mining is so that their origination can be provably attributed to the system performing the blockchain transactions. Statements are mined into assertions at their respective processing, which is separate from the unique number pools' mining. This separation allows the system to decouple the processing timeliness wanted of the uniqueness assignment from the statements' mining onto a blockchain into assertions. This is a data-only statement 350. 350 {352 “data” : [{ “first” : “John” },{ “last”: “Doe” },{ “addr1”: “1122 Water Street” },{ “addr2”: “Vancouver, B.C.” },{ “addr3”: “V6C 2L1” },{ “profile”: [{ “occupation” : “self-employed” }] },{ “timestamp” : “1579569554516678”}]} Below is a statement 340 comprising a unique number 342 and a data collection 344. Section [0090] further elaborates on this example. 340 {342 “uid” : “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,344 “data” : {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}} Below is a statement with only a unique number 330. It has no associated data. This identity-only statement refers to a node in zero or more m-ary trees in the system, i.e., what is referenced may or may not pre-exist.
330 {332 “uid”: “723108119e11efd41e280f34d6813f962f1e0838494d1c813c1e28b84bd1d72c”} Assertion 390 is a statement 320 associated with the transaction receipt 310 from the former's successful mining onto a blockchain, here corresponding to an Ethereum blockchain transaction hash value. Note an assertion is a data-only statement. Below shows this exemplary assertion in JSON format. 390 {“data”: {“assertion”: [310 {“txHash”: “0xb302681699f1b88502c3ac4182b3d461ebfa7036170bee1027d2a0929f24f65f”},320 {“uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}}]}} Below is a compact representation employing an identity-only statement as a reference to John's identity statement, in lieu of statement data contained in-situ. {“data”: {“assertion”: [{“txHash”: “0xb302681699f1b88502c3ac4182b3d461ebfa7036170bee1027d2a0929f24f65f”},{“reference”: {“uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”}}]}} FIG. 4 through FIG. 9 to follow are all exemplary illustrations of the STLink concept of FIG. 3, each illustrative of an exemplary business domain use case, in this case online marketing, highlighting the usage variations provided by the STLink. FIG. 4 is a block diagram of an exemplary STLink instance 400, illustrative of a statement 420 by the system's recording of an identity assignment event to a user (“John”), according to some embodiments. The user's identity is the assigned unique number 422. It includes the user's data items 424, for example first name, last name, address, phone number, etc., and a timestamp of when the record was processed. Below is a representation of 420 in JSON format.
420 {422 “uid” : “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,424 “data” : {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}} The record is mined onto a block and the system obtains a transaction hash from receipt 410, here shown as the transaction's hash value from the Ethereum blockchain. Below is a representation in JSON format: 490 {“data”: {“assertion”: [410 {“txHash”: “0xb302681699f1b88502c3ac4182b3d461ebfa7036170bee1027d2a0929f24f65f”},420 {422 “uid” : “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,424 “data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}}]}} Again, the assertion in compact JSON representation: 490 {“data”: {“assertion”: [410 {“txHash”: “0xb302681699f1b88502c3ac4182b3d461ebfa7036170bee1027d2a0929f24f65f”},420 {“reference”: {“uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”}}]}} An STLink is created to communicate the assertion that John is recognized as an identity in the system (assertion 490). In some embodiments, the STLink created can be carried in a URI/URL and, when clicked, logic running in the web page or app returns the STLink contents to the system in a URL parameter. In some embodiments, a hash of the JSON representation below is used in the URI/URL. Below are two exemplary STLinks. The first targets a scenario where identifying the source that clicked on an STLink is not of interest to the system. The source of the click is characterized by data available from the browser or app environment about the user that can be relayed by the page logic/app, for example: IP address, geolocation, user agent, and other data.
STLink in compact JSON representation, for assertion 490 above. Note this STLink has no unique number and therefore has no identity. 400 { 460 “data”: { “reference”: {422 “uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”}}} In the second targeted scenario, an STLink is itself assigned a unique number 480 so that each click by the characterized click source may be associated with this number. This association is subsequently used by the system to correlate and track the click source against the identities and claims pointed to by the STLink. 400 { 480 “uid”: “8252ea277ebbe5a99e7368c1a0fa96c3f5105b3c03c5dd11f555490ee20ca335”,460 “data”: { “reference”: { 422 “uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”}}} Note the use 470 of a unique number 480 in an STLink is discretionary and dependent on the particular business application or system goal. In both cases above, the unique number 422, in this case of John's identity statement, is the target of the STLink reference 460. When the STLink is received by the system, it performs business processing informed by identities and statements extracted from the traversal of an m-ary tree of statements, recursed into, whose root is indicated by the STLink's referenced assertion/statement. As illustrated above, note that the structure of an STLink is also that of a statement. Because of this, where provable auditability is also wanted for an STLink in some embodiments, it may also be mined, i.e., the STLink statement becomes an STLink assertion. FIG. 5 is a block diagram of an exemplary STLink instance 500 having a unique number 580, illustrative of an assertion 590 by the system's recording of a statement 520 of the user registration event for John, whose identity was previously asserted by statement 530, to Enterprise A, previously asserted from statement 540, according to some embodiments. A unique number 522 is assigned to this user registration event 520.
The statement 520 is mined onto a block and the system obtains a transaction receipt 510. This is an example of a two-statement (at leaf level) assertion represented as a 3-node tree, if the child statements' data collections do not contain any STLinks or identity-only statements. If they do, while it remains a two-statement assertion, the depth of the tree increases. Below is a JSON representation of statement 520. 520 {522 “uid”: “aa9507543b63b211aea4983f4becbd7a9382d8101ea22d4f3e68a7ffeead6568”,524 “data”: {“user_registration”: [540 {542 “uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,544 “data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}},530 {532 “uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,534 “data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516690”}]}},{“timestamp”: “1579568668483853”}]}} With the assertion in JSON: {“data”: {“assertion”: [510 {“txHash”: “0x8c8052f25766158d56cdfa9c23b1b13068cbe694d0eb9feaa787c5f918062851”},{“reference”: {“uid”: “aa9507543b63b211aea4983f4becbd7a9382d8101ea22d4f3e68a7ffeead6568”}}]}} And an STLink for 520 may be created: {“data”: { “reference”: {“uid”:“aa9507543b63b211aea4983f4becbd7a9382d8101ea22d4f3e68a7ffeead6568”}}} In the example illustration above, the statement 520 contains the data content of its child statements 530, 540. In a variation illustrated by FIG. 6, they are replaced by their respective hashes from the blockchain transaction receipts. This allows the host system to avoid data duplication in the host database, at the cost of a receipt-to-assertion-content lookup if the latter is required by the application context during processing.
FIG. 6 is a block diagram of an exemplary STLink instance 600 having a unique number 602, illustrative of an assertion 690 by the system's recording of a user (“John”) registration event, according to some embodiments. In this variation, in contrast to assertion 520 of FIG. 5, the blockchain transaction receipts 670, 680 (respective hash values) of child assertions 630 and 640, respectively, are the data items collected 656 in assertion 620 instead of their actual data content. As with 520, assertion 690 remains a two-statement assertion (at leaf level) and a three-node tree. Below is a JSON representation of this variation. 620 {622 “uid”: “aa9507543b63b211aea4983f4becbd7a9382d8101ea22d4f3e68a7ffeead6568”,624 “data”: {“user_registration”: [640 {“reference” : { “uid”:“1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”},“data”: {“txHash”:“0x56ad94ef9b3f4c67cca0b7ed720e23bb10ea16e76da93b6fa825de03f670d360”}},630 {“reference” : {“uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”},“data”: {“txHash”:“0x8c8052f25766158d56cdfa9c23b1b13068cbe694d0eb9feaa787c5f918062851”}},{“timestamp”: “1579568668483853”}]} It is possible that some statements will never need to be mined onto a blockchain because their contents do not require provable auditability in the operation of the enterprise. In that case, no transaction receipts apply. In yet another variation, a child data-only statement is hashed and the hash value is included in the parent statement's data collection instead of the actual data. Below is a JSON representation of this variation using content hashes.
620 {622 “uid”: “aa9507543b63b211aea4983f4becbd7a9382d8101ea22d4f3e68a7ffeead6568”,624 “data”: {“user_registration”: [540 {“uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“contentHash”:“d0cd1edce7ae0ae18deff7027cb1c1f0c5f580522b83cb5c9aa82332fb43d248”}},530 {“uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“contentHash”:“723415ebb09218774a02554bab7fabba8ee79905b21a8e257b37e325ce6ff17a”}},{“timestamp”: “1579568668483853”}]}} As illustrated above, statement 540 hashes to the value below using SHA256: 540 {“uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}} SHA256 hash of statement 540: d0cd1edce7ae0ae18deff7027cb1c1f0c5f580522b83cb5c9aa82332fb43d248 And similarly for statement 530: 530 {“uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516690”}]}} SHA256 hash of statement 530: 723415ebb09218774a02554bab7fabba8ee79905b21a8e257b37e325ce6ff17a FIG. 7 is a block diagram of an exemplary STLink instance 700 having a unique number 780, illustrative of an assertion 790 by the system's recording of an offer publish statement 720, the data of which is captured as statement 724, according to some embodiments, and comprises statements 726, 730, and 740. Assertion 790 is a statement with three child nodes (at leaf level) and a four-node tree.
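The content-hash variation above can be sketched as follows. This is a minimal illustration assuming a canonical JSON serialization (fixed key order and no insignificant whitespace), a detail the text leaves open; consequently the hash values produced will not match the exemplary values shown, and the helper names are illustrative:

```python
import hashlib
import json

def content_hash(statement: dict) -> str:
    # Hash a canonical serialization: sorted keys, compact separators,
    # so the same statement always hashes to the same value.
    canonical = json.dumps(statement, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def compact_child(statement: dict) -> dict:
    # Replace a child statement's data collection with its content hash,
    # keeping the uid so the full content can be looked up off-chain.
    return {"uid": statement["uid"], "data": {"contentHash": content_hash(statement)}}

john = {"uid": "4c2f62...", "data": {"person": [{"first": "John"}, {"last": "Doe"}]}}
compacted = compact_child(john)
```

The parent statement then carries `compacted` in its data collection, trading on-chain/duplicate storage for a hash-keyed lookup when the full content is needed.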
Below is a JSON representation illustrated with data content populated from all child statements: Offer123726, John's identity730, Weevr Inc. identity740. 720{“uid”: “3a4269309ec24459fe26724c6ed85b76b3fd001934771b76cb60971dd4cc77fc”,“data”: {“published_offer”: [730{“vendor”: {“uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“age”: “42”},{“status”: “single”},{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}}},740{“publisher”: {“uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}}},726{“offer”: {“data”: {“offer”: [{“name”: “Offer 123”},{“legal_reference”: “AA0322K12201”},{“valid_not_before”: “1579598500”},{“valid_not_after”: “1579569800”},{“taxonomy_1”: “all,forsale,shoes,brand,nike”},{“taxonomy_2”: “all,forsale,shoes,sports,basketball”},{“influencer_black_list”: [“9c438fb9790e5eb542527b9be8b1222806716f7839283f7f2faa784e87fff1fe”,“8252ea277ebbe5a99e7368c1a0fa96c3f5105b3c03c5dd11f555490ee20ca335”]},{“compensation”: [{“payout_max”: “100”},{“currency”: “cad”}]},{“timestamp”: “1579568669000321”}]}}},{“timestamp”: “1579568669268358”}]}} Below, its compact representation in JSON for the “Published Offer 123” statement for its vendor and publisher fields, for illustration. Here the “Offer 123” statement is illustrated as remaining in-situ contained data of “Published Offer 123” statement. Alternatively, a compact format for “Offer 123” statement using a content hash may be substituted for additional compactness, as described earlier. 
{“uid”: “3a4269309ec24459fe26724c6ed85b76b3fd001934771b76cb60971dd4cc77fc”,“data”: {“published_offer”: [{“vendor”: {“reference”: {“uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”}}},{“publisher”: {“reference”: {“uid”:“1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”}}},{“offer”: {“data”: {“offer”: [{“name”: “Offer 123”},{“legal_reference”: “AA0322K12201”},{“valid_not_before”: “1579598500”},{“valid_not_after”: “1579569800”},{“taxonomy_1”: “all,forsale,shoes,brand,nike”},{“taxonomy_2”: “all,forsale,shoes,sports,basketball”},{“influencer_black_list”: [“9c438fb9790e5eb542527b9be8b1222806716f7839283f7f2faa784e87fff1fe”,“8252ea277ebbe5a99e7368c1a0fa96c3f5105b3c03c5dd11f555490ee20ca335”]},{“compensation”: [{“payout_max”: “100”},{“currency”: “cad”}]},{“timestamp”: “1579568669000321”}]}}},{“timestamp”: “1579568669268358”}]}} FIG. 8 is a block diagram of an exemplary STLink instance 800, illustrative of an assertion 890 by the system's recording of an acceptance statement 820 (in marketing, a marketer's acceptance to promote a published offer), according to some embodiments. Assertion 890 is a statement with three child statements (at leaf level) but a seven-node tree, due to the child statement 826 being a four-node tree (see 720 of FIG. 7), in addition to statements 830 and 840.
820{822“uid”: “b267da4975dc6fd513e0d8124184ff3e9a8b165d7beea5d1d8a9cd9316d71bf7”,824“data”: {“promoted_offer”: {830“promoted_by”: {“uid”: “f8c5f1df23e3ce8367113f41c91ac04e642fe90097bda6262ac8a14e7e5d57e9”,“data”: {“person”: [{“first”: “Mary”},{“last”: “Jane”},{“addr1”: “213 Powell Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7B 1E6”},{“profile”: [{“occupation”: “sporting goods influencer”}]},{“timestamp”: “1579568668749997”}]}},840“clearing_house”: {“uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}},826“published_offer”: [{“vendor”: {“uid”: “4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}}},{“publisher”: {“uid”: “1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}}},{“offer”: {“data”: {“offer”: [{“name”: “Offer 123”},{“legal_reference”: “AA0322K12201”},{“valid_not_before”: “1579598500”},{“valid_not_after”: “1579569800”},{“taxonomy_1”: “all,forsale,shoes,brand,nike”},{“taxonomy_2”: “all,forsale,shoes,sports,basketball”},{“influencer_black_list”: [“9c438fb9790e5eb542527b9be8b1222806716f7839283f7f2faa784e87fff1fe”,“8252ea277ebbe5a99e7368c1a0fa96c3f5105b3c03c5dd11f555490ee20ca335”]},{“compensation”: [{“payout_max”: 
“100”},{“currency”: “cad”}]},{“timestamp”: “1579568669000321”}]}}},{“timestamp”: “1579568669268358”}]},“timestamp”: “1579568669550064”}} Below, a compact representation for820: {“uid”: “b267da4975dc6fd513e0d8124184ff3e9a8b165d7beea5d1d8a9cd9316d71bf7”,“data”: {“promoted_offer”: {“promoted_by”: {“reference”: {“uid”:“f8c5f1df23e3ce8367113f41c91ac04e642fe90097bda6262ac8a14e7e5d57e9”}},“clearing_house”: {“reference”: {“uid”:“1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”}},“published_offer”: {“reference”: {“uid”:“3a4269309ec24459fe26724c6ed85b76b3fd001934771b76cb60971dd4cc77fc”}}},“timestamp”: “1579568669550064”}} FIG.9is a block diagram of an exemplary STLink instance900, illustrative of an assertion990by the system's recording a consumer action statement920(on a promoted offer, e.g., a purchase), according to some embodiments. Assertion890is a statement with three child statements (at leaf level) but a ten-node tree where assertion926contributes seven nodes. 920{922“uid”: “ff1600a95b927f8fe358319337ee5b1a4a5bcae42d7610fd761ac0975a5b9dbe”,924“data”: {930“clicked_by”: {932“uid”: “b343d71686a75251a04384b9263cde3225175e6cb87ae8965ef03218b664066a”,934“data”: {“click_source”: [{“ip”: “201.112.33.5”},{“click_timestamp”: “1579568661010101”},{“user_agent”: “Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:72.0)Gecko/20100101 Firefox/72.0”},{“app_version”: “5.0 (Windows)”},{“geolocation”: “51.5131010,−0.122405”},{“timestamp”: “1579568670143131”}]}},926“promoted_offer”: {928“uid”: “b267da4975dc6fd513e0d8124184ff3e9a8b165d7beea5d1d8a9cd9316d71bf7”,929“data”: {“promoted_offer”: {“promoted_by”: {“uid”: “f8c5f1df23e3ce8367113f41c91ac04e642fe90097bda6262ac8a14e7e5d57e9”,“data”: {“person”: [{“first”: “Mary”},{“last”: “Jane”},{“addr1”: “213 Powell Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7B 1E6”},{“profile”: [{“age”: “50”},{“occupation”: “sporting goods influencer”}]},{“timestamp”: “1579568668749997”}]}},“clearing_house”: {“uid”: 
“1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}},“published_offer”: {“uid”: “3a4269309ec24459fe26724c6ed85b76b3fd001934771b76cb60971dd4cc77fc”,“data”: {“published_offer”: [{“vendor”: {“uid”:“4c2f6296bf6accf5003ecd019aacd1b3d0e485586a10fc8fead719f291a1bb03”,“data”: {“person”: [{“first”: “John”},{“last”: “Doe”},{“addr1”: “1122 Water Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V5M 2N6”},{“profile”: [{“age”: “42”},{“status”: “single”},{“occupation”: “self-employed”}]},{“timestamp”: “1579569554516678”}]}}},{“publisher”: {“uid”:“1fe63d74cc177ed5652a5b9d02172365302a3021b2f39919793c673a31f84094”,“data”: {“enterprise”: [{“name”: “Weevr Inc.”},{“addr1”: “3600 Robson Street”},{“addr2”: “Vancouver, B.C.”},{“addr3”: “V7E 9Y8”},{“phone”: “778-223-4400”},{“fax”: “778-223-4401”},{“contact”: “[email protected]”},{“profile”: [{“business”: “online commerce”}]},{“timestamp”: “1579568668230476”}]}}},{“offer”: {“data”: {“offer”: [{“name”: “Offer 123”},{“legal_reference”: “AA0322K12201”},{“valid_not_before”: “1579598500”},{“valid_not_after”: “1579569800”},{“taxonomy_1”: “all,forsale,shoes,brand,nike”},{“taxonomy_2”: “all,forsale,shoes,sports,basketball”},{“influencer_black_list”: [“9c438fb9790e5eb542527b9be8b1222806716f7839283f7f2faa784e87fff1fe”,“8252ea277ebbe5a99e7368c1a0fa96c3f5105b3c03c5dd11f555490ee20ca335”]},{“compensation”: [{“payout_max”: “100”},{“currency”: “cad”}]},{“timestamp”: “1579568669000321”}]}}},{“timestamp”: “1579568669268358”}]}}},“timestamp”: “1579568669550064”}},“timestamp”: “1579568669847610”}} Below, a compact representation for920: {“uid”: “ff1600a95b927f8fe358319337ee5b1a4a5bcae42d7610fd761ac0975a5b9dbe”,“data”: {“clicked_by”: {“reference”: 
{“uid”:“b343d71686a75251a04384b9263cde3225175e6cb87ae8965ef03218b664066a”}},“promoted_offer”: {“reference”: {“uid”:“b267da4975dc6fd513e0d8124184ff3e9a8b165d7beea5d1d8a9cd9316d71bf7”}},“timestamp”: “1579568669847610”}} Semantics are not lost in this compact representation, and a full representation can be recovered by traversing into the m-ary tree whose root starts at922and resolving all reference elements encountered. For illustration completeness, event920may be externalized with the following STLink presentation (no unique number used in the STLink in this illustration): {“data”: { “reference”: {“uid”:“ff1600a95b927f8fe358319337ee5b1a4a5bcae42d7610fd761ac0975a5b9dbe”}}} FIG.10andFIG.11are flowcharts illustrating an exemplary method of the steps in an STLink's creation resulting from an event's processing, according to some embodiments. The host system100receives event1002, determines the designated application to handle it, and routes the event to that application, where the event's data is extracted1004and processed1006. As part of the processing, the application determines any identities and assertions conveyed in the event data. A statement is created1012and, depending on the application context, a unique number is assigned to each statement. Because of the need to satisfy the same irrevocability and auditability requirements, a “unique-number-allocated” event is raised at1016and1018to further trigger the system to record it onto the blockchain in a separate iteration of this flowchart. More statements may be created1020as application requirements dictate. The application creates a new assertion1022(i.e., the intent at this point is to mine it onto a blockchain) to bind all new statements, determined identities and other existing assertions/statements required by the application context, and a timestamp, and to assign it a unique number. All assertions are assigned a unique number, but step1024leaves the method open to exceptional cases where none is required or wanted.
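The recovery of a full representation from a compact one, described above for the m-ary statement tree, can be sketched as follows. This is an illustrative sketch only: the store contents, uids, and the resolve helper are hypothetical stand-ins for the system's off-chain database, not names from the specification.

```python
# Hypothetical off-chain store mapping uids to full statements; the uids and
# contents here are illustrative, not the ones from the figures.
OFFCHAIN_STORE = {
    "uid-child": {"uid": "uid-child", "data": {"person": [{"first": "Mary"}]}},
}

def resolve(node, store):
    """Recursively replace {"reference": {"uid": ...}} elements with the
    referenced statement, expanding the compact form into the full tree."""
    if isinstance(node, dict):
        if set(node) == {"reference"}:
            return resolve(store[node["reference"]["uid"]], store)
        return {key: resolve(value, store) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve(item, store) for item in node]
    return node

compact = {"uid": "uid-root", "data": {"promoted_by": {"reference": {"uid": "uid-child"}}}}
full = resolve(compact, OFFCHAIN_STORE)
```

Because referenced statements may themselves contain references, the traversal recurses until no reference elements remain.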
InFIG.11, the application returns the assertion(s) to the system for recording onto a blockchain. The method accommodates recording to more than one blockchain concurrently1130. For each blockchain selected, the contract to use for the transaction is selected1132and optionally its contract address is added as an additional statement to the assertion in step1134. Because an assertion can be large and on-chain storage is expensive, a method1136to transform an assertion's data to minimize its on-chain portion may be triggered.FIG.12is an exemplary method for such a data transformation. This on-chain data minimization step returns a data blob in step1140that is a fraction of the size of the one serialized from step1138had no data transformation been selected. Using the selected contract from step1132, the system submits the data blob to a blockchain selected in step1130in a transaction1140, and awaits its outcome1142. When completed, the blockchain returns a transaction receipt1144and, if the transaction succeeded, the system creates an STLink1146and optionally assigns a unique number1150as its identity. Based on application need and/or system design goals, the system determines1154and assigns1156the identities to be associated with the STLink identity, as the latter's owners. In some embodiments, STLink ownership is strictly an application concept and is therefore not required in the creation and operation of an STLink. Where the STLink is assigned a unique number, similar to the case of a statement above, a “unique-number-assigned” event is raised. Where ownership is applied and provable auditability of ownership is also wanted, this can be achieved as yet another statement where STLink(s) and owners' identities are bound and mined to an assertion. FIG.12is a flowchart illustrating an exemplary method for the set of steps that transform an assertion's data content so as to minimize the amount of on-chain data from an assertion.
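A minimal sketch of this recording flow, assuming the data transformation is a content hash (SHA-256 is one possibility) and using submit_transaction as a hypothetical stand-in for a real blockchain client; none of the helper names below come from the specification.

```python
import hashlib
import json
import uuid

# Illustrative off-chain database keyed by content hash.
OFFCHAIN_DB = {}

def submit_transaction(blob: bytes) -> dict:
    # Placeholder for a real chain submission; returns a fake receipt.
    return {"tx_id": uuid.uuid4().hex, "status": "success", "payload": blob.hex()}

def record_assertion(assertion: dict, minimize: bool = True) -> dict:
    blob = json.dumps(assertion, sort_keys=True).encode()  # serialize (step1138)
    if minimize:                                           # on-chain minimization (step1136)
        digest = hashlib.sha256(blob).hexdigest()
        OFFCHAIN_DB[digest] = assertion                    # full content stays off chain
        blob = digest.encode()                             # only the hash goes on chain
    receipt = submit_transaction(blob)                     # submit and await outcome (1140/1142)
    if receipt["status"] != "success":
        raise RuntimeError("transaction failed")
    return {"uid": uuid.uuid4().hex, "receipt": receipt}   # the resulting STLink (1146/1150)

stlink = record_assertion({"data": {"offer": "Offer 123"}})
```

The hash keeps the on-chain footprint constant regardless of the assertion's size, while the off-chain store preserves the full content for later retrieval.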
A requirement is the method's preservation of irrefutability and provability that the on-chain and off-chain data are uniquely related. In this exemplary method, this is accomplished by performing a SHA256 hash1210of the assertion's complete data content1200. The generated hash value continues as the on-chain data, with the original assertion data content stored in an off-chain database1220with the hash as its key for fast lookup and correspondence to its counterpart that is stored on the blockchain. FIG.13is a flowchart illustrating an exemplary method of the steps to determine all identities and assertions that pre-exist in the system and that are relevant to the event's processing. They may be traced and identified from an STLink conveyed in an event. This flowchart is an elaboration of steps1008and1010ofFIG.10. The system receives an event and extracts its data content1302. First, the system checks the STLink's validity. It is valid (step1304) if it exists in the system's off-chain database, has not been administratively locked, and meets other custom criteria. If one or more owners of the STLink were previously assigned, they are determined and retrieved by a query1306to the off-chain store (database) and collected. Any direct owners of an STLink are associated with an identity embodied by an STLink's unique number. Additional identities are extracted from the traversal of the m-ary tree rooted at the top-level statement1310pointed to by the STLink. Though customarily an STLink points to one statement, the method of the invention allows an STLink to point to a set of statements. A retrieved statement's unique number, its identity, is collected at step1312. A statement's data collection is extracted at step1314and iterated at step1316. Where a data item can be interpreted to be an identity-only statement used as a reference or a transaction receipt at steps1318,1320, more identities and/or statements/assertions may be contained within it.
If a data item is a statement, the method stacks the current statement processing level and recurses into the child statement at step1350. If the data item is a transaction receipt or an identity-only statement used as a reference, the corresponding statement or reference is queried at step1322in the off-chain store (database) with the receipt, and the statement is recursed into at step1360. This recursion completes when all paths of the top-level statement's node tree are traversed at steps1324,1326,1328,1330. All identities and statements are collected when traversal is complete. The term “connected” or “coupled to” may include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. As can be understood, the examples described above and illustrated are intended to be exemplary only.
11943336 | DETAILED DESCRIPTION Throughout the following discussion, numerous references will be made regarding servers, services, interfaces, engines, modules, clients, peers, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor (e.g., ASIC, FPGA, DSP, x86, ARM, ColdFire, GPU, multi-core processors, etc.) programmed to execute software instructions stored on a computer readable tangible, non-transitory medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions. One should further appreciate the disclosed computer-based algorithms, processes, methods, or other types of instruction sets can be embodied as a computer program product comprising a non-transitory, tangible computer readable medium storing the instructions that cause a processor to execute the disclosed steps. The various servers, systems, databases, or interfaces can exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges can be conducted over a packet-switched network, the Internet, LAN, WAN, VPN, or other type of packet switched network. The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements.
Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed. As used herein, and unless the context dictates otherwise, the term “coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously. FIG.1provides a diagrammatic overview of a system100according to embodiments of the inventive subject matter. The inventive subject matter provides apparatus, systems and methods in which a computing device produces highly random numbers using a combination of (1) an AI interface, (2) a functions table, and (3) a random bits generator. A preferred embodiment of the inventive system and method is referred to herein from time to time as CrownRNG™. The CrownRNG™ design exploits the by-default randomness of irrational numbers. Mathematically speaking, irrational numbers are those numbers that cannot be expressed as a ratio of two integers. They are proven to have digital sequences, also known as mantissas, that extend to infinity without ever repeating. Therefore, they are excellent sources for true randomness1. Mathematical functions known to generate irrational numbers include the square roots of non-square numbers (e.g., √20, √35) and of prime numbers, and also trigonometric functions having natural numbers for their arguments, among many others. (Please refer to Appendix A for a list of some irrational numbers' generating functions). The basic idea behind CrownRNG is to use the power of artificial intelligence (AI) to create mathematical functions able to generate random irrational numbers.
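As a quick illustration of the non-repeating mantissas mentioned above (not taken from the patent text), the square root of a non-square integer such as 20 can be computed to arbitrary precision with the standard library:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60            # 60 significant digits of working precision
root = str(Decimal(20).sqrt())    # begins '4.4721359549995793...'
digits = root.split(".")[1]       # the mantissa (fractional digits) only
```

However far the precision is extended, the digit sequence never settles into a repeating cycle, which is the property the design relies on.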
The irrational numbers will function as parameters to be used to generate highly randomized sequences of binary bits that are suitable for encryption purposes. The CrownRNG unit generally comprises three main components: (1) an AI interface, (2) a Functions Table, and (3) a Random Bits Generator (RBG). 1. The AI interface: The AI interface utilizes the learning capabilities of artificial intelligence to learn how to generate randomized parameters needed by the system. The AI is initialized by a set of CPU metrics coming from the hosting PC. It uses these metrics as initial features to learn from and to then evolve via a linear-regression-based Machine-Learning (ML) algorithm. One branch of the AI, the MusicAI, will compose random music pieces that will transform into a set of numbers corresponding to the octaves, notes, and tempi of the music piece. These three values are then converted, via digital root arithmetic, into specific ranges such that they can be utilized by the Functions Table. The other branch of the AI, the MathAI, uses ML to create ever-changing mathematical formulae tuned to create random non-square numbers (N). The square roots of these non-square numbers create numbers with irrational mantissas. These irrational numbers will be truncated to specific bit-lengths dictated by the private keys' security level and then will be passed on to the RBG as seeds. The MusicAI: The main workflow of the MusicAI can be summarized as follows: First, three parameters of the CPU are collected: the allocated memory, allocated heap, and stack. These three values will be collected in intervals of 1 millisecond for a total of 5 seconds. This will generate 5000 data points for each. Next, the AI will start doing supervised machine learning on these values, treating two as features and one as the predicted label.
Using a linear regression model, two features, say the memory and heap, will learn to predict the stack label; then the model will collect new values for the features and start predicting the heap label. The same machine-learning algorithm will be used, but with the labels being the other two features instead. So, in total, we have three AI algorithms working simultaneously to predict three labels: memory, heap, and stack. The three predicted values are truncated, using modular math, into specific values depending on their allocated variables, as shown in the table below.
TABLE 1. The features/labels transformation of the CPU metrics.
Features | Label | Variable | Mod
Heap, Memory | Stack | Note | 7
Heap, Stack | Memory | Tempo | 6
Memory, Stack | Heap | Octave | 12
The stack values will be allocated to the note variable and hence be truncated to mod(7), in other words, eight values from 0 to 7. The memory values will transform into the tempo using mod(6), and finally, the heap variables will transform into the octave, using mod(12). These three values will then pass on to the Functions Table, as will be explained later. The MathAI: The MathAI shares the same supervised machine learning algorithm with the MusicAI. However, in this case, the three predicted values of the memory, heap, and stack are truncated using mod(10). The operation is repeated, and the values are concatenated to form one single number of a specific length, specified by the programmer. A single digit of either [2, 3, 7, 8] is randomly chosen and then added to the end of the concatenated number, whenever needed, to ensure that the number is not a perfect square. This is because no number, when squared, will end with any of these four digits. The final number is then square-rooted and passed on to the next element. In summary, the AI outputs the following parameters: a. The irrational seed: an infinite irrational number truncated to a specific length. b.
The tempo, note, and octave parameters in the ranges of (0-7), (0-8), and (0-13), respectively. FIG.2is a schematic rendering of the workflow of the AI. 2. The Functions Table The Functions Table is defined by a set of horizontal and vertical variables that are mathematical functions proven to always produce perfect irrational numbers. The arguments of these functions are not fixed; they are determined by the random internal states, mainly the timestamp of the current time, as well as the tempo variable. The tempo, note, and octave parameters coming out of the AI will be used to determine which two cells on the vertical and horizontal axes will be utilized for the current run. The output of these cells (the irrational mantissas) will then be truncated accordingly and used to compute the arithmetic mod by which the RBG will operate. For our current model, we use the square root function on the horizontal axis of the table and trigonometric ones on the vertical axis. There are seven cells on the horizontal axis (FIG.2), with the argument of the square roots being the product of the tempo value, the timestamp (TS), and a non-square number (A) as follows: sqrt(TS x Tempo x A). (This non-square number A is not the same as the N used to generate the seed.) The vertical scale is made of 104 cells corresponding to 13 octaves, with each octave divided into eight notes. The octave parameter will first select one of the 13 octaves, and then the note parameter will select which note of this specific octave will be used. Each note corresponds to a trigonometric function having an argument made of the timestamp divided by a specific frequency value: TS/fr. The trigonometric functions, along with the frequencies of the notes of the 13 octaves, are listed in the table below.
TABLE 2. A list of the trigonometric functions used along with the music frequencies of each octave.
Sin: 432 × 100, 450, 468, 252, 270, 288, 306, 324, 342, 360, 378, 396, 414
Cos: 864 × 100, 900, 936, 504, 540, 576, 612, 648, 684, 720, 756, 792, 828
Tan: 1728 × 10, 1800, 1872, 1008, 1080, 1152, 1224, 1296, 1368, 1440, 1512, 1584, 1656
Ctan: 3456 × 10, 3600, 3744, 2016, 2160, 2304, 2448, 2592, 2736, 2880, 3024, 3168, 3312
Sec: 6912 × 10, 7200, 7488, 4032, 4320, 4608, 4896, 5184, 5472, 5760, 6048, 6336, 6624
Csc: 13824, 14400, 14976, 8064, 8640, 9216, 9792, 10368, 10944, 11520, 12096, 12672, 13248
Sin: 27648, 28800, 29952, 16128, 17280, 18432, 19584, 20736, 21888, 23040, 24192, 25344, 26496
Cos: 55296, 57600, 59904, 32256, 34560, 36864, 39168, 41472, 43776, 46080, 48384, 50688, 52992
Once the two irrational values of the horizontal and the vertical cells are calculated, they will be truncated accordingly and then passed on, along with the seed, to the RBG. 3. The Random Bit Generator (RBG) The RBG utilizes a specific mathematical function that takes the seed output of the AI as its initial argument and the two irrational numbers of the Functions Table as the arithmetic mod parameters. From there, it iterates on each calculated value to calculate new ones that are then concatenated to create a randomized sequence of bits. The RBG general design is based on the cryptographically-secure Blum-Blum-Shub (BBS) generator2. But while the arithmetic mod in the original BBS is computed from the product of a pair of prime numbers, in the CrownRNG case, we use the truncated irrational numbers coming from the Functions Table instead. We can think of the RBG function as occupying the inner cells of the Functions Table, taking its parameters from the X and Y axes, and outputting a value based on these parameters and the seed. The general concept of the BBS generator goes as follows:
1. Two primes p and q of specific bit-length are chosen such that each is congruent to 3 modulo 4: p ≡ q ≡ 3 mod(4). (In the CrownRNG case, the two primes are replaced with two truncated irrational numbers, I1 and I2, that satisfy the same mod(4) requirement.)
2.
The two prime numbers (irrational numbers in our case) are multiplied to generate n, the arithmetic mod by which the generator will perform its calculations.
3. A random integer s (the seed) is generated from another physical true random number generator, having a length in the interval of [1, n−1]. (In the CrownRNG case, the seed is generated by the AI.)
4. The seed will initiate the generation process through the operation x0 = s^2 mod(n).
5. The function xi = (xi-1)^2 mod(n) is then used to iterate over each previously calculated value, generating new values for every iteration and outputting a string of numbers: x1, x2, x3, . . . , xk.
6. Next, the output values are converted into a string of binary bits.
7. The bit-parity of each binary number is determined depending on the type of parity, even or odd (0 or 1).
8. Finally, the parity digits are concatenated to form the desired CSPRN, depending on the required bit-length of the key, which also determines the level of security: Y = y1y2y3 . . . yk.
As mentioned above, the only modification the RBG introduces to the original BBS is replacing the prime numbers with the irrational ones. The usage of prime numbers in the original BBS is a must if we want the ability to reverse the direction of the generator, as in the case when the BBS system is used as an encryption/decryption algorithm. However, as we do not want to reverse the operation in our system, there is no problem with using numbers that are not prime. In fact, this introduces additional security to the system because, when we compare the limited number of prime numbers having a specific bit-length to the infinite number of potential irrational numbers of the same bit-length, the infinity factor introduces an extra advantage when it comes to the security of the generator against cyber-attacks that try to predict these values. FIG.5shows a flowchart of a method of securing data, according to embodiments of the inventive subject matter.
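The modified BBS procedure in steps 1 through 8 above can be sketched as below. The sketch assumes the truncated irrational numbers behave as large integers congruent to 3 mod 4, reads step 7's parity as the even/odd parity of each value (one possible reading), and uses deliberately small illustrative constants; it is not a hardened implementation.

```python
def bbs_bits(i1: int, i2: int, seed: int, k: int) -> str:
    """Modified Blum-Blum-Shub: the two primes are replaced by truncated
    irrational numbers (here stand-in integers), each congruent to 3 mod 4."""
    assert i1 % 4 == 3 and i2 % 4 == 3      # step 1
    n = i1 * i2                             # step 2: the arithmetic mod
    x = (seed * seed) % n                   # step 4: x0 = s^2 mod(n)
    out = []
    for _ in range(k):                      # step 5: iterate xi = (xi-1)^2 mod(n)
        x = (x * x) % n
        out.append(str(x % 2))              # steps 6-7: even/odd parity bit
    return "".join(out)                     # step 8: concatenate into Y

bits = bbs_bits(i1=100003, i2=90019, seed=1234567, k=128)
```

The output length k is chosen to match the required bit-length of the key.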
At step510, a computing device obtains a random selection of seeds. The seed can be selected in several ways. In embodiments, the seed is selected randomly from among numbers that are neither prime numbers nor quasi-prime numbers. In embodiments, the seed is randomly selected from among a plurality of numbers that are over a thousand digits long. In other embodiments, the seed is randomly selected from among a plurality of numbers that are over ten thousand digits long. At step520, the computing device uses the seeds to generate irrational numbers. The generation of irrational numbers is discussed in further detail below. FIG.6illustrates the processes associated with encrypting a set of data according to embodiments of the inventive subject matter. Steps610-660cover the process of encrypting a message or data. At step610, the sending computing device selects a function to be used to obtain an irrational number. The function can be a mathematical function or algorithm as discussed further herein. The function can be selected according to a pre-determined order or schedule. Alternatively, it can be randomly selected or user-selected. The computing device obtains the selected function from the stored functions in a functions database. At some point prior to step620, the computing device also obtains a plurality of seeds, as discussed above. At step620, the computing device solves the function using each of the selected seeds to obtain a corresponding irrational number. Irrational numbers have an infinite or near-infinite amount of decimal places. Thus, the function is a function whose output is an irrational number. By using an irrational, the systems and methods of the inventive subject matter have the flexibility to obtain many encryption keys from the same function without repeating some or all of the encryption keys. 
Because irrational numbers do not have a pattern, the systems and methods of the inventive subject matter can ensure true randomness in the generation of cryptography keys. For example, the function can be to take the square root of a non-perfect square number. This results in an irrational number. In an illustrative example, the function to be solved can be the square root of 20. In embodiments, solving the function comprises taking an inverse of each of the selected seeds. In embodiments, the irrational number is calculated by calculating the root of the seed. In embodiments, the root can be a square root or a cube root. In other embodiments, the root can be a fractional root. In embodiments, the irrational number can be a root of a number that comprises the seed and that ends in 2, 3, 7 or 8. In embodiments, the same function is not applied to all of the obtained seeds. In these embodiments, the computing device selects a first function and applies it to a first subset of the obtained seeds. The computing device then selects a second function and applies it to a second subset of the obtained seeds. The total group of selected seeds can be subdivided into additional subsets and have additional functions applied to them. At step630, the computing device selects a starting point and a length within the mantissa of each irrational number calculated for each seed. The starting point designates a start digit in the mantissa. The length designates the number of digits following the start digit. The start digit and length are preferably integer values such that they identify a precise digit location and precise length. At step640, the computing device applies the starting point and length to the decimals of the mantissa to result in a shortened key or one-time pad, which is a portion of the mantissa. Thus, the one-time pad is a key that starts at the start digit and contains the digits following the start digit according to the length. 
In embodiments, the one-time pad can be at least 10,000 digits long. In other embodiments, the one-time pad comprises at least as many digits as data positions in the data. In still other embodiments, the binary representation of the one-time pad comprises at least as many digits as the binary representation of the data to be encrypted. After step640, the process continues to step530ofFIG.5. The techniques used to generate and use the encryption/decryption keys using a mathematical function are described in greater detail in the inventor's own pending U.S. patent application U.S. Ser. No. 17/018,582 filed Sep. 11, 2020, entitled “Method of Storing and Distributing Large Keys”, which is incorporated herein by reference in its entirety. At step530, the computing device uses the generated portion of the mantissas of each of the irrational numbers as a one-time pad to encrypt individual ones of multiple pieces or sets of data. The encrypted data can then be stored by the computing device locally or at a remote database. At step540, the computing device updates its records of used seeds and discards those seeds. Discarding the seeds can involve deleting the seeds from the database that stores the seeds. In a variation of these embodiments, the computing device can discard the starting points and lengths used, such that the seeds themselves can be reused but the starting points and lengths are not; as a result, the actual one-time pads used for encryption cannot be recreated by the computing device. In embodiments, the first portion (which can be or can include the seed) can be distributed via a graphical code. This graphical code could be a QR code. In a variation of these embodiments, the QR code can contain additional codes that help to obfuscate the public key.
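Step530leaves the exact combining operation open; one classical possibility, shown purely as an assumption, is per-position modular addition of the pad digits to the data bytes:

```python
def otp_apply(data: bytes, pad: str, decrypt: bool = False) -> bytes:
    """Combine each data byte with one pad digit; subtracting reverses it."""
    assert len(pad) >= len(data)        # the pad must cover every data position
    sign = -1 if decrypt else 1
    return bytes((b + sign * int(pad[i])) % 256 for i, b in enumerate(data))

message = b"attack at dawn"
pad = "47213595499957"                  # e.g., a slice of an irrational mantissa
ciphertext = otp_apply(message, pad)
plaintext = otp_apply(ciphertext, pad, decrypt=True)
```

A recipient holding the same seed, function, starting point, and length regenerates the identical pad and runs the same operation in reverse.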
To decrypt the encrypted data at a future time, the computing device (or another computing device that is the recipient of the encrypted data) can apply the same seeds to the function to generate the mantissa and one-time pad as discussed above, producing an identical key that can be used for decryption. In situations where a receiving computing device is decrypting the data, the sending computing device can send the seed and an indicator of a function. The receiving computing device would already have the functions (or pointers to these functions) stored as part of an initial shared secret established prior to the data transmission. The receiving computing device would also receive or otherwise obtain the starting points and lengths for each of the one-time pads it needs. The seed(s) can be transmitted as part of a graphical code as discussed above. A benefit of this approach is that for each potential recipient that may be authorized to access some of, but not all of, the individual encrypted data sets, the system can provide access to only the data sets that particular recipient is authorized to access. To do so, the computing device can send the necessary decryption information for only the authorized data sets. This way, the recipient computing device can only decrypt and access the data for which it has the decryption tools. It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context.
In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
11943337 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Techniques are disclosed for providing secure and reliable trusted execution environments (“TEEs”), such as virtual machines (“VMs”), containers and enclaves. Modern hardware supports trusted execution environment (“TEE”) techniques where a supervisor of a host computer does not have access to memory of a specific TEE, such as a trusted container, a trusted virtual machine, or a trusted software enclave running on the host computer. For example, the supervisor may lack access to the memory of the TEE because the memory is protected by host hardware or host firmware. Memory encryption is one such technique to protect the memory of the TEE. In an example, encrypted memory may be used to support and protect running sensitive workloads in the cloud. Launching a TEE instance, such as a trusted container, by a cloud provider typically involves using a secret that is private to the TEE instance owner and unavailable to the cloud provider. For example, a disk image of the TEE instance (e.g., disk image of the trusted container) may be encrypted with a secret key. However, providing this secret to the TEE instance (e.g., trusted container) presents a challenge. One way to provide the secret to the TEE instance is by having the host hardware or the host firmware first provide a measurement (e.g., cryptographic measurement) to the TEE instance owner. The measurement (e.g., cryptographic measurement) may be used for attestation or validation to ensure that the TEE instance runs under a trusted environment and is protected from the supervisor or hypervisor. After verifying that the TEE instance runs under a trusted environment and is protected from the supervisor or hypervisor, the TEE instance owner's secret is encrypted and forwarded to the TEE instance. For example, the secret may be provided to a trusted container in an encrypted form and in a container-specific way.
However, providing the secret to the TEE instance in this way typically requires the TEE instance owner to maintain a service (e.g., attestation service) for verifying the measurement and providing the encrypted secret. Additionally, the service (e.g., attestation service) is typically required to be accessible to the cloud provider, but outside of the cloud provider's control. In an example, the service is envisioned to be hosted on the TEE instance owner's private cloud. However, hosting the service on the TEE instance owner's private cloud is inappropriate for workloads that launch and shut down TEE instances (e.g., trusted containers) at a high rate. For example, should the private cloud become unavailable, the ability to launch new TEE instances would become unavailable. Additionally, if the private cloud slows down and is unable to serve requests, the ability to launch new TEE instances would be halted and unavailable until the private cloud is again able to serve requests. Providing attestation through a private cloud often requires that the TEE instances are not treated as “cattle” where they are started up and shut down at will. Instead, the TEE instances are treated as “pets” and carefully migrated live without shutting down. However, live migration increases costs for the cloud providers and the TEE instance owners. For example, treating the TEE instances as “pets” that are migrated live increases hardware utilization and creates additional overhead (e.g., due to successive iterations of memory pre-copying that consume extra CPU cycles on both source and destination servers). Another technique for providing attestation services to TEEs is to use a quoting TEE instance (e.g., quoting enclave) that runs on the same platform as the TEE instances (e.g., enclaves) being attested. The quoting enclave provides attestation services to the application enclaves by signing their attestation reports.
For example, the quoting enclave may verify attestation reports for the platform and the quoting enclave may hold the platform's attestation key. In an example, multiple attesting enclaves may create respective attestation reports for the quoting enclave and the quoting enclave verifies the report and signs the report with the attestation key. Then, off-platform tenants may go through a verification process to obtain the attestation information. For example, the verification of a quote by an off-platform tenant involves verifying that a Provisioning Certification Key (“PCK”) embedded in the quote is valid. For example, verification of the quote may be achieved by using the PCK certificate chain obtained from a manufacturer associated with the enclave (e.g., Intel, for a SQX processor that utilizes the quoting enclaves). SQX may refer to an encoding, video and/or compression technology. Then, the tenant verifies that the key associated with the PCK certificate is the one that signed the platform's attestation key (e.g., signed the hash of the platform's attestation key). Verification of the quote includes verifying that the attestation key was the one that signed the quote and also verifying that the hash of the attestation key embedded in the quote is correct. To provide secure and reliable TEEs, a guest owner may host a TEE with a trusted cloud provider by uploading an encrypted disk image to the cloud provider. Instead of self-hosting an attestation service, the owner hosts the attestation service in a public cloud with an alternate cloud provider. The attestation service can launch from an attestation disk image including a secret (e.g., a disk encryption key). The attestation disk image does not need to be encrypted since it does not give access to the actual disk image and instead gives access to just the secret. In this way, no single cloud provider has access to both the disk image and the secret.
For example, the trusted cloud provider has access to the encrypted image but not the secret. Additionally, the alternate cloud provider has access to the secret but not the encrypted image. By restricting any single cloud provider's access to both the disk image and the secret, security is advantageously improved. To further improve availability, multiple alternate cloud providers may be used. Additionally, a secret sharing scheme may be implemented to further improve security. For example, the secret may be divided into “n” pieces where any “k” pieces (i.e., “k” is less than or equal to “n”) may be sufficient to retrieve the secret. For example, any “k” of “n” alternate cloud providers may provide an attestation service and provide its own piece of the secret to the TEE running on the trusted cloud provider. The TEE may then use the pieces to retrieve the secret and complete start-up. Since multiple attestation services on a public cloud and/or multiple attestation services from a plurality of public clouds may each validate an application TEE instance, the configuration of the systems and methods disclosed herein includes various attestation options, which have no single point of failure (e.g., one of the attestation services or one of the alternate cloud services going offline or slowing down). Moreover, multiple attestation services may be available from one or more alternate cloud providers at the same time such that if one of the attestation services crashes or goes off-line, another attestation service may assume the responsibility of validating (e.g., attesting) newly launched TEE instances. Secrecy, security and reliability may be further enhanced by secret sharing. Secret sharing further reduces failure points because even if one of the attestation services fails or is infiltrated or subverted by an attacker, another attestation service can validate the application TEE instance and provide the secret (or portion of the secret).
Instead of having attestation services exclusively or fully run in a private cloud, TEEs such as escrow containers may be used to provide attestation services on various alternate clouds. For example, by not relying solely on an attestation service maintained in a private cloud, the private cloud is protected from continuous security threats while launched applications are validated (e.g., attested), because access to the private cloud may be disabled after launching an escrow TEE instance (e.g., application TEE instance with clone service). For example, while providing attestation services, the private cloud may be vulnerable to attacks from various applications (e.g., application TEE instances) requesting validation to complete start-up. Furthermore, running attestation services that maintain secrets and/or keys (or a portion of the secret and/or key) eliminates the need for live migration, reduces the reliance on private cloud attestation services, eliminates the numerous verification steps associated with quoting enclaves, and improves the latency of launching TEE instances (e.g., containers, virtual machines or enclaves). Further, the attestation techniques described herein may be extended to any kind of TEE, making it completely independent of specific hardware (e.g., encrypted VMs can also be supported), which allows running elastic applications that treat containers like “cattle”, with encryption, providing secrecy to any required degree (e.g., with secret sharing) without sacrificing performance and through the use and convenience of public clouds. The advantages and features discussed above are especially important for cloud vendors as these advantages and features add value compared to private cloud solutions.
For example, vendors using a hypervisor (e.g., Kernel-based Virtual Machine (“KVM”)) on an operating system, such as Red Hat® Enterprise Linux® (“RHEL”), may utilize the systems and methods disclosed herein to preserve privacy, improve security, and improve reliability while performing introspection services for TEEs. When handling network traffic (e.g., network traffic from a cloud-computing platform such as the Red Hat® OpenStack® Platform), hypervisor vendors and operating system (“OS”) vendors often attempt to improve security to prevent malicious memory accesses. An example vendor is Red Hat®, which offers RHEL. By providing introspection services while maintaining privacy for TEEs, security and reliability may be improved. FIG.1depicts a high-level component diagram of an example computing system100in accordance with one or more aspects of the present disclosure. The computing system100may include an operating system (e.g., host OS186), one or more TEEs (e.g., TEE instances160A-B and escrow TEE instances162A-B), a cloud provider (e.g., server150), and nodes (e.g., nodes110A-C). An application TEE instance (e.g., TEE instance160A) may be a virtual machine, container, enclave, etc. and may include a guest OS, guest memory, a virtual CPU (VCPU), virtual memory devices (VMD), and virtual input/output devices (VI/O). For example, TEE instance160A may include guest OS196A, guest memory195A, a virtual CPU190A, a virtual memory device192A, and a virtual input/output device194A. Virtual machine memory195A may include one or more memory pages. Similarly, TEE instance160B may include a guest OS, guest memory, a virtual CPU, virtual memory devices, and virtual input/output devices. The computing system100may also include a supervisor or hypervisor180and host memory184.
The supervisor or hypervisor180may manage host memory184for the host operating system186as well as memory allocated to the TEEs (e.g., TEE instances160A-B and escrow TEE instances162A-B) and guest operating systems (e.g., guest OS196A), such as guest memory195A provided to guest OS196A. Host memory184and guest memory195A may be divided into a plurality of memory pages that are managed by the supervisor or hypervisor180. Guest memory195A allocated to the guest OS196A may be mapped from host memory184such that when an application198A-D uses or accesses a memory page of guest memory195A, the guest application198A-D is actually using or accessing host memory184. In an example, a TEE instance (e.g., TEE instance160A-B), such as a virtual machine, container or enclave may execute a guest operating system196A and run applications198A-B which may utilize the underlying VCPU190A, VMD192A, and VI/O device194A. One or more applications198A-B may be running on a TEE, such as a virtual machine, under the respective guest operating system196A. TEEs (e.g., TEE instances160A-B and escrow TEE instances162A-B, as illustrated inFIG.1) may run any type of dependent, independent, compatible, and/or incompatible applications on the underlying hardware and OS. In an example, applications (e.g., App198A-B) run on a TEE, such as a virtual machine, and may be dependent on the underlying hardware and/or OS186. In another example, applications198A-B run on a TEE, such as a virtual machine, and may be independent of the underlying hardware and/or OS186. For example, applications198A-B running on a first TEE instance160A may be dependent on the underlying hardware and/or OS186while applications (e.g., application198C-D) running on a second TEE instance160B are independent of the underlying hardware and/or OS186. Additionally, applications198A-B running on TEE instance160A may be compatible with the underlying hardware and/or OS186.
In an example, applications198A-B running on a TEE instance160A may be incompatible with the underlying hardware and/or OS186. The cloud provider (e.g., server150) may be a trusted cloud provider. In another example, the cloud provider may be an alternate cloud provider. The system100may interact with both a trusted cloud provider and alternate cloud provider(s). In an example, the alternate cloud provider(s) may be one or more public clouds. As mentioned above, the escrow TEE instance(s)162A-B may include attestation services199A-B that validate application TEE instances160A-B. Once validated, the application TEE instances160A-B may complete start-up. For example, the escrow TEE instances162A-B and/or the attestation services199A-B may provide the newly started application TEE instance160A-B with a secret, key or key-secret pair. The escrow TEE instances162A-B and/or the attestation services199A-B may also provide portions of the secret, key or key-secret pair, which may be used along with other portions to recover the full secret, key or key-secret pair. Additional details of the configuration of the escrow TEE instances162A-B and the type of secret scheme used (e.g., partial secrets or entire secrets) are described in more detail inFIG.2AandFIG.2B. The escrow TEE instance(s)162A-B may be enclaves that provide validation (e.g., attestation) services for containers or virtual machines. In another example, the escrow TEE instance(s)162A-B may be virtual machines that provide validation (e.g., attestation) services for containers or enclaves. Therefore, the system advantageously provides additional security while launching TEE instance(s) without requiring the escrow TEE instance(s)162A-B to run on the same platform as the application TEE instance(s)160A-B being launched, which provides additional flexibility while maintaining security.
For example, the systems and methods described herein do not rely on the platform to provide security, which is typically required when using quoting enclaves. Furthermore, using a quoting enclave requires contacting the tenant to perform the validation, which, as described above, may pose a problem if the tenant is unable to be reached (e.g., tenant's attestation services going off-line or slowing down). The server150may include hardware, such as processor(s), memory, hard drives, network adapters for network connection, etc. For example, server150may include many of the same hardware components as nodes110A-C. The computer system100may include one or more nodes110A-C. Each node110A-C may in turn include one or more physical processors (e.g., CPU120A-D) communicatively coupled to memory devices (e.g., MD130A-D) and input/output devices (e.g., I/O140A-C). Each node110A-C may be a computer, such as a physical machine, and may include a device, such as a hardware device. In an example, a hardware device may include a network device (e.g., a network adapter or any other component that connects a computer to a computer network), a peripheral component interconnect (PCI) device, storage devices, disk drives, sound or video adaptors, photo/video cameras, printer devices, keyboards, displays, etc. TEE instances160A-B may be provisioned on the same host or node (e.g., node110A) or different nodes. For example, TEE instance160A and TEE instance160B may both be provisioned on node110A. Alternatively, TEE instance160A may be provided on node110A while TEE instance160B is provisioned on node110B. As used herein, a physical processor, processor or CPU120A-D refers to a device capable of executing instructions encoding arithmetic, logical, and/or I/O operations. In one illustrative example, a processor may follow the Von Neumann architectural model and may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers.
In a further aspect, a processor may be a single core processor which is typically capable of executing one instruction at a time (or processing a single pipeline of instructions), or a multi-core processor which may simultaneously execute multiple instructions. In another aspect, a processor may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A processor may also be referred to as a central processing unit (CPU). As discussed herein, a memory device130A-D refers to a volatile or non-volatile memory device, such as RAM, ROM, EEPROM, or any other device capable of storing data. As discussed herein, I/O device140A-C refers to a device capable of providing an interface between one or more processor pins and an external device capable of inputting and/or outputting binary data. Processors (e.g., CPUs120A-D) may be interconnected using a variety of techniques, ranging from a point-to-point processor interconnect, to a system area network, such as an Ethernet-based network. Local connections within each node, including the connections between a processor (e.g., CPU120A-D) and a memory device130A-D, may be provided by one or more local buses of suitable architecture, for example, peripheral component interconnect (PCI). FIG.2Aillustrates a block diagram of a secure TEE attestation system200A for launching TEE instances. As illustrated inFIG.2A, an application or an application TEE instance210may be launched on a trusted cloud of a trusted cloud provider202. For example, an encrypted disk image280may be uploaded to a cloud service of the trusted cloud provider202. In an example, an application may include multiple TEE instances. For example, an application may consist of a number of trusted application containers.
The trusted cloud provider202, such as server150, may launch the TEE instance210. Similarly, one or more escrow TEE instances220A-C may be launched to provide validation (e.g., attestation) services to the newly started application instances (e.g., application TEE instance210). In an example, the trusted escrow TEE instances220A-C may have already been launched. The trusted escrow TEE instances220A-C may be launched and hosted by one or more alternate cloud providers204A-C. For example, as illustrated inFIG.2A, the alternate cloud provider204A may launch the escrow TEE instance220A, alternate cloud provider204B may launch escrow TEE instance220B, and alternate cloud provider204C may launch escrow TEE instance220C. In another example, each of the escrow TEE instances220A-C may be launched on the same alternate cloud provider204. The alternate cloud provider(s)204A-C, hereinafter referred to generally as alternate cloud provider204, may be a public cloud. Each of the escrow TEE instances220A-C may include a respective attestation service299A-C and may be provisioned with a respective secret240A-C. The respective secrets240A-C may be provided on an attestation disk image. For example, the attestation services299A-C may launch from a respective attestation disk image that includes the secret240(e.g., a disk encryption key). Each secret240A-C may be an identical secret such that any of the escrow TEE instances220A-C may validate (e.g., provide attestation services to) the application TEE instance210. After the escrow TEE instance220(e.g., escrow TEE instance220B) is provisioned with the secret240B, the escrow TEE instance220may obtain a cryptographic measurement250associated with the application TEE instance210. Based on the cryptographic measurement250, the escrow TEE instance220(e.g., escrow TEE instance220B) may validate the application TEE instance210(e.g., by comparing the cryptographic measurement250to a reference measurement or an integrity record).
After validation, the escrow TEE instance220(e.g., escrow TEE instance220B) may provide the secret240B to the application TEE instance210. Once the escrow TEE instance220(e.g., escrow TEE instance220B) provides the key or secret240B to the application TEE instance210, the application TEE instance210can finish launching. The validation process between the escrow TEE instance220and the application TEE instance210may be performed by the attestation service299(e.g., attestation service299B). As illustrated inFIG.2B, none of the cloud providers (e.g., alternate cloud providers204A-C or trusted cloud provider202) have access to both the encrypted disk image280and the secret (e.g., secrets240A-C). For example, initially the trusted cloud provider202has access to the encrypted disk image280, but not the secret240. For example, the trusted cloud provider202has access to the encrypted disk image280, which it may use to launch an application TEE instance210. The alternate cloud providers204A-C initially have access to the secret240, but not the encrypted disk image280. For example, an alternate cloud provider204may have access to an attestation disk image, which gives the alternate cloud provider204access to the secret240. The secret240or access to the secret240may be provided after validation or attestation occurs. By splitting access between the trusted cloud provider202and the alternate cloud providers204A-C, instead of providing access to both the secret240and the encrypted disk image280together, security is advantageously improved. Furthermore, the secret240may be configured such that saving the secret240in a separate location is prohibited. For example, after the secret240is used to finalize start-up of the application TEE instance210, the trusted cloud provider202may no longer have access to the secret240. If another application TEE instance210is to be launched, another validation or attestation process may be required to again gain access to the secret240.
For example, as illustrated inFIG.2A, several escrow TEE instances220A-C hold secrets240A-C for launching application TEE instance(s)210such that if one of the escrow TEE instances220goes offline, another escrow TEE instance may validate application TEE instance210in its place. Instead of entirely running the attestation services or validation services in a single private cloud, attestation services299A-C may run on a number of escrow TEE instances220A-C, which provides high availability without reducing security. For example, multiple escrow TEE instances220A-C may provide attestation services to application TEE instances210. The escrow TEE instances220A-C may be hosted by alternate cloud providers204A-C, which may be one or more public clouds. FIG.2Billustrates a block diagram of a secure TEE attestation system200B for launching TEE instances. The secure attestation system200B may include each of the same components as secure attestation system200A. However, instead of each escrow TEE instance being provisioned with a full secret240A-C, the escrow TEE instances220A-C each hold a respective portion of the secret240. For example, escrow TEE instance220A may be provisioned with portion241, while escrow TEE instance220B is provisioned with portion243and escrow TEE instance220C is provisioned with portion245. The portions241,243and245of the secret may be configured such that any two portions of the secret240are sufficient to recover the entire secret240. The secret may be divided into “n” pieces where any “k” pieces (i.e., “k” is less than or equal to “n”) may be sufficient to retrieve the secret. For example, any “k” of “n” alternate cloud providers may provide an attestation service and provide its own piece of the secret to the TEE running on the trusted cloud provider. 
The secret sharing may operate according to “Shamir's Secret Sharing” which is an algorithm where a secret240is divided into parts, giving each participant (e.g., escrow TEE instance or attestation service) its own unique part. To reconstruct or recover the original secret240, a minimum number of parts or portions is required. The minimum number of parts or portions is less than or equal to the total number of parts or portions. The minimum number of parts or portions required to reconstruct or recover the secret, hereinafter referred to as a threshold, may be known when the secret240is divided. For example, if the secret240is divided into three parts (as illustrated inFIG.2B), the threshold may be two parts, such that any two parts are sufficient to recover the original secret240. For example, the secret may be recovered from portions241and243, portions241and245, or portions243and245. The application TEE instance210may try to recover the secret240regardless of the quantity of portions it receives. In another example, the application TEE instance210may be notified of the threshold and may recover the secret240once the threshold of portions (e.g., portions241and243) are received. The secret sharing scheme (e.g., amount of division and threshold) may be pre-agreed between each party and may form part of a protocol. In another example, the application TEE instance210may receive more than the required portions (e.g., portions241and243) of the secret240, in which case the additional portions (e.g., portion245) may be ignored. Multiple escrow TEE instances220may obtain a respective cryptographic measurement250A-C associated with the application TEE instance210. Based on the cryptographic measurements250A-C, the escrow TEE instances220A-C may validate the application TEE instance210(e.g., by comparing the respective cryptographic measurements250A-C to a reference measurement or an integrity record).
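The 2-of-3 splitting described above (portions241,243and245, any two sufficient) can be sketched with Shamir's scheme directly. This is a minimal illustration over a prime field, not the disclosed implementation; a real deployment would use a vetted secret-sharing library.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def split_secret(secret: int, n: int, k: int, rng=random.SystemRandom()):
    """Return n points (x, f(x)) of a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange-interpolate f(0) from any k (or more) shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = split_secret(secret, n=3, k=2)
# Any two of the three shares suffice, matching portions 241/243/245 above.
assert recover_secret(shares[:2]) == secret
assert recover_secret([shares[0], shares[2]]) == secret
```

Each escrow TEE instance would hold one `(x, f(x))` point; the application TEE instance interpolates at x = 0 once it has received the threshold number of portions.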
After validation, the escrow TEE instances220A-C may provide their respective portions (e.g., portions241,243, and245) of the secret240to the application TEE instance210. Once the escrow TEE instances220A-C provide the portions of the key or secret240to the application TEE instance210, the application TEE instance210may recover the entire secret240. Once the secret240is recovered, the application TEE instance210can finish launching. The validation process between the escrow TEE instances220A-C and the application TEE instance210may be performed by the attestation services299A-C. Referring to bothFIG.2AandFIG.2B, measurements250A-C described above may be cryptographic measurements that identify characteristics of the application TEE instance(s)210such as the type of application TEE instance210, version of the TEE instance210, description of software components loaded into the TEE instance210, etc. Additional examples of cryptographic measurements are described in more detail below. As illustrated inFIG.2AandFIG.2B, instead of continually providing access to the trusted cloud provider202(e.g., private cloud) for performing attestation services, which may expose the trusted cloud provider202(e.g., private cloud) to additional malicious entities, attestation is performed by one or more attestation services299A-C hosted by alternate cloud providers204A-C (e.g., public clouds). The malicious entities may include a malicious hypervisor or malicious application TEE instances, which may endanger other services hosted by the trusted cloud provider202(e.g., private cloud) if the trusted cloud provider202also performed attestation. By restricting any single cloud provider's access to both the encrypted disk image280and the secret240, security is advantageously improved. The increased security may however make the setup more brittle as new TEEs may not be launched if the alternate cloud provider204A-C is down.
Therefore, to improve availability, multiple alternate cloud providers204A-C may be used. Additionally, a secret sharing scheme, as illustrated inFIG.2B, may be implemented to further improve security. Therefore, the trusted cloud provider202(e.g., private cloud) is protected from malicious attacks. Additionally, if an attack is detected while using one attestation service (e.g., attestation service299B), that service may be avoided and validation may be performed through other secure attestation services. More specifically, a compromised attestation service (e.g., attestation service299B) may be disabled, shut-down or marked for removal from the pool of available attestation services299A-C. Once an attestation service (e.g., attestation service299B) is disabled or removed, a new attestation service may be launched or a new instance of the disabled attestation service (e.g., attestation service299B) may be started. The configuration of the systems200A-B illustrated inFIG.2AandFIG.2Bincludes various attestation options, which have no single point of failure (e.g., one of the attestation services299A-C or one of the alternate cloud providers204A-C going offline or slowing down). For example, multiple attestation services299A-C may be available from one or more alternate cloud providers204A-C at the same time such that if one of the attestation services (e.g., attestation service299C) crashes or goes off-line, another attestation service (e.g., attestation service299B) may assume the responsibility of validating (e.g., attesting) newly launched application TEE instances210. As discussed above, instead of having attestation services299A-C exclusively or fully hosted by a trusted cloud provider202(e.g., run in a private cloud), TEEs such as escrow TEE instances220A-C may be used to provide attestation services on various alternate clouds.
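The failover behavior described above can be sketched as a simple loop over the pool of attestation services: the launching TEE succeeds as long as any one service responds. All service names and the error type are hypothetical stand-ins for real RPC endpoints.

```python
class AttestationUnavailable(Exception):
    """Raised when an attestation service is offline or unreachable."""

def make_service(name, online, secret):
    # Factory for a stub attestation service; a real service would also
    # validate the measurement against a reference record before responding.
    def attest(measurement):
        if not online:
            raise AttestationUnavailable(f"{name} is offline")
        return secret
    return attest

def fetch_secret(services, measurement):
    """Return the secret from the first reachable attestation service."""
    for svc in services:
        try:
            return svc(measurement)
        except AttestationUnavailable:
            continue  # no single point of failure: fall through to the next service
    raise RuntimeError("all attestation services unavailable")

pool = [
    make_service("attestation-299A", online=False, secret=b"disk-key"),
    make_service("attestation-299B", online=True, secret=b"disk-key"),
    make_service("attestation-299C", online=True, secret=b"disk-key"),
]
assert fetch_secret(pool, measurement="deadbeef") == b"disk-key"
```

A compromised or crashed service (here, 299A) is simply skipped; removing it from `pool` and appending a replacement mirrors the disable-and-relaunch behavior described above.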
By doing so, the private cloud is protected from continuous security threats while validating (e.g., attesting) launched applications because access to the private cloud may be disabled after launching an escrow TEE instance220A-C. For example, while providing attestation services, the trusted cloud provider202(e.g., private cloud) may be vulnerable to attacks from various applications (e.g., application TEE instances210) requesting validation to complete start-up. Furthermore, running attestation services299A-C that maintain secrets240A-C (or portions241,243, and245of secrets240) and/or keys (or a portion of the secret and/or key) eliminates the need for live migration, reduces the reliance on private cloud attestation services, eliminates the numerous verification steps associated with quoting enclaves, and improves the latency of launching TEE instances (e.g., containers, virtual machines or enclaves). When receiving a request to start an application TEE instance210, an escrow TEE instance (e.g., escrow TEE instance220A) may also be launched and provisioned with a secret (e.g., secret240A) if a respective escrow TEE instance is not already running. Alternatively, the escrow TEE instance (e.g., escrow TEE instance220A) may be a previously launched escrow TEE instance. For example, once the escrow TEE instance (e.g., escrow TEE instance220A) is launched and running, the escrow TEE instance may validate (e.g., perform attestation) for newly launched application TEE instances210. As mentioned above, along with the additional benefits of increased security, the escrow TEE instances220A-C also eliminate the need for live migration. For example, the application TEE instances210as well as the escrow TEE instances220may be killed or shut-down at will at a migration source and may be restarted at a migration destination.
Once started at the migration destination, the application TEE instances210may be validated through attestation services299A-C hosted by alternate cloud providers on a public cloud. The escrow TEE instances220A-C do not have to run on the same platform as the application TEE instances210being launched since the escrow TEE instances220A-C do not rely on the platform to provide security. In contrast, other solutions such as the quoting enclave example described above rely on the platform to provide security and the tenant needs to be contacted to have the tenant verify the attestation and supply the secret. Lastly, the systems and methods described herein do not require any special machinery for the escrow TEE instance (e.g., escrow container) because the escrow TEE instance220A-C is a trusted TEE instance, like the application TEE instance210or any other trusted TEE instance, and is treated as such. Further, the attestation techniques described herein may be extended to any kind of TEE, making it completely independent of specific hardware (e.g., encrypted VMs can also be supported), which allows running elastic applications that treat containers like “cattle”, with encryption, providing secrecy to any required degree (e.g., with secret sharing) without sacrificing performance and through the use and convenience of public clouds. FIG.3illustrates a flowchart of an example method300for secure and reliable launching of TEEs in accordance with an example of the present disclosure. Although the example method300is described with reference to the flowchart illustrated inFIG.3, it will be appreciated that many other methods of performing the acts associated with the method300may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described are optional.
The method300may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. In the illustrated example, method300includes uploading an encrypted disk image to a cloud service of a trusted cloud provider (block302). For example, an encrypted disk image280may be uploaded to a first cloud service of a trusted cloud provider202. Additionally, the trusted cloud provider202may have access to the encrypted disk image280. While the encrypted disk image280is available to the trusted cloud provider202, access to the encrypted disk image280is restricted from the alternate cloud providers204A-C. The method300also includes launching an application TEE instance (block304). For example, an application TEE instance210may be launched on the first cloud service of the trusted cloud provider202. The first cloud service may be a private cloud. It should be appreciated that application TEE instance210ofFIGS.2A and2Bmay represent TEE instance(s)160A or160B ofFIG.1, which may each be referred to generally as application TEE instance210. Additionally, method300includes launching a first attestation service from an attestation disk image on a cloud service of an alternate cloud provider (block306). For example, a first attestation service instance299A may be launched from an attestation disk image on a second cloud service of a first alternate cloud provider204A. The attestation disk image may include a secret240A and the first alternate cloud provider204A may have access to the secret. However, the secret240A is unavailable to the trusted cloud provider202. For example, the trusted cloud provider202may be restricted from accessing the secret240A. Method300also includes launching a second attestation service from the attestation disk image on a cloud service of another alternate cloud provider (block308).
For example, a second attestation service instance299B may be launched from the attestation disk image on a third cloud service of a second alternate cloud provider204B. The attestation disk image may include the secret240B and the second alternate cloud provider204B may have access to the secret. Secrets240A and240B may be the same secret. For example, each alternate cloud provider204A-B may host respective escrow TEE instances220A-B that run attestation services299A-B and provide secrets240A-B after validating an application TEE instance210. In another example, the first and second alternate cloud providers204A-B may be the same cloud provider that hosts several escrow TEE instances220A-B on the same public cloud. It should be appreciated that escrow TEE instances220A-B ofFIG.2may represent escrow TEE instances162A-B ofFIG.1, which may each be referred to generally as escrow TEE instance220. Additionally, method300includes providing the secret to the application TEE instance (block310). For example, the second cloud service or the third cloud service may provide the secret (e.g., secret240A-B) to the application TEE instance210. The second cloud service may provide the secret240A when the third cloud service is unavailable. Alternatively, the third cloud service may provide the secret240B when the second cloud service is unavailable. In an example, the secret (e.g., secret240A-B) may be provided by the corresponding escrow TEE instances220A-B. More specifically, the secrets (e.g., secret240A-B) may be provided by the corresponding attestation services299A-B of the escrow TEE instances220A-B. Prior to providing the secret (e.g., secret240A-B) to the application TEE instance210, at least one attestation service (e.g., attestation service299B) may validate the application TEE instance210.
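By way of a non-limiting illustration, the fallback behavior at block310, in which one attestation service provides the secret when the other is unavailable, may be sketched as follows (the function and service names are hypothetical and not part of the disclosure; the secret string is the example value given elsewhere in this description):

```python
# Illustrative sketch (hypothetical names): an application TEE instance
# requests the secret from a list of attestation services, falling back
# to the next service when one is unavailable.

class ServiceUnavailable(Exception):
    """Raised when an attestation service cannot be reached."""

def fetch_secret(services):
    """Return the secret from the first reachable attestation service."""
    for service in services:
        try:
            # Each service validates the TEE instance before returning the secret.
            return service()
        except ServiceUnavailable:
            continue  # e.g., the second cloud service is off-line; try the third
    raise RuntimeError("no attestation service available")

def second_cloud_service():
    raise ServiceUnavailable  # hypothetical outage of the second cloud service

def third_cloud_service():
    return "140FA9Z425ED694R018019B492"  # example secret string

secret = fetch_secret([second_cloud_service, third_cloud_service])
assert secret == "140FA9Z425ED694R018019B492"
```

Because either service alone can satisfy the request, the application TEE instance210obtains the secret even when one alternate cloud is off-line.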
The escrow TEE instance220B or its associated attestation service299B may validate the application TEE instance210by obtaining a cryptographic measurement associated with the application TEE instance210. For example, the escrow TEE instance220B or its associated attestation service299B may obtain a cryptographic measurement associated with the application TEE instance210. The cryptographic measurement may include measurements of files, BIOS, bootloaders, virtual memory, components, images, internal configurations, current software or applications run by the TEE, etc. For example, components of the boot of the application TEE instance210may be cryptographically measured (e.g., each boot component may be measured either individually or collectively by computing the hash values of byte arrays representing the boot components). The measured values of the boot components may then be used to decide if the application TEE instance can be trusted. Additionally, the measurement or hash may represent a fingerprint of the measured files. In another example, the cryptographic measurement may include a measurement value that is a hash value of the files associated with the application TEE instance. In another example, a cryptographic measurement may be taken from one or more of the application TEE images. The measurement may be compared to integrity records or attestation records from a reference measurement. In some cases, the measurement may also indicate the origin of the measured information, which may help attest that the origin of the information is a trusted source. The secret240B or key may involve symmetric encryption or asymmetric encryption. Symmetric encryption is an encryption process that uses a single key for both encryption and decryption. In symmetric encryption, the same key is available to multiple entities (e.g., nodes, escrow TEE instances220A-C, etc.).
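By way of a non-limiting illustration, the collective measurement described above, hashing byte arrays representing boot components into a single fingerprint and comparing it against a reference, may be sketched with a standard hash function (the component names and byte contents are hypothetical):

```python
import hashlib

def measure(components):
    """Collective cryptographic measurement: hash each boot component's
    byte array, then hash the concatenation of those digests."""
    overall = hashlib.sha256()
    for name in sorted(components):  # fixed order keeps the fingerprint reproducible
        overall.update(hashlib.sha256(components[name]).digest())
    return overall.hexdigest()

# Hypothetical boot components of an application TEE instance.
boot = {"bios": b"bios-image-bytes", "bootloader": b"grub-bytes", "kernel": b"vmlinuz-bytes"}

reference = measure(boot)           # trusted reference measurement
assert measure(boot) == reference   # unmodified instance: fingerprint matches
boot["kernel"] = b"tampered-bytes"
assert measure(boot) != reference   # any modified component changes the fingerprint
```

The resulting hex string acts as the fingerprint of the measured files: equality with the trusted reference indicates the instance can be trusted, and any tampered component produces a mismatch.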
Asymmetric encryption uses key pairs or key-secret pairs that may each include a private key and a public key. In an example, the private key is known only to a respective entity (e.g., escrow TEE instance220B), and the public key is known to a group of entities in the network (e.g., each application TEE instance210). An application TEE instance210may use the public key to encrypt data, and the encrypted data can be decrypted using the private key of the escrow TEE instance220B. The encryption and decryption may utilize hashing functions such as the Secure Hash Algorithm (“SHA”) (e.g., SHA-128, SHA-256, etc.) or other hashing functions such as MD5. For example, the secret240A′ or key may appear to be a random string of numbers and letters (e.g., 140FA9Z425ED694R018019B492). Additionally, the encryption and decryption processes may be performed according to the Advanced Encryption Standard (“AES”). AES is based on a design principle known as a substitution-permutation network, and may utilize keys with a key size of 128, 192, or 256 bits. FIG.4illustrates a flowchart of an example method400for secure and reliable launching of TEEs in accordance with an example of the present disclosure. Although the example method400is described with reference to the flowchart illustrated inFIG.4, it will be appreciated that many other methods of performing the acts associated with the method400may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described are optional. The method400may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. In the illustrated example, method400includes receiving an encrypted disk image (block402). For example, a cloud service of a trusted cloud provider202may receive an encrypted disk image280.
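By way of a non-limiting illustration, the asymmetric pattern described above, in which data encrypted with the public key can only be decrypted with the corresponding private key, may be sketched with textbook RSA using deliberately tiny, insecure parameters (a conceptual sketch only; an actual deployment would use a vetted cryptographic library and full-size keys):

```python
# Textbook RSA with toy primes -- illustrative only, NOT secure.
p, q = 61, 53
n = p * q                 # public modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                    # public exponent; the public key is (e, n)
d = pow(e, -1, phi)       # private exponent; the private key (d, n) stays with the escrow TEE

message = 42                          # data encoded as an integer smaller than n
ciphertext = pow(message, e, n)       # application TEE encrypts with the public key
recovered = pow(ciphertext, d, n)     # only the private key holder can decrypt
assert recovered == message
```

The public key (e, n) can be known to every application TEE instance210, while only the escrow TEE instance220B holding d can recover the plaintext, matching the key-pair relationship described above.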
While the encrypted disk image280is available to the trusted cloud provider202, the encrypted disk image280may otherwise be unavailable to other cloud providers to provide additional security. The method400also includes launching an application TEE instance (block404). For example, the cloud service may launch an application TEE instance210. The application TEE instance210may be launched and pending full start-up until the application TEE instance210has been validated. Method400also includes launching a first attestation service from an attestation disk image on a first cloud service of a plurality of clouds (block406). For example, a first cloud service of a plurality of cloud services of at least one alternate cloud provider204A-B may launch a first attestation service instance299A from an attestation disk image. The attestation disk image may include a first portion241of a secret240. Additionally, the method400includes launching a second attestation service from the attestation disk image on a second cloud service of the plurality of clouds (block408). For example, a second cloud service of the plurality of cloud services of the at least one alternate cloud provider204A-B may launch a second attestation service instance299B from the attestation disk image. The attestation disk image may include a second portion243of the secret. Additional attestation service instances (e.g., attestation service instance299C) may also be launched on the same cloud service by the same alternate cloud provider or on a different public cloud by a different cloud provider. Method400also includes providing a first portion of a secret to the application TEE instance (block410). For example, the first cloud service may provide the first portion241of the secret240to the application TEE instance210. The escrow TEE instance220A or its associated attestation service299A may provide the first portion241of the secret240to the application TEE instance210.
Additionally, the method400includes providing a second portion of the secret to the application TEE instance (block412). For example, the second cloud service may provide the second portion243of the secret240to the application TEE instance210. The escrow TEE instance220B or its associated attestation service299B may provide the second portion243of the secret240to the application TEE instance210. In an example, portion241and portion243may be sufficient to satisfy the threshold required to recover the entire secret240. FIGS.5A and5Bdepict a flow diagram illustrating an example method500for securely and reliably launching TEE instances according to a secret sharing scheme according to an example embodiment of the present disclosure. Although the example method500is described with reference to the flow diagram illustrated inFIGS.5A and5B, it will be appreciated that many other methods of performing the acts associated with the method may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described are optional. The method may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software, or a combination of both. For example, an application container415, escrow containers425A-B, and a trusted cloud provider202may communicate to perform example method500. The escrow container425A is launched by an alternate cloud provider204(block502). The escrow container425A may be launched from an attestation disk image and may include an attestation service299. In an example, the alternate cloud provider is a public cloud, such that attestation services299may be hosted outside of a private cloud. Then, the escrow container425A is provisioned with a first portion522of a secret240(block504).
In an example, the attestation service299or the escrow container425A may launch from an attestation disk image that includes the portion522of the secret240. Similarly, another escrow container425B is launched by an alternate cloud provider (block506). The escrow container425B may be launched from an attestation disk image and may include an attestation service299. Then, the escrow container425B is provisioned with a second portion of the secret (block508). Similarly, the attestation service299or the escrow container425B may launch from an attestation disk image that includes the portion532of the secret240. The escrow containers425A-B may be hosted on the same alternate public cloud. Alternatively, each escrow container425A-B may be hosted by different alternate cloud providers on different clouds. A trusted cloud provider202receives a request to start an application container415(block510). A container owner may send a request to start an application due to increased traffic that requires additional application instances to handle the load. In other examples, the request to start the application may be to launch a new version or new release of the application in the cloud. In an example, the escrow containers425A-B may already be launched and running on one or more alternate clouds. Alternatively, the escrow containers425A-B may be launched responsive to receiving a request to start an application, such as an application container415. Additionally, the trusted cloud provider202launches the application container415(block512). Then, the application container415is launched (block514), but has not yet started providing runtime services. Even though the application container415is launched and running, the container415has not yet been validated and therefore is not allowed to proceed with start-up. For example, validation or attestation may be required to ensure that the application containers415are authorized for deployment.
Validation may provide an endorsement that the application container415was launched by a trusted platform and that the application container's code is endorsed by a trusted entity. Allowing an application container415to proceed with start-up without validating or attesting the application container415may allow untrusted applications (e.g., malicious applications) to start-up and cause security breaches. The escrow container425A measures the application container415(block516). For example, the escrow container425A may cryptographically measure the application container415. The cryptographic measurement may include measurements of files, BIOS, bootloaders, virtual memory, components, images, internal configurations, current software or applications run by the TEE, etc. The escrow container425A may perform measurements via an associated attestation service299. Then, the escrow container425A validates the application container415(block518). For example, the escrow container425A may validate the application container415based on the cryptographic measurement. Validation or attestation may occur if the cryptographic measurement matches a reference measurement. For example, if the cryptographic measurement produces a measurement value (e.g., hash value) that matches a trusted reference measurement, the escrow container425A may determine that the application container415(e.g., clone or cloned container) is trustworthy. After validating the application container415, the escrow container425A provides the first portion522of the secret240to the application container415(block520). The application container415receives the first portion522of the secret240(block524). The portion522of the secret240provided to the application container415may be the same portion that the escrow container425A was provisioned with at block504, but that is now stored in a different memory location.
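By way of a non-limiting illustration, the escrow-side logic of blocks516-524, in which the escrow measures the application container, compares the measurement against a trusted reference, and releases its portion of the secret only on a match, may be sketched as follows (the image bytes, reference value, and portion label are hypothetical):

```python
import hashlib

# Hypothetical trusted reference measurement of the endorsed container image.
REFERENCE_MEASUREMENT = hashlib.sha256(b"endorsed-container-image").hexdigest()

def validate_and_release(container_image, secret_portion):
    """Release the escrow's portion of the secret only if the container's
    cryptographic measurement matches the trusted reference."""
    measurement = hashlib.sha256(container_image).hexdigest()  # measure (block 516)
    if measurement != REFERENCE_MEASUREMENT:                   # validate (block 518)
        raise PermissionError("attestation failed: measurement mismatch")
    return secret_portion                                      # provide portion (block 520)

# An endorsed container receives the portion; a tampered one is refused.
assert validate_and_release(b"endorsed-container-image", "portion-522") == "portion-522"
try:
    validate_and_release(b"malicious-container-image", "portion-522")
except PermissionError:
    pass  # the untrusted container is not allowed to proceed with start-up
```

Gating the release on the measurement match is what prevents an untrusted (e.g., malicious) container from obtaining any portion of the secret and completing start-up.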
By providing the portion522of the secret240and attestation services through the escrow container425A, the trusted cloud provider (e.g., private cloud) is protected from security threats associated with accessing the key or secret during attestation. The portion522of the secret240or key may involve symmetric encryption or asymmetric encryption. With asymmetric encryption, the portion522of the secret240or key may use key pairs or key-secret pairs. In an example, the encryption and decryption may utilize hashing functions such as the Secure Hash Algorithm (“SHA”) (e.g., SHA-128, SHA-256) or MD5. Additionally, the encryption and decryption processes may be performed according to the Advanced Encryption Standard (“AES”) and may utilize keys or secrets240with a size of 128, 192, or 256 bits. Similarly, the escrow container425B measures the application container415(block526). Then, the escrow container425B validates the application container415(block528). In an example, validation data may be shared between escrow containers425A-B or attestation services299A-B. For example, one escrow container (e.g., escrow container425A) may perform the measurement and validation. If the cryptographic measurement and validation are confirmed, this result may be shared with other escrow containers in the system. After validating the application container415, the escrow container425B provides the second portion532of the secret to the application container415(block530). The application container415receives the second portion532of the secret (block534). In the illustrated example, two portions522and532are sufficient to meet the threshold for recovering the secret240. For example, the secret240may have been split into three portions where any two of those portions (e.g., portion522and532) are sufficient to recover the entire secret240. Then, the application container415uses the first portion522and the second portion532of the secret to recover the secret (block536).
As discussed above, the secret sharing may operate according to “Shamir's Secret Sharing”, which is an algorithm where a secret240is divided into parts or portions, giving each participant (e.g., escrow containers425A-B) its own unique portion522,532. To reconstruct or recover the original secret240, a minimum number of portions is required, which is referred to as a threshold. After recovering the secret, the application container415proceeds with startup (block538). For example, the application container415may start performing runtime services on the cloud. The recovered secret240may be configured such that saving the recovered secret is prohibited, which prevents the application container415from saving the secret240for later use. In an example, once the recovered secret240is used to finalize start-up of the application container415, the trusted cloud provider202may no longer have access to the secret240. When launching another application container415, another validation or attestation process may be required to again gain access to the secret240or portions522,532of the secret240. As illustrated inFIG.5B, network activity increases (block540). Increased network activity may require additional application containers to handle the increased network load. For example, the trusted cloud provider202may receive a request to start another application container (block542). If network traffic increases again and additional application containers415need to be launched, the escrow containers425A-B may validate additional application containers415or application instances that are launched on the cloud, which advantageously provides high availability to attestation or validation services without reducing security. 
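By way of a non-limiting illustration, Shamir's scheme described above may be sketched over a prime field: the secret is the constant term of a random polynomial of degree k - 1, each escrow holds one point on the polynomial, and any k points recover the secret by Lagrange interpolation at zero (a minimal sketch; the modulus and example values are hypothetical, and a hardened implementation would use a vetted library):

```python
import random

PRIME = 2**127 - 1  # field modulus; all share arithmetic is modulo this prime

def split(secret, n, k):
    """Split `secret` into n shares; any k of them recover it (the threshold)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(240240, n=3, k=2)      # three portions; any two meet the threshold
assert recover(shares[:2]) == 240240  # e.g., the first and second portions suffice
assert recover(shares[1:]) == 240240  # a different pair works equally well
```

This mirrors the three-portion, threshold-of-two arrangement above: losing any single escrow does not prevent recovery, while any single portion alone reveals nothing about the secret.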
For example, since access to the portions of the secret240and the encrypted disk image280used to launch application containers415is split between alternate cloud provider(s) (e.g., a public cloud) and a trusted cloud provider202(e.g., a private cloud), secrecy and reliability are improved with no single point of failure even if one of the attestation services fails or is infiltrated by an attacker. As mentioned above, if there is a security breach with one of the escrow containers (e.g., escrow container425A), those escrow containers (e.g., escrow container425A) may be shut down and the application container415may recover the secret240from another portion of the secret240obtained from a different escrow container or another instance of an escrow container. For example, there may be multiple instances of each escrow container. Specifically, multiple escrow containers425A-B may be launched and running at the same time such that if one of the escrow containers425crashes or goes off-line, another escrow container425may assume the responsibility of validating (e.g., attesting) application containers415. FIG.6is a block diagram of an example application TEE launching system600according to an example embodiment of the present disclosure. The system600includes an application TEE instance610and a first cloud service622of a trusted cloud provider620. The first cloud service is configured to receive an encrypted disk image625and the trusted cloud provider620has access to the encrypted disk image625. The first cloud service622is also configured to launch the application TEE instance610. The system600also includes a second cloud service632A of a first alternate cloud provider630A, which is configured to launch a first attestation service instance640A from an attestation disk image635. The attestation disk image635includes a secret650, and the first alternate cloud provider630A has access to the secret650.
The second cloud service632A is also configured to provide the secret650to the application TEE instance610. Additionally, the system600includes a third cloud service632B of a second alternate cloud provider630B, which is configured to launch a second attestation service instance640B from the attestation disk image635. The attestation disk image635includes the secret650, and the second alternate cloud provider630B has access to the secret650. The third cloud service632B is also configured to provide the secret650to the application TEE instance610. One of the second cloud service632A and the third cloud service632B provides the secret650(the provided secret650or access to the secret650is illustrated as secret650′) to the application TEE instance610when the other of the second cloud service632A and third cloud service632B is unavailable. By splitting access to the encrypted disk image625and the secret650between the trusted cloud provider620and alternate cloud providers630A-B, additional security is provided compared to systems that solely rely on attestation services maintained in a private cloud. Additionally, when a private cloud's attestation services go off-line, validating newly launched application TEE instances610may be completely halted thereby preventing the application TEE instances610from fully starting up and performing runtime services (e.g., serving application requests). For example, relying solely on a private cloud adds another point of failure and may cause the application to perform poorly or even crash if the private cloud slows down or goes off-line. However, relying on several different public clouds improves reliability while maintaining security. FIG.7is a block diagram of an example application TEE launching system700according to an example embodiment of the present disclosure. The system700includes an application TEE instance710and a cloud service722of a trusted cloud provider720.
The cloud service722is configured to receive an encrypted disk image725and launch the application TEE instance710. The system700also includes a first cloud service732A of a plurality of cloud services732A-B from at least one alternate cloud provider730A-B, which is configured to launch a first attestation service instance740A from an attestation disk image735. The attestation disk image735includes a first portion750A of a secret752. The first cloud service732A is also configured to provide the first portion750A of the secret752to the application TEE instance710. Additionally, the system700includes a second cloud service732B of the plurality of cloud services732A-B from the at least one alternate cloud provider730A-B, which is configured to launch a second attestation service instance740B from the attestation disk image735. The attestation disk image735includes a second portion750B of the secret752. The second cloud service732B is also configured to provide the second portion750B of the secret752to the application TEE instance710(the provided portions750A and750B of the secret752or access to the portions750A-B of the secret752is illustrated as portions750A′ and750B′). By using a secret sharing scheme where the secret752is divided into pieces or portions750A-B, secrecy and reliability may be further improved. It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components. These components may be provided as a series of computer instructions on any conventional computer readable medium or machine readable medium, including volatile or non-volatile memory, such as RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be provided as software or firmware, and/or may be implemented in whole or in part in hardware components such as ASICs, FPGAs, DSPs or any other similar devices.
The instructions may be configured to be executed by one or more processors, which, when executing the series of computer instructions, perform or facilitate the performance of all or part of the disclosed methods and procedures. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 1st exemplary aspect of the present disclosure a system includes an application trusted execution environment (“TEE”) instance and a first cloud service of a trusted cloud provider. The first cloud service is configured to receive an encrypted disk image and the trusted cloud provider has access to the encrypted disk image. The first cloud service is also configured to launch the application trusted execution environment (TEE) instance. The system also includes a second cloud service of a first alternate cloud provider, which is configured to launch a first attestation service instance from an attestation disk image. The attestation disk image includes a secret, and the first alternate cloud provider has access to the secret. The second cloud service is also configured to provide the secret to the application TEE instance. Additionally, the system includes a third cloud service of a second alternate cloud provider, which is configured to launch a second attestation service instance from the attestation disk image. The attestation disk image includes the secret, and the second alternate cloud provider has access to the secret. The third cloud service is also configured to provide the secret to the application TEE instance. One of the second cloud service and the third cloud service provides the secret to the application TEE instance when the other of the second cloud service and third cloud service is unavailable.
In a 2nd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), at least one of the second cloud service and the third cloud service is configured to obtain a cryptographic measurement associated with the application TEE instance. In a 3rd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 2nd aspect), the cryptographic measurement identifies characteristics of the application TEE instance including at least one of a type of the TEE instance, a version of the TEE instance, and a description of software components loaded into the TEE instance. In a 4th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 2nd aspect), the cryptographic measurement further includes an integrity code to validate the cryptographic measurement. In a 5th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 2nd aspect), at least one of the second cloud service and the third cloud service is configured to validate the application TEE instance. In a 6th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 5th aspect), the at least one of the second cloud service and the third cloud service is configured to provide the secret responsive to validating the application TEE instance. In a 7th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the application TEE instance is an encrypted virtual machine. 
In an 8th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the trusted cloud provider does not have access to the secret. In a 9th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the trusted cloud provider is restricted from accessing the secret. In a 10th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the first alternate cloud provider does not have access to the encrypted disk image. In an 11th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the first alternate cloud provider is restricted from accessing the encrypted disk image. In a 12th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the second alternate cloud provider does not have access to the encrypted disk image. In a 13th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the second alternate cloud provider is restricted from accessing the encrypted disk image. In a 14th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the second cloud service and the third cloud service are public clouds. In a 15th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 1st aspect), the first cloud service is a private cloud. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. 
In a 16th exemplary aspect of the present disclosure a method includes uploading an encrypted disk image to a first cloud service of a trusted cloud provider. Additionally, the trusted cloud provider has access to the encrypted disk image. The method also includes launching an application trusted execution environment (TEE) instance on the first cloud service of a trusted cloud provider. Additionally, the method includes launching a first attestation service instance from an attestation disk image on a second cloud service of a first alternate cloud provider. The attestation disk image includes a secret, and the first alternate cloud provider has access to the secret. The method also includes launching a second attestation service instance from the attestation disk image on a third cloud service of a second alternate cloud provider. The attestation disk image includes the secret, and the second alternate cloud provider has access to the secret. Additionally, the method includes providing, by one of the second cloud service and the third cloud service, the secret to the application TEE instance when the other of the second cloud service and third cloud service is unavailable. In a 17th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the method further includes obtaining, by at least one of the second cloud service and the third cloud service, a cryptographic measurement associated with the application TEE instance. In an 18th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 17th aspect), the cryptographic measurement identifies characteristics of the application TEE instance including at least one of a type of the TEE instance, a version of the TEE instance, and a description of software components loaded into the TEE instance. 
In a 19th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 17th aspect), the cryptographic measurement further includes an integrity code to validate the cryptographic measurement. In a 20th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 17th aspect), the method further includes validating, by at least one of the second cloud service and the third cloud service, the application TEE instance. In a 21st exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 20th aspect), the at least one of the second cloud service and the third cloud service provides the secret responsive to validating the application TEE instance. In a 22nd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the application TEE instance is an encrypted virtual machine. In a 23rd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the trusted cloud provider does not have access to the secret. In a 24th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the trusted cloud provider is restricted from accessing the secret. In a 25th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the first alternate cloud provider does not have access to the encrypted disk image. 
In a 26th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the first alternate cloud provider is restricted from accessing the encrypted disk image. In a 27th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the second alternate cloud provider does not have access to the encrypted disk image. In a 28th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 16th aspect), the second alternate cloud provider is restricted from accessing the encrypted disk image. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 29th exemplary aspect of the present disclosure a non-transitory machine-readable medium stores code, which when executed by a processor is configured to upload an encrypted disk image to a first cloud service of a trusted cloud provider. The trusted cloud provider has access to the encrypted disk image. The non-transitory machine-readable medium is also configured to launch an application trusted execution environment (TEE) instance on the first cloud service of a trusted cloud provider and launch a first attestation service instance from an attestation disk image on a second cloud service of a first alternate cloud provider. The attestation disk image includes a secret, and the first alternate cloud provider has access to the secret. Additionally, the non-transitory machine-readable medium is configured to launch a second attestation service instance from the attestation disk image on a third cloud service of a second alternate cloud provider. The attestation disk image includes the secret, and the second alternate cloud provider has access to the secret. 
The non-transitory machine-readable medium is also configured to provide the secret to the application TEE instance by one of the second cloud service and the third cloud service when the other of the second cloud service and third cloud service is unavailable. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 30th exemplary aspect of the present disclosure a system includes a means for uploading an encrypted disk image to a first cloud service of a trusted cloud provider. The trusted cloud provider has access to the encrypted disk image. The system also includes a first means for launching an application trusted execution environment (TEE) instance on a first cloud service of a trusted cloud provider and a second means for launching a first attestation service instance from an attestation disk image on a second cloud service of a first alternate cloud provider. The attestation disk image includes a secret, and the first alternate cloud provider has access to the secret. Additionally, the system includes a third means for launching a second attestation service instance from the attestation disk image on a third cloud service of a second alternate cloud provider. The attestation disk image includes the secret, and the second alternate cloud provider has access to the secret. The system also includes a means for providing the secret to the application TEE instance. In a 31st exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 30th aspect), the first alternate cloud provider and the second alternate cloud provider are the same such that the second cloud service and third cloud service are different cloud services hosted by the same alternate cloud provider. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein.
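A minimal sketch of the redundancy recited in the 16th aspect, where either attestation service can release the secret to the application TEE instance when the other is unavailable, and (per the 21st aspect) only after validating the TEE instance. All class and function names here are illustrative assumptions, not from the disclosure:

```python
class AttestationService:
    """Stand-in for an attestation service instance on an alternate cloud provider."""

    def __init__(self, name: str, secret: bytes):
        self.name = name
        self.secret = secret
        self.available = True

    def provide_secret(self, measurement_ok: bool) -> bytes:
        # Release the secret only after the TEE instance's cryptographic
        # measurement has been validated (21st aspect).
        if not self.available:
            raise ConnectionError(f"{self.name} is unavailable")
        if not measurement_ok:
            raise PermissionError("TEE instance failed validation")
        return self.secret


def fetch_secret(services, measurement_ok: bool) -> bytes:
    # Try each attestation service in turn; any one reachable service
    # suffices, so the loss of one alternate provider is tolerated.
    for svc in services:
        try:
            return svc.provide_secret(measurement_ok)
        except ConnectionError:
            continue
    raise RuntimeError("no attestation service reachable")


secret = b"disk-image-key"
first = AttestationService("alt-provider-1", secret)
second = AttestationService("alt-provider-2", secret)

first.available = False  # the first alternate cloud provider goes down
recovered = fetch_secret([first, second], measurement_ok=True)
assert recovered == secret
```

The sketch omits attestation itself (obtaining and checking the cryptographic measurement); it only illustrates the failover contract between the two services.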
In a 32nd exemplary aspect of the present disclosure a system includes an application trusted execution environment (“TEE”) instance and a cloud service of a trusted cloud provider. The cloud service is configured to receive an encrypted disk image, and launch the application trusted execution environment (TEE) instance. The system also includes a first cloud service of a plurality of cloud services from at least one alternate cloud provider, which is configured to launch a first attestation service instance from an attestation disk image. The attestation disk image includes a first portion of a secret. The first cloud service is also configured to provide the first portion of the secret to the application TEE instance. Additionally, the system includes a second cloud service of the plurality of cloud services from the at least one alternate cloud provider, which is configured to launch a second attestation service instance from the attestation disk image. The attestation disk image includes a second portion of the secret. The second cloud service is also configured to provide the second portion of the secret to the application TEE instance. In a 33rd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the first cloud service and the second cloud service are configured to obtain a cryptographic measurement associated with the application TEE instance. In a 34th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 33rd aspect), the cryptographic measurement identifies characteristics of the application TEE instance including at least one of a type of the TEE instance, a version of the TEE instance, and a description of software components loaded into the TEE instance. 
In a 35th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 33rd aspect), the cryptographic measurement further includes an integrity code to validate the cryptographic measurement. In a 36th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 33rd aspect), the first cloud service and the second cloud service are configured to validate the application TEE instance. In a 37th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 36th aspect), the first cloud service is configured to provide the first portion of the secret responsive to validating the application TEE instance. In a 38th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the application TEE instance is an encrypted virtual machine. In a 39th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the cloud service is restricted from accessing the first portion of the secret and the second portion of the secret. In a 40th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the plurality of cloud services are restricted from accessing the encrypted disk image. In a 41st exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the first cloud service is restricted from accessing the second portion of the secret. 
In a 42nd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the plurality of cloud services are public clouds. In a 43rd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the cloud service is a private cloud. In a 44th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 32nd aspect), the application TEE instance is configured to combine the first portion of the secret and the second portion of the secret to recover a complete secret. In a 45th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 44th aspect), the application TEE instance is configured to complete start up by accessing the encrypted disk image using the complete secret. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 46th exemplary aspect of the present disclosure a method includes receiving, by a cloud service of a trusted cloud provider, an encrypted disk image and launching, by the cloud service, an application trusted execution environment (TEE) instance. The method also includes launching, by a first cloud service of a plurality of cloud services of at least one alternate cloud provider, a first attestation service instance from an attestation disk image. The attestation disk image includes a first portion of a secret. Additionally, the method includes launching, by a second cloud service of the plurality of cloud services of the at least one alternate cloud provider, a second attestation service instance from the attestation disk image. The attestation disk image includes a second portion of the secret. 
The method also includes providing, by the first cloud service, the first portion of the secret to the application TEE instance and providing, by the second cloud service, the second portion of the secret to the application TEE instance. In a 47th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 46th aspect), the method further includes obtaining, by at least one of the first cloud service and the second cloud service, a cryptographic measurement associated with the application TEE instance. In a 48th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 46th aspect), the method further includes validating, by at least one of the first cloud service and the second cloud service, the application TEE instance. In a 49th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 48th aspect), the first cloud provider forwards validation information to the second cloud provider. In a 50th exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 48th aspect), the at least one of the first cloud service and the second cloud service provides the secret responsive to validating the application TEE instance. In a 51st exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 46th aspect), the method further includes combining, by the application TEE instance, the first portion of the secret and the second portion of the secret to recover a complete secret. 
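The disclosure does not specify how the secret is divided between the two attestation services; one simple scheme consistent with the 44th and 51st aspects is a two-share XOR split, in which neither portion alone reveals the secret and the application TEE instance recovers the complete secret by combining both:

```python
import secrets


def split_secret(secret: bytes):
    # Two-share split: a random pad and the XOR of pad and secret.
    # Either share alone is indistinguishable from random bytes.
    first = secrets.token_bytes(len(secret))
    second = bytes(a ^ b for a, b in zip(secret, first))
    return first, second


def combine_portions(first: bytes, second: bytes) -> bytes:
    # The application TEE instance combines the first portion and the
    # second portion to recover the complete secret (44th/51st aspects).
    return bytes(a ^ b for a, b in zip(first, second))


complete = b"disk-encryption-key"
p1, p2 = split_secret(complete)
assert combine_portions(p1, p2) == complete
```

The 45th and 52nd aspects then use the recovered complete secret to access the encrypted disk image and finish start-up; that step is not modeled here.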
In a 52nd exemplary aspect of the present disclosure, which may be used in combination with any one or more of the preceding aspects (e.g., the 51st aspect), the method further includes accessing, by the application TEE instance, the encrypted disk image using the complete secret. Additionally, the method includes completing, by the application TEE instance, start up. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 53rd exemplary aspect of the present disclosure a non-transitory machine-readable medium stores code, which when executed by a processor is configured to receive an encrypted disk image, launch an application trusted execution environment (TEE) instance, and launch a first attestation service instance from an attestation disk image. The attestation disk image includes a first portion of a secret. The non-transitory machine-readable medium is also configured to launch a second attestation service instance from the attestation disk image. The attestation disk image includes a second portion of the secret. Additionally, the non-transitory machine-readable medium is also configured to provide the first portion of the secret to the application TEE instance and provide the second portion of the secret to the application TEE instance. Aspects of the subject matter described herein may be useful alone or in combination with one or more other aspects described herein. In a 54th exemplary aspect of the present disclosure a system includes a means for receiving an encrypted disk image, a first means for launching an application trusted execution environment (TEE) instance, and a second means for launching a first attestation service instance from an attestation disk image. The attestation disk image includes a first portion of a secret. Additionally, the system includes a third means for launching a second attestation service instance from the attestation disk image. 
The attestation disk image includes a second portion of the secret. The system also includes a first means for providing the first portion of the secret to the application TEE instance and a second means for providing the second portion of the secret to the application TEE instance. To the extent that any of these aspects are mutually exclusive, it should be understood that such mutual exclusivity shall not limit in any way the combination of such aspects with any other aspect whether or not such aspect is explicitly recited. Any of these aspects may be claimed, without limitation, as a system, method, apparatus, device, medium, etc. It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims. | 82,803 |
11943338

DETAILED DESCRIPTION In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure. Aspects of the disclosure are capable of other embodiments and of being practiced or being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of “including” and “comprising” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof. By way of introduction, aspects discussed herein may relate to methods and techniques for object-level encryption and key rotation. As discussed further herein, this combination of features may allow for increased security and decreased negative impact upon security breach. Before discussing these concepts in greater detail, however, several examples of a computing device that may be used in implementing and/or otherwise providing various aspects of the disclosure will first be discussed with respect toFIG.1. FIG.1illustrates one example of a computing device101that may be used to implement one or more illustrative aspects discussed herein. For example, computing device101may, in some embodiments, implement one or more aspects of the disclosure by reading and/or executing instructions and performing one or more actions based on the instructions.
In some embodiments, computing device101may represent, be incorporated in, and/or include various devices such as a desktop computer, a computer server, a mobile device (e.g., a laptop computer, a tablet computer, a smart phone, any other types of mobile computing devices, and the like), and/or any other type of data processing device. Computing device101may, in some embodiments, operate in a standalone environment. In others, computing device101may operate in a networked environment. As shown inFIG.1, various network nodes101,105,107, and109may be interconnected via a network103, such as the Internet. Other networks may also or alternatively be used, including private intranets, corporate networks, LANs, wireless networks, personal networks (PAN), and the like. Network103is for illustration purposes and may be replaced with fewer or additional computer networks. A local area network (LAN) may have one or more of any known LAN topology and may use one or more of a variety of different protocols, such as Ethernet. Devices101,105,107,109and other devices (not shown) may be connected to one or more of the networks via twisted pair wires, coaxial cable, fiber optics, radio waves or other communication media. As seen inFIG.1, computing device101may include a processor111, RAM113, ROM115, network interface117, input/output interfaces119(e.g., keyboard, mouse, display, printer, etc.), and memory121. Processor111may include one or more computer processing units (CPUs), graphical processing units (GPUs), and/or other processing units such as a processor adapted to perform computations associated with machine learning. I/O119may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. I/O119may be coupled with a display such as display120. Memory121may store software for configuring computing device101into a special purpose computing device in order to perform one or more of the various functions discussed herein. 
Memory121may store operating system software123for controlling overall operation of computing device101, control logic125for instructing computing device101to perform aspects discussed herein, machine learning software127, training set data129, and other applications129. Control logic125may be incorporated in and may be a part of machine learning software127. In many embodiments, computing device101may include two or more of any and/or all of these components (e.g., two or more processors, two or more memories, etc.) and/or other components and/or subsystems not illustrated here. Devices105,107,109may have similar or different architecture as described with respect to computing device101. Those of skill in the art will appreciate that the functionality of computing device101(or device105,107,109) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, user access level, quality of service (QoS), etc. For example, devices101,105,107,109, and others may operate in concert to provide parallel computing features in support of the operation of control logic125and/or software127. Having discussed several examples of computing devices which may be used to implement some aspects as discussed further below, discussion will now turn to a method for object layer encryption. FIG.2depicts an example structure for organizing encryption keys. An encryption key may be a cryptographic key that is used to digitally encrypt a piece of content such as text, image, audio, video, document, code, etc. or another cryptographic key. A cryptographic key may be randomly generated and be secret (e.g., kept hidden to prevent unauthorized access). A cryptographic key may be symmetric, that is the same encryption key may be used to decrypt the encrypted message. Cryptographic keys may be asymmetric, where two related keys form a pair. 
In hierarchical key model200, there may be one or more asset binaries201a-201n(collectively “201”). Asset binary201may be variously referred to as an asset, a binary, a file, a document, an object, an item, etc. Asset binaries201may be digital data (e.g., text, image, audio, video, etc.) that needs protection. Asset binaries201may be organized into one or more clusters202a,202b(collectively “202”). Cluster202(also referred to as an “asset cluster”) may be a grouping or partition in which asset binaries201are organized. For example, asset binaries201a-201gmay belong to cluster202awhile asset binaries201h-201nmay belong to cluster202b.Each of clusters202may represent a partition of data that corresponds to one vendor. The padlock symbol on each of asset binaries201may represent object-level encryption (OLE). Specifically, asset binaries201a-nmay be encrypted with content encryption keys (CEKs)203a-203d(collectively “203”). CEK203may be an encryption key that is used to digitally encrypt a piece of content. Asset binaries201may be encrypted using two or more CEKs. In the example embodiment shown inFIG.2, asset binaries201a-201dmay be encrypted using CEK1.1203a,asset binaries201e-201gmay be encrypted using CEK1.2203b,asset binaries201h-201kmay be encrypted using CEK2.1203c,and asset binaries201l-201nmay be encrypted using CEK2.2203d.This way, in the unfortunate event that one of the CEKs203gets compromised (e.g., leaked, stolen, cracked, etc.), the integrity of those asset binaries201that had been encrypted with the remaining uncompromised CEKs203could still remain intact. Asset binaries201and CEKs203may be part of service platform204. Service platform204may be a web service, cloud storage, app server, social media service, etc. Each of CEKs203may be further encrypted using one of cluster master keys (CMKs)205a,205b(collectively “205”).
For example, CEK1.1203aand CEK1.2203bmay be encrypted using CMK1205a,and CEK2.1203cand CEK2.2203dmay be encrypted using CMK2205b.CMK205may be a cryptographic key that is used to digitally encrypt other cryptographic keys such as CEKs203belonging to the same cluster. CMKs205may be stored in external service206. For example, external service206may be a cloud storage service that is external to service platform204. Alternatively, CMKs205may be stored within service platform204. CMKs205may correspond to one or more vendors207a,207b(collectively “207”). For example, CMK1205amay correspond to vendor1207aand CMK2205bmay correspond to vendor2207b.CMKs205and vendors207may have one-to-one mappings with each other. Each cluster202may be related to a single CMK205and a single vendor207. Each of vendors207may be a service, corporate entity, client, account, customer, etc. Thus, even if a security breach is found for data pertaining to one vendor, the data for other vendor(s) may remain secure and protected. Each vendor207may be provisioned with at least one CMK205that will be stored in external service206. In turn, CEKs203may be generated to encrypt asset binaries201. CEKs203may be stored in a database after being encrypted by CMKs205. FIG.3depicts an example process of content encryption key rotation. In cluster300, asset binaries301a-301g(collectively “301”) may be encrypted with one or more CEKs302a,302b(collectively “302”). Asset binaries301and CEKs302may correspond to asset binaries201and CEKs203ofFIG.2. In this example, initially CEK1302amay be the “active” CEK (also referred to as the “current” CEK) for cluster300. In other words, whenever a new binary is to be protected within cluster300, the binary would be encrypted with CEK1302a.However, there may exist a predetermined encryption limit for how many asset binaries301may be encrypted per each of CEKs302.
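The two-level hierarchy of FIG.2 is a form of envelope encryption: a CEK encrypts the asset binary, and the vendor's CMK wraps (encrypts) the CEK. The following is a minimal sketch under stated assumptions: the toy HMAC-based stream cipher is a placeholder purely for illustration, where a real system would use an authenticated cipher such as AES-GCM, and none of the names come from the disclosure:

```python
import hashlib
import hmac
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode, used only as a demo keystream.
    out, ctr = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:length]


def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # NOT real-world crypto: stand-in cipher so the wrap/unwrap flow runs.
    nonce = secrets.token_bytes(16)
    return nonce + bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))


def toy_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))


# One CMK per vendor/cluster; CEKs are wrapped by the CMK, and asset
# binaries are encrypted by a CEK.
cmk = secrets.token_bytes(32)           # held in the external service
cek = secrets.token_bytes(32)           # content encryption key
wrapped_cek = toy_encrypt(cmk, cek)     # what the key database stores
asset = b"vendor document"
ciphertext = toy_encrypt(cek, asset)

# Decryption path: unwrap the CEK with the CMK, then decrypt the binary.
recovered_cek = toy_decrypt(cmk, wrapped_cek)
assert toy_decrypt(recovered_cek, ciphertext) == asset
```

Because only the wrapped CEK is stored alongside the data, compromising the database alone does not expose plaintext; the CMK in the external service is also needed.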
In the example shown inFIG.3, the CEK count limit303is four but the encryption limit could be any number (e.g., 10,000 per CEK). After CEK count limit303is reached (i.e., the number of items encrypted using the active CEK is equal to CEK count limit303), the active CEK may be retired and a new CEK may be created. In the example shown inFIG.3, after asset binaries301a-301dare encrypted using active CEK1302aand CEK count limit303of four is reached, CEK1302amay be retired from use (i.e., loses its “active” status) and CEK2302bmay become the new active CEK. The retired CEK may be also referred to as an inactive CEK, an old CEK, a deprecated CEK, etc. Inactive CEK1302amay still be used to decrypt asset binaries301a-301d,which had been previously encrypted using (then active) CEK1302a. Subsequent binaries such as asset binaries301e-gand forward may be protected (e.g., encrypted) with CEK2302b,which is the new active CEK of cluster300. In addition or alternatively, an active CEK may be retired based on a temporal threshold such as a timer. For example, an active CEK of cluster300may be retired once CEK count limit303is reached or a predetermined timer (e.g., one day, one month, three months, one year, etc.) expires, whichever event occurs first. Although not shown inFIG.3, CEK2302bmay eventually be retired and replaced by yet another CEK for encrypting new asset binaries in cluster300.
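The count-limit rotation just described can be sketched as follows; the limit of four matches FIG.3, and the class and method names are illustrative assumptions (a timer-based trigger could be added alongside the count check, but is omitted here):

```python
import secrets

CEK_COUNT_LIMIT = 4  # four in FIG.3; the text suggests e.g. 10,000 per CEK


class Cluster:
    def __init__(self):
        self.active_cek = secrets.token_bytes(32)
        self.retired_ceks = []      # still usable for decryption only
        self.encrypted_count = 0

    def key_for_new_binary(self) -> bytes:
        # Once the count limit is reached, retire the active CEK and
        # create a new one; retired CEKs keep decrypting old binaries.
        if self.encrypted_count >= CEK_COUNT_LIMIT:
            self.retired_ceks.append(self.active_cek)
            self.active_cek = secrets.token_bytes(32)
            self.encrypted_count = 0
        self.encrypted_count += 1
        return self.active_cek


cluster = Cluster()
keys = [cluster.key_for_new_binary() for _ in range(5)]
assert keys[0] == keys[3]               # first four binaries share CEK1
assert keys[4] != keys[0]               # fifth binary uses new active CEK2
assert keys[0] in cluster.retired_ceks  # CEK1 retired but retained
```

Keeping retired CEKs bounds the blast radius of a leak: a compromised CEK exposes at most `CEK_COUNT_LIMIT` binaries.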
In this example, asset binaries401a-401dof cluster402may be encrypted using CEKA1403a,and asset binaries401e-401gof cluster402may be encrypted using CEKA2403b.CMKA405amay be the currently active CMK (represented with a solid line inFIG.4A) for vendor407a.CMKB405bmay be an inactive CMK (represented with a broken line inFIG.4A). Both CMKA405aand CMKB405bmay be stored in external service406that is separate from service platform404although, alternatively, CMKA405aand CMKB405bmay be stored within service platform404. InFIG.4B, system400may begin the CMK rotation. As with the CEK rotation, the CMK rotation may take place when a predetermined encryption limit (e.g., a number of items that may be encrypted with a given encryption key) is reached and/or a timer has expired. When such an event occurs, the status of CMKA405amay be toggled from “active” to “inactive” with regard to cluster402. Likewise, the status of CMKB405bmay be toggled from “inactive” to “active.” The status changes of CMKA405aand CMKB405bmay take place simultaneously (e.g., atomically). Any new CEKs created during the master key rotation process, such as CEKA3403cin this example, may be encrypted using newly active CMKB405b.The CMK count limit of seven in this example is reached when asset binary401gis encrypted using CEKA2403b.New CEKA3403cbecomes the new active CEK of cluster402and at the same time new CMKB405bbecomes the new active CMK of cluster402. Any new asset binaries may be encrypted using active CEKA3403cfor the time being, and CEKA3may be encrypted using CMKB405b.Alternatively, unlike what is depicted inFIG.4B, a CMK rotation may not necessarily coincide with a CEK rotation. For example, a new CMK may be activated while keeping an old CEK as the active CEK.
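The full rotation cycle of FIGS.4B-4D (toggle the active CMK, re-wrap every CEK under the new CMK, then retire the old CMK) can be sketched abstractly. Wrapping is modeled here as bookkeeping that records which CMK protects each CEK, rather than real encryption, and all names are illustrative assumptions:

```python
import secrets


def wrap(cmk_id: str, cek: bytes) -> dict:
    # A "wrapped CEK" records which CMK protects it; a real system would
    # encrypt the CEK bytes with that CMK.
    return {"cmk_id": cmk_id, "cek": cek}


def rotate_cmk(wrapped_ceks: list, old_cmk_id: str, new_cmk_id: str) -> None:
    # FIG.4C: every CEK wrapped by the now-inactive CMK is unwrapped and
    # re-wrapped with the newly active CMK, so all CEKs end up under
    # exactly one active CMK.
    for entry in wrapped_ceks:
        if entry["cmk_id"] == old_cmk_id:
            entry["cmk_id"] = new_cmk_id


# Steady state (FIG.4A): CEKA1 and CEKA2 are wrapped by active CMK_A.
ceks = [wrap("CMK_A", secrets.token_bytes(32)) for _ in range(2)]

# FIG.4B: CMK_B becomes active; FIG.4C: re-wrap existing CEKs.
rotate_cmk(ceks, old_cmk_id="CMK_A", new_cmk_id="CMK_B")
assert all(entry["cmk_id"] == "CMK_B" for entry in ceks)
# FIG.4D: CMK_A may now be retired and deleted from the external service.
```

Deferring the old CMK's deletion until every CEK has been re-wrapped is the key ordering constraint: deleting it earlier would strand any CEK still wrapped by it.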
InFIG.4C, for each of CEKs403a-403bassociated with (now) inactive CMKA405a,those CEKs403a-403bmay be decrypted using CMKA405aand then re-encrypted with active CMKB405b.This way, any CEKs403that might have been initially encrypted with any deprecated CMKs may always stay encrypted using one active CMK (e.g., CMKB405b). InFIG.4D, the cycle of CMK rotation may be concluded by retiring old CMKA405aafter all CEKs403have been re-encrypted using currently active CMKB405b.Old CMKA405amay now be safely deleted from external service406. Alternatively, CMKA405amay not be deleted and may instead remain in external service406. Either way, once old CMKA405ais retired, new CMKC405cmay be generated as the new inactive CMK, eventually to replace currently active CMKB405bonce its encryption limit and/or lifetime is reached. FIG.5depicts an example process of performing server-side encryption. In system500, service platform501may receive one or more asset binaries from client502and store those asset binaries in cloud storage503. Various components shown inFIG.5may correspond to respective counterparts as described with reference toFIGS.2-4D, thus their detailed descriptions are omitted here. Cloud storage503may be external object storage such as Amazon S3® provided by Amazon.com, Inc. of Seattle, Wash. Cloud storage503may be located outside service platform501. Service platform501may include one or more modules such as public application programming interface (API)505, metadata module506, and storage module507. These modules may be implemented with software, hardware, or a combination of both. For example, these modules may be one of a process, a service, a microservice, a plug-in, a driver, a library, etc. Two or more modules of service platform501may be combined into one module. One or more of these modules may be located inside and/or outside service platform501. Client502may be an application that is capable of interacting with service platform501to store and access data.
For example, client502may be one of network nodes101,105,107, and109as depicted inFIG.1. Client502may correspond to a vendor. Client502may send asset binary504to service platform501for storage. For example, client502may send asset binary504to service platform501via public API505. In particular, public API505may offer one or more API commands that client502may use for submitting asset binary504. Public API505may use metadata module506to generate and store a checksum of asset binary504. The checksum may be used to ensure the integrity of asset binary504and to check whether its content has been altered. Public API505may use storage module507to store asset binary504at cloud storage503. Cloud storage503may offer server-side encryption (SSE), which protects (e.g., encrypts) documents at rest and prevents attackers from reading their sensitive content in the event they gain unauthorized access. Thus, after cloud storage503receives asset binary504from service platform501, cloud storage503may first encrypt asset binary504via server-side encryption and then store encrypted asset binary504. SSE may protect documents at rest and prevent attackers from reading sensitive content from disk in the event that the attackers gain access to the infrastructure and/or facilities of cloud storage503. However, because the API of cloud storage503is designed to decrypt the binary transparently for the client, malicious attackers with access to an appropriate access role (e.g., Identity and Access Management (IAM) role on Amazon Web Service® (AWS)) with permission to read from cloud storage503will also be able to read the unencrypted content of binaries. FIG.6depicts an example process of performing object-level encryption. In system600, service platform601may receive one or more asset binaries from client602and store those asset binaries in cloud storage603.
Various components shown inFIG.6may correspond to respective counterparts as described with reference toFIGS.2-5, thus their detailed descriptions are omitted here. Service platform601may include one or more modules such as public application programming interface (API)605, metadata module606, storage module607, and keys module608. These modules may be implemented with software, hardware, or a combination of both. For example, these modules may be one of a process, a service, a microservice, a plug-in, a driver, a library, etc. Two or more modules of service platform601may be combined into one module. One or more of these modules may be located inside and/or outside service platform601. Various modules depicted inFIG.6may communicate with each other via a web protocol (e.g., hypertext transfer protocol (HTTP)). Through object-level encryption (OLE), service platform601may encrypt documents before placing them into cloud storage603. Client602may be an application that is capable of interacting with service platform601to store and access data. For example, client602may be one of network nodes101,105,107, and109as depicted inFIG.1or client502as depicted inFIG.5. Client602may correspond to a vendor. Client602may send asset binary604to service platform601for storage. For example, client602may send asset binary604to service platform601via public API605. In particular, public API605may offer one or more API commands that client602may use for submitting asset binary604. Public API605may use metadata module606to generate and store a checksum of asset binary604. Public API605may also use metadata module606to store one or more key identifiers (e.g., keyId). Public API605may interact with authorization module611to authenticate client602(e.g., vendor) and obtain a vendor identifier (e.g., vendorId). The vendor identifier may correspond to a cluster identifier on a one-to-one basis.
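The checksum step handled by the metadata module might look like the following sketch. SHA-256 is an assumed choice, since the disclosure does not name a specific hash function:

```python
import hashlib


def checksum(binary: bytes) -> str:
    # The metadata module stores a digest of the uploaded binary so later
    # reads can verify that its content has not been altered.
    return hashlib.sha256(binary).hexdigest()


stored = checksum(b"asset binary")
assert checksum(b"asset binary") == stored       # content intact
assert checksum(b"tampered binary") != stored    # alteration detected
```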
Public API605may use keys module608to retrieve various keys (e.g., CEKs) and key identifiers (e.g., keyId) based on a vendor identifier (e.g., vendorId). Keys module608may be a service that integrates with local vault agent609. Keys module608may interact with vault agent609to retrieve various keys (e.g., CMKs) and also decrypt CEKs. Vault agent609may authenticate with key storage610. Key storage610may also be referred to as an external security store. Vault agent609and key storage610may be external services relative to service platform601. That is, vault agent609and key storage610may be services provided by a third-party. Vault agent609may be installed locally and integrated with keys module608. Alternatively, vault agent609and key storage610may be part of service platform601. Using vault agent609as a secure proxy, keys module608may retrieve and cache CMK and manage CEKs by storing them securely and rotating them as appropriate. Public API605may use storage module607to perform object-level encryption of asset binary604prior to storing encrypted asset binary604at cloud storage603. Encrypted asset binary604stored at cloud storage603may be doubly encrypted with SSE (on top of OLE). Encrypted binaries stored in cloud storage603may have an additional metadata tag (e.g., metadata tag612) to facilitate migration to the new encryption scheme (e.g., to OLE) in order to continue to serve production traffic while the existing binaries are encrypted with OLE. For example, metadata tag612may indicate whether or not asset binary604stored at cloud storage603has been encrypted with OLE (symbolized by double padlock icons inFIG.6). Storage613may be local storage that is available to service platform601and may be part of or separate from keys module608. Storage613may store one or more CEKs. Storage613may also include cache memory for storing one or more latest (e.g., active) CMKs. 
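A minimal sketch of this envelope-encryption arrangement — the CEK encrypts the object, the CMK wraps the CEK, and only the wrapped CEK is persisted — is shown below. The XOR keystream is a toy stand-in for a real cipher (the embodiment elsewhere names AES-256-CBC), and all names are illustrative assumptions.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a SHA-256-derived keystream.
    # A production system would use an algorithm such as AES-256-CBC instead.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Envelope encryption: the CEK encrypts the binary, the CMK wraps the CEK.
cmk = os.urandom(32)                   # cluster master key (held in key storage)
cek = os.urandom(32)                   # content encryption key
wrapped_cek = keystream_xor(cmk, cek)  # only this wrapped form is persisted

ciphertext = keystream_xor(cek, b"asset binary payload")

# Decrypt path: unwrap the CEK with the CMK, then decrypt the object.
plain_cek = keystream_xor(cmk, wrapped_cek)
assert plain_cek == cek
assert keystream_xor(plain_cek, ciphertext) == b"asset binary payload"
```

Keeping only the wrapped CEK on disk means that compromising the database alone does not expose object plaintext; an attacker also needs the CMK held in the external key storage.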
Optionally, the encryption keys stored in the cache memory may expire after a predetermined time threshold for added security. Storage 613 may also include a database that maps encryption keys to corresponding key identifiers. The database may also store mappings between clusters, CEKs, CMKs, etc. Additionally, the database may store information regarding the active status of encryption keys. FIG. 7 depicts an example flow diagram of a document upload request flow. One or more of the steps shown in FIG. 7 may be combined, split up, omitted, and/or performed in a different order. In particular, the broken lines indicate steps that may be combined as part of the previous step indicated with a solid line. At step 701, client 602 may send an upload request to public API 605. The upload request may include an asset binary to be uploaded to service platform 601 and its associated asset identifier (e.g., assetId). The upload request message (and one or more subsequent messages) may be a web request/response. The upload request may be, for example, a hypertext transfer protocol (HTTP) PUT method (e.g., PUT /vault/:assetId/loading-dock). At step 702, public API 605 may send a request (e.g., PUT /vendors/lookup) to authorization module 611 to look up the vendor associated with client 602. Authorization module 611 may optionally authenticate client 602 and look up the vendor associated with client 602 in a database. At step 703, authorization module 611 may send a response to public API 605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step 702. At step 704, authorization module 611 may send a vendor identifier (e.g., vendorId) associated with client 602 to public API 605.
Next, keys module608may issue keys that are tied to cluster identifiers and may be responsible for keeping track of how many times a given key has been used for a particular cluster identifier as well as creating a new key once the threshold (e.g., encryption limit) has been reached. At step705, public API605may send a key retrieval request (e.g., POST/keys/) to keys module608. At step706, public API605may send a cluster identifier (e.g., clusterId) that corresponds to the vendor identifier (e.g., vendorId) associated with client602. Keys module608may use cluster master keys to encrypt all the content encryption keys belonging to a particular cluster. Keys module608may aggressively cache responses from key storage610. Thus, if keys module608already has in its cache the appropriate CMK and/or CEK needed, keys module608may use those cached key(s) instead of having to retrieve a key from key storage610. Otherwise, at step707, keys module608may send a CMK retrieval request (e.g., GET <cluster master key>) to key storage610. Keys module608may use the cluster identifier to identify the appropriate CMK (e.g., active CMK) that is needed for that cluster. The request may be sent to key storage610via vault agent609. At step708, key storage610may send a response message back to keys module608. The response message may include the current active cluster master key of the cluster. At step709, keys module608may send a response message to public API605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step705. At step710, keys module608may send a key to public API605. The key may be an unencrypted (e.g., plaintext) CEK. Keys module608may first decrypt the active CEK of the cluster with the retrieved CMK and then send the decrypted (e.g., plaintext) active CEK to public API605. At step711, keys module608may send a key identifier (e.g., keyId) to public API605. 
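The cache-first key lookup of steps 705-710 might look like the following sketch; the class name, the TTL value, and the dictionary-backed stand-in for key storage are assumptions made for illustration.

```python
import time

class KeysModule:
    """Illustrative cache-first CMK lookup; names are hypothetical."""

    def __init__(self, key_storage, ttl_seconds=300):
        self.key_storage = key_storage   # stand-in for key storage via the vault agent
        self.cache = {}                  # cluster_id -> (cmk, fetched_at)
        self.ttl = ttl_seconds

    def get_cmk(self, cluster_id):
        entry = self.cache.get(cluster_id)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]              # cache hit: no round trip to key storage
        cmk = self.key_storage[cluster_id]   # cache miss: fetch the active CMK
        self.cache[cluster_id] = (cmk, time.time())
        return cmk

vault = {"cluster-1": b"cmk-bytes"}
keys = KeysModule(vault)
assert keys.get_cmk("cluster-1") == b"cmk-bytes"
vault["cluster-1"] = b"rotated"          # until the TTL expires, the cached key wins
assert keys.get_cmk("cluster-1") == b"cmk-bytes"
```

The TTL expiry mirrors the optional cache expiration noted above: a bounded lifetime limits how long a stale or compromised cached key remains usable.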
The key identifier may correspond to the CEK sent at step710. At step712, public API605may send a storage request (e.g., PUT/loading-dock) to storage module607. At step713, public API605may send a checksum (e.g., Content-MD5: <checksum>) to storage module607. The checksum may be generated by public API605based on the asset binary received at step701. Alternatively, the checksum may have been generated by client602and sent to public API605by client602. The checksum may be, for example, an MD5 checksum value. At step714, public API605may send an encryption key to storage module607. The encryption key may be the plaintext (e.g., decrypted, unencrypted) CEK that was previously received at step710. Storage module607may perform checksum validation on the unencrypted object (e.g., asset binary) and then encrypt the payload (e.g., perform OLE on the asset binary) using the provided key. Separating key management (handled by keys module608) from storage module607may increase the efficiency of the service platform. Keeping encryption and decryption of asset binaries in storage module607may improve performance by limiting the number of network hops that require transmission of the object to be stored. At step715, storage module607may send an upload request (e.g., PUT Object) to cloud storage603. At step716, storage module607may send to cloud storage603a message body (e.g., payload) encrypted with the CEK. The message body may include the encrypted asset binary. At step717, storage module607may send an object metadata to cloud storage603. The object metadata may indicate that the payload has been encrypted (e.g., encrypted: true (object metadata)). At step718, cloud storage603may send a response message to storage module607. For example, the response may include an HTTP 200 OK success response status code in response to the message from step715. Cloud storage603may additionally apply SSE to the received message body. 
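Steps 712-718 — validating the checksum against the plaintext, performing object-level encryption with the supplied CEK, and tagging the stored object's metadata — can be sketched as follows; the XOR "cipher", the dictionary standing in for cloud storage, and the function names are illustrative only.

```python
import hashlib

def store_object(binary: bytes, md5_hex: str, cek: bytes, cloud: dict) -> None:
    # Validate the checksum on the unencrypted object first
    if hashlib.md5(binary).hexdigest() != md5_hex:
        raise ValueError("checksum mismatch")
    # Toy object-level encryption standing in for a real cipher
    ciphertext = bytes(b ^ cek[i % len(cek)] for i, b in enumerate(binary))
    # Upload the body plus object metadata marking the payload as OLE-encrypted
    cloud["object"] = {"body": ciphertext, "metadata": {"encrypted": True}}

cloud = {}
payload = b"asset binary"
store_object(payload, hashlib.md5(payload).hexdigest(), b"\x42" * 16, cloud)
assert cloud["object"]["metadata"]["encrypted"] is True
assert cloud["object"]["body"] != payload
```

Validating the checksum before encryption matters: once the payload is encrypted, the stored bytes no longer match the client-supplied digest, so integrity must be checked on the plaintext.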
The asset binary stored on cloud storage603may thus be doubly encrypted through OLE and SSE. At step719, storage module607may send a response message to public API605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step712. At step720, storage module607may send a checksum and an initialization vector to public API605. In addition to any existing metadata, a reference to the content key identifier used to encrypt the object and the initialization vector may be stored with the asset. For example, at step721, public API605may send a request (e.g., PATCH/:AssetId) to metadata module606. At step722, public API605may send the checksum (e.g., md5: <checksum>) to metadata module606. At step723, public API605may send size information (e.g., size: <binary size>). The size information may indicate the size of the asset binary. At step724, public API605may send the key identifier (e.g., keyId: <keyId>) to metadata module606. In return, at step725, metadata module606may send a response to public API605. For example, the response may include an HTTP response status code (e.g., 200 OK, 201 Created, 202 Accepted, etc.) in response to the message from step721. Finally, at step726, public API605may send a response to client602. For example, the response may include an HTTP 204 No Content success response status code in response to the message from step701. FIG.8depicts an example flow diagram of a document download request flow. One or more of the steps shown inFIG.8may be combined, split up, omitted, and/or performed in different order. In particular, the broken lines indicate steps that may be combined as part of the previous step indicated with a solid line. At step801, client602may send a download request to public API605. The download request may include an asset binary identifier (e.g., assetId) associated with the asset binary to be downloaded. 
The download request message (and one or more subsequent messages) may be a web request/response. The download request may be, for example, a hypertext transfer protocol (HTTP) GET method (e.g., GET/vault/:assetId/raw). Although not shown inFIG.8, public API605may authenticate client602similar to steps702-704ofFIG.7. At step802, public API605may send a metadata retrieval request (e.g., GET/assets/:assetId) to metadata module606. The request may include the asset binary identifier as obtained in step801. In return, at step803, metadata module606may send a response message to public API605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step802. At step804, metadata module606may send the checksum (e.g., md5: <checksum>)associated with the asset binary identifier to public API605. At step805, metadata module606may send the key identifier (e.g., keyId: <keyId>) associated with the asset binary identifier to public API605. The key identifier may be associated with the appropriate CEK (e.g., CEK that was used to encrypt the asset binary). At step806, public API605may send a key retrieval request (e.g., GET/keys/:keyId) to keys module608. The key retrieval request may include the key identifier previously obtained at step805. Keys module608may decrypt the requested CEK (identified by keyId) using the appropriate CMK from key storage610. As before, keys module608may cache the key(s) aggressively. Thus, if keys module608already has in its cache the appropriate CMK and/or CEK needed, keys module608may use those cached key(s) instead of having to retrieve a key from key storage610. Otherwise, at step807, keys module608may send a CMK retrieval request (e.g., GET <cluster master key>) to key storage610. Keys module608may use the key identifier to identify the appropriate CMK (e.g., active CMK) that is needed for the requested binary. The request may be sent to key storage610via vault agent609. 
At step 808, key storage 610 may send a response message back to keys module 608. The response message may include the appropriate CMK corresponding to the requested binary. At step 809, keys module 608 may send a response message to public API 605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step 806. At step 810, keys module 608 may send a key to public API 605. The key may be a plaintext CEK corresponding to the key identifier previously received at step 806. Keys module 608 may first decrypt the CEK with the CMK retrieved from key storage 610 and then send the decrypted (e.g., plaintext) CEK to public API 605. At step 811, public API 605 may send an object retrieval request (e.g., GET /assets/:assetId?md5=:md5) to storage module 607. The request may include the relevant asset binary identifier and the checksum as previously obtained at steps 801 and 804, respectively. At step 812, public API 605 may send the encryption key to storage module 607. The encryption key may be the plaintext (e.g., unencrypted, decrypted) CEK obtained at step 810. At step 813, storage module 607 may send an object retrieval request (e.g., GET Object) to cloud storage 603. At step 814, cloud storage 603 may send a response to storage module 607. For example, the response may include an HTTP response status code (e.g., 200 OK, 201 Created, 202 Accepted, etc.) in response to the message from step 813. At step 815, cloud storage 603 may also send a message body. The message body may include the relevant asset binary that had been previously encrypted with the CEK as identified by the key identifier from step 812. Cloud storage 603 may first remove (e.g., decrypt) SSE from the asset binary before sending it to storage module 607. At step 816, storage module 607 may send a response message to public API 605. For example, the response may include an HTTP 200 OK success response status code in response to the message from step 811.
If the object received from cloud storage 603 has its "encrypted: true" metadata tag set, then storage module 607 may decrypt the object from cloud storage 603 using the CEK received at step 812. At step 817, storage module 607 may send the decrypted message body (e.g., asset binary) to public API 605. At step 818, public API 605 may send a response message to client 602. For example, the response may include an HTTP 200 OK success response status code in response to the message from step 801. At step 819, public API 605 of the service platform may send the decrypted asset binary to client 602. FIG. 9 depicts an example flow diagram of a key retrieval process. In particular, process 900 may be performed by keys module 608 after the key retrieval request (e.g., POST /keys/) as shown in step 705 of FIG. 7 is received. Keys module 608 may also receive a cluster identifier (e.g., step 706 of FIG. 7). One or more of the steps shown in FIG. 9 may be combined, split up, omitted, and/or performed in a different order. Keys module 608 may first resolve the CMK. At step 901, keys module 608 may retrieve from a database a CMK identifier (e.g., masterKeyId) for the latest (e.g., active) cluster master key that is associated with the cluster identified by the cluster identifier. The database may be located either inside or outside keys module 608. At step 902, keys module 608 may determine whether the CMK that corresponds to the CMK identifier is stored in the cache memory. The cache memory may be located either inside or outside keys module 608. If the CMK is stored in the cache memory (902: Yes), then keys module 608 may retrieve the CMK from the cache memory. Otherwise (902: No), keys module 608 may retrieve the appropriate CMK from key storage 610 (e.g., steps 707-708 of FIG. 7). Once retrieved from key storage 610, the CMK may be stored in the cache memory at step 905 for quicker access in the future. At step 906, keys module 608 may return the CMK. Keys module 608 may then retrieve the appropriate CEK.
In particular, at step907, keys module608may retrieve from the database the latest (e.g., active) CEK that is associated with the returned CMK. At step908, keys module608may decrypt the retrieved CEK using the retrieved CMK. Notably, the retrieved CEK may have been previously encrypted with the same CMK. CEK may be encrypted and decrypted by using, for example, the AES-256-CBC algorithm but other encryption algorithms may be used. At step909, keys module608may increment the encryption counter that is associated with the CEK being used. The encryption counter may be used to determine whether the encryption limit for the CEK has been reached thereby requiring a CEK rotation. If a separate encryption counter is maintained for the CMK, the CMK encryption counter may be also increased at this time. At step910, keys module608may return the decrypted CEK. In particular, keys module608may send the plaintext (e.g., unencrypted, decrypted) CEK and its associated key identifier to public API605(e.g., steps709-711ofFIG.7). FIG.10depicts an example data model for encryption keys and other related information. In data model1000, various data objects may be stored in a database, such as storage613as shown inFIG.6. Various data objects as shown inFIG.10may represent relational database schemas. CEK table1001may include various types of information such as one or more key identifiers (e.g., id, key_id, etc.), associated CMK identifier (e.g., cluster_master_key_id), encrypted CEK value (e.g., encrypted content key), one or more timestamps (e.g., created, updated), etc. The bolded fields inFIG.10may indicate primary keys. CEK table1001may be linked to its associated CMK table1002. CMK table1002may include various types of information such as one or more key identifiers (e.g., id, cluster_master_key_id, master_key_id), cluster identifier (e.g., cluster_id), one or more timestamps (e.g., created), etc. The cluster identifier may be associated with a corresponding vendor identifier. 
Optionally, the cluster identifier and the vendor identifier may be interchangeable. The relevant CMK may be associated with multiple key identifiers such as a domain-level identifier, a key storage reference, etc. CEK table 1001 may also be linked to encryption counter table 1003 (e.g., key_counts). Encryption counter table 1003 may hold the counts for each CEK generated. Encryption counter table 1003 may include various types of information including a CEK identifier (e.g., key_id), a slot (e.g., slot), and a counter (e.g., cnt). The CEK identifier and the slot may be used as primary keys to distribute the updates and prevent the encryption counter table 1003 object from becoming a mutex. Increased concurrency may thereby be achieved. FIGS. 11A and 11B depict example database queries for updating and retrieving encryption counts. For example, FIG. 11A shows an example query that may be executed to perform an insert with an ON DUPLICATE KEY UPDATE clause to set the new count for the appropriate slot. The slot may be created if it does not already exist, thus removing the need to pre-generate rows for each slot. If the slot already exists, then the count (e.g., cnt) may be updated (e.g., incremented by 1). FIG. 11B shows an example query that may be executed for generating a report of the number of asset binaries encrypted with active CEKs. FIG. 12 illustrates an example flow chart for a method and algorithm for object-level encryption and key rotation in accordance with one or more aspects described herein. Method 1200 may be implemented by a suitable computing system, as described further herein. For example, method 1200 may be implemented by any suitable computing environment by a computing device and/or combination of computing devices, such as computing devices 101, 105, 107, and 109 of FIG. 1; service platform 204 of FIG. 2; service platform 404 of FIGS. 4A-4D; service platform 501 of FIG. 5; and service platform 601 of FIG. 6.
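The slotted-counter upsert of FIG. 11A can be sketched in SQLite, whose ON CONFLICT ... DO UPDATE clause plays the role of MySQL's ON DUPLICATE KEY UPDATE. The table mirrors encryption counter table 1003 (key_id, slot, cnt); the composite primary key is what spreads concurrent updates across slots.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE key_counts (
        key_id TEXT    NOT NULL,
        slot   INTEGER NOT NULL,
        cnt    INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (key_id, slot)  -- key_id + slot distribute the row updates
    )
""")

def increment(conn, key_id, slot):
    # Create the slot row on first use; otherwise bump its count.
    conn.execute(
        """
        INSERT INTO key_counts (key_id, slot, cnt) VALUES (?, ?, 1)
        ON CONFLICT(key_id, slot) DO UPDATE SET cnt = cnt + 1
        """,
        (key_id, slot),
    )

def total(conn, key_id):
    # Sum across slots to obtain the key's overall encryption count.
    row = conn.execute(
        "SELECT COALESCE(SUM(cnt), 0) FROM key_counts WHERE key_id = ?",
        (key_id,),
    ).fetchone()
    return row[0]

for slot in (0, 1, 0):   # two writers hit slot 0, one hits slot 1
    increment(conn, "cek-1", slot)
assert total(conn, "cek-1") == 3
```

Because each writer can pick a different slot, two concurrent increments usually touch different rows, avoiding the single hot row that a one-row-per-key counter would create.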
Method1200may be implemented in suitable program instructions, such as in machine learning software127, and may operate on a suitable training set, such as training set data129. Various steps shown inFIG.12may be performed in any order including those that add, omit, combine, and/or split up one or more steps. At step1201, the system may store a plurality of asset clusters. In particular, the system may store, using a data store, a plurality of asset clusters. Each asset cluster may comprise a plurality of data items and may be associated with an active encryption key. Each asset cluster in the plurality of asset clusters may be associated with a third-party service. At step1202, the system may, for each asset cluster in the plurality of asset clusters (1202: “more asset cluster? Yes”), perform one or more of steps1203-1211. Once all the asset clusters are processed (1202: No), the process may end. At step1203, the system may generate a master encryption key. Each of the master encryption keys may be stored using an external security store and inaccessible to any third-party services. At step1204, the system may generate a first content encryption key. In particular, the system may generate the first content encryption key based on the master encryption key for the asset cluster. The first content encryption key may be generated by providing a request indicating an asset cluster in the plurality of asset clusters to the external security store and receiving a response from the external security store. At step1205, the system may set the first content encryption key associated with the asset cluster as the active encryption key for the asset cluster. At step1206, the system may encrypt a first subset of data items using the active encryption key. In particular, the system may encrypt a first subset of the plurality of data items of the asset cluster using the active encryption key. The active encryption key may be the first encryption key. 
The system may maintain a count of a number (i.e., a quantity) of data items encrypted using the active encryption key. At step 1207, the system may determine whether the number of data items encrypted using the active encryption key exceeds a threshold value. Alternatively, at step 1207, the system may determine whether the number of data items encrypted using the active encryption key has reached, rather than exceeded, the threshold value. If it is determined that the number of data items encrypted using the active encryption key does not exceed the threshold value (1207: No), then the process may return to step 1206 and encrypt any additional data items using the active encryption key. Alternatively, the process may return to step 1202 to process the next asset cluster. If it is determined that the number of data items encrypted using the active encryption key exceeds the threshold value (1207: Yes), then at step 1208, the system may set the first content encryption key as an inactive encryption key for the asset cluster. At step 1209, the system may generate a second content encryption key. In particular, the system may generate the second content encryption key for the asset cluster and based on the master encryption key for the asset cluster. The second content encryption key may be generated by providing a request indicating an asset cluster in the plurality of asset clusters to the external security store and receiving a response from the external security store. At step 1210, the system may set the second content encryption key as the (new) active encryption key for the asset cluster. At step 1211, the system may encrypt a second subset of data items using the active encryption key. In particular, for a second subset of the plurality of data items in the asset cluster, the system may encrypt the second subset of the plurality of data items using the active encryption key (e.g., the second content encryption key).
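Steps 1206-1211 — counting encryptions under the active key and rotating to a fresh CEK once the limit is reached — might be sketched as follows. The class, the toy XOR "cipher", and the in-memory key generation are illustrative stand-ins for the real keys module and external security store.

```python
import os

class ClusterKeyState:
    """Illustrative per-cluster CEK rotation driven by an encryption limit."""

    def __init__(self, limit: int):
        self.limit = limit
        self.active_key = os.urandom(32)   # first content encryption key
        self.inactive_keys = []            # retired keys, kept for decryption
        self.count = 0                     # encryptions under the active key

    def encrypt_item(self, item: bytes) -> bytes:
        if self.count >= self.limit:       # threshold reached: rotate the CEK
            self.inactive_keys.append(self.active_key)
            self.active_key = os.urandom(32)
            self.count = 0
        self.count += 1
        # Toy cipher standing in for real encryption under the active key
        return bytes(b ^ self.active_key[i % 32] for i, b in enumerate(item))

state = ClusterKeyState(limit=2)
first_key = state.active_key
for _ in range(3):                         # the third item triggers rotation
    state.encrypt_item(b"data item")
assert state.active_key != first_key
assert state.inactive_keys == [first_key]
assert state.count == 1
```

Retired keys are kept (not deleted) in this sketch because items encrypted under them must remain decryptable until they are re-encrypted under the new active key.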
The system may maintain a second count of the number of data items encrypted using the active encryption key (e.g., the second content encryption key). The system may decrypt, based on generating the second content encryption key, each of the first subset of data items for the asset cluster using the inactive encryption key. The system may encrypt each of the first subset of data items for the asset cluster using the active encryption key. The system may reset, based on decrypting each of the first subset of data items, the inactive encryption key to a default value. The system may then delete the first content encryption key. The system may determine that a threshold time period has elapsed. The system may set the second content encryption key as an inactive encryption key for the asset cluster. The system may generate, for the asset cluster and based on the master encryption key for the asset cluster, a third content encryption key. The system may set the third content encryption key as the active encryption key for the asset cluster. The system may obtain, from a computing device, a request for a data item. The system may determine an asset cluster in the plurality of asset clusters storing the data item. The system may determine a content encryption key that was used to encrypt the data item. The content encryption key may be the first content encryption key for the determined asset cluster and/or the second content encryption key for the determined asset cluster. The system may decrypt the data item using the determined content encryption key. The system may transmit the decrypted data item to the computing device. The system may generate a second master encryption key for each asset cluster.
For each asset cluster in the plurality of asset clusters, the system may decrypt the active encryption key for the asset cluster using the master encryption key for the asset cluster; encrypt the active encryption key for the asset cluster using the second master encryption key for the asset cluster; decrypt the inactive encryption key for the asset cluster using the master encryption key for the asset cluster; and encrypt the inactive encryption key for the asset cluster using the second master encryption key for the asset cluster. The system may delete the master encryption key for each asset cluster. One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) HTML or XML. The computer-executable instructions may be stored on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied, in whole or in part, in firmware or hardware equivalents such as integrated circuits, field-programmable gate arrays (FPGA), and the like.
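The master-key rotation described above — unwrapping each content key with the old CMK, re-wrapping it with the second CMK, then deleting the old CMK — can be sketched with a toy XOR wrap; a real system would use an authenticated key-wrapping algorithm, and all names here are illustrative.

```python
import os

def toy_wrap(master_key: bytes, cek: bytes) -> bytes:
    # Toy stand-in for key wrapping: XOR with the master key.
    # XOR is its own inverse, so the same call also unwraps.
    return bytes(a ^ b for a, b in zip(cek, master_key))

old_cmk = os.urandom(32)
cek = os.urandom(32)
wrapped = toy_wrap(old_cmk, cek)        # persisted form of the content key

# Rotate the master key: unwrap with the old CMK, re-wrap with the new one.
# Afterwards the old CMK can be deleted without losing access to the CEK.
new_cmk = os.urandom(32)
rewrapped = toy_wrap(new_cmk, toy_wrap(old_cmk, wrapped))
assert toy_wrap(new_cmk, rewrapped) == cek
```

Note that only the wrapped keys are touched: the encrypted data items themselves do not need to be re-encrypted when the master key rotates, which is the main operational benefit of the two-level key hierarchy.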
Particular data structures may be used to more effectively implement one or more aspects discussed herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. Various aspects discussed herein may be embodied as a method, a computing device, a system, and/or a computer program product. Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. In particular, any of the various processes described above may be performed in alternative sequences and/or in parallel (on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application. It is therefore to be understood that the present invention may be practiced otherwise than specifically described without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents. | 47,091 |
11943339 | DESCRIPTION OF EMBODIMENT

Hereinafter, a suitable embodiment of the present disclosure is explained in detail with reference to the attached figures. Note that constituent elements having substantially identical functional configurations in the present specification and the figures are given identical reference signs and that overlapping explanations thereof are thus omitted. Note that explanations will be given in the following order.

1. Overview of peer-to-peer databases
2. Configuration example of information processing system
3. Configuration example of each apparatus and data to be generated
4. Configuration example of data
5. Copyrights
6. Process flow example of each apparatus
7. Example
8. Hardware configuration example of each apparatus

1. Overview of Peer-to-Peer Databases

Before one embodiment according to the present disclosure is explained, the overview of peer-to-peer databases is first explained. In an information processing system according to the present disclosure, distributed peer-to-peer databases that are distributed across a peer-to-peer network are used. Note that the peer-to-peer network is called a peer-to-peer distributed file system in some cases. In the present document, the peer-to-peer network is called a "P2P network," and the peer-to-peer databases are called "P2P databases." Examples of the P2P databases include blockchains that are distributed across the P2P network. Accordingly, first, the overview of a blockchain system is explained as an example. As depicted in FIG. 2, a blockchain is data including a string of multiple blocks that are continuous with each other as if they form a chain. Each block can store one piece or two or more pieces of target data as transaction data (a transaction). Examples of blockchains include ones that are used for exchanges of data of a cryptocurrency such as Bitcoin.
A blockchain used for exchanges of data of a cryptocurrency includes hashes of previous blocks and values called nonces, for example. A hash of the previous block is information used for deciding whether or not a current block is the “correct block” which is continuous with the previous block correctly. The nonces are information used for preventing identity frauds in authentication performed by using hashes, and falsification is prevented by using the nonces. Examples of the nonces include a character string, a digit string, data representing a combination of a character string and a digit string, for example. In addition, in a blockchain, an electronic signature generated by using an encryption key is given to each piece of transaction data, and identity frauds are thus prevented. In addition, each piece of transaction data is disclosed and is shared in the whole P2P network. Note that each piece of transaction data may be encrypted by using an encryption key. FIG.3is a figure depicting a manner in which target data is registered by a user A in a blockchain system. The user A gives the target data to be registered in a blockchain an electronic signature generated by using a private key of the user A. Then, the user A broadcasts, on a P2P network, transaction data including the target data to which the electronic signature is given. Thus, it is proven that the owner of the target data is the user A. FIG.4is a figure depicting a manner in which the ownership of the target data is transferred from the user A to a user B in the blockchain system. The user A gives transaction data an electronic signature generated by using the private key of the user A and includes a public key of the user B in the transaction data. Thus, it is represented that the ownership of the target data has been transferred from the user A to the user B. 
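The hash-of-previous-block and nonce mechanism described here can be illustrated with a small proof-of-work-style sketch; this is a toy model, not the Bitcoin implementation, and the difficulty value is arbitrary.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form, including prev_hash and nonce
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(prev_hash: str, transactions: list, difficulty: int = 2) -> dict:
    # Search for a nonce whose resulting hash starts with `difficulty` zeros
    block = {"prev_hash": prev_hash, "transactions": transactions, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = make_block("0" * 64, ["genesis"])
second = make_block(block_hash(genesis), ["A pays B"])

# A later block stores the hash of the previous block, so altering the
# earlier block breaks the link and the falsification becomes detectable.
assert second["prev_hash"] == block_hash(genesis)
genesis["transactions"] = ["forged"]
assert second["prev_hash"] != block_hash(genesis)
```

Because each block commits to the hash of its predecessor, tampering with any earlier block invalidates every later link, which is what makes the nonce-and-hash chain resistant to falsification.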
In addition, when conducting the transaction of the target data, the user B may acquire a public key of the user A from the user A and acquire the target data that is given the electronic signature or is encrypted. In addition, in the blockchain system, for example, by using the side chain technology, it is also possible to include other target data that is different from a cryptocurrency, in a blockchain of Bitcoin or the like (a blockchain used for exchanges of data of an existing cryptocurrency). 2. Configuration Example of Information Processing System In the description above, the overview of P2P databases has been explained. Next, a configuration example of an information processing system according to one embodiment of the present disclosure is explained with reference toFIG.5.FIG.5is a block diagram depicting a configuration example of the information processing system according to the present embodiment. As depicted inFIG.5, the information processing system according to the present embodiment includes a manufacturer apparatus100, a generating apparatus200, a processing apparatus300, an examining apparatus400, and a node apparatus500. Then, the node apparatus500is connected to a P2P network600. The manufacturer apparatus100is an apparatus of a manufacturer of the generating apparatus200and is an information processing apparatus that manages a key of the generating apparatus200. In the present embodiment, explanations are given by using, as an example, a case that the generating apparatus200is a camera (i.e. processing target data is image data), and in such a case, the manufacturer apparatus100is an information processing apparatus of a manufacturer of the generating apparatus200(camera), for example. The generating apparatus200is an information processing apparatus that generates original data to be used as processing source data. 
The processing apparatus300is an information processing apparatus that generates processed data by using the original data generated by the generating apparatus200. Here, while only one processing apparatus300is displayed in the example depicted inFIG.5, the number of the processing apparatuses300is not limited particularly, and the processing apparatus300may generate processed data by using processing source data generated by another processing apparatus300. The processing apparatus300can be, for example, a computer used for editing image data. The examining apparatus400is an information processing apparatus that examines the authenticity of each piece of data by operating in cooperation with the node apparatus500, and traces a relation between processing source data and processed data. The node apparatus500is an information processing apparatus that retains a P2P database, and performs registration of data in the P2P database, acquisition of data from the P2P database, and the like. The P2P network600is a network over which P2P databases are distributed. Note that the configuration described above explained with reference toFIG.5is merely an example, and the configuration of the information processing system according to the present embodiment is not limited to the example. The configuration of the information processing system according to the present embodiment can be modified flexibly according to specifications or how it is operated. In addition, while the processing target data is image data in the case explained above as an example in the present embodiment, the processing target data is not necessarily limited to this. For example, the processing target data may be music data, certain sensor data, or the like. 3. Configuration Example of Each Apparatus and Data to be Generated In the description above, the configuration example of the information processing system according to the one embodiment of the present disclosure has been explained. 
Next, a configuration example of each apparatus according to the present embodiment and data generated by each apparatus (or data stored by each apparatus) are explained with reference toFIG.6toFIG.9. Note that keys to be used by each apparatus according to the present embodiment are keys of public key cryptography such as elliptic curve cryptography, and the keys include a private key and a public key as a pair. FIG.6is a block diagram depicting configuration examples of the manufacturer apparatus100and the generating apparatus200and a configuration example of data generated by the generating apparatus200. Note thatFIG.6depicts examples of the main configurations of each apparatus and each piece of data according to the present embodiment, and partial configurations are omitted (the same applies also toFIG.7toFIG.9). As depicted inFIG.6, the manufacturer apparatus100includes a storage section110, and the storage section110stores keys of the generating apparatus200and keys of the manufacturer. The keys of the manufacturer include a private key S and a certificate of a public key S generated by giving a signature to the public key S by using the private key S. In addition, the keys of the generating apparatus200include a private key a and a certificate of a public key a generated by giving a signature to the public key a by using the private key S. In addition, as depicted inFIG.6, the generating apparatus200includes a data generating section210, a certificate generating section220, a key generating section230, a trace data processing section240, and a storage section250. The data generating section210is configured to generate, for example, image data (denoted as “data0” inFIG.6) as data. The key generating section230is configured to generate keys (a public key and a private key) of public key cryptography for the data0generated by the data generating section210. The storage section250stores the keys of the generating apparatus200explained in the description above. 
The certificate generating section220generates a certificate of the data0by using the private key a of the generating apparatus200to give an electronic signature to a public key0of the data0(or an ID that is generated by using the public key0of the data0and can identify the public key0of the data0) and an authentication code of the data0(that is data generated from the data0and is information used for authentication of the data0, the details of which are to be described below, or may alternatively be the data0itself). The trace data processing section240generates a file0by adding trace data and a private key0of the data0to the data0. The trace data is used for tracing that the data0is generated by the generating apparatus200, and includes the certificate generated by the certificate generating section220. FIG.7is a block diagram depicting a configuration example of the processing apparatus300and a configuration example of data generated by the processing apparatus300.FIG.7depicts a case that the processing apparatus300generates data (“data1” inFIG.7) on the basis of the data0generated by the generating apparatus200. As depicted inFIG.7, the processing apparatus300includes a data processing section310, a certificate generating section320, a key generating section330, and a trace data processing section340. The data processing section310is configured to generate second data (processed data; the data1in the example depicted inFIG.7) on the basis of at least one or more pieces of first data (processing source data; the data0in the example depicted inFIG.7). For example, the data processing section310generates the data1by performing image processing on the data0, which is image data. The key generating section330is configured to generate keys (a public key and a private key) of public key cryptography for the data1generated by the data processing section310. 
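The file0 generation described above (a key pair for the data0, a certificate signed with the device key a over the public key0 and the authentication code, and a bundle of data, trace data, and private key0) might be sketched as follows. The key pair and signature here are toy stand-ins built from the standard library's hashlib/hmac, not the elliptic curve cryptography the text specifies, and every concrete name is hypothetical.

```python
import hashlib
import hmac
import os

def generate_keypair():
    # Toy stand-in for an asymmetric key pair: a random secret plus a hash
    # of it. The actual apparatus uses elliptic curve key pairs.
    private = os.urandom(32)
    return private, hashlib.sha256(private).digest()

def sign(private_key: bytes, message: bytes) -> bytes:
    # HMAC stands in for the electronic signature of the text.
    return hmac.new(private_key, message, hashlib.sha256).digest()

# Device key "a" (its public half is certified by the manufacturer's key S).
private_a, public_a = generate_keypair()

data0 = b"raw image bytes from the camera"     # hypothetical sensor output
private0, public0 = generate_keypair()          # per-data key pair for data0
auth_code0 = hashlib.sha256(data0).digest()     # authentication code of data0

# Certificate of data0: the device key "a" signs (public key0, auth code).
certificate0 = sign(private_a, public0 + auth_code0)

# file0 bundles the data with its trace data and the data's private key.
file0 = {
    "data": data0,
    "trace": {"certificate": certificate0, "public_key": public0},
    "private_key": private0,
}
```

The private key0 travels inside file0 precisely so that a downstream processing apparatus can later certify child data with it.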
The certificate generating section 320 generates a certificate of the data1 by using the private key0 of the data0 to give an electronic signature to a public key1 of the data1 (or an ID that is generated by using the public key1 of the data1 and can identify the public key1 of the data1) and an authentication code of the data1 (that is data generated from the data1 and is information used for authentication of the data1, the details of which are to be described below, or may alternatively be the data1 itself). The trace data processing section 340 generates a file1 by adding, to the data1 (second data), a private key1 of the data1 (second data) and trace data used for tracing a relation between the data0 (first data) and the data1 (second data). Note that the trace data includes the certificate generated by the certificate generating section 320 and the trace data added to the data0 (first data). When the file1 is generated, the trace data processing section 340 discards the private key0 after the certificate of the data1 is generated. Note that hereinafter the first data used for the generation of the second data is called "parent data," and the second data is called "child data" in some cases. In addition, each piece of data that is continuous with and precedes a certain piece of data in a chain-like relation of pieces of data is called "ancestor data" in some cases. As explained with reference to FIG.6 and FIG.7, because the trace data including the certificate of the second data and the certificate added to the first data is added to the second data, a relation between the pieces of data can be traced appropriately. More specifically, by using the public key of the first data which is included in the certificate of the first data, the certificate of the second data generated by using the private key which forms a pair with the public key can be examined.
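The chain-building step above — certifying the data1 with the private key0 of its parent, nesting the parent's trace data, and then discarding the private key0 — might look like this in outline. As before, an HMAC is a stand-in for the asymmetric signature the text specifies, and all names are illustrative.

```python
import hashlib
import hmac
import os

def keypair():
    # Toy stand-in for an asymmetric pair; see the caveat in the lead-in.
    priv = os.urandom(32)
    return priv, hashlib.sha256(priv).digest()

def sign(priv: bytes, msg: bytes) -> bytes:
    return hmac.new(priv, msg, hashlib.sha256).digest()

# Parent side: file0 contents as handed over by the generating apparatus.
private0, public0 = keypair()
trace0 = {"public_key": public0, "certificate": b"<signed with device key a>"}

# Processing step: data1 derived from data0 (e.g. by image processing).
data1 = b"edited image bytes"
private1, public1 = keypair()
auth_code1 = hashlib.sha256(data1).digest()

# The parent's private key0 certifies the child's public key1 + auth code.
certificate1 = sign(private0, public1 + auth_code1)

# trace1 nests the parent's trace data; file1 carries private key1 only.
trace1 = {"public_key": public1, "certificate": certificate1,
          "parent_trace": trace0}
file1 = {"data": data1, "trace": trace1, "private_key": private1}

del private0  # private key0 is discarded once certificate1 exists
```

Because the parent's private key never enters file1, a holder of file1 can extend the chain downward but cannot forge new siblings of data1.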
Thus, by tracing backward pieces of data that are in a chain-like relation, examining each certificate included in each piece of trace data, and examining a certificate of original data by using the public key a of the generating apparatus 200, each piece of data can be certified as data processed on the basis of the original data generated by the generating apparatus 200. In addition, while the private key of the second data is included in the trace data to be added to the second data, the private key of the first data is not included in the trace data to be added to the second data. Thus, leakages of the private key of the ancestor data can be prevented appropriately. In addition, because the certificate of the data0 generated by using the private key a of the generating apparatus 200 is included in the trace data as depicted in FIG.6, it can be examined that the original data is generated by the generating apparatus 200. In addition, it is possible to prevent a malicious third party from registering data with trace data of another party and from maliciously claiming that the data is created by the malicious third party. For example, it is possible to realize a copyright managing system in which only the owner of a camera can register original photographs and processed photographs. FIG.8 is a block diagram depicting a configuration example of the examining apparatus 400. As depicted in FIG.8, the examining apparatus 400 includes an examining section 410 and a data similarity deciding section 420. The examining section 410 is configured to examine the authenticity of data generated by the generating apparatus 200 and the processing apparatus 300 and trace relations between the data, by operating in cooperation with the node apparatus 500.
More specifically, the examining section410is configured to provide trace data to the node apparatus500that examines the authenticity of second data by using the trace data and information registered in a database (a P2P database in the present embodiment). Here, the trace data is used for tracing a relation between at least one or more pieces of first data and second data generated on the basis of the first data and is added to the second data. The node apparatus500examines the authenticity of the second data and traces a relation between the pieces of data, by using the trace data. Thereafter, the examining section410receives an examination result from the node apparatus500. By deciding a similarity between the first data and the second data, the data similarity deciding section420can decide that the second data is generated on the basis of the first data, and so on. For example, the data similarity deciding section420computes a similarity between multiple pieces of image data by image data analysis or the like (not limited to this), and in a case that the similarity is equal to or higher than a predetermined threshold, the data similarity deciding section420can decide that those pieces of data have a relation of first data and second data (i.e. a parent-child relation). On the other hand, in a case that the computed similarity is lower than the predetermined threshold, the data similarity deciding section420can decide that those pieces of data do not have a relation of first data and second data. Thus, the examining apparatus400can realize a service of certifying the authenticity or copyright of an original photograph, a service of examining whether or not a target photograph is a stolen photograph by deciding an image similarity with a registered original photograph, or other services. These services are mentioned below. 
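A threshold decision of the kind the data similarity deciding section 420 performs can be sketched with a crude histogram-overlap similarity. The text does not specify the analysis method or the threshold value; both are assumptions here, and the pixel lists are toy stand-ins for image data.

```python
from collections import Counter

def histogram_similarity(img_a, img_b, bins=16):
    """Crude similarity in [0, 1]: overlap of normalized intensity
    histograms. img_a/img_b are flat lists of 8-bit grayscale pixels."""
    def hist(img):
        counts = Counter(min(p * bins // 256, bins - 1) for p in img)
        total = len(img)
        return [counts.get(i, 0) / total for i in range(bins)]
    return sum(min(a, b) for a, b in zip(hist(img_a), hist(img_b)))

THRESHOLD = 0.8  # the "predetermined threshold"; the value is hypothetical

def is_parent_child(img_a, img_b):
    """Decide a parent-child relation when similarity clears the threshold."""
    return histogram_similarity(img_a, img_b) >= THRESHOLD

original  = [10, 10, 200, 200, 120, 120]
edited    = [12, 12, 198, 198, 122, 118]   # lightly processed copy
unrelated = [0, 255, 0, 255, 0, 255]
```

A production system would use a far more robust perceptual similarity than a histogram, but the decision structure — score, then compare against a predetermined threshold — is the same.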
FIG.9 is a block diagram depicting a configuration example of the node apparatus 500 and a configuration example of data registered in the P2P database by the node apparatus 500. As depicted in FIG.9, the node apparatus 500 includes a P2P database 510. In addition, as depicted in FIG.9, the P2P database 510 includes a P2P database program 511. Further, the P2P database program 511 includes an examining section 511a. The P2P database 510 is a database retained by the node apparatus 500 and is a node of a blockchain, for example. More important data whose authenticity is required to be proven is registered in the P2P database 510. Various types of data registered in the P2P database 510 may be given electronic signatures generated by using encryption keys or may be encrypted by using encryption keys. Note that details of the data registered in the P2P database 510 are not limited particularly. The P2P database program 511 is a predetermined program that is provided in the P2P database 510 and executed on the P2P database 510. By using the P2P database program 511, various processes including transactions of a cryptocurrency such as Bitcoin are realized while the consistency of the processes is maintained according to a predetermined rule, for example. In addition, by providing the P2P database program 511 in the P2P database 510, the risk of unauthorized modifications of the program is reduced. The P2P database program 511 may be a chain code in Hyperledger or may be a smart contract. The examining section 511a is configured to realize part of the function of the P2P database program 511 and is configured to examine the authenticity of second data (or data generated from the second data) and trace a relation between pieces of data, by using trace data and information registered in a database (the P2P database in the present embodiment).
Here, the trace data is used for tracing a relation between at least one or more pieces of first data and second data generated on the basis of the first data and is added to the second data. More specifically, by using a public key of the first data included in trace data added to the first data (or an ID that is generated by using the public key of the first data and can identify the public key of the first data), the examining section511aexamines a certificate of the second data that is included in trace data and has an electronic signature given by using a private key of the first data. The examining section511arepetitively performs examinations of certificates so as to trace backward pieces of data that have a chain-like relation. In addition, the examining section511aalso functions as a registering section (not depicted) that registers the second data or the ID that can identify the second data in the P2P database510after a successful examination. In addition, as depicted inFIG.9, the certificate of the public key S of the manufacturer generated by giving the electronic signature to the public key S by using the private key S of the manufacturer is registered in the P2P database510as manufacturer information. In addition, the certificate of the public key a generated by giving the electronic signature to the public key a of the generating apparatus200(or an identifier of the public key a of the generating apparatus200) by using the private key S of the manufacturer is also registered in the P2P database510as user information (UserRecord). Note that, after its signature is examined by using the public key S of the manufacturer and it is examined that the generating apparatus200is owned by the user, the certificate of the public key a is registered in the P2P database510. 
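The backward walk the examining section 511a performs — verifying each child certificate with the parent's key, then checking the foremost certificate against the generating apparatus's key registered in the database — might be sketched as below. A plain dict stands in for the P2P database, and a MAC stands in for the asymmetric signature, so a single key plays both the signing and verifying roles; every field name is hypothetical.

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    # MAC stand-in; a real deployment verifies asymmetric signatures.
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

def examine_chain(trace: dict, registered_device_keys: dict) -> bool:
    """Walk trace data backward; every child certificate must verify with
    its parent's key, and the foremost certificate must verify with a
    device key registered in the (dict-based) database."""
    while trace.get("parent_trace") is not None:
        parent = trace["parent_trace"]
        if not verify(parent["key"], trace["key"] + trace["auth"], trace["cert"]):
            return False
        trace = parent
    device_key = registered_device_keys.get(trace.get("device_id"))
    return device_key is not None and verify(
        device_key, trace["key"] + trace["auth"], trace["cert"])

# Hypothetical two-link chain: device key a -> data0 -> data1.
key_a = b"device key a"
db = {"camera-1": key_a}  # public key a registered as user information
k0, a0 = b"key0", b"auth0"
trace0 = {"key": k0, "auth": a0, "cert": sign(key_a, k0 + a0),
          "device_id": "camera-1", "parent_trace": None}
k1, a1 = b"key1", b"auth1"
trace1 = {"key": k1, "auth": a1, "cert": sign(k0, k1 + a1),
          "parent_trace": trace0}
```

Note how only the device key needs to be pre-registered: every other link is validated purely from the nested trace data, which is why per-data registration transactions are unnecessary.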
In addition, the certificate of the public key a may not be registered in the P2P database510, and the public key a of the generating apparatus200(or an identifier of the public key a of the generating apparatus200) may be registered in the P2P database510after a signature examination is performed by using the public key S of the manufacturer. An ID and copyright information of the foremost data (the “data0” in the example depicted inFIG.9; also called “original data” in the present document) in a chain-like relation of data are registered in the P2P database510as data information (DataRecord), for example. Note that an ID and copyright information of each piece of data generated after the data0can also be registered as data information at a predetermined timing (details are mentioned below). As depicted inFIG.9, by registering the certificate of the public key a of the generating apparatus200in the P2P database510, a relation between pieces of data can be traced appropriately. More specifically, the examining section511aof the node apparatus500can examine each certificate included in trace data so as to trace backward pieces of data having a chain-like relation, as described above, and an examination of a certificate included in trace data of the foremost data in the chain-like relation can be performed appropriately by using the public key a of the generating apparatus200of the foremost data that is registered in the P2P database510(or an ID that is generated by using the public key a of the generating apparatus200and can identify the public key a of the generating apparatus200). In addition, by this technique, if at least the public key a of the generating apparatus200is registered in the P2P database510, certificates for tracing relations between pieces of data can be examined by using trace data. 
Accordingly, a transaction does not have to be generated to the P2P database 510 by registering each piece of data separately, and the operating costs of the P2P database 510 and services can be kept low. In addition, because relations between pieces of data can be examined by using certificates of trace data, each piece of data can be registered in the P2P database no matter what the order of the data is, and data management becomes easier. 4. Configuration Example of Data In the description above, the configuration example of each apparatus and the data generated by each apparatus (or data stored by each apparatus) according to the present embodiment has been explained. Next, a configuration example of data generated by each apparatus (or data stored by each apparatus) is explained. 4.1. Configuration Example of Trace Data, Etc First, a configuration example of trace data or the like is explained. When adding trace data, the trace data processing section 240 of the generating apparatus 200 and the trace data processing section 340 of the processing apparatus 300 add data information (Data Info) and a private key (Private Key) of the data along with the trace data (Trace Info) as depicted in FIG.10 (note that the data information is omitted in FIG.6 and FIG.7). The data information (Data Info) and the private key (Private Key) of the data related to first data are not added to second data, and the trace data (Trace Info) is added to the second data as a history. Note that, because the data lengths (Length) depicted in FIG.10 depend on hash values, private keys, encryption methods of public keys, and security levels, the data lengths are merely examples (the same applies also to FIG.11 to FIG.13). In addition, because, in a case of elliptic curve cryptography, public keys can be restored from the messages to which signatures are given and from the signatures, hash values may be recorded instead of the public keys, and signature examinations may be performed by using the hash values.
By adopting this examination method, the size of the whole trace data can be reduced. Hereinafter, data information (Data Info), trace data (Trace Info), and a private key (Private Key) of the data are collectively called “Origin Trace Data” as information used for tracing original data. FIG.11is a figure depicting a configuration example of the data information (Data Info) inFIG.10. As depicted inFIG.11, the data information (Data Info) includes a data type (Data Type), the number of hashes (Number of Hashes), and hash values (Hash1to HashN). Information representing a data format such as a JPEG file is stored as the data type (Data Type). Hash values of data areas (Data Area1to Data Area N) are stored as the hash values (Hash1to HashN). For example, distinctions may be made between multiple areas of data itself according to a predetermined method, or distinctions may be made between, for example, data itself (e.g. JPEG compressed data) and metadata added to the data (e.g. EXIF metadata), as mutually different areas. A hash value of second data is obtained by linking the hash values of the data information (Data Info) and further determining a hash value of the resultant concatenation, and is used for generation of an ID mentioned below. FIG.12is a figure depicting a configuration example of the trace data (Trace Info) inFIG.10. As depicted inFIG.12, the trace data (Trace Info) includes a data length (Length of TraceInfo); an ID; digest information (DigestInfo); a public key of the data (PublicKey); a hash value of a message which is a concatenation of IDs of at least one or more pieces of parent data (ParentsHash; in a case that the data is original data, ParentsHash is a hash value of the public key of the generating apparatus200); the number of pieces of parent data (Number of parents); an electronic signature given by using a private key of parent data (Signature1); and trace data added to the parent data (TraceInfo1). 
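The Data Info hashing just described — a hash per data area, then a hash of the linked area hashes — is directly expressible with the standard library. SHA-256 is an assumption (the text does not name the hash function), and the byte strings are placeholders.

```python
import hashlib

def area_hashes(areas):
    """Hash1..HashN over the individual data areas
    (e.g. the JPEG compressed data and the EXIF metadata)."""
    return [hashlib.sha256(area).digest() for area in areas]

def data_hash(areas):
    """Link the area hashes, then hash the concatenation, as the text
    describes for deriving the hash value of the data."""
    return hashlib.sha256(b"".join(area_hashes(areas))).digest()

jpeg_body = b"<compressed pixel data>"
exif_meta = b"<EXIF metadata>"
h = data_hash([jpeg_body, exif_meta])

# Editing any single area (here the metadata) changes the overall hash.
assert data_hash([jpeg_body, b"<edited EXIF>"]) != h
```

Splitting the data into areas lets a verifier pinpoint which area (payload or metadata) was altered while still deriving one hash value for ID generation.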
Note that, in a case that there are multiple pieces of parent data, the number of included electronic signatures and the number of pieces of trace data added to the parent data (Signature2 to Signature N, TraceInfo2 to TraceInfo N) correspond to the number of the parents. Signature messages, each of which is a concatenation of at least the certification target ID, PublicKeyID, and ParentsHash, are signed by using the private keys of the parent data. In a case that DigestInfo is included in the trace data, DigestHash mentioned below is concatenated as a signature message. "ParentsHash" and "ID" are explained here. The trace data processing section 340 of the processing apparatus 300 performs a calculation according to a cryptographic hash function by using IDs that can identify at least one or more pieces of first data, to generate "ParentsHash" (e.g. a hash value of an ID that can identify the first data). In addition, the trace data processing section 340 adds, to trace data, "ID" that is generated by performing a calculation according to a cryptographic hash function (e.g. MAC (Message Authentication Code), etc.) by using at least a hash value of second data, a public key of the second data, and ParentsHash (the ID that can identify all the pieces of the first data), and that can identify the second data. More specifically, the trace data processing section 340 calculates HMAC (Hash-based Message Authentication Code) by using, as a message, a concatenation of the public key of the second data and ParentsHash, and, as a key, the hash value of the second data, and sets a result thereof as "ID." Note that "ID" is a concept that functions as the authentication code (information to be used for data authentication) depicted in FIG.6 and FIG.7. By generating "ParentsHash" by using the IDs of the first data, a relation between the first data and the second data (i.e. a parent-child relation) can be represented more appropriately.
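Under the reading that ParentsHash is a hash over the concatenated parent IDs, and that the ID is an HMAC whose key is the hash value of the second data and whose message is the public key concatenated with ParentsHash, the computation can be sketched as follows (SHA-256 as the hash function is an assumption).

```python
import hashlib
import hmac

def parents_hash(parent_ids):
    """ParentsHash: a hash over the concatenated IDs of the parent data."""
    return hashlib.sha256(b"".join(parent_ids)).digest()

def make_id(data: bytes, public_key: bytes, parents: bytes) -> bytes:
    """ID = HMAC(key = hash(data), message = public_key || ParentsHash)."""
    key = hashlib.sha256(data).digest()
    return hmac.new(key, public_key + parents, hashlib.sha256).digest()

parent_id = hashlib.sha256(b"data0").digest()   # hypothetical ID of data0
ph = parents_hash([parent_id])

id_a = make_id(b"data1 bytes", b"public key 1a", ph)
id_b = make_id(b"data1 bytes", b"public key 1b", ph)

# Two children of the same parent with different public keys get
# different IDs, so siblings are distinguished, as the text explains.
assert id_a != id_b
```

Using the data hash as the HMAC key keeps it secret from anyone who holds only the ID, which is the privacy property the text notes: an ID alone does not let a third party trace the ancestor data.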
Note that it is possible to examine that the parent-child relation is correct, by calculating a hash value of IDs of all the pieces of parent data whose signatures have been examined and comparing the calculated hash value with ParentsHash. In addition, because “ID” is dependent on the “hash value of the second data,” the “public key of the second data,” and “ParentsHash (i.e. the parent data),” for example, even in a case that there are multiple pieces of second data having identical parent data, “IDs” of the second data become mutually different, so that distinctions can be made between the multiple pieces of second data appropriately. That is, even in a case that a malicious third party has generated forgery data whose “ID” is identical to that of certain data, sensing of the forgery data can be realized more easily. By making “ID” dependent on a public key of second data, it is possible to sense forgery of a chain-like relation of data in a case that a malicious third party has given a signature to trace data of a child by using a key which is different from a private key that forms a pair with the public key of the second data. In addition, by generating “ID” by HMAC and keeping a hash value of data secret, it is possible to appropriately prevent ancestor data of the data corresponding to “ID” from being traced, so that this is useful in terms of privacy. “ParentsHash” and “ID” generated by the generating apparatus200are explained. The trace data processing section240of the generating apparatus200generates “ParentsHash” (e.g. a hash value of the public key of the generating apparatus200) by performing a calculation according to a cryptographic hash function by using the public key of the generating apparatus200. In addition, the trace data processing section240adds, to the trace data, “ID” that is generated by performing a calculation according to a cryptographic hash function (e.g. MAC (Message Authentication Code), etc.) 
by using at least original data, a public key of the original data, and the public key of the generating apparatus200, and that can identify the original data. More specifically, the trace data processing section240calculates HMAC (Hash-based Message Authentication Code) by using a message which is a concatenation of the public key of the original data and ParentsHash (data generated by using the public key of the generating apparatus200) and the hash value of the original data as keys, and sets a result thereof as “ID.” FIG.13is a figure depicting a configuration example of digest information (DigestInfo) in the trace data (Trace Info). The trace data processing section240of the generating apparatus200associates digest information representing details of original data, with trace data, and the trace data processing section340of the processing apparatus300associates digest information representing details of second data, with trace data. As depicted inFIG.13, the digest information (Digest Info) includes a data length (Digest length); a digest type representing a data format such as EXIF (Digest Type); digest data which is a copy of APP1including EXIF (a thumbnail, etc.) or the like (Digest Data); and a hash value of the digest type (Digest Type) and the digest data (Digest Data) (Digest Hash). Note that Digest Hash can be said to be an ID that can identify the digest information, and Digest Hash is included in a certificate as a certification target. That is, the certificate generating section220of the generating apparatus200and the certificate generating section320of the processing apparatus300include, in a certificate and as a certification target, Digest Hash (an ID that can identify the digest information) generated by performing a calculation according to a cryptographic hash function by using the digest information. 
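The DigestHash construction of FIG.13 reduces to hashing the digest type together with the digest data, with the result included among the certified fields. SHA-256 and the flat layout below are simplifying assumptions.

```python
import hashlib

def digest_hash(digest_type: bytes, digest_data: bytes) -> bytes:
    """DigestHash over Digest Type and Digest Data (cf. FIG.13)."""
    return hashlib.sha256(digest_type + digest_data).digest()

digest_info = {
    "type": b"EXIF",
    "data": b"<APP1 segment: EXIF fields and thumbnail>",
}
digest_info["hash"] = digest_hash(digest_info["type"], digest_info["data"])

# Because DigestHash is a certification target, swapping the thumbnail
# changes DigestHash, and the certificate examination then fails.
forged = digest_hash(digest_info["type"], b"<forged thumbnail>")
assert forged != digest_info["hash"]
```

This is what lets a verifier trust the thumbnail or EXIF carried in trace data without fetching the original: the digest is bound to the certificate chain.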
By associating the digest information with the trace data, a comparison between processing source data (first data) and processed data (second data) can be realized more easily. For example, by associating, as the digest information, EXIF or the like of the processing source data with the trace data, the examining apparatus400(not necessarily limited to this) can acquire the time of image-capturing, the location of image-capturing, a thumbnail, or the like of the processing source data that served as original data of the processed data, only by acquiring the processed data. In addition, even in a case that a malicious third party intentionally forges the processed data, the examining apparatus400(not necessarily limited to this) can decide whether or not there has been forgery, by deciding a similarity between a thumbnail of the processing source data and the processed data. For example, in a case that the similarity between the thumbnail of the processing source data and the processed data is lower than a predetermined threshold, it may be decided that the data has been forged. For example, in a decision related to the copyright of a photograph, it is possible to decide that the copyright of a processed photograph is owned by the owner of a camera, by comparing the processed photograph and digest information (a thumbnail, a three-dimensional distance image, etc.) of an original photograph included in trace data of the processed photograph, without acquiring the original photograph. 4.2. Configuration Example of Data Registered in P2P Database Next, a configuration example of data registered in the P2P database510is explained. FIG.14is a figure depicting a configuration example of the data information (DataRecord) explained with reference toFIG.9. 
As depicted inFIG.14, the data information (DataRecord) includes an identifier of data (dataID); an identifier of the owner of the data (ownerID); information of the copyright and license related to the data (rightsLicense); an identifier of a user who is a licensee according to a license agreement (licenseeID); and an array of IDs of child data of the data (childrenIDList). As explained in detail in the following paragraphs, in a case that an examination of the authenticity of data and an examination of tracing between pieces of data are successful, the data information (DataRecord) including an identifier (dataID) of the data whose authenticity has been examined is registered in the P2P database510. An ID included in each piece of trace data in a series of a data group having a chain-like relation may also be registered, but this is not necessarily the sole example. In addition, in order to examine the copyright set for each piece of data, IDs of all ancestors are registered in the P2P database510in a case that a series of ancestor data is not registered in the P2P database510. In this case, a tree (hierarchical structure) from original data down to all descendants can be constructed by registering also IDs of child data in an array of IDs of child data (childrenIDList), and the tree can be used for examining the copyright set for each piece of data. In addition, in a case that data information (DataRecord) including IDs has already been registered in the P2P database510in an examination of trace data, trace data of ancestor data before the data does not have to be examined. By registering, in the P2P database510, not data itself but only the information depicted inFIG.14, the amount of data registered in the P2P database510can be reduced. FIG.15is a figure depicting a configuration example of the user information (UserRecord) explained with reference toFIG.9. 
As depicted inFIG.15, the user information (UserRecord) includes an identifier of a user (userID); the name of the user (name); attributes of the user (description; for example, an address, an email address, etc.); an identifier of the generating apparatus200owned by the user (e.g. a camera, etc.) (originatorIDList); and an address list of clients used in the P2P database510(addressList). The identifier of the generating apparatus200owned by the user is registered in the array of the identifier of the generating apparatus200(originatorIDList), and the public key of the generating apparatus200can be identified by referring to an associative array (OriginatorCertKeyList) of the public key of the generating apparatus200mentioned below. FIG.16is a figure depicting a configuration example of a certificate of the public key of the generating apparatus200and the manufacturer. As depicted inFIG.16, the certificate includes an identifier of a certification target (subject; an identifier of the camera which is the generating apparatus200), the public key to be certified (publicKey; for example, the public key of the camera which is the generating apparatus200, or the public key of the manufacturer); an identifier of an issuer of the certificate (issuer; for example, an identifier of the manufacturer of the camera which is the generating apparatus200); and an electronic signature of the certificate (signature; a signature given to the above-mentioned configuration by using the private key S of the manufacturer). FIG.17is a figure depicting a configuration example of an associative array registered in the P2P database510. 
As depicted inFIG.17, the P2P database510includes an associative array of a DataRecord value for an identifier of data (dataRecord); an associative array of a UserRecord value for the identifier of the user (userRecord); an associative array of a User identifier for an address of a client used in the P2P database510(userID); an associative array of a value of a public key of an identifier (subject inFIG.16) of the generating apparatus200(e.g. a camera) (originatorKeyList); and an associative array of a Certificate value of the manufacturer (e.g. a camera manufacturer) of the generating apparatus200for a manufacturer identifier (the issuer inFIG.16) (makerCertList). The user registers, in the P2P database510, the certificate of the generating apparatus200acquired from the generating apparatus200. In the associative array (makerCertList) of the Certificate value of the registered manufacturer, the P2P database510uses the certificate to examine the certificate of the generating apparatus200, registers the public key of the generating apparatus200and a certification target identifier in the associative array (originatorKeyList), and registers the certification target identifier included in Subject, as an identifier of the camera, in the associative array (originatorKeyList) of relevant user information. Note that the associative array (makerCertList) of the Certificate value of the manufacturer of the P2P database510can be rewritten only with an address of a client having a special right. 5. Copyrights In the description above, the configuration example of the data generated by each apparatus (or data stored by each apparatus) has been explained. Next, a copyright which is metadata related to data according to the present embodiment is explained. The information processing system according to the present embodiment can also manage the copyright or license of each piece of data. 
More specifically, the information processing system according to the present embodiment manages copyright information (rightsLicense inFIG.14) of each piece of data by registering the copyright information in the P2P database510as data information (DataRecord). FIG.18is a figure depicting a list related to copyrights according to the present embodiment. The list includes defined values, values to be used in a program, and copyright contents. Note that what are depicted inFIG.18are merely an example, and copyrights used in the present embodiment are not limited to them. It is assumed in the present embodiment that, because there is a parent-child relation between pieces of data, copyrights or licenses that are stricter (more restricted) than those for parent data cannot be set for child or descendant data. In other words, copyrights set for child or descendant data are as strict as or are not stricter than those for parent data. Because of the existence of such a copyright setting rule, it is possible to correctly determine whether or not the copyright of data that a user is attempting to register is appropriate in relation to a descendant tree from original data registered in the P2P database510, on the basis of copyright information registered in the P2P database510. Explaining more specifically, because a chain-like relation between pieces of data can be recognized on the basis of trace data added to each piece of data, if copyright information related to original data that is positioned at the uppermost position is registered in the P2P database510, it is possible to correctly determine whether or not the copyright of child data is appropriate, on the basis of the copyright setting rule. 
In a case that data information (DataRecord) of ancestor data including the original data is registered in the P2P database510and that NoLicenseSpecified (the value0inFIG.18) representing that a copyright is not set is set as copyright information (rightsLicense inFIG.14), it may be determined that AllRightReserved (the value6inFIG.18), which is a default copyright, is set. Note that the copyright setting rule regarding each piece of data is not necessarily limited to this. 6. Process Flow Example of Each Apparatus In the description above, the copyrights according to the present embodiment have been explained. Next, a process flow example of each apparatus is explained. 6.1. Flow of Data Processing Performed by Processing Apparatus300 First, a flow of data processing performed by the processing apparatus300is explained with reference toFIG.19.FIG.19is a flowchart depicting an example of the flow of the data processing performed by the processing apparatus300. In Step S1000, the trace data processing section340reads and analyzes a processing source file. In Step S1004, the data processing section310generates second data by processing data (first data) included in the processing source file. In a case that Origin Trace Data has been added to the processing source file (i.e. in a case that the processing source file is one generated by an apparatus according to the present embodiment; Step S1008/Yes), Origin Trace Data of the second data is generated in Step S1012. A flow of a process to generate Origin Trace Data of the second data is explained in detail with reference toFIG.20andFIG.21in the following paragraphs. In Step S1016, the trace data processing section340adds Origin Trace Data of the second data to the second data and generates a file. Then, the series of processing ends. Note that, in a case that Origin Trace Data has not been added to the processing source file in Step S1008(i.e.
in a case that the processing source file is not one generated by an apparatus according to the present embodiment; Step S1008/No), the processes in Step S1012and Step S1016are omitted. Next, a flow of a process to generate Origin Trace Data of second data is explained with reference toFIG.20andFIG.21.FIG.20andFIG.21are flowcharts depicting an example of the flow of the process to generate Origin Trace Data of the second data that is performed by the processing apparatus300. Note that, inFIG.20andFIG.21and a procedure described below, in a case that there are multiple processing source files, the files are denoted as “multiple pieces of first data,” and in a case that there is one processing source file, the file is denoted as “first data.” In Step S1100, the trace data processing section340of the processing apparatus300calculates a Hash value of each area of the second data, and creates and temporarily records data information (DataInfo) from the calculated Hash value. In Step S1104, the trace data processing section340further calculates a Hash value from a message which is a concatenation of the Hash values of the data information (DataInfo), and temporarily records a result thereof as DataHash. In Step S1108, the trace data processing section340calculates a Hash value of a message which is a concatenation of IDs of trace data (Trace Info) of multiple pieces of first data, and temporarily records a result thereof as ParentsHash. In Step S1112, the trace data processing section340creates an array including private keys of the trace data (Trace Info) of the multiple pieces of first data and temporarily records the array as ParentPrivateKeyList. In Step S1116, the key generating section330creates a private key and a public key of public key cryptography as a pair and temporarily records the private key and the public key as PrivateKey and PublicKey.
In Step S1120, the trace data processing section340calculates HMAC of a message which is a concatenation of PublicKey and ParentsHash, by using DataHash as a key, and temporarily records a result thereof as an ID. In Step S1124, the certificate generating section320sets the foremost element in ParentPrivateKeyList as a private key. In Step S1128, the certificate generating section320gives a signature to a message which is a concatenation of ID, PublicKey, ParentsHash, and the like, by using the private key, and temporarily stores a result thereof as Signature. In a case that there is the next element in ParentPrivateKeyList (i.e. in a case that there is unprocessed parent data; Step S1132/Yes), the certificate generating section320sets the next element in ParentPrivateKeyList as a private key in Step S1136and repeats the process to give a signature in Step S1128. In a case that there is not the next element in ParentPrivateKeyList (i.e. in a case that there is no unprocessed parent data; Step S1132/No), the trace data processing section340generates, in Step S1140, Trace Info including ID of the second data, PublicKey of the second data, ParentsHash of the second data, multiple Signatures, and TraceInfo of the multiple pieces of first data. In Step S1144, the trace data processing section340generates Origin Trace Data including DataInfo, Trace Info, and PrivateKey. Then, the series of processing ends. 6.2. Flow of Process to Register UserRecord that is Performed by P2P Database Program511 Next, a flow of a process to register UserRecord is explained with reference toFIG.22.FIG.22is a flowchart depicting an example of the flow of the process to register UserRecord that is performed by the P2P database program511. In Step S1200, the node apparatus500receives a registration request for UserRecord from an external apparatus and identifies userID by referring to an associative array (userID) on the basis of sender_address included in the request.
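The generation flow of Steps S1100 through S1144 above might be sketched roughly as follows. This is a toy sketch: the key pair and the per-parent signatures are HMAC-based stand-ins for real public-key cryptography, and all field names are assumptions.

```python
import hashlib
import hmac
import os

def sha256(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

def generate_origin_trace_data(second_data_areas, parent_trace_infos, parent_private_keys):
    # S1100/S1104: hash each area of the second data, then hash the concatenation
    data_info = [sha256(area) for area in second_data_areas]
    data_hash = sha256(b"".join(data_info))
    # S1108: ParentsHash from the concatenated IDs of the parents' TraceInfo
    parents_hash = sha256(b"".join(t["id"] for t in parent_trace_infos))
    # S1116: create a key pair (stand-in: random private key, derived "public" key)
    private_key = os.urandom(32)
    public_key = sha256(b"pub" + private_key)
    # S1120: ID = HMAC keyed with DataHash over PublicKey || ParentsHash
    data_id = hmac.new(data_hash, public_key + parents_hash, hashlib.sha256).digest()
    # S1124-S1136: one signature per parent private key (stand-in for real signing)
    signatures = [
        hmac.new(k, data_id + public_key + parents_hash, hashlib.sha256).digest()
        for k in parent_private_keys
    ]
    # S1140/S1144: assemble TraceInfo and Origin Trace Data
    trace_info = {
        "id": data_id,
        "publicKey": public_key,
        "parentsHash": parents_hash,
        "signatures": signatures,
        "parents": parent_trace_infos,
    }
    return {"dataInfo": data_info, "traceInfo": trace_info, "privateKey": private_key}
```

Because each TraceInfo embeds the TraceInfo of its parents, the chain-like relation between pieces of data stays recoverable from the data alone.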
In a case that userID has already been registered in the associative array (userID) of the P2P database510(Step S1204/Yes), the P2P database program511performs predetermined error handling in Step S1208. For example, the P2P database program511notifies a sender apparatus of the registration request that userID has already been registered. In a case that userID has not been registered in the associative array (userID) of the P2P database510yet (Step S1204/No), the P2P database program511acquires an address of a target user used at the P2P database510and sets userID in the associative array (userID) in Step S1212. Note that the address also functions as a Wallet that manages the assets of the target user. Because a user may own multiple addresses, setting a unique userID for those addresses makes it possible to manage the user by userID (i.e. services can be provided by using the addresses of multiple Wallets, without depending on any particular Wallet). In Step S1216, the P2P database program511searches for manufacturer information in the P2P database510. In Step S1220, by referring to the associative array (makerCertList) of the P2P database510as the manufacturer information and using the public key S of the manufacturer, the examining section511aexamines a “certificate of the public key a of the generating apparatus200” which is included in the registration request and to which an electronic signature is given by using the private key S of the manufacturer.
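Steps S1200 through S1212 boil down to a duplicate-registration check keyed by the sender's address. A sketch, assuming the database is a plain dict of associative arrays and reducing the error handling to a return value:

```python
def register_user_record(db: dict, sender_address: str, new_user_id: str) -> str:
    """Sketch of Steps S1200-S1212 (structure of `db` is an assumption)."""
    user_index = db["userID"]           # associative array: client address -> userID
    if sender_address in user_index:    # S1204/Yes: already registered
        return "error: userID already registered"   # S1208
    user_index[sender_address] = new_user_id        # S1212
    return "registered"
```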
In a case that the examination of the “certificate of the public key a of the generating apparatus200” is successful (Step S1224/Yes), in Step S1228, the P2P database program511registers, in the associative array (originatorKeyList) of the P2P database510, the public key included in the “certificate of the public key a of the generating apparatus200” and registers an identifier of the key in the associative array (originatorKeyList) of the user information (UserRecord). Then, the series of processing ends. In a case that the examination of the “certificate of the public key a of the generating apparatus200” is unsuccessful (Step S1224/No), the P2P database program511performs predetermined error handling in Step S1208. Then, the series of processing ends. 6.3. Flow of Process to Register DataRecord that is Performed by P2P Database Program511 Next, a flow of a process to register DataRecord is explained with reference toFIG.23.FIG.23is a flowchart depicting an example of the flow of the process to register DataRecord that is performed by the P2P database program511. In Step S1300, the node apparatus500receives a registration request for DataRecord from an external apparatus and identifies userID by referring to an associative array (userID) on the basis of sender_address included in the request. In a case that userID has not been registered in the associative array (userID) of the P2P database510(Step S1304/No), the P2P database program511performs predetermined error handling in Step S1308. In a case that userID has been registered in the associative array (userID) of the P2P database510(Step S1304/Yes), the examining section511aexamines trace data (subroutine2-1) in Step S1312. More specifically, the examining section511aexamines whether or not certificates of all pieces of trace data (Trace Info) added to the data are correct. The subroutine2-1is explained in detail in the following paragraphs (the same applies to other subroutines). 
In a case that the examination of the trace data is successful (Step S1316/Yes), the examining section511aexamines registered DataRecord (subroutine2-2) in Step S1320. More specifically, the examining section511aexamines whether or not a copyright rule, owner setting, and the like are correct, by using the registered data information (DataRecord). In a case that the examination of registered DataRecord is successful (Step S1324/Yes), the examining section511aregisters DataRecord (subroutine2-3) in Step S1328. More specifically, after the examination of the certificate in the previous stage, the examining section511athat functions as a registering section registers, in the P2P database510, an ID that can identify the second data or an ID that is included in the trace data and can identify each piece of data. Note that, in a case that the examination of the trace data is unsuccessful in Step S1316(Step S1316/No) and in a case that the examination of registered DataRecord is unsuccessful in Step S1324(Step S1324/No), the P2P database program511performs predetermined error handling in Step S1308. Then, the series of processing ends. 6.4. Flow of Process to Examine Trace Data Performed by P2P Database Program511 Next, a flow of a process to examine trace data is explained with reference toFIG.24.FIG.24is a flowchart depicting an example of the flow of the process to examine the trace data that is performed by the examining section511aincluded in the P2P database program511. For example, the following process is performed according to an examination request of a user who intends to check the authenticity, a parent-child relation, or the like of certain data. In Step S1400, for example, on the basis of the examination request from the user, the examining section511aexamines trace data (subroutine2-1). In a case that the examination of the trace data is unsuccessful (Step S1404/No), the P2P database program511performs predetermined error handling in Step S1408. 
Then, the series of processing ends. In a case that the examination of the trace data is successful (Step S1404/Yes), the examining section511acreates an examination result of the trace data in Step S1412. Then, the series of processing ends. More specifically, the examining section511agathers, as the examination result of the trace data, copyright information related to the examination target data and ancestor data (rightsLicense inFIG.14), an identifier of the owner (ownerID inFIG.14), an identifier of a user who is a licensee according to a license agreement (licenseeID inFIG.14), and the like. 6.5. Flow of Process to Acquire UserRecord that is Performed by P2P Database Program511 Next, a flow of a process to acquire UserRecord is explained with reference toFIG.25.FIG.25is a flowchart depicting an example of the flow of the process to acquire UserRecord that is performed by the P2P database program511. For example, the following process is performed according to an acquisition request of another user who intends to check details of a user corresponding to the identifier of the owner or the identifier of the user who is the licensee according to the license agreement, the identifier of the owner and the identifier of the user being included in the examination result of the trace data obtained as described inFIG.24. In Step S1500, the P2P database program511refers to the P2P database510and searches for desired userID specified by the acquisition request from the user, for example, by referring to an associative array (userRecord). In a case that UserRecord including userID is not found (Step S1504/No), the P2P database program511performs predetermined error handling in Step S1508. Then, the series of processing ends. In a case that UserRecord including userID is found (Step S1504/Yes), the P2P database program511acquires UserRecord associated with userID, from the P2P database510in Step S1512. Then, the series of processing ends. 6.6. 
Flow of Process to Acquire DataRecord that is Performed by P2P Database Program511 Next, a flow of a process to acquire DataRecord is explained with reference toFIG.26.FIG.26is a flowchart depicting an example of the flow of the process to acquire DataRecord that is performed by the P2P database program511. For example, the following process is performed according to an acquisition request of a user who intends to check details of certain data. In Step S1600, the P2P database program511refers to the P2P database510and searches for desired dataID specified by the acquisition request from the user, for example, by referring to an associative array (dataRecord). In a case that DataRecord including dataID is not found (Step S1604/No), the P2P database program511performs predetermined error handling in Step S1608. Then, the series of processing ends. In a case that DataRecord including dataID is found (Step S1604/Yes), the P2P database program511acquires DataRecord associated with dataID, from the P2P database510in Step S1612. Then, the series of processing ends. 6.7. Flow of Process to Examine Trace Data (Subroutine2-1) Next, a flow of a process to examine the trace data (subroutine2-1) is explained with reference toFIG.27.FIG.27is a flowchart depicting an example of the flow of the process to examine the trace data, which is performed inFIG.23andFIG.24. In Step S1700, the examining section511acalculates a Hash value by using a message which is a concatenation of Hash values in DataInfo in Origin Trace Data, and temporarily records a result thereof as DataHash. In Step S1704, the examining section511acalculates HMAC of a message which is a concatenation of PublicKey and ParentsHash in TraceInfo in Origin Trace Data, by using DataHash as a key, and temporarily records a result thereof as MAC (i.e. the examining section511acalculates MAC by using a hash value of second data, a public key of the second data, and a hash value of an ID that can identify first data).
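Steps S1700 through S1704, together with the ID comparison that follows, recompute the ID from the data itself and compare it with the ID stored in TraceInfo. A sketch mirroring the generation-side HMAC (field names are assumptions):

```python
import hashlib
import hmac

def examine_trace_data(origin_trace_data: dict) -> bool:
    # S1700: DataHash from the concatenated Hash values in DataInfo
    data_hash = hashlib.sha256(b"".join(origin_trace_data["dataInfo"])).digest()
    trace_info = origin_trace_data["traceInfo"]
    # S1704: MAC keyed with DataHash over PublicKey || ParentsHash
    mac = hmac.new(data_hash,
                   trace_info["publicKey"] + trace_info["parentsHash"],
                   hashlib.sha256).digest()
    # S1708: the recomputed MAC must match the ID recorded in TraceInfo
    return hmac.compare_digest(mac, trace_info["id"])
```

Any tampering with the data areas, the public key, or the recorded parent IDs changes the recomputed MAC and so fails the comparison.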
In Step S1708, the examining section511aexamines that MAC matches an ID in TraceInfo. Note that the examination process can be said to be a process in which the examining section511aexamines that an ID which is generated by performing a calculation according to a cryptographic hash function by using at least second data (DataHash generated by using the second data), a public key (PublicKey) of the second data, and IDs (ParentsHash) that can identify all pieces of first data, and which can identify the second data matches an ID which is included in the trace data and can identify the second data. In a case that MAC matches the ID in TraceInfo (Step S1708/Yes), the examining section511aperforms a certificate examination of TraceInfo (subroutine2-1-1) in Step S1712. More specifically, the examining section511aexamines whether or not all certificates related to ancestor data included in TraceInfo are correct. Then, the series of processing ends. In a case that MAC does not match the ID in TraceInfo (Step S1708/No), the examining section511aperforms predetermined error handling in Step S1716. Then, the series of processing ends. 6.8. Flow of Process to Examine Certificate of TraceInfo (Subroutine2-1-1) Next, a flow of a process to examine a certificate of TraceInfo (subroutine2-1-1) is explained with reference toFIG.28.FIG.28is a flowchart depicting an example of the flow of the process to examine the certificate of TraceInfo, which is performed inFIG.27. In a case that there is no parent data of examination target data (i.e. in a case that examination target data is original data; Step S1800/No), the examining section511aperforms a certificate examination (subroutine2-1-2) of the original data in Step S1804. More specifically, the examining section511aexamines the certificate of the original data by using the public key a of the generating apparatus200that is registered in the P2P database510. 
In a case that the examination of the certificate of the original data is successful (Step S1808/Yes), the series of processing ends. In a case that the examination of the certificate of the original data is unsuccessful (Step S1808/No), the examining section511aperforms predetermined error handling in Step S1812. Then, the series of processing ends. In a case that there is parent data of the examination target data (Step S1800/Yes), the examining section511aperforms a certificate examination (subroutine2-1-3) of the data in Step S1816. More specifically, by using a public key of first data (parent data) included in a certificate added to the first data, the examining section511aexamines the certificate to which an electronic signature is given by using a private key of the first data included in the trace data (the certificate in relation to the child data). In a case that the examination of the certificate of the data is unsuccessful (Step S1820/No), the examining section511aperforms predetermined error handling in Step S1812. Then, the series of processing ends. In a case that the examination of the certificate of the data is successful (Step S1820/Yes), on the basis of the trace data, the examining section511adecides in Step S1824whether or not examinations of certificates of all pieces of ancestor data excluding the original data have ended. In a case that the examinations of the certificates of all the pieces of the ancestor data excluding the original data have ended (Step S1824/Yes), the process proceeds to Step S1804, and the examining section511athen performs a certificate examination (subroutine2-1-2) of the original data. Thereafter, the process ends. 
In a case that the examinations of the certificates of all the pieces of the ancestor data excluding the original data have not ended (Step S1824/No), the process proceeds to Step S1816, and the examining section511arepeats certificate examinations of data (subroutine2-1-3) until the examinations of the certificates of all the pieces of the ancestor data excluding the original data end. 6.9. Certificate Examination of Original Data (Subroutine2-1-2) Next, a certificate examination of original data (subroutine2-1-2) is explained with reference toFIG.29.FIG.29is a flowchart depicting an example of a flow of the process to examine a certificate of the original data, which is performed inFIG.28. In Step S1900, the examining section511aacquires the public key a of the generating apparatus200from the P2P database510. In Step S1904, the examining section511aexamines a certificate of original data by using the public key a of the generating apparatus200. In other words, for the examination of the certificate included in trace data of the original data, the examining section511auses the public key a that is registered in the P2P database510and is of the generating apparatus200of the original data. Because ParentsHash included in the trace data of the original data is a hash value (an identifier of the generating apparatus200) of the public key of the generating apparatus200, the certificate of the original data can be examined by obtaining the public key by referring to the associative array (originatorKeyList). In a case that the examination of the certificate of the original data is successful (Step S1908/Yes), the series of processing ends. In a case that the examination of the certificate of the original data is unsuccessful (Step S1908/No), the examining section511aperforms predetermined error handling in Step S1912. Then, the series of processing ends. 6.10.
Certificate Examination of Data (Subroutine2-1-3) Next, a certificate examination of data (subroutine2-1-3) is explained with reference toFIG.30.FIG.30is a flowchart depicting an example of a flow of the process to examine a certificate of the data, which is performed inFIG.28. In Step S2000, the examining section511acalculates a Hash value from a message which is a concatenation of IDs of at least one or more pieces of parent data in TraceInfo (i.e. a hash value of an ID which is generated by performing a calculation according to a cryptographic hash function by using IDs that can identify at least one or more pieces of first data, and which can identify the first data). Then, in Step S2004, the examining section511adecides whether or not the Hash value is equal to ParentsHash in TraceInfo. In a case that the Hash value is not equal to ParentsHash in TraceInfo (Step S2004/No), the examining section511aperforms predetermined error handling in Step S2008. Then, the series of processing ends. In a case that the Hash value is equal to ParentsHash in TraceInfo (Step S2004/Yes), the examining section511acreates an array signature_list including all Signatures of TraceInfo in Step S2012. In Step S2016, the examining section511acreates an array publickey_list including all PublicKeys of TraceInfo. In Step S2020, the examining section511asets the foremost element in signature_list as a signature, and sets the foremost element in publickey_list as a public key. In Step S2024, the examining section511aexamines the signature by using the public key. In a case that the examination of the signature is successful (Step S2028/Yes), the examining section511achecks in Step S2032whether or not there is the next element in publickey_list.
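The pairing of signature_list and publickey_list from Step S2012 onward can be sketched as a simple lockstep loop; the verify callback is an assumption standing in for the actual signature examination:

```python
def examine_signatures(trace_info: dict, verify) -> bool:
    """Sketch of the loop over signature_list/publickey_list (Steps S2012 onward)."""
    signature_list = trace_info["signatures"]   # all Signatures of TraceInfo
    publickey_list = trace_info["publicKeys"]   # all PublicKeys of TraceInfo
    for signature, public_key in zip(signature_list, publickey_list):
        if not verify(public_key, signature):   # S2024/S2028: examine one pair
            return False                        # predetermined error handling (S2008)
    return True                                 # signatures of all parent data examined
```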
In a case that there is the next element in publickey_list (Step S2032/Yes), in Step S2036, the examining section511asets the next element in signature_list as a signature, and the next element in publickey_list as a public key, and repeats the processes in Step S2024to Step S2032, and then, the series of processing ends. In such a manner, the examining section511aperforms examinations of signatures of all pieces of parent data of the examination target data. Note that, in a case that the examination of the signature is unsuccessful in Step S2028(Step S2028/No), the examining section511aperforms predetermined error handling in Step S2008. Then, the series of processing ends. 6.11. Examination of Registered DataRecord (Subroutine2-2) Next, an examination of registered DataRecord (subroutine2-2) is explained with reference toFIG.31.FIG.31is a flowchart depicting an example of a flow of the process to examine registered DataRecord, which is performed inFIG.23. In Step S2100, the examining section511arefers to the associative array (dataRecord) in the P2P database510and searches for an ID of TraceInfo. In a case that the ID is not registered in the P2P database510(Step S2104/No), the examining section511achecks in Step S2108whether or not there is parent data of the examination target data (i.e. whether or not the examination target data is original data). In a case that there is no parent data of the examination target data (Step S2108/No), the examining section511aperforms an owner examination of the original data in Step S2112. 
More specifically, on the basis of ParentsHash included in trace data (Trace Info) of the original data (in a case that the data is the original data, ParentsHash is a hash value of the public key of the generating apparatus200and is an identifier of the generating apparatus200), the examining section511aexamines that the original data is one generated by any of generating apparatuses200owned by a user who has made the request, by checking that the identifier of the Originator is included in the array originatorIDList of UserRecord relevant to userID specified by the registration request. Then, in a case that the original data is not one generated by any of the generating apparatuses200owned by the user, the examining section511aperforms predetermined error handling. In a case that there is parent data of the examination target data (Step S2108/Yes), the examining section511aexamines registered ancestor data by using TraceInfo of the examination target data in Step S2116. More specifically, the examining section511aexamines whether a copyright that is attempted to be set for the examination target data is stricter (more restricted) than a copyright set for DataRecord of registered ancestor data. Then, in a case that the copyright that is attempted to be set for the examination target data is stricter (more restricted) than the copyright set for DataRecord of the registered ancestor data, the examining section511aperforms predetermined error handling. In a case that an ID of TraceInfo is registered in the P2P database510in Step S2104(i.e. in a case that a copyright that has already been registered is attempted to be updated; Step S2104/Yes), the examining section511aexamines in Step S2120whether or not an ID of a user who is attempting to update the copyright is appropriate (i.e. the examining section511aexamines that an identifier of the user who has made the request matches the owner of the data whose copyright is attempted to be updated).
More specifically, the examining section511achecks whether or not ownerID of DataRecord registered in the P2P database510(licenseeID in a case that there is a user who is a licensee according to a license agreement) and userID identified by the registration request match. In a case that the ID of the user who is attempting to update the copyright is not appropriate (Step S2120/No), the examining section511aperforms predetermined error handling. In a case that the ID of the user who is attempting to update the copyright is appropriate (Step S2120/Yes), the examining section511aperforms an examination according to the copyright rule in Step S2124. More specifically, the examining section511aexamines whether or not the update target copyright conforms to the copyright rule (the rule that a copyright stricter (more restricted) than that for parent data cannot be set for child data). In a case that the examination according to the copyright rule is unsuccessful, the examining section511aperforms predetermined error handling. Thereafter, in Step S2128, on the basis of whether or not information regarding child data of the examination target data is registered in the P2P database510, the examining section511achecks whether or not there is child data of the examination target data. In a case that there is child data of the examination target data (Step S2128/Yes), in Step S2132, the examining section511arefers to childrenIDList of DataRecord of the examination target and examines whether the copyrights of all pieces of registered child data comply with the setting rule. More specifically, the examining section511aexamines whether a copyright that is attempted to be set for the examination target data is less strict (less restricted) than the copyrights set for DataRecord of registered child data.
Then, in a case that the copyright that is attempted to be set for the examination target data is less strict (less restricted) than the copyrights set for the registered child data, the examining section511aperforms predetermined error handling. 6.12. Registration of DataRecord (Subroutine2-3) Next, registration of DataRecord (subroutine2-3) is explained with reference toFIG.32.FIG.32is a flowchart depicting an example of a flow of the process to register DataRecord, which is performed inFIG.23. In Step S2200, the examining section511achecks whether or not DataRecord having the ID of TraceInfo has been registered in the P2P database510. In a case that DataRecord having the ID of TraceInfo has not been registered in the P2P database510(Step S2200/No), the examining section511agenerates DataRecord in the P2P database510in Step S2204. In Step S2208, the examining section511asets a variable rights_license to NoLicenseSpecified. In Step S2212, the examining section511aadds child_id to childrenIDList. In Step S2216, the examining section511aregisters DataRecord in the associative array (dataRecord) in the P2P database510. Then, in a case that there is parent data of the registration target data (Step S2220/Yes), the processes in Step S2200to Step S2216are repeated (i.e. registration of DataRecord of ancestor data that is continuous with and precedes the registration target data and updating of childrenIDList are performed). Then, in a case that there is no parent data of the registration target data (i.e. after registration of DataRecord of ancestor data that is continuous with and precedes the registration target data and updating of childrenIDList are performed; Step S2220/No), the series of processing ends. 7. Example In the description above, the process flow example of each apparatus has been explained. Next, an example of the present disclosure is explained.
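The registration walk of Steps S2200 through S2220 (subroutine2-3) might be sketched as follows, simplified to a single-parent lineage; the record layout and field names are assumptions:

```python
def register_data_record(db: dict, trace_info: dict) -> None:
    """Sketch of subroutine2-3: register DataRecord for the target data and
    every ancestor, updating each parent's childrenIDList on the way up."""
    node, child_id = trace_info, None
    while node is not None:
        data_id = node["id"]
        record = db["dataRecord"].get(data_id)
        if record is None:                                    # S2200/No
            record = {"rightsLicense": "NoLicenseSpecified",  # S2204/S2208
                      "childrenIDList": []}
            db["dataRecord"][data_id] = record
        if child_id is not None and child_id not in record["childrenIDList"]:
            record["childrenIDList"].append(child_id)         # S2212
        child_id = data_id                                    # S2216 done; move up
        node = node.get("parent")                             # S2220: repeat while parent data exists
```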
Note that hereinafter a case that the P2P database510is a consortium blockchain is explained as an example. 7.1. Registration of Image Data For example, the owner of the generating apparatus200(or the processing apparatus300) can register image data in a blockchain by using a manufacturer-provided application. In view of this, an example of a flow of a process to be performed in this case is explained with reference toFIG.33.FIG.33is a sequence diagram depicting an example of a flow of a process to be performed in a case that the owner of the generating apparatus200registers, in the blockchain, image data by using the manufacturer-provided application. Note that, in addition to the node apparatus500, a user apparatus and a service providing apparatus are mentioned with reference toFIG.33. The user apparatus is a certain information processing apparatus operated by a user and can be realized by, for example, the generating apparatus200(but certainly is not limited to this). The service providing apparatus, instead of the user, performs Wallet management of the P2P database510and can be realized by a server or the like of a manufacturer that provides applications (i.e. services). In Step S2300, the user apparatus sends, to the service providing apparatus, a login request including an ID and PassWord for login. In Step S2304, the service providing apparatus performs user authentication by comparing the ID and PassWord included in the login request and a preregistered ID and PassWord. In Step S2308, the service providing apparatus sends an authentication result to the user apparatus. In a case that the user authentication is successful, in Step S2312, the user apparatus sends, to the service providing apparatus, a registration request for UserRecord that includes user information (e.g. a name, attributes, etc.), a certificate of the public key a of the generating apparatus200, and the like. 
In Step S2316, the service providing apparatus sends the request to the node apparatus 500, as a transaction of an address of Wallet of the authenticated user. In Step S2320, the node apparatus 500 registers UserRecord in the blockchain. More specifically, the node apparatus 500 registers UserRecord in the blockchain by performing the series of processing depicted in FIG. 22. In Step S2324, the node apparatus 500 sends a registration result to the service providing apparatus. In Step S2328, the service providing apparatus sends the registration result to the user apparatus. In a case that the registration of UserRecord is successful, in Step S2332, the user apparatus sends, to the service providing apparatus, a registration request for DataRecord that includes image data, an identifier of an owner, copyright information, and the like. In Step S2336, the service providing apparatus sends the request to the node apparatus 500, as a transaction of the address of Wallet of the authenticated user. In Step S2340, the node apparatus 500 registers DataRecord in the blockchain. More specifically, the node apparatus 500 registers DataRecord in the blockchain by performing the series of processing depicted in FIG. 23. In Step S2344, the node apparatus 500 sends a registration result to the service providing apparatus. In Step S2348, the service providing apparatus sends the registration result to the user apparatus. Then, the series of processing ends. Registration of DataRecord of the image data in the P2P database 510 is realized by the series of processing explained thus far.

7.2. Creation of Certificate Related to Image Data, Etc.

In addition, a user who has generated image data by using, for example, the generating apparatus 200 (or the processing apparatus 300) can create a certificate to certify the authenticity of the image data, by using a manufacturer-provided service, and share the certificate with another party.
In view of this, an example of a flow of a process to be performed in this case is explained with reference to FIG. 34. FIG. 34 is a sequence diagram depicting an example of a flow of a process to be performed in a case that the user who has generated the image data uses the manufacturer-provided service to create a certificate to certify the authenticity of the image data and shares the certificate with another party. Note that a user apparatus in FIG. 34 can be realized by, for example, the examining apparatus 400 (but certainly is not limited to this). The service providing apparatus can be realized by a server or the like of a manufacturer that provides services. In Step S2400 to Step S2408, the series of processing that is related to the user authentication and explained in Step S2300 to Step S2308 in FIG. 33 is performed. In a case that the user authentication is successful, the user apparatus sends the image data, including OriginTraceData and the like, to the service providing apparatus in Step S2412. In Step S2416, the service providing apparatus checks that the hash value of DataInfo included in OriginTraceData matches a hash value computed from the image data, creates an examination request including the hash value of the data and TraceData obtained from the hash value of DataInfo, and sends the request to the node apparatus 500, as a transaction of an address of Wallet of the authenticated user. In Step S2420, the node apparatus 500 examines TraceData by using data registered in the blockchain. More specifically, the node apparatus 500 examines TraceData by performing the series of processing depicted in FIG. 24. In a case that TraceData has been examined successfully, the node apparatus 500 generates a predetermined certificate and sends the certificate to the service providing apparatus as an examination result in Step S2424. In Step S2428, the service providing apparatus sends the certificate to the user apparatus as the examination result.
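The check in Step S2416 is a straightforward hash comparison performed before the examination request is built. A hedged sketch, in which SHA-256 and the field names `data_info_hash` and `trace_data` are assumptions:

```python
# Minimal sketch of the Step S2416 check: the hash value of DataInfo
# carried in OriginTraceData must match a hash computed over the received
# image data before an examination request is created.
import hashlib

def build_examination_request(image_data, origin_trace_data):
    computed = hashlib.sha256(image_data).hexdigest()
    if computed != origin_trace_data["data_info_hash"]:
        raise ValueError("hash mismatch: image data does not match DataInfo")
    # Examination request sent to the node apparatus 500 as a Wallet transaction.
    return {"data_hash": computed, "trace_data": origin_trace_data["trace_data"]}
```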
Then, the user presents the certificate provided as the examination result to another party (e.g. a buyer or a viewer of the image data). In addition, the service providing apparatus may present the certificate to another party by disclosing the certificate on a predetermined website or the like. Creation and sharing of the certificate related to the image data are realized by the series of processing explained thus far.

7.3. Reporting of Unauthorized Use of Image Data

In addition, in a case that a user who has generated image data finds unauthorized use of the image data by another party (e.g. use against the copyright, forgery of the image data, etc.), the user can create a report for reporting the unauthorized use by using a manufacturer-provided service and share the report with another party. In view of this, an example of a flow of a process to be performed in this case is explained with reference to FIG. 35. FIG. 35 is a sequence diagram depicting an example of a flow of a process to be performed in a case that the user uses the manufacturer-provided service to create a report for reporting the unauthorized use and shares the report with another party. Note that a user apparatus in FIG. 35 can be realized by, for example, the examining apparatus 400 (but certainly is not limited to this). The service providing apparatus can be realized by a server or the like of a manufacturer that provides services. In Step S2500 to Step S2508, the series of processing that is related to the user authentication and explained in Step S2300 to Step S2308 in FIG. 33 is performed. In a case that the user authentication is successful, in Step S2512, the user apparatus sends, to the service providing apparatus, a data unauthorized use examination request including OriginTraceData and a path (e.g. a URL, etc.) of the data that is being used without authorization. Here, the OriginTraceData is created by the user who has generated the image data, and is registered in the P2P database 510.
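The examination performed on such a request can be sketched as follows; `fetch()`, the `licensees` set, and the field names are illustrative assumptions, and SHA-256 stands in for whatever hash function produces the ID of OriginTraceData:

```python
# Hedged sketch of the FIG. 35 examination: fetch the data at the reported
# path, hash it, confirm it corresponds to the registered OriginTraceData,
# then consult the copyright/license information to decide whether the use
# is authorized.
import hashlib

def examine_unauthorized_use(fetch, path, origin_trace_data, licensees, user_id):
    data = fetch(path)
    if hashlib.sha256(data).hexdigest() != origin_trace_data["id"]:
        return "not the registered data"
    if user_id in licensees:          # use permitted under a license agreement
        return "authorized use"
    return "unauthorized use"         # basis for the generated report
```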
The service providing apparatus computes a hash value of the image data from the path of the data being used without authorization. After checking that the hash value matches an ID of OriginTraceData, in Step S2516, the service providing apparatus sends, to the node apparatus 500, an examination request including the hash value of the image data being used without authorization and OriginTraceData, as a transaction of an address of Wallet of the authenticated user. In Step S2520, the node apparatus 500 examines whether or not the data corresponding to OriginTraceData is registered in the blockchain, by using data registered in the blockchain. For example, the node apparatus 500 performs the series of processing depicted in FIG. 24, to thereby examine that the hash value of the image data being used without authorization matches the hash value of OriginTraceData, on the basis of the ID of the requested OriginTraceData. Then, on the basis of copyright information obtained as an examination result, the node apparatus 500 checks whether the use is authorized use, and in a case that the use is not authorized use, the node apparatus 500 decides that unauthorized use of the image data created by the user is being performed. In Step S2524, the node apparatus 500 generates a predetermined report and sends the report to the service providing apparatus as an examination result. In Step S2528, the service providing apparatus sends the report to the user apparatus as the examination result. Then, the user presents the report provided as the examination result to another party. In addition, the service providing apparatus may open a Web page including the report on a predetermined website or the like and notify another party of the URL, to thereby present the report. Reporting of the unauthorized use of the image data is realized by the series of processing explained thus far.

7.4. Purchase of Image Data

In addition, for example, a user may purchase image data generated by another user (i.e.
may acquire the ownership of the image data from another user). In view of this, an example of a flow of a process to be performed in this case is explained with reference to FIG. 36. FIG. 36 is a sequence diagram depicting an example of a flow of a process to set UserID of a buyer as LicenseeID of data in a case that the user purchases the image data generated by another user. Note that a user apparatus in FIG. 36 can be realized by, for example, the processing apparatus 300 (but certainly is not limited to this). The service providing apparatus can be realized by a server or the like of a manufacturer that provides services. In Step S2600 to Step S2608, the series of processing that is related to the user authentication and explained in Step S2300 to Step S2308 in FIG. 33 is performed. In a case that the user authentication is successful, in Step S2612, the user apparatus sends, to the service providing apparatus, a request to purchase the image data (a request to set UserID of the buyer as LicenseeID of the image data). It is assumed that, at this time, the buyer has agreed to license conditions presented by the owner of the data in advance and has paid a consideration for the purchase of the image data, and the owner has received a purchase request from the buyer. In Step S2616, on the basis of the request, the service providing apparatus sends a request to change information regarding the owner in data information (DataRecord) registered in the P2P database 510 in association with the purchase target image data, to the node apparatus 500, as a transaction of an address of Wallet of the authenticated user. In Step S2620, on the basis of the request, the node apparatus 500 changes the information regarding Licensee in the data information (DataRecord) registered in the P2P database 510.
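The Licensee change of Steps S2616 to S2620 amounts to a small update of the registered record. A minimal sketch, with the field names taken from the description and a dict standing in for DataRecord in the P2P database 510:

```python
# Illustrative sketch of the ownership-transfer update: the node apparatus
# rewrites licenseeID (the identifier of the licensee under the license
# agreement) and rightsLicense in the registered DataRecord.

def set_licensee(data_record, buyer_user_id):
    data_record["licenseeID"] = buyer_user_id
    data_record["rightsLicense"] = "AllRightsReserved_UnderAgreements"
    return data_record
```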
More specifically, the node apparatus 500 changes licenseeID in DataRecord (an identifier of a user who is a licensee according to a license agreement) to UserID of the user who intends to purchase the image data. In addition, rightsLicense is changed to AllRightsReserved_UnderAgreements depicted in FIG. 18. In Step S2624, the node apparatus 500 sends an owner change result (a change result of licenseeID) to the service providing apparatus. In Step S2628, the service providing apparatus sends the owner change result to the user apparatus. Setting of LicenseeID at the time of purchase of image data is realized by the series of processing explained thus far.

8. Hardware Configuration Example of Each Apparatus

The example of the present disclosure has been explained in the description above. Next, a hardware configuration example of each apparatus according to the present embodiment is explained with reference to FIG. 37. FIG. 37 is a block diagram depicting a hardware configuration example of an information processing apparatus 900 that realizes at least any of the manufacturer apparatus 100, the generating apparatus 200, the processing apparatus 300, the examining apparatus 400, and the node apparatus 500 according to the present embodiment. Information processing by each apparatus according to the present embodiment is realized by cooperative operation between software and hardware explained below. As depicted in FIG. 37, the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a. In addition, the information processing apparatus 900 includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, a communication device 913, and a sensor 915.
The information processing apparatus 900 may have a processing circuit such as an LSI, a DSP, or an ASIC for encryption calculation, instead of or in addition to the CPU 901. The CPU 901 functions as a calculation processing unit and a control device and controls the overall operation in the information processing apparatus 900 according to various types of programs. In addition, the CPU 901 may be a microprocessor. The ROM 902 stores programs, calculation parameters, and the like to be used by the CPU 901. The RAM 903 temporarily stores a program to be used in execution by the CPU 901, parameters that change as appropriate in the execution, and the like. The CPU 901 can realize configurations to execute, for example, the data generating section 210, the certificate generating section 220, the key generating section 230, and the trace data processing section 240 of the generating apparatus 200; the data processing section 310, the certificate generating section 320, the key generating section 330, and the trace data processing section 340 of the processing apparatus 300; the examining section 410 and the data similarity deciding section 420 of the examining apparatus 400; and the P2P database program 511 of the node apparatus 500. The CPU 901, the ROM 902, and the RAM 903 are interconnected by the host bus 904a including a CPU bus or the like. The host bus 904a is connected to the external bus 904b, such as a PCI (Peripheral Component Interconnect/Interface) bus, via the bridge 904. Note that the host bus 904a, the bridge 904, and the external bus 904b need not necessarily be configured separately, and one bus may implement these functions. For example, the input device 906 is realized by devices through which information is input by a user, such as a mouse, a keyboard, a touch panel, a button, a microphone, a switch, or a lever.
In addition, for example, the input device 906 may be a remote control device that uses infrared or other radio waves, or may be externally connected equipment such as a mobile phone or a PDA supporting operation of the information processing apparatus 900. Further, for example, the input device 906 may include an input control circuit or the like that generates an input signal on the basis of information input by a user by using the input means described above and outputs the input signal to the CPU 901. By operating the input device 906, the user of the information processing apparatus 900 can input various types of data to the information processing apparatus 900 and give instructions regarding process operation. The output device 907 includes a device that can present acquired information to a user visually or by sound. Examples of such a device include a display device such as a CRT display device, a liquid crystal display device, a plasma display device, an EL display device, or a lamp; an audio output device such as a speaker or headphones; and a printer device. The storage device 908 is a device for data storage that is formed as an example of a storage section of the information processing apparatus 900. For example, the storage device 908 is realized by a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like. The storage device 908 may include a storage medium, a recording device that records data on a storage medium, a reading device that reads out data from a storage medium, a deleting device that deletes data recorded on a storage medium, and the like. The storage device 908 stores programs to be executed by the CPU 901, various types of data, various types of data acquired externally, and the like.
For example, the storage device 908 can realize the storage section 110 of the manufacturer apparatus 100, the storage section 250 of the generating apparatus 200, and the P2P database 510 of the node apparatus 500. The drive 909 is a reader/writer for storage media and is built in the information processing apparatus 900 or is externally attached to the information processing apparatus 900. The drive 909 reads out information recorded on an attached removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903. In addition, the drive 909 can also write information on a removable storage medium. The connection port 911 is an interface connected with external equipment and is, for example, a port for connection with external equipment through which data can be transferred by USB (Universal Serial Bus) or the like. For example, the communication device 913 is a communication interface including a communication device or the like for connection to a network 920. For example, the communication device 913 is a communication card or the like for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB). In addition, the communication device 913 may be an optical communication router, an ADSL (Asymmetric Digital Subscriber Line) router, various types of communication modems, or the like. For example, the communication device 913 can send and receive signals or the like to and from the Internet or other communication equipment while conforming to a predetermined protocol such as TCP/IP. For example, the sensor 915 includes various types of sensors such as an imaging sensor, a pressure sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, a light sensor, a sound sensor, or a distance measurement sensor.
In a case that the generating apparatus 200 is a camera in the present embodiment, the sensor 915 can realize an imaging sensor of the generating apparatus 200. Note that the network 920 is a wired or wireless transfer path for information sent from apparatuses connected to the network 920. For example, the network 920 may include public networks such as the Internet, a telephone network, or a satellite communication network, various types of LAN (Local Area Network) and WAN (Wide Area Network) including Ethernet (registered trademark), and the like. In addition, the network 920 may include a dedicated network such as an IP-VPN (Internet Protocol-Virtual Private Network). The hardware configuration example of each apparatus according to the present embodiment has been depicted thus far. Each constituent element in the description above may be realized by using a generally-used member or may be realized by hardware specialized for the function of each constituent element. Accordingly, the hardware configuration to be used can be changed as appropriate according to the technical level at the time the present embodiment is implemented. Note that it is possible to fabricate a computer program for realizing the respective functions of the information processing apparatus 900 described above and implement the computer program on a PC or the like. In addition, a computer-readable recording medium on which such a computer program is stored can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. In addition, the computer program described above may be distributed via a network, for example, without using a recording medium. While the suitable embodiment of the present disclosure has been explained in detail with reference to the attached figures thus far, the technical scope of the present disclosure is not limited to the example.
It is obvious that those with ordinary knowledge in the technical field of the present disclosure can conceive of various types of altered or corrected examples within the scope of the technical idea described in the claims, and it is understood that those altered or corrected examples certainly belong to the technical scope of the present disclosure. In addition, the advantages described in the present specification are presented merely for explanation or illustration, not for limitation. That is, the technology according to the present disclosure can exhibit other advantages that are obvious to those skilled in the art from the description of the present specification, along with or instead of the advantages described above. Note that configurations mentioned below also belong to the technical scope of the present disclosure.

(1) An information processing apparatus including: a key generating section that generates a public key and a private key of second data generated on the basis of at least one or more pieces of first data; a certificate generating section that generates a certificate by using a private key of the first data to give an electronic signature to the public key of the second data or an ID that is generated by using the public key of the second data and is capable of identifying the public key of the second data, and to the second data or data generated from the second data; and a trace data processing section that adds, to the second data, the private key of the second data and trace data to be used for tracing a relation between the first data and the second data, in which the trace data includes the certificate generated by the certificate generating section and trace data added to the first data.
(2) The information processing apparatus according to (1), in which the trace data processing section adds, to the trace data, an ID that is capable of identifying the second data, the ID being generated by performing a calculation according to a cryptographic hash function by using at least the second data, the public key of the second data, and IDs that are capable of identifying all pieces of the first data.

(3) The information processing apparatus according to (1) or (2), in which the trace data processing section associates digest information representing details of the second data with the trace data, and the certificate generating section includes, in the certificate, as a certification target, an ID that is capable of identifying the digest information, the ID being generated by performing a calculation according to a cryptographic hash function by using the digest information.

(4) A program causing a computer to implement: generating a public key and a private key of second data generated on the basis of at least one or more pieces of first data; generating a certificate by using a private key of the first data to give an electronic signature to the public key of the second data or an ID that is generated by using the public key of the second data and is capable of identifying the public key of the second data, and to the second data or data generated from the second data; and adding, to the second data, the private key of the second data and trace data to be used for tracing a relation between the first data and the second data, in which the trace data includes the certificate and trace data added to the first data.
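Configuration (2) derives an ID for the second data by hashing the data together with its public key and the IDs of all pieces of first data. A sketch of that construction; the concatenation order and the use of SHA-256 are assumptions of this sketch, since the text only requires a cryptographic hash function:

```python
# Minimal sketch of the ID construction in configuration (2): an ID capable
# of identifying the second data, generated by a cryptographic hash over the
# second data, its public key, and the IDs of all pieces of first data.
import hashlib

def make_data_id(second_data: bytes, public_key: bytes, first_data_ids: list) -> str:
    h = hashlib.sha256()
    h.update(second_data)
    h.update(public_key)
    for fid in sorted(first_data_ids):  # sorting makes the ID order-independent
        h.update(fid)
    return h.hexdigest()
```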
(5) An information processing apparatus including: a data generating section that generates data; a key generating section that generates a public key and a private key of the data; a certificate generating section that generates a certificate by using a private key of the information processing apparatus to give an electronic signature to the public key of the data or an ID that is generated by using the public key of the data and is capable of identifying the public key of the data, and to the data or data generated from the data; and a trace data processing section that adds, to the data, the private key of the data and trace data that is to be used for tracing generation of the data by the information processing apparatus and includes the certificate generated by the certificate generating section.

(6) The information processing apparatus according to (5), in which the trace data processing section adds, to the trace data, an ID that is capable of identifying the data, the ID being generated by performing a calculation according to a cryptographic hash function by using at least the data, the public key of the data, and a public key of the information processing apparatus.

(7) The information processing apparatus according to (5) or (6), in which the trace data processing section associates digest information representing details of the data with the trace data, and the certificate generating section includes, in the certificate, as a certification target, an ID that is capable of identifying the digest information, the ID being generated by performing a calculation according to a cryptographic hash function by using the digest information.
(8) The information processing apparatus according to any one of (5) to (7), in which a certificate, a public key of the information processing apparatus, or an identifier of the public key of the information processing apparatus is registered in a P2P database, the certificate being generated by using a private key of a manufacturer of the information processing apparatus to give an electronic signature to the public key of the information processing apparatus or the identifier of the public key of the information processing apparatus.

(9) The information processing apparatus according to (8), in which a certificate generated by using the private key of the manufacturer to give an electronic signature to a public key of the manufacturer is registered in the P2P database.

(10) An information processing apparatus including: an examining section that uses trace data and information registered in a database, the trace data being used for tracing a relation between at least one or more pieces of first data and second data generated on the basis of the first data and being added to the second data, to thereby examine authenticity of the second data or data generated from the second data; and a registering section that registers, in the database, the second data or an ID that is capable of identifying the second data, in which the trace data includes a certificate and trace data added to the first data, the certificate being generated by using a private key of the first data to give an electronic signature to a public key of the second data or an ID that is generated by using the public key of the second data and is capable of identifying the public key of the second data, and to the second data or the data generated from the second data.
(11) The information processing apparatus according to (10), in which the examining section examines the certificate of the second data that is included in the trace data and provided with the electronic signature by using the private key of the first data, by using a public key of the first data that is included in the trace data added to the first data or an ID that is generated by using the public key of the first data and is capable of identifying the public key of the first data, and examines that an ID that is generated by performing a calculation according to a cryptographic hash function by using at least the second data, the public key of the second data, and IDs that are capable of identifying all pieces of the first data and is capable of identifying the second data matches an ID that is included in the trace data and is capable of identifying the second data.

(12) The information processing apparatus according to (11), in which data to be treated as the second data is also treated as the first data, so that those pieces of data have a chain-like relation; for an examination of a certificate included in the trace data of the foremost data in the chain-like relation, the examining section uses a public key of a generating apparatus of the foremost data that is registered in the database, or an ID that is generated by using the public key of the generating apparatus and is capable of identifying the public key of the generating apparatus; and the public key of the generating apparatus is examined according to a certificate provided with an electronic signature by using a private key of a manufacturer and is registered in the database.

(13) The information processing apparatus according to (12), in which, after the examination of the certificate by the examining section, the registering section registers, in the database, an ID that is capable of identifying the second data or an ID that is included in the trace data and is capable of identifying each piece of data.
(14) The information processing apparatus according to any one of (10) to (13), in which the database includes a P2P database, and the examining section is provided in the P2P database and is realized by a predetermined program executed on the P2P database.

(15) An information processing method executed by a computer, the information processing method including: using trace data and information registered in a database, the trace data being used for tracing a relation between at least one or more pieces of first data and second data generated on the basis of the first data and being added to the second data, to thereby examine authenticity of the second data or data generated from the second data; and registering, in the database, the second data or an ID that is capable of identifying the second data, in which the trace data includes a certificate and trace data added to the first data, the certificate being generated by using a private key of the first data to give an electronic signature to a public key of the second data or an ID that is generated by using the public key of the second data and is capable of identifying the public key of the second data, and to the second data or the data generated from the second data.
(16) A program that provides an external apparatus with trace data that is used for tracing a relation between at least one or more pieces of first data and second data generated on the basis of the first data and is added to the second data, the external apparatus being configured to examine authenticity of the second data or data generated from the second data by using the trace data and information registered in a database, the program causing a computer to realize a configuration in which the trace data includes a certificate and trace data added to the first data, the certificate being generated by using a private key of the first data to give an electronic signature to a public key of the second data or an ID that is generated by using the public key of the second data and is capable of identifying the public key of the second data, and to the second data or the data generated from the second data.

REFERENCE SIGNS LIST

100: Manufacturer apparatus
110: Storage section
200: Generating apparatus
210: Data generating section
220: Certificate generating section
230: Key generating section
240: Trace data processing section
250: Storage section
300: Processing apparatus
310: Data processing section
320: Certificate generating section
330: Key generating section
340: Trace data processing section
400: Examining apparatus
410: Examining section
420: Data similarity deciding section
500: Node apparatus
510: P2P database
511: P2P database program
511a: Examining section (registering section)
600: P2P network
DETAILED DESCRIPTION

FIG. 1 shows an example of data transfer between linked virtual machines. In this example, a first VM (e.g., VM1) transfers data to a second VM (e.g., VM2) using two memory movements and associated total memory encryption (TME) operations. VM1 and VM2 do not share the same memory space or memory addresses. To copy data from VM1 to VM2, the data payload is encrypted and protected using various page keys to ensure no plain text is exposed inside a dynamic random access memory (DRAM). The data is copied from memory page 1 (source) to memory page 2 (intermediate buffer). A second copy is made from memory page 2 to memory page 3 (destination). There are four cryptography operations involved during the two copy operations (e.g., a decryption, an encryption, another decryption, and another encryption). More specifically, for a data copy from memory page 1 to memory page 2, data is decrypted after being read from memory page 1 and encrypted before being written to memory page 2. For a data copy from memory page 2 to memory page 3, the data is decrypted after being read from memory page 2 and encrypted before being written to memory page 3. Encryption and decryption operations use processor resources, consume power, and introduce latency before data is available at the destination (memory page 3). Encryption for a write to the intermediate buffer and decryption for a read from the intermediate buffer are a pair of crypto operations on the same content that use the same key, even though the payload is not modified. These duplicated crypto operations involving copies to and from the intermediate buffer introduce copy latency and power cost. Various embodiments provide for copying an encrypted memory page, or a portion of a source memory page, in volatile or non-volatile memory to a destination memory page in volatile or non-volatile memory using a single decryption and a single encryption operation.
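The baseline path of FIG. 1 can be modeled to make the operation count explicit. In this toy sketch, XOR stands in for the per-page TME cipher (an illustrative simplification, not the actual cryptography):

```python
# Toy model of the FIG. 1 baseline: copying VM1's page to VM2's page via an
# intermediate buffer costs four crypto operations in total:
# decrypt (read page 1) -> encrypt (write page 2) -> decrypt (read page 2)
# -> encrypt (write page 3).

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the TME page cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def copy_page(src_cipher, src_key, dst_key, crypto_ops):
    plain = xor(src_cipher, src_key)   # decrypt after reading the source page
    crypto_ops.append("decrypt")
    out = xor(plain, dst_key)          # encrypt before writing the next page
    crypto_ops.append("encrypt")
    return out

ops = []
page1 = xor(b"payload", b"k1")                  # VM1's page, protected by key k1
page2 = copy_page(page1, b"k1", b"kbuf", ops)   # copy to the intermediate buffer
page3 = copy_page(page2, b"kbuf", b"k2", ops)   # copy to VM2's page, key k2
# ops now records the four crypto operations performed for the two copies
```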
Fewer decryption and encryption operations can reduce CPU use, save power, and reduce latency for data copy availability at the destination. A virtual channel can copy the data from a source memory page to an intermediate page without decryption and encryption of the data. In connection with a copy of data to the intermediate page, the virtual channel can store metadata concerning the copied data into a metadata table or the intermediate page. The metadata can include an encryption key identifier and a source memory address. The virtual channel can be used to invoke a decryption operation and encryption operation for the data to the destination memory page. The key associated with the encryption key identifier can be used to decrypt data stored in the intermediate page. Before the data is written to the destination, the data can be encrypted using a different key. Some embodiments can invoke use of a memory controller with a cryptography capability to decrypt stored data (e.g., bits, bytes, plain text, encrypted content) read from an intermediate buffer, encrypt the decrypted data, and cause the encrypted data to be written to a destination volatile memory region. To decrypt the data, the memory controller can use a key associated with the data and the key is identified in a metadata table provisioned by a trusted hypervisor or a virtual channel. To encrypt the data that is to be written to the destination memory region, the memory controller can use a different page key that is provisioned by a hypervisor or VM2. In some embodiments, to secure data stored in memory, memory pages or buffers of various domains are protected by Intel TME, Intel MK-TME, Intel SGX, AMD's Secure Memory Encryption (SME), AMD's Secure Encrypted Virtualization (SEV), ARM TrustZone, and so forth. 
The use of a virtual channel with a crypto engine provides an integrated solution for secure VM-to-VM communications, intra-VM communications, or container-to-container communications, but also adds the cost of data translation (e.g., decryption or encryption) while moving data between various memory pages. A virtual machine running a VNF can request processing to be performed by a second virtual machine running another VNF, thereby linking VNF operations. Data copied from the source memory page to the destination memory page can be used by the second virtual machine's VNF to perform processing. Processed data from the second virtual machine can be encrypted and transferred back to the first virtual machine using similar techniques as used to transfer data from the first virtual machine to the second virtual machine. Total memory encryption (TME) and multi-key total memory encryption (MKTME) are available from Intel Corporation and are described in the Intel Architecture Memory Encryption Technologies Specification version 1.1 dated Dec. 17, 2017 and later revisions, available at https://software.intel.com/en-us/blogs/2017/12/22/intel-releases-new-technology-specification-for-memory-encryption. Embodiments are not limited to employing only TME or MKTME. TME provides a mechanism to encrypt data on the memory interfaces. The memory controller uses a cryptography engine to encrypt the data flowing out on the memory interfaces to the memory or decrypt data flowing in from memory, and provides plain text for internal consumption by the processor. TME is a technology that encrypts a device's entire memory or a portion of a memory with a single key. When enabled via basic input/output system (BIOS) configuration, TME can help ensure that all memory accessed by a processor on an external memory bus is encrypted, including customer credentials, encryption keys, and other data.
TME supports a variety of encryption algorithms and in one embodiment may use a National Institute of Standards and Technology (NIST) encryption standard for storage such as the advanced encryption system (AES) XTS algorithm with 128-bit keys. The encryption key used for memory encryption can be generated using a hardened random number generator in the processor and is never exposed to software. Data in memory and on the external memory buses is encrypted and is only in plain text while inside the processor circuitry. This allows existing software to run unmodified while protecting memory using TME. There may be scenarios where it would be advantageous to not encrypt a portion of memory, so TME allows the BIOS to specify a physical address range of memory to remain unencrypted. The software running on a TME-capable system has full visibility into all portions of memory that are configured to not be encrypted by TME. This is accomplished by reading a configuration register in the processor. In an embodiment, TME supports multiple encryption keys (Multi-Key TME (MKTME)) and provides the ability to specify the use of a specific key for encrypting or decrypting a page of memory (e.g., an addressable region of memory). This architecture allows either processor-generated keys or tenant-provided keys, giving full flexibility to customers. VMs and containers can be cryptographically isolated from each other in memory with separate encryption keys, an advantage in multi-tenant cloud environments. VMs and containers can also be pooled to share an individual key, further extending scale and flexibility. This includes support for both standard dynamic random-access memory (DRAM) and non-volatile random-access memory (NVRAM). A virtual machine (VM) can be software that runs an operating system and one or more applications. 
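One way to picture the per-page multi-key arrangement described above is the following toy model, in which each memory page is tagged with a key ID and the engine selects that page's key. XOR stands in for AES-XTS, and all key IDs, addresses, and table layouts are hypothetical:

```python
# Illustrative MKTME-style per-page keying: a page table maps each page to
# a key ID, and a key table maps key IDs to keys.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the crypto engine's AES-XTS operations.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key_table = {1: b"tenant-a-key", 2: b"tenant-b-key"}   # key ID -> key
page_key_ids = {0x1000: 1, 0x2000: 2}                  # page -> key ID

def write_page(addr: int, plaintext: bytes) -> bytes:
    # Encrypt on the way out to memory, using the page's own key.
    return xor_crypt(plaintext, key_table[page_key_ids[addr]])

def read_page(addr: int, ciphertext: bytes) -> bytes:
    # Decrypt on the way in from memory, using the same per-page key.
    return xor_crypt(ciphertext, key_table[page_key_ids[addr]])

ct_a = write_page(0x1000, b"secret-a")
ct_b = write_page(0x2000, b"secret-a")   # same plaintext, different key
assert ct_a != ct_b                      # tenants are cryptographically isolated
assert read_page(0x1000, ct_a) == b"secret-a"
```

Identical plaintexts in pages keyed for different tenants produce different ciphertexts, which is the isolation property the specification attributes to MKTME in multi-tenant environments.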
The virtual machine is defined by a specification, configuration files, a virtual disk file, an NVRAM setting file, and a log file, and is backed by the physical resources of a host computing platform. A container can be a software package of applications, configurations, and dependencies so the applications run reliably from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A process can be any software that is executed by a computer (including executables, binaries, libraries, or any code). An example process is a VNF. Multiple processes can be executed within a VM or container. A core can be an execution core or computational engine that is capable of executing instructions. A core can have access to its own cache and read only memory (ROM), or multiple cores can share a cache or ROM. Cores can be homogeneous and/or heterogeneous devices. Frequency or power use of a core can be adjustable. Any type of inter-processor communication techniques can be used, such as but not limited to messaging, inter-processor interrupts (IPI), inter-processor communications, and so forth. Cores can be connected in any type of manner, such as but not limited to, bus, ring, or mesh. Cores can also include a system agent. System agent can include one or more of: a memory controller, a shared cache, a cache coherency manager, arithmetic logic units, floating point units, core or processor interconnects, or bus or link controllers. System agent can provide one or more of: direct memory access (DMA) engine connection, non-cached coherent master connection, data cache coherency between cores with arbitration of cache requests, or Advanced Microcontroller Bus Architecture (AMBA) capabilities. FIG.2Adepicts a system in accordance with some embodiments.
In connection with VM-to-VM communications such as VNF-to-VNF communications or intra-process communications within a VM, the system can copy data from a source memory page associated with a first VM (or process) to a destination memory page associated with a second VM (or process) via an intermediate buffer using a single decrypt operation and a single encrypt operation. Example operations that VNFs can perform include next destination determination, routing, firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS), gateway general packet radio service support node (GGSN), serving general packet radio service support node (SGSN), Radio Network Controller (RNC), or Evolved Packet Core (EPC). The source memory page can be an origin of data and accessible by VM1. VM1can be executed by core202-A, for example. The source memory page can be encrypted with a key identified using a Key ID. Key ID can be provided with a physical address and provisioned into a page table during a memory page allocation phase. For example, a crypto engine can be used to encrypt data using the key. The key can be stored in a memory location accessible to the crypto engine. The key can be accessed by using the Key ID and a source address of the data. The data can be copied without decryption or translation from a source memory page to an intermediate memory buffer. The VM1can initiate a data transfer by instructing virtual channel204to cause a copy operation. Virtual channel204can be assigned to provide VM1-to-VM2communication by the hypervisor of VM1and VM2. VM2can be executed by core202-B, for example. In some examples, VM1and VM2are separate virtual machines. In some examples, VM1and VM2represent different processes that are performed within a single virtual machine. For example, virtual channel (vCH)204provides a single-producer single-consumer channel between VMs. Virtual channel204can include a ring that stores messages provided between virtual machines.
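The single-producer single-consumer message ring just mentioned could look roughly like the sketch below; the slot layout, method names, and command format are illustrative, not taken from the specification:

```python
# Minimal sketch of a single-producer single-consumer message ring of the
# kind the virtual channel is described as providing between VM1 and VM2.
class MessageRing:
    def __init__(self, size: int = 8):
        self.slots = [None] * size
        self.head = 0   # next slot the consumer reads
        self.tail = 0   # next slot the producer writes

    def post(self, msg) -> bool:
        # Producer side (e.g., VM1 posting a transmit command).
        if self.tail - self.head == len(self.slots):
            return False                          # ring full
        self.slots[self.tail % len(self.slots)] = msg
        self.tail += 1
        return True

    def take(self):
        # Consumer side (e.g., the vCH or VM2 draining commands).
        if self.head == self.tail:
            return None                           # ring empty
        msg = self.slots[self.head % len(self.slots)]
        self.head += 1
        return msg

ring = MessageRing()
ring.post({"cmd": "transmit", "src": 0x1000, "len": 64})
msg = ring.take()
assert msg["cmd"] == "transmit"
```

Because exactly one producer and one consumer touch the ring, head and tail each have a single writer, which is what makes this structure suitable for the one-to-one channel the hypervisor provisions.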
In this example, a hypervisor has provisioned virtual channel204as solely between VM1and VM2. Virtual channel204can provide a message ring for message sending between VMs. Virtual channel204can manage an intermediate buffer page or pages212without use of crypto engine222available for use to memory controller220(e.g., without total memory encryption engine (TME) protection). In this example, VM1places a message on a ring of virtual channel204and VM2obtains the message from the ring. The message ring can be stored in volatile memory. In response to a message from VM1to transmit data, virtual channel204can cause memory controller220to copy the data from source buffer210to intermediate memory buffer212but without decryption or encryption. Virtual channel204can also store metadata in a volatile memory. The metadata can include a key ID and starting physical address of the source data. Accordingly, the data stored in intermediate buffer212can be associated with metadata including a key ID and a starting physical address (at the source memory page). For example, data can include one or more of: packet data, packet context, packet header, packet payload, and so forth. The data can be provided for processing by a VNF of VM2for example. The metadata can be used to decrypt the page or data read from the intermediate buffer before writing the data into destination buffer214in one or more memory pages accessible to VM2. In some examples, the metadata is opaque data managed by the virtual channel and is not visible to any virtual machine or hypervisor. In some examples, the metadata can be stored in a memory region that is visible to either or both of VM1and VM2and encrypted. Before decrypting the data from intermediate buffer212, the crypto engine receives the key ID and source address from virtual channel204. For example, the write command from VM1can include a combination of the key ID with the source address, both of which are embodied inside the physical address.
Virtual channel204can post an interrupt to VM2or otherwise indicate that data is available for copy. VM2can issue a receive command to the virtual channel by placing the receive command on the message ring. In response to a receive command, the virtual channel can invoke use of memory controller220and crypto engine222to copy data from intermediate buffer212to destination buffer214. Virtual channel204can cause crypto engine222to use a key accessible from key table224associated with the key ID and source address in order to decrypt the source data stored in the intermediate buffer. Virtual channel204can provide a key ID and source address for a memory access to memory controller220so that crypto engine222can retrieve a key to use to decrypt the source data from the intermediate buffer. In some examples, crypto engine222maintains an internal key table224not accessible by software to store the key associated with each key ID. The physical address (source address) can be used to configure the data encryption or decryption. For example, crypto engine222can access a key from a secure table (e.g., key table224) accessible to crypto engine222. In addition, virtual channel204causes crypto engine222to encrypt the data using a default memory page key for VM2and memory controller220causes the encrypted data (based on the default memory page key for VM2) to be stored into VM2's destination buffer214. In some examples, crypto engine222is in or accessible to one or more memory controllers. Crypto engine222can provide Advanced Encryption Standard (AES)-XEX tweaked-codebook mode with ciphertext stealing (XTS) compatible encryption or decryption. Crypto engine222can have a memory region or access a memory region that stores a key table224with keys and the corresponding encryption/decryption mode to use based on the key. In some examples, the key and corresponding encryption/decryption mode can be indexed using a key ID and verified using the source address.
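The transmit/receive split just described — ciphertext copied unmodified into the intermediate buffer with its key ID and source address recorded as metadata, then a single decrypt and re-encrypt on receive — might be modeled as follows. XOR stands in for the crypto engine's AES-XTS, and the table layout, key IDs, and addresses are assumptions for illustration:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the crypto engine's AES-XTS operations.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Key table as described: indexed by key ID, verified by source address.
key_table = {(0x1, 0x1000): b"vm1-source-key"}
vm2_page_key = b"vm2-page-key"

payload = b"packet header + payload"
source_buffer = xor_crypt(payload, key_table[(0x1, 0x1000)])

# Transmit: the vCH copies ciphertext unmodified and records metadata.
intermediate = source_buffer           # no decrypt/encrypt on this hop
metadata = {"key_id": 0x1, "src_addr": 0x1000}

# Receive: one decrypt with the source key, one encrypt with VM2's page key.
key = key_table[(metadata["key_id"], metadata["src_addr"])]
destination = xor_crypt(xor_crypt(intermediate, key), vm2_page_key)

assert intermediate == source_buffer   # hop 1 moved ciphertext only
assert xor_crypt(destination, vm2_page_key) == payload  # VM2 reads payload
```

Only two crypto operations occur on the whole path, versus the four of the baseline two-copy scheme, which is the saving the embodiments claim.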
For example,FIG.4depicts a non-limiting example of crypto engine222. Crypto engine222can use, be invoked by, or be implemented using one or more of: Intel TME, Intel MK-TME, Intel Software Guard Extensions (SGX), AMD's Secure Memory Encryption (SME), AMD's Secure Encrypted Virtualization (SEV), ARM TrustZone, or derivatives thereof. Note that if there are multiple VM-VM communications, separate virtual channel instances are created with various identifiers. VMs can communicate with another VM that shares an identifier. If a VM attempts to communicate with a VM without a shared identifier, data is not delivered by virtual channel204and an error is returned. FIG.2Bdepicts an example of a system. In this example, virtual channel (vCH)204can be implemented in the same CPU as cores that execute VM1and VM2. For example, vCH204can provide messages between VM1and VM2and commands to memory controller220and crypto engine222. In this example, VM1is provisioned to provide transmit or receive commands to virtual channel204. Likewise, VM2is provisioned to provide transmit or receive commands to virtual channel204. Based on a command from VM1or VM2, virtual channel204can issue a read request for source data in a memory page and request decryption or encryption of the source data or not request any decryption or encryption of the source data. In addition, based on a command from VM1or VM2, vCH204can issue a write request for source data in a memory page and request decryption or encryption of the source data or not request any decryption or encryption of the source data. A key ID can be embedded in unused bits of a physical address and vCH204can retrieve the key ID from a memory read/write command. For example, if VM1requests a copy of data from source buffer210to destination buffer214(e.g., one or more memory pages), VM1can issue a transmit request to virtual channel204.
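The key-ID-in-unused-address-bits encoding mentioned above could be sketched as below. The 46-bit physical address width and field positions are assumptions for illustration; the actual split between address bits and key ID bits is platform-specific:

```python
# Hypothetical bit layout: assume 46 usable physical address bits, leaving
# the bits above them free to carry a key ID alongside the address.
PA_BITS = 46
PA_MASK = (1 << PA_BITS) - 1

def pack(key_id: int, phys_addr: int) -> int:
    # Place the key ID in the otherwise-unused upper bits.
    return (key_id << PA_BITS) | (phys_addr & PA_MASK)

def unpack(encoded: int):
    # Recover (key ID, physical address) from a read/write command.
    return encoded >> PA_BITS, encoded & PA_MASK

encoded = pack(0x5, 0x1234_5000)
key_id, addr = unpack(encoded)
assert (key_id, addr) == (0x5, 0x1234_5000)
```

This is how a single command field can carry both the source address and the key ID that the vCH later hands to the crypto engine.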
Virtual channel204can cause memory controller220to copy data from one or more memory pages in source buffer210to intermediate buffer212(e.g., one or more memory pages) without modification (e.g., without modification by crypto engine). Virtual channel204can store a key ID and the source physical address of the data in the source buffer into a metadata table216. Virtual channel204can inform the VM2of availability of data. The VM2can issue a receive request to virtual channel204to copy data from intermediate buffer212to destination buffer214(e.g., one or more memory pages). Crypto engine222can use a key ID and source address to identify a key from key table224to use to decrypt the data. Crypto engine222can encrypt the data using a page key for VM2and memory controller220can cause the encrypted data to be stored into a memory page of destination buffer214. In some examples, page keys are stored inside or solely accessible to crypto engine222and the VM2page key can be provisioned by VM2or a hypervisor. FIG.3depicts a data movement sequence for copying data from a memory region used by a first virtual machine to a destination memory region used by a second virtual machine. At301, a hypervisor sets up a virtual channel (vCH) in the CPU that can be used for communication between specified virtual machines, and between the virtual machines and the memory controller. The hypervisor also allocates intermediate buffer page(s) in volatile memory. The hypervisor can be a virtual machine monitor (VMM) that creates and supervises virtual machine operation. The hypervisor can reside and execute on the same core, CPU, node, or platform as that which can run one or more of the VMs (or processes within a VM) that are to copy or receive data. A platform can include a core, memory, networking, and communication interfaces and is capable of executing software. At302A, the hypervisor provisions a VM image for VM1and at302B, the hypervisor provisions a VM image for VM2.
In this example, VM1and VM2run on the same CPU as that of the hypervisor. In some examples, VM1and VM2execute on a same platform as that which runs the hypervisor, however either or both of VM1and VM2can be provisioned to run on a different platform than that of the hypervisor. During302A and302B, the hypervisor provisions VM1and VM2with memory page keys, key IDs, and assigned vCH. Accordingly, VM1and VM2are launched with an assigned vCH to use for VM-to-VM communication and the vCH is provisioned with key IDs for use by VM1and VM2. At303A, a VM1constructs a payload in a source buffer for transfer or copy to a memory region associated with VM2. A crypto engine can perform an encryption of the payload in connection with a write of the payload into the source buffer. The encryption can be made using the key associated with key ID provided by the hypervisor at302A. At303B, VM1posts to the vCH a transmit command and includes the starting physical address of data in source buffer and length of payload. The physical address can be a volatile memory starting address of the source data in a source buffer (from the standpoint of the hypervisor). At303C, in response to the transmit command from VM1, vCH invokes a memory controller to copy the payload from the source buffer to an intermediate buffer. The transmit command can include a key ID and a source address. In response, vCH copies encrypted payload from the source buffer in memory (e.g., volatile memory) to the intermediate buffer (e.g., in volatile memory) without modification of the payload (e.g., decryption and/or encryption). The decrypt-then-encrypt operations are bypassed because vCH works as a channel that causes a memory controller to copy plain text payload data (or encrypted payload data) without modification. In some examples, a crypto engine accessible to the memory controller is not used during the copy of the payload from the source buffer to the intermediate buffer. 
The crypto engine is available for use by one or more memory controllers in connection with write or read operations. At303C, vCH causes the key ID for the payload and the payload's starting physical address in the source buffer to be stored into a metadata table associated with the intermediate buffer. In some examples, a key ID can be embedded in unused bits of a physical address provided with a transmit command and vCH204can retrieve the key ID from the transmit command. In some examples, data stored in the source buffer has a key ID allocated by a hypervisor or provisioned by VM1. In some examples, a hypervisor can provide a key ID for the source payload or source buffer to the vCH. The metadata table can be stored in the intermediate buffer or a different memory page than that used to store the intermediate buffer. Also, at303C, the vCH posts an interrupt to VM2that a payload is available. In addition, or alternatively, VM1can provide to VM2a data length of the data copied to the intermediate buffer using a transmit or receive command provided to the vCH. At304A, VM2issues a receive command to the vCH to retrieve the payload from an intermediate buffer and copy the payload to a destination buffer. In some examples, the receive command specifies use of a key ID to use to decrypt the payload and a VM2page key to use to encrypt the payload. At304B, in response to the receive command, vCH provides a command to the memory controller and its crypto engine to perform a copy of the payload from the intermediate buffer and decryption using a key retrieved based on the key ID and the starting address. The vCH accesses the key ID and source address stored in metadata and the vCH provides the key ID and source address to the crypto engine used by a memory controller. The crypto engine retrieves the key using the key ID and source address. The key can be stored in a secure table accessible to the crypto engine.
At304C1, the crypto engine performs decryption of the payload stored in the intermediate buffer (e.g., memory page) into plain text using the key associated with the key ID. At304C2, the crypto engine encrypts the plain text data using a VM2page key. The VM2page key can be provisioned by the hypervisor and stored inside the crypto engine or in a secure memory location accessible to the crypto engine. At304D, the memory controller writes the encrypted data into a destination memory region that is associated with VM2. Accordingly, two different keys can be used for reading and writing data from the intermediate buffer to a destination buffer. In addition, instead of four cryptography operations, two cryptography operations can be performed in connection with secure transfer of data for use by another VM. Accordingly, linking of VNFs with secure sharing of data can be provided. After the VM2processes the data from the VM1, the VM2can make the processed data available to VM1or to another VM or VNF for additional processing. For example, if VM2is to provide result data to VM1, the VM2can invoke vCH to perform a copy of result data through the intermediate buffer to a memory region associated with VM1in a manner described herein using a single decryption operation and a single encryption operation. Likewise, if VM2is to copy result data or any data for use by another VM (e.g., VM3), VM2can invoke vCH to cause a copy of data to a memory region associated with VM3. The examples can be extended whereby a hypervisor provisions VM3with memory page keys, key IDs, and an assigned vCH for use to send or receive data. FIG.4depicts an example of a memory controller with access to a crypto engine. A memory controller can be provided for read or write operations involving a volatile memory. The same or another memory controller can be provided for read or write operations involving a non-volatile memory.
A crypto engine is in a direct data path to external memory buses and all the memory data entering and/or leaving the CPU can be encrypted or decrypted. The data inside the caches and accessible to the cores is in plain text. The encryption key can be generated by the CPU and, therefore, is not visible to the software. Crypto engine can include or access a table402that can store keys. The keys can be accessed using a key ID and source physical address. For example, the crypto engine can receive the key ID and source physical address from a vCH or access the key ID and source physical address from a metadata table. FIG.5Adepicts an example process. At502, a virtual channel for inter-virtual machine or intra-virtual machine communications is provisioned. For example, a hypervisor can provision the virtual channel. The virtual channel can provide a ring for message passing between virtual machines or processes within a virtual machine. In addition, the virtual channel can receive communications from a virtual machine to initiate a payload transmit or payload receive operation. The payload transmit operation can cause a payload in a source buffer to be copied without modification to an intermediate buffer. The key ID and the source address of the payload can be stored in a metadata table. The payload receive operation can cause the payload to be decrypted using a key associated with the key ID and the source address of the payload and encrypted using the same or a different key. At504, virtual machines are set up to use the virtual channel for communications. For example, a hypervisor can cause instantiation of multiple virtual machines and set up the virtual machines to use the particular virtual channel for inter-virtual machine or intra-virtual machine communications. FIG.5Bdepicts an example process. The process can be performed by multiple virtual machines or processes within a virtual machine for a data transfer.
For example, virtual machines (or processes) can be linked VNFs where a first VNF (or process) calls a second VNF (or process) to perform processing on data. At550, a first virtual machine requests data to be copied to a buffer associated with a second virtual machine. The data can be requested to be copied from a source buffer to a destination buffer. At552, the data is copied to an intermediate buffer without modification. For example, a virtual channel provisioned for communication between virtual machines can cause the copy of data from a source buffer to an intermediate buffer. At554, in connection with the copy of the data from the source buffer to the intermediate buffer, the virtual channel can store a key ID and source address associated with the data. At556, the data is copied from the intermediate buffer to the destination buffer. For example, the virtual channel that is managing the operation can cause a memory controller to copy the data and utilize a crypto engine for decrypting the data read from the intermediate buffer and encrypting the data written to the destination buffer. For example,556can include one or more of558to564(FIG.5C). FIG.5Cdepicts an example process of action556. At558, a crypto engine retrieves a key associated with the key ID and source address for the data. The key can be retrieved from a secure table accessible solely or semi-exclusively to the crypto engine using the key ID and the source address for the data. At560, the crypto engine decrypts the data using the key associated with the key ID and source address for the data. At562, the crypto engine encrypts the data using a page key associated with the destination VM. At564, a memory controller causes the encrypted data to be written to the destination buffer. FIG.6depicts a system. The system can use embodiments described herein. System600includes processor610, which provides processing, operation management, and execution of instructions for system600.
Processor610can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system600, or a combination of processors. Processor610controls the overall operation of system600, and can be or include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. In one example, system600includes interface612coupled to processor610, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem620, graphics interface components640, or accelerators642. Interface612represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface640interfaces to graphics components for providing a visual display to a user of system600. In one example, graphics interface640can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface640generates a display based on data stored in memory630or based on operations executed by processor610or both. Accelerators642can be a fixed function offload engine that can be accessed or used by a processor610.
For example, an accelerator among accelerators642can provide compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators642provides field select controller capabilities as described herein. In some cases, accelerators642can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU). For example, accelerators642can include a single or multi-core processor, graphics processing unit, logical execution unit, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs). Accelerators642can provide multiple neural networks, processor cores, or graphics processing units that can be made available for use by artificial intelligence (AI) or machine learning (ML) models. For example, the AI model can use or include any or a combination of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, or Asynchronous Advantage Actor-Critic (A3C), combinatorial neural network, recurrent combinatorial neural network, or other AI or ML model. Memory subsystem620represents the main memory of system600and provides storage for code to be executed by processor610, or data values to be used in executing a routine. Memory subsystem620can include one or more memory devices630such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices.
Memory630stores and hosts, among other things, operating system (OS)632to provide a software platform for execution of instructions in system600. Additionally, applications634can execute on the software platform of OS632from memory630. Applications634represent programs that have their own operational logic to perform execution of one or more functions. Processes636represent agents or routines that provide auxiliary functions to OS632or one or more applications634or a combination. OS632, applications634, and processes636provide software logic to provide functions for system600. In one example, memory subsystem620includes memory controller622, which is a memory controller to generate and issue commands to memory630. It will be understood that memory controller622could be a physical part of processor610or a physical part of interface612. For example, memory controller622can be an integrated memory controller, integrated onto a circuit with processor610. While not specifically illustrated, it will be understood that system600can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus. In one example, system600includes interface614, which can be coupled to interface612.
In one example, interface 614 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 614. Network interface 650 provides system 600 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 650 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 650 can transmit data to a remote device, which can include sending data stored in memory. Network interface 650 can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface 650, processor 610, and memory subsystem 620. In one example, system 600 includes one or more input/output (I/O) interface(s) 660. I/O interface 660 can include one or more interface components through which a user interacts with system 600 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 670 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 600. A dependent connection is one where system 600 provides the software platform or hardware platform or both on which operation executes, and with which a user interacts. In one example, system 600 includes storage subsystem 680 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 684 can overlap with components of memory subsystem 620.
Storage subsystem 680 includes storage device(s) 684, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 684 holds code or instructions and data 686 in a persistent state (i.e., the value is retained despite interruption of power to system 600). Storage 684 can be generically considered to be a “memory,” although memory 630 is typically the executing or operating memory to provide instructions to processor 610. Whereas storage 684 is nonvolatile, memory 630 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 600). In one example, storage subsystem 680 includes controller 682 to interface with storage 684. In one example, controller 682 is a physical part of interface 614 or processor 610, or can include circuits or logic in both processor 610 and interface 614. A power source (not depicted) provides power to the components of system 600. More specifically, the power source typically interfaces to one or multiple power supplies in system 600 to provide power to the components of system 600. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source. In an example, system 600 can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components.
High speed interconnects can be used such as PCIe, Ethernet, or optical interconnects (or a combination thereof). FIG. 7 depicts an example of a network interface. Various embodiments can use the network interface or be used by the network interface. For example, a data center or server can use the network interface. For example, network interface 700 can be a smart network interface controller (NIC) that executes multiple or linked VMs or VNFs. Network interface 700 can use transceiver 702, processors 704, transmit queue 706, receive queue 708, memory 710, bus interface 712, and DMA engine 752. Transceiver 702 can be capable of receiving and transmitting packets from a wired or wireless medium in conformance with the applicable protocols such as Ethernet as described in IEEE 802.3, although other protocols may be used. Transceiver 702 can receive and transmit packets from and to a network via a network medium (not depicted). Transceiver 702 can include PHY circuitry 714 and media access control (MAC) circuitry 716. PHY circuitry 714 can include encoding and decoding circuitry (not shown) to encode and decode data packets according to applicable physical layer specifications or standards. MAC circuitry 716 can be configured to assemble data to be transmitted into packets that include destination and source addresses along with network control information and error detection hash values. Processors 704 can be any combination of: a processor, core, graphics processing unit (GPU), field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other programmable hardware device that allows programming of network interface 700. For example, processors 704 can provide for identification of a resource to use to perform a workload and generation of a bitstream for execution on the selected resource. For example, a “smart network interface” can provide packet processing capabilities in the network interface using processors 704.
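The MAC-layer assembly described above (destination and source addresses, network control information, and an error-detection value) can be illustrated with a rough sketch. The field layout below follows Ethernet II and appends a CRC-32 trailer; it is a simplified illustration, not a faithful model of MAC circuitry 716 or of the exact on-wire FCS encoding.

```python
import struct
import zlib

def assemble_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a simplified Ethernet II style frame: destination and source
    addresses, control information (EtherType), payload, and a CRC-32
    error-detection value appended as the trailer."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    body = header + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # frame check sequence
    return body + fcs

def verify_frame(frame: bytes) -> bool:
    """Receiver-side check: recompute the CRC over everything but the trailer."""
    body, fcs = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body)) == fcs

# Broadcast destination, a made-up source MAC, IPv4 EtherType, toy payload.
frame = assemble_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
```

A corrupted byte anywhere in the header or payload makes the recomputed CRC disagree with the trailer, which is the error-detection property the MAC relies on.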
Interrupt coalesce 722 can perform interrupt moderation, whereby interrupt coalesce 722 waits for multiple packets to arrive, or for a time-out to expire, before generating an interrupt to the host system to process received packet(s). Receive Segment Coalescing (RSC) can be performed by network interface 700, whereby portions of incoming packets are combined into segments of a packet. Network interface 700 provides this coalesced packet to an application. Packet allocator 724 can provide distribution of received packets for processing by multiple CPUs or cores using timeslot allocation described herein or RSS. When packet allocator 724 uses RSS, packet allocator 724 can calculate a hash or make another determination based on contents of a received packet to determine which CPU or core is to process a packet. Direct memory access (DMA) engine 752 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer. Memory 710 can be any type of volatile or non-volatile memory device and can store any queue or instructions used to program network interface 700. Transmit queue 706 can include data or references to data for transmission by the network interface. Receive queue 708 can include data or references to data that was received by the network interface from a network. Descriptor queues 720 can include descriptors that reference data or packets in transmit queue 706 or receive queue 708. Bus interface 712 can provide an interface with the host device (not depicted). For example, bus interface 712 can be compatible with PCI, PCI Express, PCI-x, Serial ATA, and/or a USB compatible interface (although other interconnection standards may be used). FIG. 8 depicts an example of a data center. Various embodiments can be used in or with the data center of FIG. 8.
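The RSS-style distribution performed by packet allocator 724 can be sketched as hashing a flow's addressing tuple into an indirection table that selects a core. Real NICs typically use a keyed Toeplitz hash; the sketch below substitutes CRC-32 to stay self-contained, so it illustrates the mechanism rather than any particular hardware, and the table size and core count are invented.

```python
import struct
import zlib
from collections import Counter

NUM_CORES = 4
# RSS-style indirection table: low bits of the hash index this table to pick a core.
INDIRECTION = [i % NUM_CORES for i in range(128)]

def rss_core(src_ip: bytes, dst_ip: bytes, src_port: int, dst_port: int) -> int:
    """Pick the core for a flow by hashing its addressing tuple.
    CRC-32 stands in here for the Toeplitz hash a real NIC would use."""
    key = src_ip + dst_ip + struct.pack("!HH", src_port, dst_port)
    return INDIRECTION[zlib.crc32(key) % len(INDIRECTION)]

# Packets of one flow always land on the same core; distinct flows spread out.
flows = [(bytes([10, 0, 0, i]), bytes([10, 0, 1, 1]), 40000 + i, 443)
         for i in range(64)]
spread = Counter(rss_core(*f) for f in flows)
```

Because the hash is computed over the flow tuple, all packets of a flow stay on one core (preserving per-flow ordering), while the population of flows is spread across cores.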
As shown in FIG. 8, data center 800 may include an optical fabric 812. Optical fabric 812 may generally include a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 800 can send signals to (and receive signals from) the other sleds in data center 800. However, optical, wireless, and/or electrical signals can be transmitted using fabric 812. The signaling connectivity that optical fabric 812 provides to any given sled may include connectivity both to other sleds in a same rack and sleds in other racks. Data center 800 includes four racks 802A to 802D, and racks 802A to 802D house respective pairs of sleds 804A-1 and 804A-2, 804B-1 and 804B-2, 804C-1 and 804C-2, and 804D-1 and 804D-2. Thus, in this example, data center 800 includes a total of eight sleds. Optical fabric 812 can provide sled signaling connectivity with one or more of the seven other sleds. For example, via optical fabric 812, sled 804A-1 in rack 802A may possess signaling connectivity with sled 804A-2 in rack 802A, as well as the six other sleds 804B-1, 804B-2, 804C-1, 804C-2, 804D-1, and 804D-2 that are distributed among the other racks 802B, 802C, and 802D of data center 800. The embodiments are not limited to this example. For example, fabric 812 can provide optical and/or electrical signaling. Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.
In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “module,” “logic,” “circuit,” or “circuitry.” Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. 
In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that, when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
The appearances of the phrase “one example” or “an example” are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. Some examples may be described using the expressions “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term “asserted” used herein with reference to a signal denotes a state of the signal, in which the signal is active, and which can be achieved by applying any logic level either logic 0 or logic 1 to the signal. The terms “follow” or “after” can refer to immediately following or following after some other event or events. Other sequences of steps may also be performed according to alternative embodiments. Furthermore, additional steps may be added or removed depending on the particular applications.
Any combination of changes can be used and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Additionally, conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including “X, Y, and/or Z.” Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below.
Example 1 includes an apparatus for process-to-process communication in network functions virtualization (NFV) infrastructures, the apparatus comprising: a memory; and at least one processor comprising a memory controller and a crypto engine, the at least one processor to: execute a first network function within a first virtual machine; execute a second network function within a second virtual machine; provide a virtual channel for communication between the first network function and the second network function; and in response to the first network function requesting a copy of data for access by the second network function, the at least one processor is to copy the data without modification to an intermediate buffer, store a reference to a key for the data, and copy the data from the intermediate buffer to a destination buffer by use of the crypto engine to decrypt the data based on the key and encrypt the data prior to storage in the destination buffer accessible to the second network function. Example 2 includes the subject matter of any Example, wherein to store a reference to a key for the data, the at least one processor is to cause storage of a key identifier and source address of the data into a metadata table, wherein the metadata table is accessible to the crypto engine. Example 3 includes the subject matter of any Example, wherein the crypto engine is to access a table to retrieve a key using the key identifier and the source address. Example 4 includes the subject matter of any Example, wherein the virtual channel is to provide a ring for communication between the first network function and the second network function.
Example 5 includes the subject matter of any Example, wherein the first network function is to issue a transmit command to the virtual channel and in response to the transmit command, the virtual channel is to cause the memory controller to copy the data without modification to an intermediate buffer and store the reference to the key for the data. Example 6 includes the subject matter of any Example, wherein the virtual channel is to cause the second network function to issue a receive command to the virtual channel and in response to the receive command, the virtual channel is to cause the memory controller to copy the data from the intermediate buffer to the destination buffer and use the crypto engine to decrypt the data based on the key and encrypt the data using a page key associated with the second network function. Example 7 includes the subject matter of any Example, wherein the crypto engine is to perform Advanced Encryption Standard (AES)-XEX tweaked-codebook mode with ciphertext stealing (XTS) compatible encryption or decryption. Example 8 includes the subject matter of any Example, wherein the first network function or the second network function can perform one or more of: next destination determination, routing, firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS), gateway general packet radio service support node (GGSN), serving general packet radio service support node (SGSN), Radio Network Controller (RNC), or Evolved Packet Core (EPC). Example 9 includes the subject matter of any Example, comprising one or more of: a base station, central office, server, network interface, rack, or data center.
Example 10 includes a method comprising: receiving a request to write data associated with a first virtual network function to a destination buffer associated with a second virtual network function, the first virtual network function and the second virtual network function together performing linked operations; in response to the request to write data, copying the data without modification to an intermediate buffer and storing a key identifier and source address associated with the data; and in response to the request to receive data from the second virtual network function: a crypto engine retrieving a key based on the key identifier and source address; the crypto engine decrypting the data in the intermediate buffer using the key; the crypto engine encrypting the data using a key associated with the second virtual network function; and causing the encrypted data to be written to the destination buffer. Example 11 includes the subject matter of any Example, comprising a virtual channel receiving the request to write data and the virtual channel causing a memory controller to copy the data to the intermediate buffer without modification and the virtual channel causing the crypto engine to retrieve the key from a table based on the key identifier and the source address. Example 12 includes the subject matter of any Example, comprising the virtual channel providing communication among the first virtual network function, the second virtual network function, a memory controller, and the crypto engine. Example 13 includes the subject matter of any Example, wherein the key identifier and the source address are hidden from the second virtual network function. 
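The transmit/receive flow of Example 10, in which data is copied unmodified to an intermediate buffer with a stored key reference, then decrypted with the source key and re-encrypted with the destination function's key before landing in the destination buffer, can be sketched as follows. This is a toy model: the class and function names are invented, and a SHA-256-based XOR keystream stands in for the AES-XTS cipher the examples call for, to keep the sketch free of external libraries.

```python
import hashlib
from dataclasses import dataclass

def toy_crypt(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher (XOR with a SHA-256 keystream).
    NOT AES-XTS; used only so the sketch is self-contained and reversible."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

@dataclass
class VirtualChannel:
    """Hypothetical model of the virtual channel + crypto engine of Example 10."""
    key_table: dict                 # key identifier -> per-NF page key (secure table)
    intermediate: bytes = b""       # intermediate buffer
    metadata: tuple = None          # (key identifier, source address), hidden from NF2

    def transmit(self, ciphertext: bytes, key_id: str, source_address: int):
        # Copy without modification; record only the reference to the key.
        self.intermediate = ciphertext
        self.metadata = (key_id, source_address)

    def receive(self, dest_key_id: str) -> bytes:
        key_id, _src = self.metadata
        plaintext = toy_crypt(self.key_table[key_id], self.intermediate)   # decrypt
        return toy_crypt(self.key_table[dest_key_id], plaintext)           # re-encrypt

keys = {"nf1": b"page-key-of-nf1", "nf2": b"page-key-of-nf2"}
chan = VirtualChannel(key_table=keys)
secret = b"flow state handed from NF1 to NF2"
chan.transmit(toy_crypt(keys["nf1"], secret), key_id="nf1", source_address=0x1000)
dest_buffer = chan.receive(dest_key_id="nf2")
```

Note that the plaintext never rests in the intermediate buffer: the copy-in is unmodified ciphertext, and only the destination buffer holds data under the second function's key.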
Example 14 includes the subject matter of any Example, wherein the first network function or the second network function can perform one or more of: next destination determination, routing, firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS), gateway general packet radio service support node (GGSN), serving general packet radio service support node (SGSN), Radio Network Controller (RNC), or Evolved Packet Core (EPC). Example 15 includes the subject matter of any Example, wherein the crypto engine applies Advanced Encryption Standard (AES)-XEX tweaked-codebook mode with ciphertext stealing (XTS) compatible encryption or decryption. Example 16 includes the subject matter of any Example, comprising: a hypervisor setting up a virtual channel and the hypervisor starting the first network function and the second network function and programming the first network function and the second network function to use the virtual channel for communication. Example 17 includes a system for virtual network function linking comprising: an interface; a memory controller comprising a crypto engine; at least one memory; and at least one core communicatively coupled to the interface, the memory controller, and the at least one memory, the at least one core to: execute a first virtual network function; execute a second virtual network function; in response to a transmit request from the first virtual network function, cause data to be written to an intermediate buffer without modification and cause a key identifier and source address to be stored; and in response to a receive request from the second virtual network function, cause the data in the intermediate buffer to be: decrypted based on a key associated with the key identifier and the source address, encrypted using a key associated with the second virtual network function, and the encrypted data to be written to a destination buffer associated with the second virtual network function.
Example 18 includes the subject matter of any Example, wherein the crypto engine is to access the key associated with the key identifier and the source address from a secure table. Example 19 includes the subject matter of any Example, wherein the first network function or the second network function can perform one or more of: next destination determination, routing, firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS), gateway general packet radio service support node (GGSN), serving general packet radio service support node (SGSN), Radio Network Controller (RNC), or Evolved Packet Core (EPC). Example 20 includes the subject matter of any Example, wherein the crypto engine is to apply Advanced Encryption Standard (AES)-XEX tweaked-codebook mode with ciphertext stealing (XTS) compatible encryption or decryption. Example 21 includes the subject matter of any Example, wherein the interface comprises one or more of: a network interface, a fabric interface, or a bus interface.
11943341
The figures are not to scale. Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts, elements, etc. DETAILED DESCRIPTION Example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement contextual key management for data encryption are disclosed herein. Example apparatus disclosed herein to perform contextual encryption key management, which are also referred to herein as contextual key managers, include an example context discoverer to discover context information associated with a request to access first encrypted data. Such disclosed example apparatus also include an example contextual key mapper to identify a combination of context rules associated with a key that is to provide access to the first encrypted data, validate the context information associated with the request based on the combination of context rules associated with the key to determine whether the request to access the first encrypted data is valid, and obtain the key from a key management service when the request to access the first encrypted data is valid. In some disclosed examples, the contextual key mapper is further to provide the key to an encryption engine that is to decrypt the first encrypted data in response to the request to access the first encrypted data. Additionally or alternatively, at least some disclosed example apparatus further include a context rule engine to define a plurality of possible contexts and associated context rules to be evaluated by the context discoverer and the contextual key mapper.
For example, the plurality of possible contexts can be a plurality of heterogeneous contexts including at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context, a data destination context and/or other context(s). In such examples, the combination of context rules can include at least two of a data classification context rule associated with the key, an access classification context rule associated with the key, a geographic location context rule associated with the key, a time context rule associated with the key, a business organization context rule associated with the key, a user context rule associated with the key, a data destination context rule associated with the key and/or other context rule(s) associated with the key. Furthermore, in at least some such disclosed examples, the context discoverer is to determine at least two of a data classification context value associated with the request to access the first encrypted data, an access classification context value associated with the request to access the first encrypted data, a geographic location context value associated with the request to access the first encrypted data, a time context value associated with the request to access the first encrypted data, a business organization context value associated with the request to access the first encrypted data, a user context value associated with the request to access the first encrypted data, a data destination context value associated with the request to access the first encrypted data, and/or other context values(s) associated with the request to access the first encrypted data. 
Additionally or alternatively, in at least some disclosed examples in which the context information is first context information, the context discoverer is further to evaluate, based on the plurality of possible contexts, second context information associated with first unencrypted data to determine the combination of context rules associated with the key. In at least some such disclosed examples, the context discoverer evaluates the second context information associated with the first unencrypted data in response to a request to encrypt the first unencrypted data to form the first encrypted data. Furthermore, in at least some such disclosed examples, the contextual key mapper is to determine whether the combination of context rules has been determined previously for other unencrypted data that has undergone encryption, send a request to the key management service to retrieve the key when the combination of context rules has been determined previously for other unencrypted data that has undergone encryption, or send a request to the key management service to generate the key when the combination of context rules has not been determined previously for other unencrypted data that has undergone encryption. In at least some such disclosed examples, the contextual key mapper may further provide the key to the encryption engine to encrypt the first unencrypted data to form the first encrypted data. Also, in at least some such disclosed examples, the contextual key mapper is further to map the combination of context rules associated with the key to a key identifier identifying the key, provide the key identifier to the encryption engine, which is to include the key identifier with the first encrypted data, and in response to the request to access the first encrypted data, map the key identifier included with the first encrypted data to the combination of context rules associated with the key to identify the combination of context rules associated with the key. 
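The retrieve-or-generate behavior described above, where a previously seen combination of context rules maps to an existing key and a new combination triggers key generation by the key management service, might be sketched as follows. All names here (the toy KMS class, the rule-string format, the key-identifier derivation) are hypothetical illustrations, not the disclosed implementation.

```python
import hashlib
import os

class ToyKeyManagementService:
    """Hypothetical stand-in for a key management service: stores keys by identifier."""
    def __init__(self):
        self._keys = {}

    def generate(self, key_id: str) -> bytes:
        return self._keys.setdefault(key_id, os.urandom(32))

    def retrieve(self, key_id: str) -> bytes:
        return self._keys[key_id]

class ContextualKeyMapper:
    """Map a combination of context rules to one key, creating the key on first use."""
    def __init__(self, kms):
        self.kms = kms
        self.rule_to_key_id = {}      # frozenset of rules -> key identifier

    def key_for(self, context_rules: frozenset):
        if context_rules in self.rule_to_key_id:          # seen before: retrieve
            key_id = self.rule_to_key_id[context_rules]
            return key_id, self.kms.retrieve(key_id)
        # New combination: derive a stable identifier and ask the KMS for a new key.
        key_id = hashlib.sha256("|".join(sorted(context_rules)).encode()).hexdigest()[:16]
        self.rule_to_key_id[context_rules] = key_id
        return key_id, self.kms.generate(key_id)

mapper = ContextualKeyMapper(ToyKeyManagementService())
hipaa = frozenset({"data:HIPAA"})
hipaa_secret = frozenset({"data:HIPAA", "classification:secret"})
id1, key1 = mapper.key_for(hipaa)            # first combination -> new key
id3, key3 = mapper.key_for(hipaa_secret)     # different combination -> different key
```

The mapping table lets the key identifier travel with the encrypted data while the rule combination (and the key itself) stays behind the mapper and the KMS.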
These and other example methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement contextual key management for data encryption are disclosed in further detail below. Contextual key management for data encryption as disclosed herein provides technical solutions to the technical problem of managing a complex data protection environment in which data is to be protected and access is to be restricted to varying degrees depending on a range of heterogeneous contextual factors. These heterogeneous contextual factors may be diverse and examples include, but are not limited to, a classification of the data, a storage location of the data, a location from which the data is permitted to be accessed, a time at which the data is permitted to be accessed, identification information for a system and/or user accessing the data, etc. Many existing encryption techniques for data protection utilize a single encryption key for all data, or simplistic rules based on having just a few different keys selected based on homogenous factors, such as different classifications of data content. Having large amounts of data encrypted with just a few keys can make key management simple but increases the risk of theft of large amounts of data because theft of a single key can expose many data files, data elements, etc. Furthermore, from an access control point-of-view, the risk vector is large because the fewer the number of keys, the larger the group of people that has access to each key, which increases the vulnerability of each key. Conversely, data protection solutions in which many keys are used to divide data into smaller groups for protection can reduce risk and improve access control granularity, but may require a more complex management layer for controlling use of and access to these keys. 
Example contextual key management solutions disclosed herein provide highly granular protection, with a potentially unlimited number of keys used to protect different categories of data, while also providing simple and intuitive key management by hiding key management complexity from the administrator. Turning to the figures, a block diagram of an example computing environment 100 including an example data center 105 with an example contextual key manager 110 implemented in accordance with teachings of this disclosure to perform contextual key management for data encryption is illustrated in FIG. 1. The example data center 105 can be any type of data center, cloud service, server farm, etc., that receives data, stores data, provides access to data, etc. In the illustrated example of FIG. 1, the data center 105 is in communication with one or more computing devices, such as the example computing devices 115A-C, which provide and/or gain access to data maintained by the data center 105. The example computing devices 115A-C include one or more of, but are not limited to, an example mobile phone 115A, an example notebook computer 115B and an example desktop computer 115C. The computing devices 115A-C communicate with the data center 105 via an example network 120, which can be implemented by any type(s) and/or number of networks, such as one or more wireless networks (e.g., mobile cellular networks, wireless local area networks, satellite networks, etc.), one or more wired networks (e.g., cable networks, dedicated transmission lines, etc.), the Internet, etc. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
The example data center105ofFIG.1also includes or has access to an example key management service125. As mentioned above, the encryption keys utilized in a data protection ecosystem, such as the example computing environment100, are typically stored within a key management service, such as the key management service125. In the illustrated example, the key management service125provides secure key storage backed up with hardware security measures to prevent at-rest key theft. Further, the key management service125releases the appropriate encryption keys once a system or user has been authenticated. In the illustrated example ofFIG.1, the contextual key manager110enhances the key management service125by implementing a mapping layer between contextual factors and the encryption keys. The mapping layer implemented by the contextual key manager110is based on relatively simple rules involving contextual information, which allows for easy management of data access control. For example, the contextual key manager110may specify a first context and associated context rule in which a first key, Key1, is to be used to protect data governed under the Health Insurance Portability and Accountability Act (HIPAA data), may specify a second context and associated second context rule in which a second key, Key2, is to be used to protect secret data, etc. When data (e.g., a data file, a data object, etc.) is to be encrypted, the contextual key manager110discovers relevant contextual information (e.g., based on a defined set of possible contexts, as disclosed below), and uses the discovered context information to identify the one or more context rules associated with the data and to infer, based on the context rule(s), a particular key (which may require creation of a new key, as disclosed below) to be used to encrypt the data. 
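The mapping layer between individual context rules and encryption keys described above can be sketched as follows. This is a minimal illustration only; the class and method names (e.g., `ContextKeyMap`, `define_rule`) are hypothetical and not part of the disclosure:

```python
# Hypothetical sketch of the mapping layer: individual context rules
# (e.g., "HIPAA data" -> Key1, "secret" -> Key2) are mapped to key
# identifiers, so the key to use follows directly from the discovered
# context. All names are illustrative.

class ContextKeyMap:
    """Maps individual context values to key identifiers."""

    def __init__(self):
        self._rule_to_key = {}

    def define_rule(self, context_value, key_id):
        # e.g., define_rule("HIPAA data", "Key1")
        self._rule_to_key[context_value] = key_id

    def key_for(self, context_value):
        # Returns None when no rule covers the context value.
        return self._rule_to_key.get(context_value)


mapping = ContextKeyMap()
mapping.define_rule("HIPAA data", "Key1")  # first context rule
mapping.define_rule("secret", "Key2")      # second context rule
print(mapping.key_for("HIPAA data"))  # Key1
```

In practice the right-hand side of this map would be a key ID held by a key management service rather than the key material itself.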
For example, the contextual key manager110may discover that a first document contains HIPAA data and, thus, determines that Key1 corresponding to the first context rule described above is to be used to protect (e.g., encrypt) the document, whereas the contextual key manager110may discover that a second document contains HIPAA data and is secret and, thus, determines that a new key, Key3, needs to be created to cover the combination of first and second context rules described above for these context values. The contextual key manager110may discover context information for data using any context discovery mechanism or combination of context discovery mechanisms such as, but not limited to, analyzing text stored in a data file, analyzing header data of the data file, analyzing metadata contained in the data file, etc. When access to encrypted data (e.g., an encrypted data file, an encrypted data object, etc.) is requested (such as when key release is requested), the contextual key manager110determines whether to grant the requesting user (or system, device, etc.) access to the key dependent on relatively simple rules governing the contextual information to which the particular key for that encrypted data is mapped. Although the individual context rules themselves are relatively simple, the combination of the context values (which may be many) yields an associated combination of context rules that provides highly granular control. For example, the contextual key manager110may determine that a requested, protected document contains HIPAA data and is secret, and may discover that the requesting user is permitted access to HIPAA data but is not permitted to access secret data. 
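One way such context discovery could look is sketched below, assuming keyword matching over document text and metadata tags. The indicator patterns are invented for illustration; as noted above, any discovery mechanism (text analysis, header data, metadata, etc.) could be used:

```python
import re

# Hedged sketch of context discovery: scan a document's text and metadata
# for indicators of each defined context value. The keyword patterns are
# purely illustrative assumptions.

CONTEXT_INDICATORS = {
    "HIPAA data": re.compile(r"\b(patient|diagnosis|medical record)\b", re.I),
    "secret": re.compile(r"\bsecret\b", re.I),
}

def discover_context(text, metadata=None):
    """Return the set of context values discovered for a document."""
    found = {ctx for ctx, pattern in CONTEXT_INDICATORS.items()
             if pattern.search(text)}
    # Metadata tags can also carry context values directly.
    for tag in (metadata or {}).get("tags", []):
        if tag in CONTEXT_INDICATORS:
            found.add(tag)
    return found


doc = "Patient diagnosis attached. This memo is SECRET."
print(sorted(discover_context(doc)))  # ['HIPAA data', 'secret']
```

A document matching both indicators would, per the example above, be mapped to a combined key such as Key3 rather than to Key1 or Key2 alone.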
Thus, in such an example, the contextual key manager110derives a combined access rule implied by the intersection between the individual access rules for HIPAA data and secret data, which results in a determination that the user cannot access the particular key (e.g., Key3) associated with the combined access rule for that document. By implementing such a mapping layer, the contextual key manager110provides several benefits to the data center105. For example, the contextual key manager110enhances the data protection ecosystem provided by the data center105by supporting potentially complex encryption key usage and/or access scenarios that are nonetheless enforced by a relatively simple and intuitive set of rules. Also, there is no need for manual key management, and therefore no practical limit to the number of keys managed by the contextual key manager110. Rather, the contextual key manager110provides a flexible and automatic key management capability to be harnessed by users of the data center105without having to limit the number of keys to an amount capable of being managed by a human. Furthermore, because use of the contextual key manager110results in fewer data objects being encrypted with any given key, the impact of any given key being stolen is reduced. As yet another example benefit, the contextual information discovered at the point of data protection by the contextual key manager110is recorded in a way that is indelible and remote to the data itself because the encryption key obtained by the contextual key manager110for a given data object is specific to the context values determined for that data object at the point of encryption and, thus, forms a contextual fingerprint for the key and the associated data being protected by the key. For example, documents encrypted with Key3 in the example above, by definition, include HIPAA data and are secret. 
In other words, the combination of context values determined by the contextual key manager110for given encrypted data can be inferred from the combination of context rules mapped to the key for that given encrypted data. By comparison, tagging of files with their classification, as is done in existing data loss prevention systems, is open to tampering by bad actors. The example data center105ofFIG.1also includes an example administrative workstation130to allow an operator to administer the data center105and control operation of the contextual key manager110and the example key management service125. As disclosed in further detail below, the contextual key manager110is able to accept user input (e.g., from the administrative workstation130) defining set(s) of possible contexts and/or associated context access rules to be used by the contextual key manager110. The example administrative workstation130can be implemented by any type of computing device, and the example data center105can include any number of administrative workstations130. A more detailed block diagram of the example data center105ofFIG.1is illustrated inFIG.2. In the illustrated example ofFIG.2, aspects of the data center105related to contextual key management for data encryption are shown, whereas other aspects are omitted for clarity. The example data center105ofFIG.2includes the example contextual key manager110and the example key management service125described above in connection withFIG.1. The example data center105ofFIG.2also includes an example administrative user interface205, an example encryption engine210and example business logic215. In the illustrated example ofFIG.2, the administrative user interface205interfaces with or is otherwise in communication with the example administrative workstation130to obtain context and access control configuration information to be used to configure operation of the contextual key manager110. 
Operation of the administrative user interface205is described in further detail below. In the illustrated example ofFIG.2, the encryption engine210is provided to encrypt data and decrypt encrypted data. The encryption engine210can implement any number and/or type(s) of data encryption techniques, which are configured according to rules specified by the business logic215. Operation of the encryption engine210is described in further detail below. The example key management service125ofFIG.2includes an example encryption key server220and an example key access controller225. The encryption key server220of the illustrated example creates and returns keys to be used to encrypt and decrypt data. The key access controller225of the illustrated example controls access to the keys to be served by the encryption key server220. The key access controller225may implement any conventional or unconventional access control technique(s), or combination thereof, to determine whether to provide access to keys requested from the encryption key server220. Operation of the encryption key server220is described in further detail below. In the illustrated example ofFIG.2, the contextual key manager110includes an example contextual rules engine230, an example context discoverer235and an example contextual key mapper240. The contextual rules engine230of the illustrated example is an enhancement over prior classification rule engines used in data loss prevention systems. Unlike prior classification rule engines, the contextual rules engine230defines classification rules based on potentially many more heterogeneous levels of contextual information. In some examples, each context rule defines enforcement criteria, and may or may not specify an access restriction. For example, the contextual rules engine230can, based on user input received from the administrative user interface205, define a set of possible contexts each having one or more possible context values. 
Examples of possible contexts and associated context values include, but are not limited to, a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context, a data destination context, etc. For example, the contextual rules engine230may define a data classification context having possible context values of “HIPAA data,” which is a context for data associated with medical content and having an access restriction, “PCI data,” which is a context for data associated with payment card information and having an access restriction, etc. As another example, the contextual rules engine230may define an access classification context having possible context values of “secret,” which is a context for data that is secret and has an access restriction, “confidential,” which is a context for data that is confidential and has an access restriction, etc. As a further example, the contextual rules engine230may define a business organization context having possible context values of “HR,” which is a context for data associated with a human resources organization and having an access restriction, “finance,” which is a context for data associated with a financial organization and having an access restriction, etc. As yet another example, the contextual rules engine230may define a user context having possible context values associated with user names (e.g., “Jane Doe”) and may not have an access restriction. As yet a further example, the contextual rules engine230may define a data destination context having possible context values associated with a destination of the data, such as, but not limited to, “SharePoint,” which is a context for data being uploaded to a SharePoint server and that may have no access restriction, etc. Operation of the contextual rules engine230is described in further detail below. 
In the illustrated example, the contextual rules engine230also defines, based on user input received from the administrative user interface205, a set of individual context rules based on the possible contexts and associated context values. For example, a first context rule may specify that data having a data classification context value of “HIPAA data” is to be protected (e.g., encrypted and subject to access restriction), a second context rule may specify that data having a data classification context value of “PCI data” is to be protected (e.g., encrypted and subject to access restriction), a third context rule may specify that data having an access classification context value of “secret” is to be protected (e.g., encrypted and subject to access restriction), a fourth context rule may specify that data having an access classification context value of “confidential” is to be protected (e.g., encrypted and subject to access restriction), a fifth context rule may specify that data having a business organization context value of “HR” is to be protected (e.g., encrypted and subject to access restriction), a sixth context rule may specify that data having a business organization context value of “finance” is to be protected (e.g., encrypted and subject to access restriction), a seventh context rule may specify that data having a user context value of “Jane Doe” is to be protected (e.g., encrypted and subject to access restriction), etc. The context discoverer235of the illustrated example discovers contextual information relating to the data to be encrypted. The context discoverer235performs such discovery at the point a data protection policy is enforced (e.g., such as when the data is moved to backup storage, etc.). The context discoverer235discovers the context information for the data to be encrypted based on the group of possible contexts and associated possible context values and context rules defined by the contextual rules engine230. 
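The contextual policy just described, namely possible contexts with their possible values and individual rules flagging which values require protection, could be represented along the following lines. The data structures are illustrative assumptions, not the disclosed implementation:

```python
# Sketch of a contextual policy: possible contexts with their possible
# values, and individual context rules marking which values require
# protection. All entries mirror the examples in the text above.

POSSIBLE_CONTEXTS = {
    "data classification": {"HIPAA data", "PCI data"},
    "access classification": {"secret", "confidential"},
    "business organization": {"HR", "finance"},
    "user": {"Jane Doe"},
}

# Individual context rules: context value -> protection required?
CONTEXT_RULES = {
    "HIPAA data": True, "PCI data": True,
    "secret": True, "confidential": True,
    "HR": True, "finance": True,
    "Jane Doe": True,
}

def requires_protection(context_values):
    """Data is protected if any discovered value has a protection rule."""
    return any(CONTEXT_RULES.get(v, False) for v in context_values)


print(requires_protection({"HIPAA data", "SharePoint"}))  # True
```

A destination-only value such as “SharePoint” carries no protection rule here, consistent with the data destination context example above.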
By parsing text and/or other information included in data (e.g., a data file, a data object, etc.) to be encrypted, evaluating header information, metadata, etc., associated with the given data to be encrypted, evaluating other contextual values such as system clock and/or network settings, etc., and/or implementing any other context discovery technique(s), the context discoverer235determines context values (e.g., user, destination, file type, time of day, geolocation, etc.) for the given data that correspond to the group of possible contexts defined by the contextual rules engine230. In some examples, the context discoverer235limits discovery of contextual information to the possible contexts for which rules have been defined by the contextual rules engine230. Operation of the context discoverer235is described in further detail below. The contextual key mapper240of the illustrated example is responsible for ensuring keys exist for the possible contexts and associated context values and context rules that have been defined by the contextual rules engine230. The contextual key mapper240also ensures contextual access restrictions are met before requesting keys from the key management service125. While the total number of individual possible context values and associated context rules (and keys associated with them) may be relatively small, the number of combinations of these context values and associated context rules can become large. Specifically, when combined, N possible individual context values can yield 2^N−1 different keys, with each key corresponding to a respective different combination of context rules corresponding to the possible individual context values. A feature of the contextual key mapper240is to request key creation on-demand, which helps prevent a performance and storage bottleneck within the key management service125. 
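The on-demand key mapping can be sketched as follows: N individual context values admit 2^N−1 non-empty combinations, but a key is created only when a combination is first encountered, so the map stays sparse. The class name and the uuid-based stand-in for the key service are illustrative assumptions:

```python
import itertools
import uuid

# Sketch of on-demand key mapping: keys for combinations of context
# values are created only when a combination is first encountered,
# forming a sparse map. uuid4 stands in for the key service.

class SparseKeyMap:
    def __init__(self):
        self._combo_to_key = {}  # frozenset of context values -> key ID

    def key_for(self, context_values):
        combo = frozenset(context_values)
        if combo not in self._combo_to_key:
            # On-demand creation: request a new key only now.
            self._combo_to_key[combo] = "key-" + uuid.uuid4().hex[:8]
        return self._combo_to_key[combo]


values = ["HIPAA data", "secret", "confidential"]
n_combos = sum(1 for r in range(1, len(values) + 1)
               for _ in itertools.combinations(values, r))
print(n_combos)  # 7, i.e. 2**3 - 1 possible keys

m = SparseKeyMap()
k1 = m.key_for({"HIPAA data", "secret"})
k2 = m.key_for({"secret", "HIPAA data"})  # same combination, same key
print(k1 == k2)               # True
print(len(m._combo_to_key))   # 1: only one key actually created
```

Using a frozenset makes the combination order-independent, so the same set of discovered context values always resolves to the same key ID.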
In this way, the contextual key mapper240forms a sparse matrix that maps combinations of individual context rules to corresponding keys only as the specific combinations of context values associated with the individual context rules are encountered, and after some time the sparse matrix is likely to tend to a stable state. In some examples, the contextual key mapper240is implemented as a separate component that interfaces with the key management service125 (e.g., an existing key management service). In other examples, the contextual key mapper240is implemented as part of the key management service125. To implement such functionality, the example contextual key mapper240ofFIG.2includes an example key context mapping engine245and an example contextual access controller250. Operation of the contextual key mapper240, including the key context mapping engine245and the contextual access controller250, is described in further detail below. Further implementation details of the contextual key manager110are described in the context ofFIGS.3-5, which illustrate three example stages of operation for the contextual key manager110. An example operation stage300of the contextual key manager110to create and/or modify a contextual policy is illustrated inFIG.3. In the example operation stage300ofFIG.3, the contextual rules engine230receives user input data from the administrative user interface205specifying additional and/or updated context configuration data, and defines the set of possible individual contexts, associated possible context values and context access rules based on the user input configuration data (labeled as operation “1” inFIG.3). 
Then, for each individual context rule defined by the contextual rules engine230, the key context mapping engine245of the contextual key mapper240creates a mapping between a given possible context value associated with that context rule and a respective key identifier (ID) of a key that is to encrypt/decrypt data associated with the given context rule (labeled as operation “2” inFIG.3). For example, key context mapping engine245may map a first context rule, which specifies that data having a data classification context value of “HIPAA data” is to be protected (e.g., encrypted), to a first key (e.g., Key1), whereas the key context mapping engine245may map a second context rule, which specifies that data having a data classification context value of “PCI data” is to be protected (e.g., encrypted), to a second key (e.g., Key2), whereas the key context mapping engine245may map a third context rule, which specifies that data having an access classification context value of “secret” is to be protected (e.g., encrypted), to a third key (e.g., Key3), etc. If a key is not yet available for a given individual context rule, the key context mapping engine245requests a new key from the key management service125(labeled as operation “3” inFIG.3), which returns the key ID of the new key (labeled as operation “4” inFIG.3). If a given context value is associated with a context rule specifying an access restriction, the context rule specifying the access restriction associated with the context value is stored in the contextual access controller250of the contextual key mapper240(labeled as operation “5” inFIG.3). 
Thus, when the contextual policy is defined, the only keys created are those directly mapped to individual context rules based on individual context values, and no keys are mapped to combinations of the context rules/values (e.g., respective keys are mapped to the individual context rules based on the individual context values of “HIPAA data,” “PCI data,” “secret,” “confidential,” etc., but no keys are mapped at this point for a combination of context rules based on a combination of “HIPAA data” and “secret” context values, or a combination of “PCI data” and “confidential” context values, etc.). Rather, the keys that are mapped to combinations of context rules corresponding to combinations of the context values are generated on-demand, as described below. An example operation stage400of the contextual key manager110to perform data encryption enforcement is illustrated inFIG.4. In the example operation stage400ofFIG.4, a request to encrypt an example data file405is received by the encryption engine210(labeled as operation “1” inFIG.4). For example, this request may come from a system function that is storing the data file405in persistent storage. In response to the request, the context discoverer235discovers context information associated with the data file405based on the set of possible contexts and associated context values and context rules previously defined by the contextual rules engine230(labeled as operations “2” and “3” inFIG.4). The context information discovered by the context discoverer235for the data file405includes the context values associated with the data file405that correspond to the set of possible contexts and associated context values and context rules previously defined by the contextual rules engine230. 
As described above, the context discoverer235can discover this context information based on parsing text and/or other information from the data file405, evaluating header data, metadata, etc., associated with the data file405, and/or evaluating other external sources, such as system clock, geolocation, network data, etc. Next, the context discoverer235provides the set of context values discovered for the data file405(e.g., as a list, data array, etc.) to the contextual key mapper240, which returns the particular key, corresponding to the set of context rules defined for that set of context values, to be used by the encryption engine210to encrypt the data file405(e.g., corresponding to operations “4,” “5,” “6,” “7” and “8” inFIG.4). For example, if the context information discovered by the context discoverer235for the data file405includes only one context value, the key context mapping engine245of the contextual key mapper240can identify, based on the map (e.g., mapping matrix) created above in connection withFIG.3, the key ID of the key mapped to the individual context rule corresponding to that individual context value, and request that key from the key management service125by including the key ID in the request (e.g., corresponding to operations “6” and “7” inFIG.4). For example, the key context mapping engine245can send a request to the key management service125to return the key related to the context rule specifying that data having the context value “HIPAA data” is to be protected (e.g., encrypted). 
However, if the context information discovered by the context discoverer235for the data file405includes multiple context values, the key context mapping engine245of the contextual key mapper240will attempt to identify a key ID that is already mapped to a combination of context rules corresponding to that particular combination of context values (e.g., because that combination of context values was discovered previously for other data that has undergone encryption). If such a key ID mapping is found, the key context mapping engine245can request that key from the key management service125by including the key ID in the request (e.g., corresponding to operations “6” and “7” inFIG.4). For example, the key context mapping engine245can send a request to the key management service125to return the key mapped to the combination of context rules specifying that (i) data having the context value of “HIPAA data” is to be protected (e.g., encrypted) and (ii) data having the context value of “secret” is to be protected (e.g., encrypted). However, if the key context mapping engine245determines that no key ID is yet mapped to a combination of context rules corresponding to the particular combination of context values, the key context mapping engine245sends a request to the key management service125to obtain a new key (e.g., corresponding to operations “6” and “7” inFIG.4) to be associated with the combination of context rules corresponding to that particular combination of context values. For example, the key context mapping engine245can send a request to the key management service125to return the key that will be mapped to a combination of context rules specifying that (i) data having the context value of “HIPAA data” is to be protected (e.g., encrypted) and (ii) data having the context value of “confidential” is to be protected (e.g., encrypted). 
The key context mapping engine245then maps the key ID of the new key to the combination of context rules corresponding to this particular combination of context values and stores this mapping for future use. After the key context mapping engine245of the contextual key mapper240obtains the requested key from the key management service125, the key context mapping engine245returns the key to the encryption engine210(labeled as operation “8” inFIG.4). The encryption engine210then uses this key to encrypt the data file405(labeled as operation “9” inFIG.4). The encryption engine210also associates the key ID of the key with the encrypted data file405(e.g., by including the key ID in header information and/or metadata of the encrypted data file405, etc.). In this way, the contextual key manager110causes the data file405to be encrypted by a particular key that corresponds to a particular combination of context information discovered for the data file405. An example operation stage500of the contextual key manager110to provide access to encrypted data is illustrated inFIG.5. In the example operation stage500ofFIG.5, a request to access an example encrypted data file505is received by the encryption engine210(labeled as operation “1” inFIG.5). For example, this request may come from a user's computing device that is requesting access to the encrypted data file505from persistent storage. In response to the access request, the context discoverer235discovers context information associated with the access request based on the set of possible contexts and associated context values and context rules previously defined by the contextual rules engine230(labeled as operation “2” inFIG.5). The context information discovered by the context discoverer235for the access request includes the context values associated with the access request that correspond to the set of possible contexts and associated context values previously defined by the contextual rules engine230. 
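The enforcement stage just described, namely discover context values, obtain (or create) the key for that combination, encrypt, and record the key ID with the output, can be sketched end to end as below. The XOR "cipher" and in-memory key store are deliberate toy stand-ins for the real encryption engine and key management service, and all names are hypothetical:

```python
import secrets

# End-to-end sketch of the enforcement stage. The in-memory KEY_STORE
# stands in for the key management service, and xor_cipher stands in
# for a real encryption engine (it is NOT secure encryption).

KEY_STORE = {}  # key ID -> key bytes

def get_or_create_key(context_values):
    # A stable ID per combination of context values; a new key is
    # created on-demand the first time a combination is seen.
    key_id = "|".join(sorted(context_values))
    if key_id not in KEY_STORE:
        KEY_STORE[key_id] = secrets.token_bytes(32)
    return key_id, KEY_STORE[key_id]

def xor_cipher(data, key):
    # Toy symmetric transform: applying it twice restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def protect(data, discovered_context):
    key_id, key = get_or_create_key(discovered_context)
    # The key ID travels with the ciphertext, like header metadata.
    return {"key_id": key_id, "ciphertext": xor_cipher(data, key)}


blob = protect(b"patient record", {"HIPAA data", "secret"})
print(blob["key_id"])  # HIPAA data|secret
```

Because the key ID is derived from the sorted combination of context values, any later request for the same combination resolves to the same key, mirroring the sparse on-demand mapping described above.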
The context discoverer235can discover this context information based on data, such as metadata, information elements, etc., included in the access request and/or signaling conveying the access request, etc., but not from the content of the encrypted data file505itself, which is encrypted. For example, the context discoverer235can discover the identity of the user requesting access to the data file505, and potentially other contextual information, such as a geographic location of the user, time-of-day, etc. In the illustrated example, the context discoverer235also determines the key ID of the key that was used to encrypt the encrypted data file505(labeled as operation “3” inFIG.5). For example, the context discoverer235can determine the key ID from header information, metadata, etc., included with the encrypted data file505when it was encrypted by the encryption engine210. Next, the context discoverer235provides the set of context values discovered for the access request (e.g., as a list, data array, etc.) and the key ID of the key to be used to decrypt the encrypted file505to the contextual key mapper240(labeled as operation “4” inFIG.5). The key context mapping engine245of the contextual key mapper240identifies the set of context rules (e.g., one context rule or a combination of context rules) mapped to the key ID (also referred to as key mapped context rules) (labeled as operation “5” inFIG.5). For each key mapped context rule having an associated access restriction, the key context mapping engine245invokes the contextual access controller250of the contextual key mapper240to validate whether the user is allowed access based on that key mapped context rule (labeled as operation “6” inFIG.5). 
If the discovered context values of the access request are validated against all the key mapped context rules associated with the key ID, the key context mapping engine245grants access to the key and sends a request to the key management service125to return the key corresponding to the key ID (labeled as operation “7” inFIG.5). In response, the key management service125releases the key to the key context mapping engine245(labeled as operation “7” inFIG.5), which returns the key to the encryption engine210(labeled as operation “8” inFIG.5). The encryption engine210then uses the retrieved key to decrypt the encrypted data file505(labeled as operation “9” inFIG.5). Thus, in view of the foregoing disclosure, the contextual rules engine230of the illustrated examples implements means for defining a group of possible contexts and associated context values and context rules to be evaluated by the context discoverer235and the contextual key mapper240. In some examples, the group of possible contexts includes at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context, a data destination context and/or other context(s). The contextual rules engine230further defines possible context value(s) for respective ones of the group of possible contexts, and context rule(s) based on the context value(s). In view of the foregoing disclosure, the context discoverer235implements means for discovering context information associated with a request to access encrypted data. 
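The key-release decision described above, namely release the key only if the requester satisfies every access-restricted context rule mapped to that key, can be sketched as follows. The per-rule permission sets and user names are invented for illustration:

```python
# Sketch of the key-release stage: the key ID stored with the encrypted
# file identifies the context rules in force at encryption time; the key
# is released only if the requester satisfies every mapped rule.
# All data structures and names are illustrative.

KEY_MAPPED_RULES = {"Key3": {"HIPAA data", "secret"}}
ACCESS_RULES = {  # context value -> users permitted under that rule
    "HIPAA data": {"alice", "bob"},
    "secret": {"bob"},
}

def release_key(key_id, user):
    for rule in KEY_MAPPED_RULES[key_id]:
        if user not in ACCESS_RULES.get(rule, set()):
            return None  # validation failed: key is not released
    return "released:" + key_id  # stands in for the actual key material


print(release_key("Key3", "alice"))  # None: alice lacks "secret" access
print(release_key("Key3", "bob"))    # released:Key3
```

This mirrors the earlier example in which a user permitted to access HIPAA data but not secret data cannot obtain Key3, since the combined rule is the intersection of the individual rules.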
For example, when the group of possible contexts includes at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context, a data destination context and/or other context(s), the context discoverer235may determine at least two of a data classification context value associated with the request to access the encrypted data, an access classification context value associated with the request to access the encrypted data, a geographic location context value associated with the request to access the encrypted data, a time context value associated with the request to access the encrypted data, a business organization context value associated with the request to access the encrypted data, a user context value associated with the request to access the encrypted data, a data destination context value associated with the request to access the encrypted data, and/or other context value(s) associated with the request to access the encrypted data. In view of the foregoing disclosure, the contextual key mapper240implements means for identifying a combination of context rules associated with a key that is to provide access to encrypted data in response to a request to access the encrypted data. 
For example, when the group of possible contexts includes at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context, a data destination context and/or other context(s), the combination of context rules identified by the contextual key mapper240for the key may include at least two of a data classification context rule associated with the key, an access classification context rule associated with the key, a geographic location context rule associated with the key, a time context rule associated with the key, a business organization context rule associated with the key, a user context rule associated with the key, a data destination context rule associated with the key and/or other context rule(s) associated with the key. The contextual key mapper240also implements means for validating the context information associated with the request to access the encrypted data to determine whether the request to access the encrypted data is valid. Such validation is based on the combination of context rules associated with the key. The contextual key mapper240further implements means for obtaining the key from the key management service125when the request to access the encrypted data is valid, and providing the key to the encryption engine210, which is to decrypt the encrypted data in response to the request to access the encrypted data. In view of the foregoing disclosure, to encrypt unencrypted data, the context discoverer235implements means for evaluating, based on the plurality of possible contexts defined by the contextual rules engine230, context information associated with the unencrypted data to determine the combination of context values to be associated with the key that is to encrypt and decrypt that data. 
For example, the context discoverer235may evaluate the context information associated with the unencrypted data in response to a request to encrypt the unencrypted data to form the encrypted data described above. In view of the foregoing disclosure, to encrypt unencrypted data, the contextual key mapper240implements means for determining whether the combination of context rules to be associated with the key to encrypt the data has been determined previously for other unencrypted data that has undergone encryption. The contextual key mapper240further implements means for sending a request to the key management service125to retrieve the key (e.g., based on a key ID) when the combination of context rules has been determined previously for other unencrypted data that has undergone encryption, or sending a request to the key management service125to generate the key (e.g., a new key) when the combination of context rules has not been determined previously for other unencrypted data that has undergone encryption. The contextual key mapper240also implements means for providing the key to the encryption engine210to encrypt the unencrypted data to form the encrypted data described above. In some examples, the contextual key mapper240further implements means for mapping the combination of context rules associated with a key, which is to be used to encrypt data, to a key identifier identifying the key. The contextual key mapper240in such examples can also implement means for providing the key identifier to the encryption engine210, which is to include (e.g., store) the key identifier with the encrypted data. The contextual key mapper240in such examples can further implement means for mapping the key identifier, which is included with the encrypted data, to the combination of context rules associated with the key in response to a request to access the encrypted data that was encrypted based on the key. 
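The key-identifier round trip described above can be sketched with two hypothetical in-memory maps: one from a frozen combination of context rules to a key identifier at encryption time, and its inverse, consulted when the key ID comes back with an access request. All names here are illustrative assumptions:

```python
# Hypothetical sketch: map a combination of context rules to a key ID at
# encryption time, and map the key ID found with the ciphertext back to
# the combination when access is requested.
rules_to_key_id = {}   # combination of context rules -> key identifier
key_id_to_rules = {}   # inverse mapping, consulted on access requests

def register_key(rule_combination, key_id):
    """Record both directions of the mapping for a newly created key."""
    combo = frozenset(rule_combination)
    rules_to_key_id[combo] = key_id
    key_id_to_rules[key_id] = combo

def rules_for_access(key_id_from_ciphertext):
    """Identify the combination of context rules governing the key."""
    return key_id_to_rules[key_id_from_ciphertext]

register_key({"geo:US-only", "classification:secret"}, "key-42")
combo = rules_for_access("key-42")
# combo is the same rule combination that governed encryption
```

A frozen set is used so that the rule combination is hashable and order-insensitive when used as a dictionary key.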
In this way, the contextual key mapper240is able to identify the combination of context rules associated with the key. While an example manner of implementing the contextual key manager110ofFIG.1is illustrated inFIGS.2-5, one or more of the elements, processes and/or devices illustrated inFIGS.2-5may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245, the example contextual access controller250and more generally, the example contextual key manager110ofFIGS.2-5may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245, the example contextual access controller250and more generally, the example contextual key manager110could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). 
When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example contextual key manager110, the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245and/or the example contextual access controller250is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example contextual key manager110ofFIG.1may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIGS.2-5, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example contextual key manager110, the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245and/or the example contextual access controller250are shown inFIGS.6-8. 
In these examples, the machine readable instructions may be one or more executable programs or portion(s) thereof for execution by a computer processor, such as the processor912shown in the example processor platform900discussed below in connection withFIG.9. The one or more programs, or portion(s) thereof, may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray Disk™, or a memory associated with the processor912, but the entire program or programs and/or parts thereof could alternatively be executed by a device other than the processor912and/or embodied in firmware or dedicated hardware (e.g., implemented by an ASIC, a PLD, an FPLD, discrete logic, etc.). Further, although the example program(s) is(are) described with reference to the flowcharts illustrated inFIGS.6-8, many other methods of implementing the example contextual key manager110, the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245and/or the example contextual access controller250may alternatively be used. For example, with reference to the flowcharts illustrated inFIGS.6-8, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, combined and/or subdivided into multiple blocks. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, a Field Programmable Gate Array (FPGA), an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. 
As mentioned above, the example processes ofFIGS.6-8may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. Also, as used herein, the terms “computer readable” and “machine readable” are considered equivalent unless indicated otherwise. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim lists anything following any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, etc.), it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the terms “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. An example program600that may be executed to implement the contextual key manager110ofFIGS.1-2to perform context rule management is represented by the flowchart shown inFIG.6.
With reference to the preceding figures and associated written descriptions, the example program600ofFIG.6begins execution at block605at which the example contextual rules engine230of the contextual key manager110accesses, via the administrative user interface205, user input specifying contexts and/or access rules to define a set of possible contexts and associated possible context values and context rules to be used by the contextual key manager110. At block610, the example contextual key mapper240of the contextual key manager110creates map entries (e.g., in a mapping matrix) mapping individual context rules corresponding respectively to the individual possible context values to corresponding keys. For example, the individual context rules may each specify that data having the respective possible context value is to be protected (e.g., encrypted). At block615, the contextual key mapper240sends one or more requests to the key management service125to create respective keys for the individual context rules. At block620, the contextual key mapper240receives one or more responses from the key management service125containing the key IDs for the respective keys that were created for the individual context rules. At block625, the contextual key mapper240stores the returned key IDs in the corresponding map entries created at block610. At block630, the contextual key mapper240stores any access restrictions specified by the individual context rules for the individual possible context values. An example program700that may be executed to implement the contextual key manager110ofFIGS.1-2to perform data encryption is represented by the flowchart shown inFIG.7. With reference to the preceding figures and associated written descriptions, the example program700ofFIG.7begins execution at block705at which the example context discoverer235of the contextual key manager110detects a request to encrypt data. 
At block710, the context discoverer235accesses a set of possible contexts and associated context values (e.g., previously defined by the contextual rules engine230). At block715, the context discoverer235evaluates context information associated with the data to be encrypted to determine a combination of context values associated with the data. At block720, the example contextual key mapper240of the contextual key manager110determines whether the particular combination of context values discovered by the context discoverer235was already detected previously for other data that has undergone encryption. If that particular combination of context values has already been detected previously (block725), at block730the contextual key mapper240sends a request to the key management service125to request the existing key mapped to the combination of context rules for that combination of context values (e.g., by including the key ID associated with the combination of context rules in the request). However, if that particular combination of context values has not been detected previously (block725), at block735the contextual key mapper240sends a request to the key management service125to create a new key to be mapped to the combination of context rules for that combination of context values. At block740, the contextual key mapper240receives a response from the key management service125containing the requested key, as well as a key ID for the requested key if the key is a new key. If the returned key is a new key, at block745, the contextual key mapper240also associates the key ID with the particular combination of context rules (e.g., by including the key ID in the map entry corresponding to that combination of context rules). At block750, the contextual key mapper240provides the requested key and its key ID to the encryption engine210, which is to encrypt the data based on the key and include the key ID with the encrypted data (e.g., as metadata, header information, etc.). 
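The retrieve-or-generate decision at blocks 725-735 of program 700 can be sketched as follows. The in-memory map and the fake key management service below are assumptions standing in for the mapping matrix and key management service 125, and the block references in the comments tie back to the flowchart description above:

```python
# Hypothetical sketch of program 700's key selection: reuse the key for a
# previously seen combination of context values, otherwise have the (fake)
# key management service generate a new one.
import itertools

class FakeKeyManagementService:
    """Stands in for key management service 125 in this sketch."""
    def __init__(self):
        self._ids = itertools.count(1)
    def generate_key(self):
        return f"key-{next(self._ids)}"
    def retrieve_key(self, key_id):
        return key_id  # a real KMS would return the key material for this ID

kms = FakeKeyManagementService()
combo_to_key_id = {}  # mapping matrix: context-value combination -> key ID

def key_for(context_values):
    combo = frozenset(context_values.items())
    if combo in combo_to_key_id:                    # block 730: request existing key
        return kms.retrieve_key(combo_to_key_id[combo])
    key_id = kms.generate_key()                     # block 735: request a new key
    combo_to_key_id[combo] = key_id                 # block 745: associate the key ID
    return key_id

k1 = key_for({"user": "alice", "geographic_location": "US"})
k2 = key_for({"geographic_location": "US", "user": "alice"})
# k1 == k2: the same combination of context values yields the same key
```

Because the combination is frozen before lookup, the same context values produce the same key regardless of the order in which they were discovered.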
An example program800that may be executed to implement the contextual key manager110ofFIGS.1-2to access encrypted data is represented by the flowchart shown inFIG.8. With reference to the preceding figures and associated written descriptions, the example program800ofFIG.8begins execution at block805at which the example context discoverer235of the contextual key manager110detects a request to access encrypted data. At block810, the context discoverer235accesses a set of possible contexts and associated context values and context rules (e.g., previously defined by the contextual rules engine230). At block815, the context discoverer235evaluates context information associated with the access request to determine a combination of context values associated with the access request. The context discoverer235also determines (e.g., from header information, metadata, etc., associated with the encrypted data) the key ID for the key to be used to decrypt the data. At block820, the example contextual key mapper240of the contextual key manager110identifies (e.g., based on the mapping described above) a combination of context rules associated with the key that is to be used to decrypt the data. At block825, the contextual key mapper240validates the context information associated with the access request based on the combination of context rules associated with the key. For example, the contextual key mapper240determines whether the combination of context rules associated with the key is satisfied by the corresponding context values discovered for the access request by the context discoverer235at block815. If the contextual key mapper240determines the access request is not valid (e.g., because one or more context values do not meet the requirements of the access rules defined for their respective contexts) (block830), at block835the contextual key mapper240indicates the access request is invalid and denies access to the key to be used to decrypt the encrypted data.
However, if the contextual key mapper240determines the access request is valid (e.g., because all context values meet the requirements of the access rules defined for their respective contexts) (block830), at block840the contextual key mapper240grants the access request and sends a request to the key management service125to return the key to be used to decrypt the encrypted data (e.g., by including the key ID in the request). At block845, the contextual key mapper240receives a response from the key management service125with the requested key. At block850, the contextual key mapper240provides the requested key to the encryption engine210, which uses the key to decrypt the encrypted data in response to the access request. FIG.9is a block diagram of an example processor platform900structured to execute the instructions ofFIGS.6-8to implement the example contextual key manager110ofFIGS.1-5. The processor platform900can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), or any other type of computing device. The processor platform900of the illustrated example includes a processor912. The processor912of the illustrated example is hardware. For example, the processor912can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The hardware processor912may be a semiconductor based (e.g., silicon based) device. In this example, the processor912implements the example contextual rules engine230, the example context discoverer235, the example contextual key mapper240, the example key context mapping engine245and/or the example contextual access controller250. The processor912of the illustrated example includes a local memory913(e.g., a cache). 
The processor912of the illustrated example is in communication with a main memory including a volatile memory914and a non-volatile memory916via a link918. The link918may be implemented by a bus, one or more point-to-point connections, etc., or a combination thereof. The volatile memory914may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory916may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory914,916is controlled by a memory controller. The processor platform900of the illustrated example also includes an interface circuit920. The interface circuit920may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In the illustrated example, one or more input devices922are connected to the interface circuit920. The input device(s)922permit(s) a user to enter data and/or commands into the processor912. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, a trackbar (such as an isopoint), a voice recognition system and/or any other human-machine interface. Also, many systems, such as the processor platform900, can allow the user to control the computer system and provide data to the computer using physical gestures, such as, but not limited to, hand or body movements, facial expressions, and face recognition. One or more output devices924are also connected to the interface circuit920of the illustrated example. 
The output devices924can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker(s). The interface circuit920of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor. The interface circuit920of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network926. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc. The processor platform900of the illustrated example also includes one or more mass storage devices928for storing software and/or data. Examples of such mass storage devices928include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. The machine executable instructions932corresponding to the instructions ofFIGS.6-8may be stored in the mass storage device928, in the volatile memory914, in the non-volatile memory916, in the local memory913and/or on a removable non-transitory computer readable storage medium, such as a CD or DVD936. The foregoing disclosure provides examples of contextual key management for data encryption.
The following further examples, which include subject matter such as an apparatus to perform contextual encryption key management, a computer-readable storage medium including instructions that, when executed, cause one or more processors to perform contextual encryption key management, and a method to perform contextual encryption key management, are disclosed herein. The disclosed examples can be implemented individually and/or in one or more combinations. Example 1 is an apparatus to perform contextual encryption key management. The apparatus of example 1 includes a context discoverer implemented by hardware or at least one processor to discover context information associated with a request to access first encrypted data. The apparatus of example 1 also includes a contextual key mapper implemented by hardware or the at least one processor to identify a combination of context rules associated with a key that is to provide access to the first encrypted data, validate the context information associated with the request based on the combination of context rules associated with the key to determine whether the request to access the first encrypted data is valid, and obtain the key from a key management service when the request to access the first encrypted data is valid. Example 2 includes the subject matter of example 1, wherein the contextual key mapper is further to provide the key to an encryption engine that is to decrypt the first encrypted data in response to the request to access the first encrypted data. Example 3 includes the subject matter of example 1 or example 2, and further includes a context rule engine to define a plurality of possible contexts and associated context rules to be evaluated by the context discoverer and the contextual key mapper. 
Example 4 includes the subject matter of example 3, wherein the plurality of possible contexts is a plurality of heterogeneous contexts including at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context or a data destination context. In example 4, the combination of context rules includes at least two of a data classification context rule associated with the key, an access classification context rule associated with the key, a geographic location context rule associated with the key, a time context rule associated with the key, a business organization context rule associated with the key, a user context rule associated with the key or a data destination context rule associated with the key. Example 5 includes the subject matter of example 4, wherein the context discoverer is to determine at least two of a data classification context value associated with the request to access the first encrypted data, an access classification context value associated with the request to access the first encrypted data, a geographic location context value associated with the request to access the first encrypted data, a time context value associated with the request to access the first encrypted data, a business organization context value associated with the request to access the first encrypted data, a user context value associated with the request to access the first encrypted data or a data destination context value associated with the request to access the first encrypted data. Example 6 includes the subject matter of any one of examples 3 to 5, wherein the context information is first context information.
In example 6, the context discoverer is further to evaluate, based on the plurality of possible contexts and associated context rules, second context information associated with first unencrypted data to determine the combination of context rules associated with the key, the context discoverer to evaluate the second context information associated with the first unencrypted data in response to a request to encrypt the first unencrypted data to form the first encrypted data. In example 6, the contextual key mapper is further to determine whether the combination of context rules has been determined previously for other unencrypted data that has undergone encryption, send a request to the key management service to retrieve the key when the combination of context rules has been determined previously for other unencrypted data that has undergone encryption, or send a request to the key management service to generate the key when the combination of context rules has not been determined previously for other unencrypted data that has undergone encryption. Example 7 includes the subject matter of example 6, wherein the contextual key mapper is further to provide the key to an encryption engine to encrypt the first unencrypted data to form the first encrypted data. Example 8 includes the subject matter of example 7, wherein the contextual key mapper is further to map the combination of context rules associated with the key to a key identifier identifying the key, provide the key identifier to the encryption engine, the encryption engine to include the key identifier with the first encrypted data, and in response to the request to access the first encrypted data, map the key identifier included with the first encrypted data to the combination of context rules associated with the key to identify the combination of context rules associated with the key. 
Example 9 is a non-transitory computer readable storage medium including computer readable instructions that, when executed, cause one or more processors to at least discover context information associated with a request to access first encrypted data, identify a combination of context rules associated with a key that is to provide access to the first encrypted data, validate the context information associated with the request based on the combination of context rules associated with the key to determine whether the request to access the first encrypted data is valid, and obtain the key from a key management service when the request to access the first encrypted data is valid. Example 10 includes the subject matter of example 9, wherein the computer readable instructions, when executed, further cause the one or more processors to provide the key to an encryption engine that is to decrypt the first encrypted data in response to the request to access the first encrypted data. Example 11 includes the subject matter of example 9 or example 10, wherein the combination of context values is based on at least a subset of a plurality of possible contexts and associated context rules defined based on user input data. Example 12 includes the subject matter of example 11, wherein the plurality of possible contexts is a plurality of heterogeneous contexts including at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context or a data destination context. 
In example 12, the combination of context rules includes at least two of a data classification context rule associated with the key, an access classification context rule associated with the key, a geographic location context rule associated with the key, a time context rule associated with the key, a business organization context rule associated with the key, a user context rule associated with the key or a data destination context rule associated with the key. Example 13 includes the subject matter of example 12, wherein to discover the context information associated with the request to access the first encrypted data, the computer readable instructions, when executed, cause the one or more processors to determine at least two of a data classification context value associated with the request to access the first encrypted data, an access classification context value associated with the request to access the first encrypted data, a geographic location context value associated with the request to access the first encrypted data, a time context value associated with the request to access the first encrypted data, a business organization context value associated with the request to access the first encrypted data, a user context value associated with the request to access the first encrypted data, or a data destination context value associated with the request to access the first encrypted data.
Example 14 includes the subject matter of any one of examples 11 to 13, wherein the context information is first context information, and the computer readable instructions, when executed, further cause the one or more processors to: (i) evaluate, based on the plurality of possible contexts and associated context rules, second context information associated with first unencrypted data to determine the combination of context rules associated with the key, the second context information associated with the first unencrypted data to be evaluated in response to a request to encrypt the first unencrypted data to form the first encrypted data; (ii) determine whether the combination of context rules has been determined previously for other unencrypted data that has undergone encryption; (iii) send a request to the key management service to retrieve the key when the combination of context rules has been determined previously for other unencrypted data that has undergone encryption; and (iv) send a request to the key management service to generate the key when the combination of context rules has not been determined previously for other unencrypted data that has undergone encryption. Example 15 includes the subject matter of example 14, wherein the computer readable instructions, when executed, further cause the one or more processors to provide the key to an encryption engine to encrypt the first unencrypted data to form the first encrypted data.
Example 16 includes the subject matter of example 15, wherein the computer readable instructions, when executed, further cause the one or more processors to: (i) map the combination of context rules associated with the key to a key identifier identifying the key; (ii) provide the key identifier to the encryption engine, the encryption engine to include the key identifier with the first encrypted data; and (iii) in response to the request to access the first encrypted data, map the key identifier included with the first encrypted data to the combination of context rules associated with the key to identify the combination of context rules associated with the key. Example 17 is a method to perform contextual encryption key management. The method of example 17 includes discovering, by executing an instruction with at least one processor, context information associated with a request to access first encrypted data, and identifying, by executing an instruction with the at least one processor, a combination of context rules associated with a key that is to provide access to the first encrypted data. The method of example 17 also includes validating, by executing an instruction with the at least one processor, the context information associated with the request based on the combination of context rules associated with the key to determine whether the request to access the first encrypted data is valid. The method of example 17 further includes obtaining, by executing an instruction with the at least one processor, the key from a key management service when the request to access the first encrypted data is valid. Example 18 includes the subject matter of example 17, and further includes providing the key to an encryption engine that is to decrypt the first encrypted data in response to the request to access the first encrypted data. 
Example 19 includes the subject matter of example 17 or example 18, wherein the combination of context rules is based on at least a subset of a plurality of possible contexts and associated context rules defined based on user input data. Example 20 includes the subject matter of example 19, wherein the plurality of possible contexts is a plurality of heterogeneous contexts including at least two of a data classification context, an access classification context, a geographic location context, a time context, a business organization context, a user context or a data destination context. In example 20, the combination of context rules includes at least two of a data classification context rule associated with the key, an access classification context rule associated with the key, a geographic location context rule associated with the key, a time context rule associated with the key, a business organization context rule associated with the key, a user context rule associated with the key or a data destination context rule associated with the key. Example 21 includes the subject matter of example 20, wherein the discovering of the context information associated with the request to access the first encrypted data includes determining at least two of a data classification context value associated with the request to access the first encrypted data, an access classification context value associated with the request to access the first encrypted data, a geographic location context value associated with the request to access the first encrypted data, a time context value associated with the request to access the first encrypted data, a business organization context value associated with the request to access the first encrypted data, a user context value associated with the request to access the first encrypted data or a data destination context value associated with the request to access the first encrypted data.
Example 22 includes the subject matter of any one of examples 19 to 21, wherein the context information is first context information. Example 22 further includes: (i) evaluating, based on the plurality of possible contexts and associated context rules, second context information associated with first unencrypted data to determine the combination of context rules associated with the key, the second context information associated with the first unencrypted data to be evaluated in response to a request to encrypt the first unencrypted data to form the first encrypted data; (ii) determining whether the combination of context rules has been determined previously for other unencrypted data that has undergone encryption; (iii) sending a request to the key management service to retrieve the key when the combination of context rules has been determined previously for other unencrypted data that has undergone encryption; and (iv) sending a request to the key management service to generate the key when the combination of context rules has not been determined previously for other unencrypted data that has undergone encryption. Example 23 includes the subject matter of example 22, and further includes providing the key to an encryption engine to encrypt the first unencrypted data to form the first encrypted data. Example 24 includes the subject matter of example 23, and further includes: (i) mapping the combination of context rules associated with the key to a key identifier identifying the key; (ii) providing the key identifier to the encryption engine, the encryption engine to include the key identifier with the first encrypted data; and (iii) in response to the request to access the first encrypted data, mapping the key identifier included with the first encrypted data to the combination of context rules associated with the key to identify the combination of context rules associated with the key.
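As a concrete illustration of the contextual key management recited in examples 16 and 17, the sketch below maps a combination of context rules to a key identifier and validates a request's discovered context information against every rule before the key identifier is released. This is a hypothetical sketch, not the patented implementation: the rule names, allowed values, and key identifier are invented, and the key management service itself is omitted.

```python
# Hypothetical sketch of examples 16-17: a combination of context rules is
# mapped to a key identifier, and a request's discovered context information
# is validated against every rule before the key identifier is returned.

def make_rule(allowed):
    """A context rule that passes when the context value is in an allowed set."""
    return lambda value: value in allowed

# Combination of heterogeneous context rules associated with one key.
rules = {
    "geographic_location": make_rule({"US", "CA"}),
    "data_classification": make_rule({"confidential"}),
}

# Map the combination of context rules to a key identifier (example 16).
combination_id = frozenset(rules)
key_identifiers = {combination_id: "key-001"}

def validate_request(context_information):
    """Return the key identifier only when every context rule is satisfied."""
    if all(rule(context_information.get(name)) for name, rule in rules.items()):
        return key_identifiers[combination_id]
    return None  # invalid request: the key is not obtained

print(validate_request({"geographic_location": "US",
                        "data_classification": "confidential"}))  # key-001
print(validate_request({"geographic_location": "DE",
                        "data_classification": "confidential"}))  # None
```

In this sketch the key identifier, not the key itself, is returned; obtaining the actual key from the key management service would be a separate step, as the examples describe.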
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
11943342 | DETAILED DESCRIPTION In general, embodiments perform private categorization using shared keys to preserve user privacy while combining data from multiple users to train machine learning models. In one embodiment, a group of multiple users utilize a shared key to encrypt information using fully homomorphic encryption. The encrypted information from the group of multiple users may then be pooled and used to train machine learning models for the group. The processes train the machine learning models using the homomorphically encrypted information and do not have access to the unencrypted information. User privacy is preserved by preventing access to the unencrypted information for the processes training the machine learning models. The machine learning models are trained using the homomorphically encrypted information. In one embodiment, groups of users are identified by clustering aggregated report information from the different users. The users may periodically generate aggregated reports that include statistical information about transaction records. The use of statistical information, instead of direct transaction information, prevents the training systems and machine learning models from disseminating explicit transaction data. The statistical information from the aggregated reports may be processed by clustering algorithms to identify groups of similar users. A group of similar users may then share an encryption key that is used to encrypt the data from the group of users and train a machine learning model. In one example, Professor X and Lex Luthor each need to categorize their transactions. Professor X and Lex Luthor run in different circles and do not know each other but get clustered together based on aggregated reports of their transaction records. Professor X and Lex Luthor both opt in to using more accurate models trained with group data and receive shared group encryption keys.
The group keys are used to encrypt historical data from Professor X and Lex Luthor, which is then used to train a group model for predicting categories for transaction records. After the models are trained, Professor X and Lex Luthor independently log into the system to categorize transactions using the group model. The figures of the disclosure show diagrams of embodiments that are in accordance with the disclosure. The embodiments of the figures may be combined and may include or be included within the features and embodiments described in the other figures of the application. The features and elements of the figures are, individually and as a combination, improvements to the technology of computer implemented models and encryption. The various elements, systems, components, and steps shown in the figures may be omitted, repeated, combined, and/or altered as shown from the figures. Accordingly, the scope of the present disclosure should not be considered limited to the specific arrangements shown in the figures. FIGS. 1A, 1B, and 1C show processes and systems that operate to perform private categorization using shared keys. The processes (140), (150), (158) of FIG. 1A and the processes (170), (175), (180), (185), and (190) of FIG. 1B may operate on the user devices A (102) and B (107) through N (109) and the server (112) of FIG. 1C. In one embodiment, the processes (140) and (158) of FIG. 1A and the processes (170), (180), and (185) may operate on the user devices A (102) and B (107) through N (109) of FIG. 1C while the process (150) of FIG. 1A and the processes (175) and (190) operate on the server (112) of FIG. 1C. Dividing the work of the processes between systems in this manner may reduce the amount of compute resources utilized by the server (112).
For example, the server (112) of FIG. 1C may host an application with a REST interface used by the applications executing on the user devices A (102) and B (107) through N (109) of FIG. 1C. In one embodiment, the processes (140), (150), (158) of FIG. 1A and the processes (170), (175), (180), (185), and (190) may operate on the server (112). The server (112) may then provide access to information generated by the processes to the user devices A (102) and B (107) through N (109). For example, the server (112) may host a website that operates the processes in response to user accesses to the website. Turning to FIG. 1A, the encryption process (140), the categorization process (150), and the decryption process (158) may operate to perform private categorization using shared keys. The encryption process (140), the categorization process (150), and the decryption process (158) may execute on one or more of the user devices A (102) and B (107) through N (109) and the server (112). The encryption process (140) is a collection of programs and components with instructions to process the transaction records (141) to generate the encrypted transaction vectors (146). The encryption process (140) uses the encoder model (142) and the encryption controller (145). The transaction records (141) are records of transactions and may be a subset of the transaction information (121) of FIG. 1C. The transaction records (141) are inputs to the encoder model (142). In one embodiment, a transaction record may include a date (year, month, day, hour, minute, second, etc.), one or more numeric fields, one or more text fields, etc. For example, the numeric fields may include a floating-point value that identifies an amount for the transaction and an integer value that identifies the transaction. The text fields may include payee information, payor information, a description of the transaction, etc. The following JavaScript object notation (JSON) text provides an example of transaction data.
{
  “ID”: “1234”,
  “payor”: “Superman”,
  “payee”: “Dark Knight Industries”,
  “date”: “Dec. 28, 2022”,
  “amount”: “199.99”,
  “description”: “superhero costume dry cleaning”
}
The encoder model (142) is a collection of programs with instructions that may operate as part of the encryption process (140). The encoder model (142) processes the transaction records (141) to generate the transaction vectors (143). The encoder model (142) may apply mappings and transformations to data from the transaction records (141) to generate the transaction vectors (143). In one embodiment, the encoder model (142) may include a machine learning model that calculates the transaction vectors (143) from the transaction records (141). The machine learning model may include a neural network, an autoencoder, a transformer network, a fully connected network, etc. The transaction vectors (143) are vectors that describe the transaction records (141). The transaction vectors (143) may include features extracted from the transaction records (141) by the encoder model (142). In one embodiment, a transaction vector may include multiple floating-point values that represent information from the transaction records (141). A transaction vector may include data from one or more of the transaction records (141). The transaction vectors (143) are input to the encryption controller (145). The keys A (144) are cryptographic keys that are used to encode (i.e., encrypt) the transaction vectors (143). The keys A (144) may include public keys (used for encoding) that are paired with private keys (used for decoding). The encryption controller (145) is a collection of programs with instructions that may operate as part of the encryption process (140). The encryption controller (145) processes the transaction vectors (143) with the keys A (144) to generate the encrypted transaction vectors (146). In one embodiment, the encryption controller (145) uses fully homomorphic encryption to generate the encrypted transaction vectors (146).
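As a hedged illustration of the idea behind the encryption controller, the toy below implements the Paillier cryptosystem, which lets arithmetic be performed on ciphertexts without access to the plaintexts. This is not the patented scheme: Paillier is only additively homomorphic, whereas the embodiments describe fully homomorphic encryption, and the primes here are insecure demo values.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic only, shown to
# illustrate computing on ciphertexts without access to plaintexts.
p, q = 293, 433                # insecure demo primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid simplification when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts.
c = (encrypt(40) * encrypt(2)) % n2
assert decrypt(c) == 42
```

A full FHE scheme additionally supports multiplication of encrypted values, which is what allows a classifier model to be evaluated entirely on encrypted transaction vectors.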
The encrypted transaction vectors (146) are vectors encrypted using the keys A (144) by the encryption controller (145). In one embodiment, the encrypted transaction vectors (146) were encrypted with public keys and may be decrypted with the corresponding private key. The encrypted transaction vectors (146) are processed with the categorization process (150) to generate the encrypted category vectors (159). The categorization process (150) is a collection of programs and components with instructions to process the encrypted transaction information (151) to generate the encrypted category information (154). The categorization process (150) uses the classifier controller (153) and the classifier models (152). The encrypted transaction information (151) is data that includes the encrypted transaction vectors (146) generated by the encryption controller (145). The encrypted transaction information (151) may include encrypted transaction vectors for multiple different users from multiple different instances of the encryption process (140). For example, multiple instances of the encryption process (140) may be executing on multiple user devices that send encrypted transaction vectors to the categorization process (150). The classifier models (152) are collections of programs with instructions that may operate as part of the categorization process (150). Each of the classifier models (152) is trained with data encrypted with a specific key. One of the encrypted transaction vectors (146) is encrypted with one of the keys A (144) and is processed by one of the classifier models (152) trained for the key. 
For example, a transaction vector (of the transaction vectors (143)) is encrypted with a public key (of the keys A (144)) and the resulting encrypted transaction vector (of the encrypted transaction vectors (146)) is processed with a classifier model (of the classifier models (152)) that was trained with data encrypted with the public key used to encrypt the original transaction vector. One of the classifier models (152) may correspond to one of the keys A (144). The classifier models (152) may include logistic regression models, neural network models, etc. The classifier controller (153) is a collection of programs with instructions that may operate as part of the categorization process (150). The classifier controller (153) receives the encrypted transaction information (151), selects and applies the classifier models (152) to the encrypted transaction information (151) based on the keys used to encrypt the encrypted transaction information (151), and returns the encrypted category information (154). For example, the classifier controller (153) may receive an encrypted transaction vector (of the encrypted transaction vectors (146)), select a corresponding classifier model (from the classifier models (152)), and process the encrypted transaction vector with the selected classifier model to generate an encrypted category vector (of the encrypted category vectors (159)), which is part of the encrypted category information (154). The encrypted category information (154) is data that includes the encrypted category vectors (159) used by the decryption process (158). The encrypted category information (154) may include encrypted category vectors for multiple different users from multiple different instances of the decryption process (158). For example, multiple instances of the decryption process (158) may be executing on multiple user devices that receive encrypted category vectors from the categorization process (150).
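The key-to-model dispatch performed by the classifier controller might be sketched as below. The names are hypothetical, and the stand-in "model" is an ordinary function, whereas the classifier models described above would operate on homomorphically encrypted vectors.

```python
# Illustrative classifier controller: selects the classifier model trained
# for the key that encrypted the incoming vector, then applies it.

class ClassifierController:
    def __init__(self):
        self.models = {}                    # key identifier -> trained model

    def register(self, key_id, model):
        self.models[key_id] = model

    def classify(self, key_id, encrypted_vector):
        # The selected model was trained on data encrypted with the same
        # key, so it can be applied to the still-encrypted vector.
        return self.models[key_id](encrypted_vector)

controller = ClassifierController()
controller.register("group-key-7", lambda v: [2 * x for x in v])  # stand-in
print(controller.classify("group-key-7", [1, 2, 3]))  # [2, 4, 6]
```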
The decryption process (158) is a collection of programs and components with instructions to process the encrypted category vectors (159) to generate the category identifiers (164). In one embodiment, the decryption process (158) uses the decryption controller (161), the decoder model (163), and the presentation controller (165). The encrypted category vectors (159) are vectors that are effectively encrypted by the keys A (144) but were generated by one of the classifier models (152) from the encrypted transaction vectors (146). In one embodiment, the encrypted category vectors (159) may be decrypted with a private key (from the keys B (160)) that corresponds to the public key (from the keys A (144)) used to encrypt the encrypted transaction vectors (146). The encrypted category vectors (159) are input to the decryption controller (161). The keys B (160) are cryptographic keys that are used to decode (i.e., decrypt) the encrypted category vectors (159). The keys B (160) may include private keys (used for decoding) that are paired with public keys (used for encoding). The decryption controller (161) is a collection of programs with instructions that may operate as part of the decryption process (158). The decryption controller (161) processes the encrypted category vectors (159) with the keys B (160) to generate the category vectors (162). In one embodiment, the decryption controller (161) uses fully homomorphic decryption to generate the category vectors (162). The category vectors (162) are vectors that identify categories for the corresponding transaction vectors (143). In one embodiment, a category vector (of the category vectors (162)) may include a set of floating-point values in the range from 0 to 1. In one embodiment, the dimensions of the category vector correspond to different categories, which may represent account names of a chart of accounts, and which may be assigned to a transaction record. 
The values in the dimensions of the category vector indicate the likelihood that the category (e.g., account name) associated with a particular dimension will be assigned to a transaction record. A higher value (e.g., closer to 1) in a dimension of a category vector indicates a higher likelihood that the corresponding transaction record will be assigned to the category represented by the dimension. The category vectors (162) are input to the decoder model (163). The decoder model (163) is a collection of programs with instructions that may operate as part of the decryption process (158). The decoder model (163) processes the category vectors (162) to generate the category identifiers (164). The decoder model (163) may apply mappings and transforms to data from the category vectors (162) to generate the category identifiers (164). In one embodiment, the decoder model (163) may select the category identifier for the category that corresponds to the dimension of a category vector with the highest value. The category identifiers (164) are data that identify categories. In one embodiment, each account of a chart of accounts corresponds to a category that is identified by a category identifier. In one embodiment, the category identifiers (164) are text values that include the name of an account. In one embodiment, the category identifiers (164) are numerical values that uniquely identify the accounts of a chart of accounts. One of the category identifiers (164) corresponds to one of the transaction records (141). The category identifiers (164) may be input to the presentation controller (165). The presentation controller (165) is a collection of programs with instructions that may operate as part of the decryption process (158). In one embodiment, the presentation controller (165) generates and presents recommendations. In one embodiment, a recommendation is the name of the account that corresponds to one of the category vectors (162). 
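The decoder model's selection of the highest-valued dimension might be sketched as follows; the account names are invented for illustration.

```python
# Hedged sketch of the decoder model: select the category (account name)
# whose dimension holds the highest value in the category vector. The
# account names below are invented for illustration.

ACCOUNTS = ["Travel", "Meals", "Office Supplies"]

def decode(category_vector):
    best_dimension = max(range(len(category_vector)),
                         key=lambda i: category_vector[i])
    return ACCOUNTS[best_dimension]

print(decode([0.1, 0.7, 0.2]))  # Meals
```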
The recommendation may be generated by mapping a category identifier to a text value that is injected into a message and displayed on a user interface. Turning to FIG. 1B, the aggregation process (170), the clustering process (175), the key process (180), the training input process (185), and the training process (190) may operate to train the machine learning models (192) to perform private categorization using shared keys. These processes may execute on one or more of the user devices A (102) and B (107) through N (109) and the server (112) of FIG. 1C. The aggregation process (170) is a collection of programs and components with instructions to process the historical data A (171) to generate the aggregated report data (173). The aggregation process (170) uses the report controller (172). The historical data A (171) is data that is used to generate the cluster information (178). The historical data A (171) may be a subset of the historical information (127) of FIG. 1C. In one embodiment, the historical data A (171) includes historical transaction records. In one embodiment, the historical data A (171) may include recategorization data that tracks the manual recategorization of transaction records. The tracking for a recategorization may include a transaction identifier, a predicted category identifier, a selected category identifier, etc. The report controller (172) is a collection of programs with instructions that may operate as part of the aggregation process (170). The report controller (172) processes the historical data A (171) to generate the aggregated report data (173). In one embodiment, the report controller (172) may aggregate multiple transaction records from the historical data A (171) using statistical methods.
The aggregated report data (173) is data that describes the transactions and categorizations from the historical data A (171). In one embodiment, the aggregated report data (173) may include distribution of categories, most common recategorizations, and recategorization rate. In one embodiment, the aggregated report data (173) may also include financial data, which may include average income amount, average expense amount, transaction location counts, etc. The aggregated report data (173) may be gathered periodically (e.g., every week, month, quarter, etc.) and sent to the clustering process (175). The aggregated report data (173) is sent to the clustering process (175) and forms a portion of the aggregated report information (176). The clustering process (175) is a collection of programs and components with instructions to process the aggregated report information (176) to generate the cluster information (178). The clustering process (175) uses the cluster controller (177). The aggregated report information (176) is data that includes the aggregated report data (173) generated by the report controller (172). The aggregated report information (176) may include aggregated data for multiple different users from multiple different instances of the aggregation process (170). For example, multiple instances of the aggregation process (170) may be executing on multiple user devices that send aggregated report data (including the aggregated report data (173)) to the clustering process (175) operating on a server. The cluster controller (177) is a collection of programs with instructions that may operate as part of the clustering process (175). The cluster controller (177) processes the aggregated report information (176) to generate the cluster information (178). The cluster controller (177) applies a clustering algorithm to the aggregated report information (176). Clustering algorithms that may be used include DB-SCAN, K-means, etc. 
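The report controller's aggregation step might look like the sketch below, with hypothetical field names: raw transaction records are reduced to the category distribution, recategorization rate, and average amount described above, so only statistics are sent for clustering.

```python
from collections import Counter

# Illustrative report controller (field names hypothetical): reduce raw
# transaction records to the statistics sent for clustering, so explicit
# transaction data never has to leave the user's device.

def aggregate(records):
    categories = Counter(r["category"] for r in records)
    recategorized = sum(1 for r in records if r.get("recategorized"))
    return {
        "category_distribution": dict(categories),
        "recategorization_rate": recategorized / len(records),
        "average_amount": sum(r["amount"] for r in records) / len(records),
    }

report = aggregate([
    {"category": "Meals", "amount": 20.0, "recategorized": True},
    {"category": "Meals", "amount": 30.0},
    {"category": "Travel", "amount": 100.0},
])
print(report["category_distribution"])  # {'Meals': 2, 'Travel': 1}
```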
The cluster information (178) is a collection of cluster data (including the cluster data (181)). In one embodiment, for each user, the cluster controller (177) processes the aggregated report data for the user to generate a user cluster vector. The cluster controller (177) may then process the user cluster vectors with a clustering algorithm to identify clusters. A cluster within the cluster information (178) identifies a set of users that have similar information in their respective aggregated report data. A group cluster identifier may be a multidimensional variable that identifies the centroid of a cluster of users. Each user may be assigned a user cluster identifier that identifies the cluster to which a user belongs. The key process (180) is a collection of programs and components with instructions to acquire the keys C (183) using the cluster data (181). The key process (180) uses the key controller (182). In one embodiment, the key process (180) may operate on user devices to prevent a server from being able to access private keys. In one embodiment, the key process (180) is performed on a server (which may be separate from a server that executes the training process (190)) that distributes the keys to the user devices. The cluster data (181) may be a subset of the cluster information (178) for a single user. In one embodiment, the cluster data (181) includes a user cluster identifier. The key controller (182) is a collection of programs with instructions that may operate as part of the key process (180). The key controller (182) processes the cluster data (181) to acquire the keys C (183). In one embodiment, a Diffie-Hellman key exchange may take place between the group of users of a cluster. The keys C (183) are cryptographic keys. The keys C (183) may include public private key pairs for encrypting and decrypting data. The keys C (183) are shared between a group of users in the same cluster.
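The Diffie-Hellman exchange mentioned above might be sketched in its minimal two-party form as below. The parameters are toy-sized and insecure, and a real cluster of users would need vetted parameters and a multi-party (group) variant of the exchange.

```python
import random

# Minimal two-party Diffie-Hellman sketch of the key exchange the key
# controller might perform. Toy parameters; not a multi-party group
# exchange and not secure as written.
p, g = 0xFFFFFFFB, 5                    # small prime modulus and generator

a = random.randrange(2, p - 1)          # first user's secret
b = random.randrange(2, p - 1)          # second user's secret

A = pow(g, a, p)                        # public values exchanged in the open
B = pow(g, b, p)

# Each user combines the other's public value with their own secret and
# arrives at the same shared key material, without the secrets ever
# being transmitted.
assert pow(B, a, p) == pow(A, b, p)
```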
The training input process (185) is a collection of programs and components with instructions to process the historical data B (187) using the keys D (186) to generate the training data (189). The training input process (185) uses the training controller (188). The keys D (186) are cryptographic keys. The keys D (186) are used to encrypt the historical data B (187). The keys D (186) may include keys for a specific user and may include group keys for groups of users in a cluster. Each encryption key of the keys D (186) may correspond to one of the machine learning models (192). The historical data B (187) is data used to train the machine learning models (192). The historical data B (187) may be a subset of the historical information (127) (of FIG. 1C) that includes transaction records for training classifier models of the machine learning models (192). The training controller (188) is a collection of programs with instructions that may operate as part of the training input process (185). The training controller (188) processes the historical data B (187) with the keys D (186) to generate the training data (189). In one embodiment, the training controller (188) may filter the historical information (127) to identify the historical data B (187). The training data (189) is data used to train the machine learning models (192). The training data (189) is for one user and may form a portion of the training input (191). The training process (190) is a collection of programs and components with instructions to process the training input (191) to train the machine learning models (192). The training process (190) uses the update controller (194). The training input (191) is training data for one of the machine learning models (192). When training a model for a single user, the training input (191) may include the training data (189).
When training a model for a group user (i.e., from a cluster), the training input (191) may include the training data (189) as well as additional training data from other users. The training input (191) is input to the machine learning models (192). The machine learning models (192) include the models used by the system to perform private categorization using shared keys. In one embodiment, the machine learning models (192) may include the encoder model (142), the classifier models (152), and the decoder model (163) of FIG. 1A. One of the machine learning models (192) may be trained for each of the encryption keys for each user and for each group of users. The machine learning models (192) may include regression models, neural network models, etc. The machine learning models (192) process the training input (191) to generate the training output (193). The training output (193) is the data generated by the machine learning models (192) from the training input (191). The training output (193) may be input to the update controller (194). The update controller (194) is a collection of programs with instructions that may operate as part of the training process (190). The update controller (194) processes the training output (193) to generate the training updates (195) to improve the machine learning models (192). In one embodiment, the update controller (194) uses iterative algorithms, which may include regression, backpropagation, gradient descent, etc. In one embodiment, the update controller (194) compares the training output (193) to expected outputs and generates the training updates (195) responsive to the error between the training output (193) and the expected outputs. The training updates (195) are updates for the machine learning models (192) generated based on the training output (193).
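The error-driven updates produced by the update controller might be sketched as a single gradient-descent step; a linear model stands in for the real classifier, and all names are illustrative.

```python
# Hedged sketch of the update controller: one gradient-descent step on a
# linear stand-in model, generating weight updates from the error between
# the training output and the expected output.

def train_step(weights, inputs, expected, lr=0.1):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - expected
    # Training update: gradient of the squared error for each weight.
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(200):
    weights = train_step(weights, [1.0, 2.0], 5.0)

# Repeated updates drive the model output toward the expected value.
print(round(sum(w * x for w, x in zip(weights, [1.0, 2.0])), 6))  # 5.0
```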
In one embodiment, the training updates (195) include model parameters (e.g., weights) that are added to the existing weights of the machine learning models (192) to improve the machine learning models (192). Turning to FIG. 1C, the system (100) performs private categorization using shared keys. In one embodiment, the system (100) executes the processes (140), (150), (158) of FIG. 1A and the processes (170), (175), (180), (185), and (190) of FIG. 1B using the user devices A (102) and B (107) through N (109) and the server (112) of FIG. 1C. The system (100) includes the server (112), the user devices A (102) and B (107) through N (109), and the repository (120). The server (112) is a computing system (further described in FIG. 4A). The server (112) may include multiple physical and virtual computing systems that form part of a cloud computing environment. In one embodiment, execution of the instructions, programs, and applications of the server (112) is distributed to multiple physical and virtual computing systems in the cloud computing environment. The server (112) may include the server application (115). The server application (115) is a collection of programs with instructions that may execute on multiple servers of a cloud environment, including the server (112). The server application (115) executes the processes (118). In one embodiment, the server application (115) hosts websites and may serve structured documents (hypertext markup language (HTML) pages, extensible markup language (XML) pages, JavaScript object notation (JSON) files and messages, etc.) to interact with the user devices A (102) and B (107) through N (109). Requests from the user devices A (102) and B (107) through N (109) may be processed to generate responses that are returned to the user devices A (102) and B (107) through N (109).
The processes (118) are a collection of programs and components with instructions to process the communications between the user devices A (102) and B (107) through N (109) and the server (112). The processes (118) may include one or more of the processes (140), (150), (158), (170), (175), (180), (185), and (190) of FIGS. 1A and 1B. The user devices A (102) and B (107) through N (109) are computing systems (further described in FIG. 4A). For example, the user devices A (102) and B (107) through N (109) may be desktop computers, mobile devices, laptop computers, tablet computers, server computers, etc. The user devices A (102) and B (107) through N (109) include hardware components and software components that operate as part of the system (100). The user devices A (102) and B (107) through N (109) communicate with the server (112) to access, manipulate, and view services and information hosted by the system (100). In one embodiment, the user devices A (102) and B (107) through N (109) may communicate with the server (112) using standard protocols and file types, which may include hypertext transfer protocol (HTTP), HTTP secure (HTTPS), transmission control protocol (TCP), internet protocol (IP), hypertext markup language (HTML), extensible markup language (XML), etc. The user devices A (102) and B (107) through N (109) respectively include the user applications A (105) and B (108) through N (110). The user applications A (105) and B (108) through N (110) may each include multiple programs respectively running on the user devices A (102) and B (107) through N (109). The user applications A (105) and B (108) through N (110) may be native applications, web applications, embedded applications, etc. In one embodiment, the user applications A (105) and B (108) through N (110) may execute one or more of the processes (140), (150), (158), (170), (175), (180), (185), and (190) of FIGS. 1A and 1B.
In one embodiment, the user applications A (105) and B (108) through N (110) include web browser programs that display web pages from the server (112). As an example, the user application A (105) may be used to categorize transaction records by accessing the server (112). The user application A (105) sends a request to the server (112), which generates a response that identifies a transaction record and a predicted category for the transaction record. The user application A (105) may receive a user input to accept the predicted category or to recategorize the transaction record. The repository (120) is a computing system that may include multiple computing devices in accordance with the computing system (400) and the nodes (422) and (424) described below inFIGS.4A and4B. The repository (120) may be hosted by a cloud services provider that also hosts the server (112). The cloud services provider may provide hosting, virtualization, and data storage services as well as other cloud services to operate and control the data, programs, and applications that store and retrieve data from the repository (120). The data in the repository (120) includes the transaction information (121), the key information (122), the aggregated report information (123), the cluster information (124), the training information (125), the model information (126), and the historical information (127). The transaction information (121) is data that describes transactions of users of the system (100). The transaction information (121) may include transaction records for each user, including the transaction records (141) ofFIG.1A. In one embodiment, the transaction information (121) may be real time data that is processed upon performance of individual transactions and the storage of corresponding transaction records. The key information (122) is data that defines the keys used by the system (100). 
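The accept-or-recategorize interaction described above can be sketched as follows. This is an illustrative sketch only; the function name, input strings, and category names are assumptions for illustration and are not part of the disclosed system:

```python
def resolve_category(predicted_category, user_input, manual_category=None):
    """Return the final category for a transaction record.

    user_input is "accept" to keep the predicted category, or
    "recategorize" to substitute a manually selected category.
    """
    if user_input == "accept":
        return predicted_category
    if user_input == "recategorize":
        if manual_category is None:
            raise ValueError("recategorization requires a manually selected category")
        return manual_category
    raise ValueError("unknown user input: " + user_input)
```

A recategorization recorded this way may also be logged as a recategorization record, which the historical information (127) described below uses to identify discrepancies between predicted and manually selected categories.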
The keys are encryption keys used to encrypt information in the system (100) to prevent unauthorized access to information. The key information (122) includes public keys paired with private keys. The pairs of keys include keys for individual users and keys for groups of users. The key information (122) may include the keys A (144), the keys B (160), the keys C (183), and the keys D (186). The aggregated report information (123) is data that describes the users of the system (100). In one embodiment, the aggregated report information (123) includes summaries of transaction records for each user of the system (100). The aggregated report information (123) is used as the basis for clustering the users into groups. The cluster information (124) is data that identifies groups of users. The cluster information (124) includes the cluster data (181) ofFIG.1B. Cluster data may be generated for each user that identifies the cluster and group to which a user belongs. The cluster information (124) may include a cluster identifier for each user that identifies the cluster into which a user is grouped. The training information (125) is data used to train the machine learning models of the system (100), including the machine learning models (192) of FIG.1B. The training information (125) may include one or more of the historical data B (187), the training data (189), and the training input (191) ofFIG.1B. In one embodiment, the training information (125) includes the training input (191), which is encrypted to protect the underlying information (e.g., the historical data B (187)). The model information (126) is data that defines the machine learning models of the system (100). The model information (126) includes data defining the machine learning models (192) ofFIG.1B. The historical information (127) is data that describes the interactions of users with the system (100). 
In one embodiment, the historical information (127) includes transaction records, categorization records, and recategorization records. The transaction records describe the transactions of users, the categorization records describe the categories assigned to the transaction records, and the recategorization records describe changes to the categorization records. In one embodiment, the categorization records include the predicted categories generated by the system (100) for the transaction records, and the recategorization records may identify discrepancies between the predicted categories and manually selected categories. Although shown using distributed computing architectures and systems, other architectures and systems may be used. In one embodiment, the server application (115) may be part of a monolithic application that implements the modeling and management of affinity networks. In one embodiment, the user applications A (105) and B (108) through N (110) may be part of monolithic applications that perform private categorization using shared keys without the server application (115). Turning toFIG.2, the process (200) performs private categorization using shared keys. The process (200) may be performed by a computing device interacting with one or more additional computing devices. For example, the process (200) may execute on a server responsive to one or more user devices. At Step202, an encryption key is selected. In one embodiment, the encryption key is one of a user key and a group key. In one embodiment, the group key corresponds to a group of user identifiers of a cluster of users. Cluster data maintained by the system maps between the user identifiers and cluster identifiers to identify the users that belong to specific clusters. In one embodiment, the cluster is generated using an aggregated report that includes historical transaction records. 
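The key selection of Step 202 can be illustrated with a minimal sketch. The mapping structures, identifiers, and function name below are assumptions chosen for illustration, not the disclosed implementation:

```python
def select_encryption_key(user_id, use_group_key, cluster_map, group_keys, user_keys):
    """Select the group key when the user has joined a cluster's group,
    otherwise fall back to the user's own key.

    cluster_map maps user identifiers to cluster identifiers, mirroring
    the cluster data that identifies which users belong to which clusters.
    """
    if use_group_key:
        cluster_id = cluster_map[user_id]
        return group_keys[cluster_id]
    return user_keys[user_id]
```

A lookup structure of this shape keeps the choice between user classifier models and group classifier models consistent with the key used to encrypt the data, since a model trained on data encrypted with one key cannot be applied to data encrypted with a different key.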
In one embodiment, an aggregated report may be a vector of statistical information generated from historical data. In one embodiment, the encryption key is a group key that is distributed responsive to a user selection for joining the group of user identifiers of the cluster. The selection may be received from a user device. In one embodiment, the group key is generated and distributed in response to the user selection. In one embodiment, the encryption key is a public key and the decryption key is a corresponding private key. The public key and the private key form a key pair. The public key may be used to encrypt or verify information and the private key may be used to decrypt or sign information. In one embodiment, the cluster is generated by a cluster controller in response to receiving multiple aggregated reports from multiple user processes on a periodic basis. The user processes may correspond to users of the system. In one embodiment, the aggregated report may include statistical data, distribution data, re-categorization data, and financial data generated from multiple transaction records. At Step205, a transaction vector, generated from a transaction record, is encrypted with the encryption key to generate an encrypted transaction vector. In one embodiment, the encryption may be performed using an asymmetric cryptographic algorithm. In one embodiment, the transaction record being encrypted is processed with an encoder model to generate the transaction vector. In one embodiment, the encoder model may include a neural network operating on numerical data or textual data, and the neural network may include an autoencoder, an embedding layer, a transformer network, an attention layer, etc. In one embodiment, the encoder model may be trained with historical data encrypted with an encryption key distributed to each user that is periodically updated. In one embodiment, the transaction vector is encrypted using fully homomorphic encryption. 
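An aggregated report described as "a vector of statistical information generated from historical data" might look like the following sketch. The particular statistics chosen (count, mean, variance, recategorization rate) are illustrative assumptions, not the disclosed report format:

```python
def aggregated_report(amounts, recategorized_flags):
    """Summarize a user's historical transaction records as a fixed-length
    vector: [record count, mean amount, variance of amounts,
    recategorization rate]. One flag per record, 1 if recategorized."""
    n = len(amounts)
    mean = sum(amounts) / n
    variance = sum((a - mean) ** 2 for a in amounts) / n
    recat_rate = sum(recategorized_flags) / n
    return [n, mean, variance, recat_rate]
```

Because every user's report has the same fixed length and the same statistic in each position, a cluster controller can compare reports directly to group users with similar transaction behavior.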
Fully homomorphic encryption prevents the servers that process the data from accessing personally identifiable information from the underlying transaction record. At Step208, an encrypted category vector is received that is generated by a classifier model from the encrypted transaction vector. The classifier model corresponds to the encryption key. A classifier model trained on data encrypted with one key may not work with data encrypted with a different key. In one embodiment, the classifier model is one of a user classifier model and a group classifier model. In one embodiment, the user classifier model is trained on user transaction records corresponding to a user identifier of the group of user identifiers. In one embodiment, the group classifier model is trained on group transaction records corresponding to the group of user identifiers and including the user transaction records for the users of the group of a cluster. At Step210, a category from the encrypted category vector is decrypted with a decryption key corresponding to the encryption key. In one embodiment, the category is identified by decrypting the encrypted category vector to generate a category vector and processing the category vector with a decoder model. In one embodiment, the decoder model is a category decoder model. In one embodiment, the decoder model may be trained with historical data encrypted with a key distributed to each user and periodically updated. In one embodiment, the category identifies an account of a chart of accounts for a transaction record. A chart of accounts may include names for accounts. The names may include “accounts receivable”, “accounts payable”, “revenue”, etc. At Step212, the category is presented. In one embodiment, the category may be presented by sending a message to a user device that includes the category. The user device may display the category in a user interface. In one embodiment, the category is presented for an account of a chart of accounts. 
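The disclosure calls for fully homomorphic encryption; as a much simpler stand-in, the toy additively homomorphic Paillier scheme below illustrates the core idea that a server can compute on ciphertexts without ever seeing the plaintexts. This is not the disclosed scheme, and the key sizes here are deliberately tiny and insecure:

```python
import math
import random

def paillier_keygen(p=61, q=53):
    """Toy Paillier key pair from two small primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because the generator is g = n + 1
    return (n,), (lam, mu, n)     # (public key, private key)

def encrypt(pub, m):
    """Encrypt integer m < n with random blinding factor r."""
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n
    return l * mu % n

def add_ciphertexts(pub, c1, c2):
    """Homomorphic addition: the product of ciphertexts decrypts to m1 + m2."""
    (n,) = pub
    return c1 * c2 % (n * n)
```

In the same spirit, a server holding only encrypted transaction vectors can produce an encrypted category vector without access to the personally identifiable information in the underlying record; only the holder of the decryption key recovers the category.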
In one embodiment, an account for the category is updated to include the transaction record in response to a user selection. Turning toFIG.3, the system (300) performs private categorization using shared keys. In their quests to avoid getting audited, Professor X and Lex Luthor each seek to categorize their transactions using the system (300). Professor X operates the Professor X Device (301), and Lex Luthor operates the Lex Luthor device (371). The Professor X Device (301) is a computing device that may be a smartphone, desktop computer, tablet computer, etc. The Professor X Device (301) displays the Professor X interfaces A (303), B (311), and C (321). The Professor X interface A (303) is displayed on the Professor X Device (301) after an aggregated report (one of the aggregated reports (353)) is sent to the server (351) and is used to generate the cluster information (357) to identify a cluster of users with transactions and categories that are similar to those for Professor X. The text (305) along with the yes element (307) and the no element (309) are displayed in the Professor X interface A (303). The text (305) is displayed to indicate that Professor X may use models trained with transaction records for groups of users to provide more accurate categorizations. Since Professor X fears getting audited more than any supervillain, Professor X readily accepts by selecting the yes element (307) and will use group models. Had the no element (309) been selected, a user specific model trained on Professor X's data (without group data) would have been used. The Professor X interface B (311) is displayed after Professor X has selected to categorize the transaction record B (313). The transaction record B (313) is displayed along with the recommended category (315). The recommended category (315) was identified by the classification model (361) after the transaction record B (313) is processed. 
The transaction record B (313) is processed by extracting a transaction vector from the transaction record B (313), encrypting the transaction vector with the group encryption key on the Professor X Device (301) to form an encrypted transaction vector, and transmitting the encrypted transaction vector (part of the encrypted transaction vectors (359)) to the server (351). The server (351) processes the encrypted transaction vector with the classification model (361) to generate an encrypted category vector that is returned to the Professor X Device (301). The Professor X Device (301) decrypts the encrypted category vector and selects the recommended category (315) as the category identified by the category vector. Professor X may recategorize the transaction record B (313) by selecting the recategorize element (317) or accept the recommended category (315) by selecting the accept element (319). Selection of the recategorize element (317) may bring up a window that allows Professor X to specify the category for the transaction record B (313). The Professor X interface C (321) is displayed after interaction with the Professor X interface B (311). The Professor X interface C (321) provides the text (323) to indicate that the transaction was successfully categorized by selection of either the recategorize element (317) or the accept element (319). The server (351) is a computing device. The server (351) may be used to train and use machine learning models to generate predictions of categories for transaction records. The server (351) may train models for specific users and for groups of users. Data for the models may be obfuscated or encrypted to prevent the server (351) from accessing the underlying data. The aggregated reports (353) are received from each of the users of the system (300). The aggregated reports (353) include aggregated reports from the Professor X Device (301) and from the Lex Luthor device (371). 
The cluster controller (355) processes the aggregated reports (353) to cluster the users of the system and generate the cluster information (357). In one embodiment, the cluster controller (355) may use K-means clustering. The cluster information (357) maps users to groups of users. Each user may acquire a set of user encryption keys to use user specific models. Each group of users may acquire a set of group encryption keys to use group specific models. The encrypted transaction vectors (359) are received from the users of the system (300). The encrypted transaction vectors (359) include encrypted transaction vectors for the transaction record B (313) from the Professor X Device (301) and the transaction record E (383) from the Lex Luthor device (371). The classification model (361) is a group model that processes the encrypted transaction vectors (359) that have been encrypted with the same group encryption key. The classification model (361) processes the encrypted transaction vectors (359) to generate the encrypted category vector (363). The encrypted category vector (363) is generated from the encrypted transaction vectors (359) by the classification model (361). The encrypted category vector (363) includes encrypted category vectors for the recommended category (315) of the Professor X Device (301) and the recommended category (385) of the Lex Luthor device (371). The Lex Luthor device (371) is a computing device that may be a smartphone, desktop computer, tablet computer, etc. The Lex Luthor device (371) and its components operate in a similar fashion as those of the Professor X Device (301). The Lex Luthor interface D (373) displays the text (375) and receives user input for the selection of one of the yes element (377) and the no element (379). In an ironic twist, Lex Luthor fears getting audited as much as any superhero and also selects the yes element (377) to use a group model. 
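The cluster controller's grouping step, for which the disclosure names K-means clustering as one option, can be sketched in pure Python. The fixed initial centroids and the iteration count are simplifying assumptions for illustration:

```python
def kmeans(points, initial_centroids, iters=20):
    """Group aggregated-report vectors: assign each point to its nearest
    centroid, recompute centroids as member means, and repeat."""
    centroids = [list(c) for c in initial_centroids]
    assignments = []
    for _ in range(iters):
        assignments = [
            min(range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            for p in points
        ]
        for i in range(len(centroids)):
            members = [p for p, a in zip(points, assignments) if a == i]
            if members:
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return assignments, centroids
```

The returned assignments correspond to the cluster identifiers in the cluster information (357): users whose aggregated reports land in the same cluster may share a group key and a group classification model.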
The Lex Luthor interface E (381) displays the transaction record E (383) which has been processed by the Lex Luthor device (371) and the server (351) to generate the recommended category (385), which may be the same as the recommended category (315) that was recommended for Professor X. The recategorize element (387) and the accept element (389) are displayed and Lex Luthor selects the accept element (389) to accept the recommended category (385) for the transaction record E (383). The Lex Luthor interface F (391) is displayed with the text (393). The text (393) indicates that the transaction record E (383) was successfully categorized. Embodiments may be implemented on a computing system specifically designed to achieve an improved technological result. When implemented in a computing system, the features and elements of the disclosure provide a significant technological advancement over computing systems that do not implement the features and elements of the disclosure. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be improved by including the features and elements described in the disclosure. For example, as shown inFIG.4A, the computing system (400) may include one or more computer processors (402), non-persistent storage (404), persistent storage (406), a communication interface (412) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities that implement the features and elements of the disclosure. The computer processor(s) (402) may be an integrated circuit for processing instructions. The computer processor(s) may be one or more cores or micro-cores of a processor. The computer processor(s) (402) includes one or more processors. The one or more processors may include a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), combinations thereof, etc. 
The input device(s) (410) may include a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device. The input device(s) (410) may receive inputs from a user that are responsive to data and messages presented by the output device(s) (408). The inputs may include text input, audio input, video input, etc., which may be processed and transmitted by the computing system (400) in accordance with the disclosure. The communication interface (412) may include an integrated circuit for connecting the computing system (400) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device. Further, the output device(s) (408) may include a display device, a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (402). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms. The output device(s) (408) may display data and messages that are transmitted and received by the computing system (400). The data and messages may include text, audio, video, etc., and include the data and messages described above in the other figures of the disclosure. Software instructions in the form of computer readable program code to perform embodiments may be stored, in whole or in part, temporarily or permanently, on a computer program product that includes a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. 
Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention, which may include transmitting, receiving, presenting, and displaying data and messages described in the other figures of the disclosure. The computing system (400) inFIG.4Amay be connected to or be a part of a network. For example, as shown inFIG.4B, the network (420) may include multiple nodes (e.g., node X (422), node Y (424)). Each node may correspond to a computing system, such as the computing system shown inFIG.4A, or a group of nodes combined may correspond to the computing system shown inFIG.4A. By way of an example, embodiments may be implemented on a node of a distributed system that is connected to other nodes. By way of another example, embodiments may be implemented on a distributed computing system having multiple nodes, where each portion may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (400) may be located at a remote location and connected to the other elements over a network. The nodes (e.g., node X (422), node Y (424)) in the network (420) may be configured to provide services for a client device (426), including receiving requests and transmitting responses to the client device (426). For example, the nodes may be part of a cloud computing system. The client device (426) may be a computing system, such as the computing system shown inFIG.4A. Further, the client device (426) may include and/or perform all or a portion of one or more embodiments of the invention. The computing system ofFIG.4Amay include functionality to present raw and/or processed data, such as results of comparisons and other processing. For example, presenting data may be accomplished through various presenting methods. 
Specifically, data may be presented by being displayed in a user interface, transmitted to a different computing system, and stored. The user interface may include a GUI that displays information on a display device. The GUI may include various GUI widgets that organize what data is shown as well as how data is presented to a user. Furthermore, the GUI may present data directly to the user, e.g., data presented as actual data values through text, or rendered by the computing device into a visual representation of the data, such as through visualizing a data model. In the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements. Further, unless expressly stated otherwise, “or” is an “inclusive or” and, as such, includes “and.” Further, items joined by an “or” may include any combination of the items with any number of each item unless expressly stated otherwise. In the above description, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description. 
Further, other embodiments not explicitly described above can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.
11943343 | DETAILED DESCRIPTION FIG.1a FIG.1a is a graphical illustration of an exemplary system, where a device communicates data with a network in order to conduct a key exchange, in accordance with exemplary embodiments. The system100can include a device103and a network105, where the nodes can communicate data106over an Internet Protocol (IP) network107. Network105can comprise a plurality of servers supporting communication such as data106with a plurality of devices103. In exemplary embodiments, network105can include a server101and a key server102. The exemplary servers shown for network105in system100can be either different physical computers such as rack-mounted servers, or different logical or virtual servers or instances operating in a “cloud” configuration. Or, server101and key server102could represent different logical “server-side” processes within a network105, including different programs running on a server that listen and communicate using different IP port numbers within one physical server. In exemplary embodiments, server101and key server102can operate using the physical electrical components depicted and described for a server101inFIG.1bbelow. Other possibilities exist as well for the physical embodiment of server101and key server102without departing from the scope of the present disclosure. In exemplary embodiments, server101can be described as a “first server” and key server102can be described as a “second server”. Further, the combination of a first server101and a second server102can comprise a network105. The combination of a first server101and a second server102can also comprise a “set of servers”. Although server101and key server102are depicted inFIG.1aas belonging to the same network105, server101and key server102could be associated with different networks and communicate in a secure manner. 
Secure sessions between server101and key server102could be established over IP network107using methods including a physical wired connection via a local area network (LAN), transport layer security (TLS), a virtual private network (VPN), and IP Security (IPSEC), and other possibilities exist as well. As depicted inFIG.1a, server101and key server102could communicate over a private network107a. Device103can be a computing device for sending and receiving data. Device103can take several different embodiments, such as a general purpose personal computer, a mobile phone based on the Android® from Google® or the IOS operating system from Apple®, a tablet, a networked device with a sensor or actuator for the “Internet of Things”, a module for “machine to machine” communications, a device that connects to a wireless or wired Local Area Network (LAN), an initiator according to the Device Provisioning Protocol specification (DPP) from the WiFi alliance, a router, and/or a server, and other possibilities exist as well without departing from the scope of the present disclosure. Exemplary electrical components within a device103can be similar to the electrical components for a server101depicted and described inFIG.1bbelow, where device103can use electrical components with smaller capacities and lower overall power consumption, compared to the capacity and power consumption for the same electrical components in a server101. Device103can include a device identity103i, which could comprise a string or number to uniquely identify device103with network105and/or server101and server102. 
Device identity103icould comprise a medium access control (MAC) address for a physical interface such as Ethernet or WiFi, a Subscription Permanent Identifier (SUPI) with 5G networks, an international mobile subscriber identity (IMSI) or international mobile equipment identity (IMEI) with 2G/3G/4G networks, and other possibilities exist as well without departing from the scope of the present disclosure. In exemplary embodiments, device identity103ican be written to hardware in device103and operate as a unique, long-term identity for device103. Device103can record at least one elliptic curve cryptography (ECC) static public key for network105comprising network static public key PK.network102a. Network static public key102acould be recorded in nonvolatile or volatile memory within device103. For embodiments where key102ais recorded in nonvolatile memory, key102acould be recorded by a device manufacturer or device distributor. For embodiments where key102ais recorded in volatile memory, device103could obtain key102afrom a different server than server101for network105before sending data106, such as device103obtaining key102avia a secure session from a different server before sending data106. A device103can record a plurality of different network static public keys102ain a network public key table103t. Different keys102ain a table103tcould be associated with different networks105or different servers101that device103communicates with over time. Exemplary data for a network public key table103tfor device103is depicted and described in connection withFIG.1cbelow. The different keys102acan be associated with network names and/or Uniform Resource Locators (URLs) or domain names, such that device103can select the network static public key102abased on a URL or domain name for a network105or a server101where device103will send data106. 
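The selection of a network static public key 102a from the network public key table 103t by domain name could look like the following sketch. The table contents, domain names, and function name are hypothetical placeholders for illustration:

```python
# Hypothetical contents of network public key table 103t:
# domain name of a server 101 -> network static public key PK.network 102a
network_key_table = {
    "api.network-a.example": "PK.network.A",
    "api.network-b.example": "PK.network.B",
}

def select_network_key(table, server_domain):
    """Return the network static public key recorded for the server the
    device will send data to; an unknown domain is an explicit error."""
    try:
        return table[server_domain]
    except KeyError:
        raise KeyError("no network static public key recorded for " + server_domain)
```

Keying the table by URL or domain name lets a single device hold keys for multiple networks and pick the right one before any data is sent.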
Network static public key PK.network102acan be obtained by device103before conducting an elliptic curve Diffie-Hellman (ECDH) key exchange or an ephemeral elliptic curve Diffie-Hellman (ECDHE) key exchange. Network static public key102acould be obtained by device103in several different ways. Network static public key102acould be written into memory by a manufacturer, distributor, or owner of device103before device103connects with server101or a network105. Network static public key102acould be received by device103over an IP network107via a secured session, such as a TLS, DTLS, IPSec, or VPN connection before sending data106to server101. In exemplary embodiments, network static public key102ais recorded in device103in a secured and authenticated manner, such that device103can trust network static public key102a. As one exemplary embodiment, network static public key102acould be a public key within a certificate, where the public key102ais signed by a certificate authority. Although not depicted inFIG.1a, device103could also record a certificate authority root certificate, and device103could (a) verify the signature of a certificate authority in a certificate for the public key102ausing (b) the recorded root certificate for the certificate authority (and any intermediary parent certificates). Network static public key102acould be processed or formatted according to a set of parameters104, and network static public key102acould also be compatible with parameters104. Although public key102ais described as “static”, the key could change over time such as with the expiration of a validity date when recorded in a certificate. Public key102acould remain static over the period of time for device103to conduct at least one ECDHE key exchange, where the ECDHE key exchange uses ephemeral or derived ECC PKI keys. Public key102acould comprise a long-term public key for use by device103when communicating with network105. 
Although the use of a certificate for public key102ais described in this paragraph for public key102a, the use of a certificate is not required. In an embodiment depicted inFIG.3cbelow, (i) public key102acould comprise a responder bootstrap public key and (ii) device103could comprise an initiator according to the DPP standard, which is also depicted and described in connection withFIG.3cbelow. Cryptographic parameters104can specify values or settings for (i) conducting an ECDH or ECDHE key exchange, (ii) mutually deriving a symmetric ciphering key, and (iii) using a symmetric ciphering algorithm. As contemplated herein, cryptographic parameters104may also be referred to as parameters104. Each of device103, server101, and key server102can record at least one compatible subset of parameters within a set of cryptographic parameters104. Parameters104can specify values for an elliptic curve cryptography (ECC) curve name, key length, key formatting (e.g. compressed or uncompressed), encoding rules, etc. As contemplated herein, the parameters104and cryptographic algorithms used with ECC PKI keys and a key exchange in the present disclosure can be compatible and substantially conform with ECC algorithms and keys as specified in (i) the IETF Request for Comments (RFC) 6090 titled “Fundamental Elliptic Curve Cryptography Algorithms”, and (ii) IETF RFC 5915 titled “Elliptic Curve Private Key Structure”, and also subsequent and related versions of these standards. For use of ECC algorithms, parameters104can specify elliptic curve names such as, but not limited to, NIST P-256, sect283k1, sect283r1, sect409k1, sect409r1, and other possibilities exist as well. Further, elliptic curves that do not depend on curves currently specified by the National Institute of Standards and Technology (NIST) could be utilized as well, such as, but not limited to, Curve25519, Curve448, or FourQ. 
Parameters104can specify domain parameters for nodes in system100to calculate values or numbers in a compatible manner, such as a common base point G for use with ECC PKI key pairs and a defining equation for an elliptic curve. Other values within sets of cryptographic parameters104are possible as well, without departing from the scope of the present disclosure. An exemplary set of cryptographic parameters104is depicted and described in connection withFIG.2ebelow, and PKI keys used by device103, server101, and key server102could be associated with a member of the set of cryptographic parameters, such as a single row in the parameters104depicted inFIG.2ebelow. Device103can include an ECC key pair generation algorithm103xand server101can include a compatible ECC key pair generation algorithm101x. A key pair generation algorithm103xor101xcan use (i) a random number generator in order to derive the ephemeral PKI private key and (ii) a selected set of cryptographic parameters104in order to calculate the ephemeral PKI public key. In exemplary embodiments, a random number for the ephemeral PKI private key multiplies the base point G from the parameters104in order to obtain the corresponding ephemeral PKI public key. Other possibilities exist as well for the algorithms103xand101xto derive an ephemeral ECC PKI key pair without departing from the scope of the present disclosure. A key pair generation algorithm103xfor device103can output an ephemeral ECC PKI pair comprising device ephemeral public key Ed103aand device ephemeral private key ed103b. A key pair generation algorithm101xfor server101can output an ephemeral ECC PKI pair comprising server ephemeral public key E1101aand server ephemeral private key e1101b. As contemplated in the present disclosure, the use of a capital letter as the first character for a PKI key can represent a public key, and the use of a lower case letter as the first character for a PKI key can represent a private key. 
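The key pair generation step described above (a random private key multiplies the base point G) and the resulting ECDH property can be illustrated end to end on a small textbook curve. The curve below (y² = x³ + 2x + 2 over GF(17), base point (5, 1) of order 19) is far too small for real use; it is only a readable stand-in for the curves a parameters set104would actually name:

```python
import secrets

# Toy curve y^2 = x^3 + 2x + 2 over GF(17), base point G = (5, 1) of order 19.
P, A = 17, 2
G, N = (5, 1), 19

def point_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # inverse points sum to infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope (doubling)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication k * point."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def generate_keypair():
    """Algorithm 103x/101x: random private key; public key = private * G."""
    priv = secrets.randbelow(N - 1) + 1
    return priv, scalar_mult(priv, G)

# Device 103 derives (ed, Ed); server 101 derives (e1, E1).
ed, Ed = generate_keypair()
e1, E1 = generate_keypair()

# ECDH: each side multiplies its own private key by the peer's public key,
# and both arrive at the same shared point.
assert scalar_mult(ed, E1) == scalar_mult(e1, Ed)
```

The final assertion is the core of the ECDH key exchange algorithm220referenced later: ed·(e1·G) and e1·(ed·G) are the same point, so device103and server101can derive a shared secret without transmitting any private key.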
As contemplated in the present disclosure, the second letter for a PKI key can represent the entity the key is associated with or belongs to (e.g. “d” for device103and “1” for server101). Device103can also record a device static PKI key pair103pin nonvolatile memory or within a secure processing environment within device103. The key pair103pcan be either (i) generated by device103during device manufacturing or device distribution, or (ii) generated externally from device103and written to device103in a secure manner during device manufacturing or device distribution. The PKI key pair103pcan comprise a device static private key d1103dand a device static public key D1103c. The keys d1103dand D1103ccould be formatted and compatible with the set of cryptographic parameters104. In exemplary embodiments, public key D1103ccan be recorded in an X.509 certificate from a certificate authority. As depicted inFIG.1a, server101can include a server identity101i, a key pair generation algorithm101x, a set of cryptographic parameters104, a server database101d, and a server certificate101c. Server identity101ican comprise a name or number to uniquely identify server101in network105and/or IP network107. In exemplary embodiments, server identity101ican comprise a domain name service (DNS) name, which could comprise a string of characters and/or numbers. Server identity101icould be associated with an IP address, such that the exemplary data106from device103could be routed to server101via the IP network107. Server identity101icould also comprise a MAC address, and a server identity101icould comprise multiple different values such as all of a MAC address, a DNS name, and virtual instance identity if server101operates as a virtual server. In summary, server identity101ican allow (a) a plurality of different devices103to (b) select and route data106to server101from a potential plurality of different servers and nodes. 
Server identity101icould also comprise a server name indication (SNI) value. Other possibilities exist as well for the format, structure, or value for a server identity101iwithout departing from the scope of the present disclosure. A key pair generation algorithm101xfor server101was described above in connection with key pair generation algorithm103xfor device103. Key pair generation algorithm101xcan derive ephemeral ECC PKI keys for server101to use with ECDHE key exchanges for a plurality of different devices103. Note that although a single ECC PKI key pair of public key E1101aand private key e1101bis depicted for system100, server101could derive and operate with a plurality of different keys E1101aand e1101bwith different devices103. The plurality of different keys E1101aand e1101bfor communicating with different devices103could be recorded in a server database101das depicted and described in connection withFIG.2dbelow. The set of cryptographic parameters104for server101can be equivalent to or a superset of the cryptographic parameters104used by device103. The description above for a set of parameters104used by a device103is also applicable to a set of parameters104used by a server101. Server database101dfor server101can comprise a database or memory recording data for server101to communicate with both a plurality of devices103and also at least one key server102. An exemplary server database101dis depicted and described in connection withFIG.2dbelow. Server database101dcan record values for PKI keys, derived shared secrets, derived symmetric ciphering keys, random numbers used in secure sessions, and related values in order to support the communications with both device103and key server102. Server certificate101ccan comprise a certificate formatted according to the X.509 family of standards and include a static server101public key PK.S1101p. Server certificate101ccan include a signature from a certificate authority for server public key PK.S1101p. 
Although not depicted inFIG.1a, server101can also record and operate with a private key corresponding to public key PK.S1101p. As depicted inFIG.1a, key server102can include a key server identity102i, a set of cryptographic parameters104, a network static private key SK.network102b, and a key server database102d. Key Server identity102ican comprise a name or number to uniquely identify key server102in network105and/or IP network107. Key Server identity102ican be similar to server identity101i, except using a different value, name, or number in order to uniquely identify key server102within network105. The set of cryptographic parameters104for server102can be equivalent to or a superset of the cryptographic parameters104used by device103and parameters104was also described above for device103. In exemplary embodiments, the parameters104used by both key server102and server101can be fully compatible, such as using the same ECC named curve, key lengths, encoding rules, etc. Server database102dfor key server102can comprise a database or memory recording data for key server102to (i) communicate with a plurality of servers101and (ii) support server101communicating with a plurality of devices103. Key server database102dcan be similar to server database101ddepicted inFIG.2d, except that key server database102dcan record values and data calculated by key server102. Key server database102dcan record values for PKI keys, derived shared secrets, and related values in order to support the communications between (i) network105and/or server101and (ii) device103. As depicted inFIG.1a, key server database102dcan record sets of data for different devices103, where each set can comprise a row in a table with a device identity103i, the network static public key value PK.network102a, and the network static private key SK.network102b. 
Although not depicted inFIG.1a, a key server database102dcould also record or store a secure hash value for the network public key102a, where the algorithm for the secure hash value could be specified in a member of the set of cryptographic parameters104. For some exemplary embodiments, (i) a device identity103icould be omitted from a key server database102dor (ii) the device identity103icould comprise a secure hash value over either the network public key102aor the device static public key103c. As depicted for a key server database102dinFIG.1a, some devices103could share the same keys102aand102b, which could comprise shared keys102zfor the devices103as depicted and described in connection withFIG.1cbelow. Other devices103could record unique keys102v, where devices103record a value for the network static public key PK.network102athat is uniquely recorded in each device. A key server database102dcould record and track the associated network private and public keys for each device. In other exemplary embodiments, a key server102could omit recording device identities103iin a database102d, and key server102could associate and use a network static private key SK.network102bwith a particular server101(e.g. all data from a server101could use or be associated with the private key SK.network102b). Other possibilities exist as well for the mapping of network static private keys to either servers101or devices103without departing from the scope of the present disclosure. Also, although a single value for SK.network102bis depicted as associated with a device103, a key server102could also use multiple different values of SK.network102b, such as (i) different values for SK.network102bfor different parameters104(e.g. different named curves), or (ii) separate values for SK.network102bfor digital signatures and ECDH key exchanges. 
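As one concrete reading of the paragraph above, a key server database102drow could use a secure hash over a public key as the device identity103i. The sketch below assumes SHA-256 as the hash from parameters104; the key bytes and placeholder strings are illustrative, not values from the disclosure:

```python
import hashlib

# Placeholder for a device static public key 103c (an uncompressed EC point
# would normally start with 0x04; the remaining bytes here are dummies).
device_public_key = b"\x04" + bytes(64)

# One row of a hypothetical key server database 102d, where the device
# identity 103i is the SHA-256 hash of the device static public key 103c.
row = {
    "device_identity_103i": hashlib.sha256(device_public_key).hexdigest(),
    "pk_network_102a": "public value, also recorded by device 103",
    "sk_network_102b": "private value, held only by key server 102",
}

# Any node hashing the same public key derives the same identity, so no
# separate identifier needs to be provisioned or tracked.
assert row["device_identity_103i"] == hashlib.sha256(device_public_key).hexdigest()
```

This kind of derived identity also supports the embodiment above where device identities103iare omitted entirely and keys are associated per server101instead.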
In other words, a device103could also record the corresponding different multiple values for PK.network102a, and select and use the public keys depending on requirements such as parameters104used or if the network public key will be used for verifying digital signatures or conducting ECDH key exchanges. Key server102can record at least one network static private key SK.network102b, which can be the private key corresponding to the network static public key PK.network102arecorded by device103and described above for device103. In exemplary embodiments and as depicted inFIG.1aand alsoFIG.2abelow, key server102may not communicate with device103directly, but rather communicates with server101through a private network107a. Although not depicted inFIG.1a, a network105could operate a firewall in order to prevent packets or data from the public Internet (other than server101) from reaching key server102. In this manner, by isolating key server102from IP network107, security for the key server102and the network static private key SK.network102bcan be enhanced, since only authenticated and authorized nodes within network105and connected to private network107acould communicate with server102. IP network107could be either a Local Area Network (LAN) or a Wide Area Network (WAN), or potentially a combination of both. IP network107could include data links supporting IEEE 802.11 (WiFi) standards. Device103can also utilize a variety of WAN wireless technologies to communicate data106with server101, including Low Power Wide Area (LPWA) technology, 3rd Generation Partnership Project (3GPP) technology such as, but not limited to, 3G, 4G Long-Term Evolution (LTE), 4G LTE Advanced, NarrowBand-Internet of Things (NB-IoT), LTE Cat M, proposed 5G networks, and other examples exist as well. Server101can connect to the IP network107via a wired connection such as, but not limited to, an Ethernet, a fiber optic, or a Universal Serial Bus (USB) connection (not shown). 
IP network107could also be a public or private network supporting Internet Engineering Task Force (IETF) standards such as, but not limited to, RFC 768 (User Datagram Protocol), RFC 793 (Transmission Control Protocol), and related protocols including IPv6 or IPv4. A public IP network107could utilize globally routable IP addresses. Private IP network107acould utilize private IP addresses which could also be referred to as an Intranet. Other possibilities for IP Network107and Private Network107aexist as well without departing from the scope of the disclosure. FIG.1b FIG.1bis a graphical illustration of hardware, firmware, and software components for a server, in accordance with exemplary embodiments.FIG.1bis illustrated to include several components that can be common within a server101. Server101may consist of multiple electrical components in order to communicate with a plurality of devices103and a key server102. In exemplary embodiments and as depicted inFIG.1b, server101can include a server identity101i, a processor101e(depicted as “CPU101e”), random access memory (RAM)101f, an operating system (OS)101g, storage memory101h(depicted as “nonvolatile memory101h”), a Wide Area Network (WAN) interface101j, a LAN interface101k, a system bus101n, and a user interface (UI)101m. Server identity101icould comprise a preferably unique alpha-numeric or hexadecimal identifier for server101, such as an Ethernet MAC address, a domain name service (DNS) name, a Uniform Resource Locator (URL), an owner interface identifier in an IPv6 network, a serial number, an IP address, or other sequence of digits to uniquely identify each of the many different possible nodes for a server101connected to an IP network105. Server identity101icould comprise a server name indicator (SNI). Server identity101ican preferably be recorded in a non-volatile memory and recorded by a network105upon configuration of a server101. 
Server identity101imay also be a number or string to identify an instance of server101running in a cloud or virtual networking environment. In exemplary embodiments, server101can operate with multiple different server identities101i, such as a first server identity101icomprising a DNS name and a second server identity101icomprising an IP address and a port number. A different server101could be associated with a different IP address and port number for a network105. The CPU101ecan comprise a general purpose processor appropriate for higher processing power requirements for a server101, and may operate with multiple different processor cores. CPU101ecan comprise a processor for server101such as an ARM® based processor or an Intel® based processor such as belonging to the XEON® family of processors, and other possibilities exist as well. CPU101ecan utilize bus101nto fetch instructions from RAM101fand operate on the instructions. CPU101ecan include components such as registers, accumulators, and logic elements to add, subtract, multiply, and divide numerical values and store or record the results in RAM101f or storage memory101h, and also write the values to an external interface such as WAN interface101jand/or LAN interface101k. In exemplary embodiments, CPU101ecan perform the mathematical calculations for a key pair generation step101xand also an ECDH key exchange algorithm220depicted inFIG.2a,FIG.2b, etc., below. CPU101ecan also contain a secure processing environment (SPE)101sin order to conduct elliptic curve cryptography (ECC) operations and algorithms, such as an ECC point addition step214depicted inFIG.2cbelow, as well as deriving ephemeral ECC PKI keys such as with key generation step101xdepicted and described in connection withFIG.1aabove. 
SPE101scan comprise a dedicated area of silicon or transistors within CPU101ein order to isolate the ECC operations from other programs or software operated by CPU101e, including many processes or programs running operating system101g. SPE101scould contain RAM memory equivalent to RAM101fand nonvolatile memory equivalent to storage memory101h, as well as a separately functioning processor on a smaller scale than CPU101e, such as possibly a dedicated processor core within CPU101e. SPE101scan comprise a “secure enclave” or a “secure environment”, based on the manufacturer of CPU101e. In some exemplary embodiments, an SPE101scan be omitted and the CPU101ecan conduct ECC operations or calculations without an SPE101s. RAM101fmay comprise a random access memory for server101. RAM101fcan be a volatile memory providing rapid read/write memory access to CPU101e. RAM101fcould be located on a separate integrated circuit in server101or located within CPU101e. The RAM101fcan include data recorded in server101for the operation when communicating with a plurality of devices103or a key server102. The system bus101nmay be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures including a data bus. System bus101nconnects components within server101as illustrated inFIG.1b, such as transferring electrical signals between the components illustrated. Server101can include multiple different versions of bus101nto connect different components, including a first system bus101nbetween CPU101eand RAM101f(which could be a memory bus), and a second system bus101nbetween CPU101eand WAN interface101jor LAN interface101k, which could be an I2C bus, an SPI bus, a PCI bus, or similar data busses. 
In exemplary embodiments, RAM101foperating with server101can record values and algorithmic steps or computer instructions for conducting an ECDH key exchange, including a key pair generation step101x, a secret X1211a(depicted inFIG.2bbelow) and also a secret X2212a(depicted inFIG.2bbelow). The depicted values and algorithms can be recorded in RAM101fso that CPU101ecan conduct ECC operations and calculations quickly using the values. The depicted values could also be recorded in other locations for longer-term or nonvolatile storage, such as within a server database101d. Additional or other values besides the ones depicted inFIG.1bcan also be recorded in RAM101fin order to support server101conducting the communications, steps, and message flows depicted inFIG.2abelow and other Figures herein. The operating system (OS)101gcan include Internet protocol stacks such as a User Datagram Protocol (UDP) stack, Transmission Control Protocol (TCP) stack, a domain name system (DNS) stack, a TLS stack, a DPP stack, etc. The operating system101gmay include timers and schedulers for managing the access of software to hardware resources within server101, where the hardware resources managed by OS101gcan include CPU101e, RAM101f, nonvolatile memory101h, and system bus101n, as well as connections to the IP network107via a WAN interface101j. The operating system101gshown can be appropriate for a higher power computing device with more memory and CPU resources (compared to a device103). Example operating systems101gfor a server101include Linux or Windows® Server, and other possibilities exist as well. Although depicted as a separate element within server101inFIG.1b, OS101gmay reside in RAM101fand/or nonvolatile memory101hduring operation of server101. 
As depicted inFIG.1b, OS101ginFIG.1bcan contain algorithms, programs, or computer executable instructions (by processor101eor SPE101s) for an ECDH key exchange algorithm220(depicted and described inFIG.2bandFIG.2ebelow), a key derivation function (KDF)216(depicted and described inFIG.2bandFIG.2ebelow), and also an ECC point addition operation214(depicted and described inFIG.2bandFIG.2ebelow). The algorithms could be included either (i) within the kernel of OS101g, or (ii) as a separate program or process loaded by OS101gand operated by OS101g. OS101gcan also read and write data to a secure processing environment SPE101s, if CPU101econtains SPE101s. Nonvolatile memory101hor “storage memory”101h(which can also be referred to herein as “memory101h”) within server101can comprise a non-volatile memory for long-term storage of data, including times when server101may be powered off. Memory101hmay be a NAND flash memory or a NOR flash memory and record firmware for server101, such as a bootloader program and OS101g. Memory101hcan record long-term and non-volatile storage of data or files for server101. In an exemplary embodiment, OS101gis recorded in memory101hwhen server101is powered off, and portions of memory101hare moved by CPU101einto RAM101fwhen server101powers on. Memory101h(i) can be integrated with CPU101einto a single integrated circuit (potentially as a “system on a chip”), or (ii) operate as a separate integrated circuit or a removable card or “disk”, such as a solid state drive (SSD). Storage memory101hcan also comprise a plurality of spinning hard disk drives in a redundant array of independent disks (RAID) configuration. Memory101hmay also be referred to as “server storage” and can include exemplary file systems of FAT16, FAT 32, NTFS, ext3, ext4, UDF, or similar file systems. As contemplated herein, the terms “memory101h”, “storage memory101h”, and “nonvolatile memory101h” can be considered equivalent. 
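The key derivation function (KDF)216referenced above is not specified in this paragraph; one common choice for deriving a symmetric ciphering key from an ECDH shared secret is HKDF (RFC 5869), sketched here from the standard library under that assumption (the salt, info, and secret values are illustrative):

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """HKDF (RFC 5869) with SHA-256: extract a PRK, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# An ECDH shared secret (shown as a fixed placeholder) is stretched into a
# 32-byte symmetric ciphering key, as a KDF such as 216 might do.
shared_secret = b"\x01" * 32
key = hkdf(shared_secret, salt=b"handshake-salt", info=b"symmetric-ciphering-key", length=32)
assert len(key) == 32

# The derivation is deterministic, so both endpoints of the exchange that
# hold the same shared secret and inputs compute the same symmetric key.
assert key == hkdf(shared_secret, b"handshake-salt", b"symmetric-ciphering-key", 32)
```

Whichever KDF the parameters104actually name, the shape is the same: shared secret in, fixed-length symmetric ciphering key out.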
As depicted inFIG.1b, non-volatile memory101hcan record a server database101d, a device static public key D1103c, and cryptographic parameters104. Exemplary data within a server database101dis depicted and described in connection withFIG.2dbelow. Although depicted inFIG.1bas recorded within memory101h, a server database101dcould also operate as a separate server from server101in a network105, and server101could query the server database101dusing a private network107a. The device static public key D1103ccould be received by server101from a device manufacturer or a device owner, or directly from device103through IP network107. In addition, as depicted inFIG.1b, memory101hcan record the parameters104which were depicted and described in connection withFIG.1aabove and alsoFIG.2dbelow. Server101can include a WAN interface101jto communicate with IP network107and a plurality of devices103, as depicted inFIG.1aabove (whereFIG.1adepicts a single device103). WAN interface101jcan comprise either a wired connection such as Ethernet or a wireless connection. For wireless configurations of server101, WAN interface101jcan comprise a radio, which could connect with an antenna in order to transmit and receive radio frequency signals. For a wireless configuration of server101, WAN interface101jwithin server101can provide connectivity to an IP network107through 3GPP standards such as 3G, 4G, 4G LTE, and 5G networks, or subsequent and similar standards. In some exemplary embodiments, server101can comprise a “g node b” or gNb in a 5G network (or equivalent functionality in 6G or subsequent networks), and WAN interface101jcan comprise a 5G radio access network (RAN) interface. WAN interface101jcan also comprise a wired connection such as digital subscriber line (DSL), coaxial cable connection, or fiber optic connection, and other possibilities exist as well without departing from the scope of the present disclosure. 
Server101may also operate a LAN interface101k, where LAN interface101kcan be used to connect and communicate with other servers in a network107, such as key server102through private network107a. LAN interface101kcan comprise a physical interface connected to system bus101nfor server101. In exemplary embodiments, LAN interface101kcan comprise an Ethernet or fiber optic wired connection. In other words, (i) LAN interface101kcan connect server101to private network107a(which could comprise an IP network with private IP addresses that are not globally routable), and (ii) WAN interface101jcan comprise an interface for communicating with a plurality of devices103through insecure networks such as the globally routable public Internet. The use of a separate WAN interface101jand LAN interface101kcan increase the security of operation for server101. However, the use of separate physical interfaces for LAN interface101kand WAN interface101jcan be omitted, and a single physical interface such as Ethernet or fiber-optic could be used by server101to communicate with both devices103and key server102. Server101may also optionally include user interface101m, which may include one or more sub-servers for receiving inputs and/or one or more sub-servers for conveying outputs. User interfaces are known in the art and may be simple for many servers101, such as a few LED lights or an LCD display, and thus user interfaces are not described in detail here. User interface101mcould comprise a touch screen or screen display with keyboard and mouse, if server101has sophisticated interaction with a user, such as a network administrator. Server101can optionally omit a user interface101m, if no user input or display is required for establishing communications within a network105and/or IP network107. Although not depicted inFIG.1b, server101can include other components to support operation, such as a clock, power source or connection, antennas, etc. 
Other possibilities exist as well for hardware and electrical components operating in a server101without departing from the scope of the present disclosure. Using the electrical components depicted inFIG.1b, a server101could send and receive the data106inFIG.1ain an encrypted and secure manner after conducting the authenticated ECDHE key exchange as contemplated herein, in order to derive a symmetric ciphering key to encrypt and decrypt messages within data106with a plurality of devices103. Although not depicted inFIG.1b, devices103such as the device103depicted inFIG.1aabove can include (a) equivalent internal electrical components depicted for a server101in order to (b) operate as devices103. A device103inFIG.1acould include a processor similar to CPU101e, with primary differences for the processor in a device being reduced speed, a smaller memory cache, a smaller number and size of registers, with an exemplary use of 32 bits for datapath widths, integer sizes, and memory address widths, etc., for a device103. In contrast, an exemplary 64 bit datapath could be used for CPU101ein server101(although device103could also use 64 bit wide datapath widths if device103comprises a mobile phone such as a smart phone). For embodiments where device103comprises a transducer device for sending and receiving transducer data with a network105, then a CPU in device103could comprise an exemplary 32 bit processor, although other possibilities exist as well. Similarly, RAM in a device103could be a RAM similar to RAM101fin server101, except the RAM in a device103could have fewer memory cells such as supporting exemplary values less than or equal to an exemplary 4 gigabytes, while RAM101fin server101could support more memory cells such as greater than or equal to an exemplary 8 gigabytes. 
In exemplary embodiments, the electrical and physical components of a key server can be equivalent to the electrical components for a server101inFIG.1b, with different data recorded in RAM101ffor a key server102, as well as different data recorded in memory101hfor a key server102. For example, a key server102could record the network static private key SK.network102bin memory101h, which could include secure disk storage using disk or file encryption. FIG.1c FIG.1cis an illustration of exemplary network static public keys recorded by a plurality of devices, in accordance with exemplary embodiments.FIG.1cdepicts PKI keys recorded for an exemplary three different devices103, although a system100and other systems herein could operate with potentially millions or more devices103. The data depicted for each device inFIG.1ccan comprise exemplary data for a network public key table103tfor a device103, which is also depicted and described in connection withFIG.1aabove. The exemplary values recorded for network static public keys depicts different embodiments where both (i) a device103can record a network static public key PK.network102athat is shared with other devices103, and (ii) the network static public key PK.network102arecorded by device103could be unique for device103(e.g. not shared with other devices103in a system100above or a system200below, as well as other systems herein). A network public key table103tfor device103can record values of a key identity, a network name for network105, an identity for server101comprising ID.server101i, and also a value for the network static public key PK.network102a. As depicted inFIG.1c, a device103can record multiple different values for use with multiple different networks105and/or servers101. 
The first two entries for network static public keys PK.network102afor a first device103(1) and a second device103(2) inFIG.1cdepict the same alphanumeric values for basE91 binary to text encoding for an exemplary network static public key PK.network102ain a first device103(1) and a second device103(2), where the key value is depicted for a network105of “Network A”. Likewise, the second two entries for network static public keys PK.network102afor a first device103(1) and a second device103(2) inFIG.1cdepict the same alphanumeric values for basE91 binary to text encoding for an exemplary network static public key PK.network102ain a first device103(1) and a second device103(2). Note that although a single value is depicted for PKI keys in a network public key table103t, the values or numbers for keys recorded could comprise a point on an ECC curve with both an X coordinate and a Y coordinate. For illustration purposes inFIG.1c, only the X coordinate is displayed and the Y coordinate could be calculated from the X coordinate using the equation for an ECC curve in a set of cryptographic parameters104afor the PKI keys. The depiction of these keys PK.network102aillustrates the use of shared keys102zfor a plurality of different devices103. Although only two devices are depicted with shared keys102z, many more devices could also record the same shared keys for PK.network102a. Each of the shared keys102zis associated with a different network105, identified with an exemplary different network name. In this manner, a plurality of different devices103can record and use the same value for a network static public key PK.network102a. As described above, the value in a table103tincluding network static public key PK.network102acould be written in device103before the device103sends the first message203inFIG.2abelow. 
The data could be recorded by a device manufacturer, a device distributor, or a device owner, and other possibilities exist as well for the source of PK.network102awithout departing from the scope of the present disclosure. The same values for shared keys102zacross different devices103could be recorded in device103during manufacturing or before distribution to end users of device103. In this manner, devices103could be received by end users in a “partially configured” yet secure state, such that a device103could use the recorded keys PK.network102awith a server101and/or network105, where a server101does not operate or record the corresponding network static private key SK.network102b. As depicted and described in connection withFIGS.2a,2b, etc. below, a key server102could record and operate with the corresponding network static private key SK.network102band thus the key SK.network102bcan remain secured and not distributed out or sent to a server101. In this manner, encrypted communications for data106inFIG.1acan be transferred between device103and server101without server101recording the key SK.network102b. This increases the security of a system100and other systems herein, because server101may be exposed to an IP network107while key server102recording the SK.network102bcan be connected to a private network107a. By using a set of shared keys102zacross a plurality of device103, a key server102or a network105can control access of the devices103as a group. For example, a network105could deny access to the private key corresponding to the public key for the first depicted value of PK.network102ain a first device103(1). That action by network105would also deny a second device103(2) access to the private key corresponding to the public key for the first depicted value of PK.network102ain the second device103(2). 
In this manner, network105could control access to a plurality of different devices103by controlling access to a single value of SK.network102b, where (i) the plurality of different devices103record the corresponding PK.network102aas shared keys102z. Other benefits for using shared keys102zcan be available as well, such as simplifying manufacturing or distribution, since the same key value for PK.network102acould be recorded with multiple different devices103. In other words, a device manufacturer or device distributor would not need to keep track of which value for PK.network102abelongs with which device103for embodiments where shared keys102zare utilized. However, the use of shared keys102zfor multiple different devices103is not required for some exemplary embodiments. In exemplary embodiments, network static public keys PK.network102acan also comprise a unique key for each device103in a system100and other systems herein. Thus, some exemplary embodiments also support the use of a network static public key PK.network102athat is not shared across multiple different devices103. For these exemplary embodiments, and as depicted inFIG.1c, a device103can record a unique key102v(depicted as “Per Device or Unique Network Static Public Keys102v” inFIG.1c). For example, the depicted value for the third key for device103(1), (2), and (3) inFIG.1cis shown as unique for each device. A key server102could also record the corresponding network static private key SK.network102bthat is unique for each device in a key server database102das depicted for unique keys102vinFIG.1a. In this manner, a network105can control access to server101and/or network105on a per-device basis using the unique key102v. 
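The group-versus-individual revocation behavior described above for shared keys102zand unique keys102vcan be sketched as a small lookup on the key server side, with all device identifiers and key names below being illustrative placeholders:

```python
# Devices 103(1) and 103(2) record the same shared PK.network 102a, so they
# map to one shared private key; device 103(3) has a per-device key 102v.
shared_key_devices = {"device_1", "device_2"}
unique_key_map = {"device_3": "SK.network-unique-3"}

# Private keys that key server 102 currently allows operations with.
active_private_keys = {"SK.network-shared-A", "SK.network-unique-3"}

def device_allowed(device_id: str) -> bool:
    """Key server 102 serves a device only if its matching private key is active."""
    if device_id in shared_key_devices:
        return "SK.network-shared-A" in active_private_keys
    return unique_key_map.get(device_id) in active_private_keys

# Initially all three devices can be served.
assert all(device_allowed(d) for d in ("device_1", "device_2", "device_3"))

# Denying the single shared key revokes the whole group at once,
# while the device with a unique key 102v is unaffected.
active_private_keys.discard("SK.network-shared-A")
assert not device_allowed("device_1") and not device_allowed("device_2")
assert device_allowed("device_3")
```

The trade-off is exactly the one described above: shared keys102zsimplify manufacturing and allow group-level control, while unique keys102vallow per-device control.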
For example, key server102could deny access to device103(3) (while continuing to allow service for device103(1) and103(2)), by denying access or cryptographic operations with the secret key SK.network102bin a key server102corresponding to the public key PK.network102arecorded by device103(3). Other benefits for recording network static public keys PK.network102aas unique keys102vfor devices103exist as well without departing from the scope of the present disclosure. Although not depicted inFIG.1c, each row or network static public key PK.network102acould also be stored with a set of cryptographic parameters104a, such as specifying an ECC named curve associated with the public key102a. A network105or a server ID101icould be associated with multiple different network static public keys PK.network102a, where the different keys102afor the same network105or server ID101iare associated with different parameters104a. Although depicted as alphanumeric values for the network static public key PK.network102a, a network public key table103tcould store the public key102aas separate certificates for the public keys. In addition, a network public key table103tcould store a secure hash value for the network static public key PK.network102a, where the secure hash algorithm104dfor the secure hash value could be specified by parameters104, as depicted and described in connection withFIG.2dbelow. In addition, a table103tcould include a key server identity102iassociated with the network static public key PK.network102a.

FIG.2a

FIG.2ais a simplified message flow diagram illustrating an exemplary system with exemplary data sent and received by a device, a server, and a key server, in accordance with exemplary embodiments. System200can include a device103, server101, and a key server102. Device103was depicted and described in connection withFIG.1a, andFIG.1cabove.
Server101and key server102were depicted and described in connection withFIG.1aabove, and server101was depicted and described in connection withFIG.1babove. Server101can record and operate a server database101d, and key server102can record and operate a database102d. Individual steps and components used in system200inFIG.2aare also additionally depicted and described in subsequentFIGS.2b,2c, and2d. Before starting the steps and message flows depicted inFIG.2a, device103can securely receive and record a network public key PK.network102a, which was also depicted and described in connection withFIG.1aandFIG.1c. The corresponding private key for PK network102acan be securely recorded in key server102within network105as SK.network102b. For system200, server101and key server102may establish a secure session221, which could comprise establishing a secure communications link between the two servers using protocols such as TLS, IPSec, a virtual private network (VPN), a secure shell (SSH), or similar networking, transport, or application layer technologies in order to establish secure communications between key server102and server101. Secure session221can utilize certificates for the two servers in order to provide mutual authentication and mutual key derivation for a symmetric encryption key in secure session221. Secure session221can also be conducted over private network107a, although the secure session221could also be established or conducted through an IP network107such as the globally routable Public Internet. Other possibilities exist as well for establishing a secure session221between server101and key server102without departing from the scope of the present disclosure. Although not depicted inFIG.2a, firewalls between server101and key server102could also be utilized in order to establish or conduct secure session221. 
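The establishment of secure session221with TLS can be sketched with Python's standard `ssl` module; the protocol version floor and the commented-out file names below are illustrative assumptions, since the text equally allows IPSec, a VPN, or SSH for the session between server101and key server102:

```python
import ssl

# A sketch of how a server (101) might configure its client side of secure
# session 221 toward a key server (102), requiring certificate verification
# as a basis for mutual authentication.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# Placeholder file names (not from the specification); a real deployment
# would load the key server's CA certificate and server 101's own
# certificate/key so both sides can authenticate each other:
# ctx.load_verify_locations(cafile="key_server_ca.pem")
# ctx.load_cert_chain(certfile="server101.pem", keyfile="server101.key")
```

The wrapped socket produced by `ctx.wrap_socket(...)` would then carry messages such as message206aand message206bover private network107a.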
At step201b, server101can begin listening for incoming messages from a device103using a physical network interface such as WAN interface101jthat provides connectivity to the IP network107and server101can use a specific port number such as, but not limited to, TCP port443to listen for incoming data106from a device103. At step201a, device103can be powered on and begin operating, in order to establish connectivity with an IP network107. At step202, device103can read an address for server101from memory or a network public key table103t, and the address can comprise a DNS name or an IP address for server101. The DNS name or IP address for server101could be recorded or received along with the key PK.network102a, or device103could conduct a DNS query to obtain the address. At step202, device103can also read the set of cryptographic parameters104and select a subset of the cryptographic parameters104ain order to establish communications with server101. An exemplary subset of cryptographic parameters104ain a step202can comprise a member of the set of cryptographic parameters104depicted and described in connection withFIG.2dbelow (e.g. one line of values in cryptographic parameters104inFIG.2dbelow). In step202, device103can select a subset of cryptographic parameters104athat is compatible with PK.network102a. The subset of cryptographic parameters104athat are compatible with PK.network102acould also be recorded in nonvolatile memory in device103along with network public key PK.network102aat the time PK.network102awas recorded or received by device103. A step202can also comprise device103using a random number generator in order to output a random number202afor use in subsequent communications with server101. Although the term “random number” is described herein, a random number could comprise a pseudo random number processed by device103using information entropy available to device103.
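The random number generation in a step202can be sketched as follows; the function name and the assumption that random length104gis expressed in bits are illustrative:

```python
import secrets

def generate_random_202a(random_len_104g_bits=256):
    """Output a random number (202a) for use in a message (203), drawn from a
    cryptographically strong entropy source. The bit length is assumed here to
    come from the selected subset of cryptographic parameters (104a)."""
    return secrets.token_bytes(random_len_104g_bits // 8)

random1_202a = generate_random_202a()  # a "number used once" (nonce) for message 203
```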
The random number202aprocessed in a step202could contain the number of bits specified by a selected subset of cryptographic parameters104, such as a random length104g. Random number202agenerated or derived by a device103in a step202could also comprise a “number used once” (nonce). Device103can then conduct a key pair generation step103xas depicted and described in connection withFIG.1aabove using the selected subset of cryptographic parameters104a. The parameters104could specify a named curve and parameters to derive a device ephemeral private key ed103band a device ephemeral public key Ed103a. The device ephemeral private key ed103bcan comprise a random number generated using a random number generator. The device ephemeral public key Ed103acould be derived using (i) ECC point multiplication from a base point G for a named curve within cryptographic parameters104aand (ii) the device ephemeral private key ed103b. Other possibilities exist as well for the steps a device103can use in a key pair generation step103xwithout departing from the scope of the present disclosure. Device103can then use (i) the recorded address for server101(possibly from a table103t) and (ii) connectivity to IP network107from step202to send a message203to server101. Message203and other messages contemplated herein can be sent as either TCP or UDP messages, and other possibilities exist as well for the formatting and transfer of messages without departing from the scope of the present disclosure. In exemplary embodiments, device103both uses an IP address and port number as a source IP address and port to send message203to server101and then also uses the same IP address and port number to listen for responses or messages from server101. In this manner, device103can send a message203and receive a response message206cbelow through an IP network107, where intermediate nodes on the IP network107may conduct network address translation (NAT) routing.
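A key pair generation step103xcan be sketched in standard-library Python. The curve secp256k1 below is only an example stand-in for whichever named curve parameters104aselect, and the textbook double-and-add point multiplication is for illustration, not production use:

```python
import secrets

# secp256k1 domain parameters (an illustrative named curve)
P = 2**256 - 2**32 - 977  # prime defining the finite field
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # curve order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    # Affine elliptic-curve point addition; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    lam = ((3 * x1 * x1) * pow(2 * y1, -1, P) if p1 == p2
           else (y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    # Double-and-add scalar multiplication (the "ECC point multiplication" above).
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Step 103x: the private key ed (103b) is a random scalar, and the public key
# Ed (103a) is derived by point multiplication from the base point G.
ed_103b = secrets.randbelow(N - 1) + 1
Ed_103a = ec_mul(ed_103b, G)
```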
Message203can include the random number random1202afrom a step202, the device ephemeral public key Ed103a, and the subset of cryptographic parameters104a. Message203may also optionally include a device identity of ID.device103i, but the device identity of ID.device103ican also be omitted from a message203in some exemplary embodiments. For embodiments where message203optionally excludes device identity ID.device103i, an identity for device103can optionally be transmitted in later messages. Omitting ID.device103ifrom message203can increase security for message203since an identity for device103would not be sent as plaintext in a message203. Although not depicted inFIG.2a, message203could also optionally include an identity for key server102comprising ID.key-server102i, such that server101can determine which key server102should be associated with message203. Note that an identity for key server102of ID.key-server102ican be omitted from a message203, and server101can select a key server102from other means in a step205bbelow. As depicted inFIG.2a, message203could also optionally include a secure hash value250(also depicted inFIG.2dbelow) such as, but not limited to, SHA-256 of the network static public key PK.network102a. Device103can send the hash value250of key102ato server101in a message203, in order for server101to identify which of a plurality of possible key servers102could be used to process data within message203, which is further described for a step205bbelow. For embodiments where a secure hash value250of key102ais included in a message203, then the message203could optionally exclude the selected subset of cryptographic parameters104aassociated with keys PK.network102aand Ed103a. For other embodiments, a key identity for key102acould be selected by device103from a table103tand the key identity for key102acould be sent in a message203instead of a hash value250for key102a.
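The role of the secure hash value250in routing a message203can be sketched as follows; the key bytes, dictionary layout, and parameter names are hypothetical placeholders:

```python
import hashlib

# Hypothetical byte encoding of a network static public key PK.network (102a);
# a real system would hash the key's canonical encoding (e.g., an uncompressed point).
pk_network_102a = bytes.fromhex("04" + "11" * 64)

# Secure hash value (250): SHA-256 over the public key, where the hash
# algorithm (104d) could be specified by parameters (104).
hash_250 = hashlib.sha256(pk_network_102a).hexdigest()

# Device side: send the hash (instead of the key or the parameters) in message 203.
message_203 = {"random1_202a": b"...", "Ed_103a": b"...", "hash_250": hash_250}

# Server side: a server database (101d) mapping hash 250 to the parameters 104a
# to use for the received key Ed (103a).
server_db_101d = {hash_250: {"params_104a": "P-256"}}
params_104a = server_db_101d[message_203["hash_250"]]["params_104a"]
```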
A server101and key server102could store the key identity for key102aand select the key102ausing the key identity for key102a. Server101receiving the message203with the hash value250could determine the set of parameters104ato use for key Ed103abased on the hash value250. For example, and as depicted inFIG.2dbelow, a server database101dcould maintain mapping of hash values250and parameters104a, and server101could conduct a query of database101dusing the received hash value250in order to select the parameters104afor further processing and cryptographic operations with key Ed103a. Or, in an exemplary embodiment cryptographic parameters104aas transmitted via an IP network107or private network107acould comprise the secure hash250of key102a, where the secure hash250of key102acan specify which subset of a set of cryptographic parameters104to utilize for ECC operations (in other words the subset of parameters104can comprise parameters104a). For embodiments where device103uses a unique key102v, then the secure hash value250can also comprise a device identity103i(since the secure hash value250would be unique for device103). Secure hash value250could also be omitted from message203in some exemplary embodiments. Server101can receive message203and begin conducting steps in order to process the message. At step204, server101can read the subset of cryptographic parameters104ain the message203and begin using the subset of cryptographic parameters104a. Or, for embodiments that include hash value250, then parameters104acould be omitted from message203and server101could select the parameters104afrom a server database101dusing the hash value250, such as with the exemplary server database depicted inFIG.2dbelow. At step204, server101can conduct a public key validation step on received device ephemeral public key Ed103ain order to ensure the key is valid and on the selected curve in parameters104a.
Step204by server101can comprise conducting the steps for an ECC Full Public-Key Validation Routine in section 5.6.2.3.2 of FIPS publication SP 800-56A (revision 2) for the received device ephemeral public key Ed103a. Alternatively, step204can comprise server101performing the steps of an ECC Partial Public-Key Validation Routine in section 5.6.2.3.3 of the same FIPS publication. Other example steps within a public key validation step204can comprise (i) verifying the public key is not at the “point of infinity”, and (ii) verifying the coordinates of the point for the public key are in the range [0, p-1], where p is the prime defining the finite field. Other possibilities exist as well for evaluating and validating a received public key is cryptographically secure in a public key validation step204, without departing from the scope of the present disclosure. As contemplated in the present disclosure a device103, server101, and key server102can conduct a public key validation step204each time a public key or a point on an elliptic curve is received. At step205aand after a key validation step204, server101can record the data received from the message203in a server database101d. Exemplary values and data for a server database101dare depicted and described in connection withFIG.2dbelow. At step205a, server101can record in server database101dthe values of random number202a, device ephemeral public key Ed103a, and the subset of cryptographic parameters104a. For embodiments where device identity ID.device103iis also received in message203, then server101can also record device identity ID.device103iin server database101d. A step205acan also include (i) storing both Ed103aand random1202ain database101d, and (ii) confirming that Ed103aand random1202aare not reused. Security of a system200and system100and other systems herein can be increased through prohibiting the reuse of ephemeral PKI key pairs and also random numbers.
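A public key validation step204along the lines above can be sketched in Python; secp256k1 (with curve equation y² = x³ + 7) is used as an illustrative curve, and the checks are a partial validation in the spirit of section 5.6.2.3.3:

```python
# secp256k1 parameters, standing in for the selected curve in parameters 104a
P = 2**256 - 2**32 - 977  # prime defining the finite field
B = 7                     # curve constant in y^2 = x^3 + 7

G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def validate_public_key_204(point):
    """Partial public-key validation: (i) the key is not the point at infinity
    (represented here as None), (ii) both coordinates are in the range [0, p-1],
    and (iii) the point satisfies the curve equation."""
    if point is None:
        return False
    x, y = point
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + B)) % P == 0
```

A node failing this check on a received key Ed103acould reply with a reject message rather than proceeding to the ECDH steps.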
If numbers or keys are reused, then server101could respond with a request for device103to generate a new ephemeral PKI key pair and/or random number202abefore proceeding to further steps. For embodiments requiring higher security, then hash values for received keys Ed103acould be stored in a database101d(instead of the value of key Ed103a), and a new Ed103areceived by server101could be determined as new or reused by calculating a hash value for the received key Ed103aand comparing with stored values for Ed103a. At step205a, server101can also record the originating source IP address and port number203a(depicted inFIG.2dbelow) for message203, in order to subsequently transmit a message206cbelow back to the same IP address and port number. In this manner, message206cbelow can be routed by intermediate nodes on IP network107back to the source IP address and port number used by device103to transmit message203. In other words, (i) the destination IP address and port number of a subsequent message206cfrom server101to device103can comprise the source IP address and port number203a(depicted inFIG.2dbelow) received in message203, and (ii) the source IP address and port number203a(depicted inFIG.2dbelow) from message203can be recorded in a server database101d. In this manner, device103can be tracked or identified by server101during the brief period of time of the message flows inFIG.2ausing the source IP address and port number from message203for embodiments where device identity ID.device103iis not included in message203. A step205acan also comprise server101generating a second random number205rusing parameters104afor use in subsequent messages with device103. The first random number can comprise random number random1202aderived by device103. At step205b, server101can select key server102for subsequent communications and processing of the received device ephemeral public key Ed103a.
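The reuse checks of a step205acan be sketched as follows, storing only hashes of received keys as the higher-security embodiment suggests; the database layout is a hypothetical stand-in for server database101d:

```python
import hashlib

# Minimal stand-in for the relevant columns of a server database (101d):
# SHA-256 hashes of previously received keys Ed (103a), plus seen randoms (202a).
server_db_101d = {"ed_key_hashes": set(), "random1_values": set()}

def is_fresh_205a(ed_103a_bytes, random1_202a, db=server_db_101d):
    """Return True (and record the values) only if neither the ephemeral public
    key nor the random number has been seen before; on False, the server could
    request a new ephemeral PKI key pair and/or random number from the device."""
    key_hash = hashlib.sha256(ed_103a_bytes).digest()
    if key_hash in db["ed_key_hashes"] or random1_202a in db["random1_values"]:
        return False
    db["ed_key_hashes"].add(key_hash)
    db["random1_values"].add(random1_202a)
    return True
```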
Note that a system100could comprise both a plurality of devices103and a plurality of key servers102. In exemplary embodiments server101should select in step205bthe proper key server102for conducting subsequent steps inFIG.2a. In other words, without data or values from a message203, server101may not know which of a possible plurality of key servers102may record the network static private key SK.network102bfor use with or associated with device ephemeral public key Ed103a. Server101could use one of several possible methods for selecting key server102in a step205b, including a combination of the following embodiments. A first embodiment for selecting key server102in a step205bcould comprise server101selecting the same key server102for all keys Ed103afrom all devices103. For example for this first method, server101could listen or operate on (i) a specific IP address and port number or (ii) with a specific DNS name or server name indicator (SNI) in step201b, where the use of (i) or (ii) could be specified or associated with network static public key PK.network102a. As mentioned above for a step201a, device103can select the address of server101using the server address of server101recorded with PK.network102a(possibly from a table103tinFIG.1c). Server101could determine that all messages203received using (i) or (ii) are associated with a specific key server102. For this first embodiment of a step205b, a plurality of devices103could store shared keys102zfor PK.network102a, as depicted and described in connection withFIG.1c. A second embodiment of a step205bfor selecting key server102of received device ephemeral public key Ed103acould comprise using an identity of key server102in a message203from device103. As described above for a message203, the message203can optionally include an identity for key server102comprising ID.key-server102i. For these embodiments, server101can select the key server102using the ID.key-server102iin message203.
A third embodiment for a step205bof selecting key server102for received device ephemeral public key Ed103acould comprise using an identity of device103in a message203comprising ID.device103i. As described above for a message203, the message203can optionally include an identity for device103, and server101using database101dcould include a table to map ID.device103ito key server102. For this third embodiment of a step205b, server101could conduct a query of server database101dto select the key server102for device103using ID.device103i. A fourth embodiment for a step205bto select a key server102for received device ephemeral public key Ed103acould comprise using the subset of cryptographic parameters104ain a message203from device103. Server101could record that a first subset of cryptographic parameters104aare associated with a first key server102, and a second subset of cryptographic parameters104aare associated with a second key server102, etc. A fifth embodiment for a step205bto select a key server102for received device ephemeral public key Ed103acould comprise message203including a secure hash value250(inFIG.2d) of network static public key PK.network102a, and server101with database101dcould include a table to map the secure hash value250of PK.network102ato key server102. Other possibilities exist as well for server101to conduct a step205bto select a key server102using data in a message203without departing from the scope of the present disclosure. For embodiments of step205b, the selection of key server102can comprise the selection of an identity for key server102of key server identity102i, and subsequent data such as message206acould be sent or routed through private network107ausing the key server identity102i. After selecting key server102in a step205b, server101can then send key server102a message206athrough the secure session221.
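The selection logic of a step205bcould be sketched as a series of table lookups; the ordering of the embodiments, the key server names, and the dictionary contents below are illustrative assumptions:

```python
# Hypothetical routing tables a server (101) might keep in its database (101d).
by_key_server_id = {"ks-1": "ks1.internal"}                  # second embodiment: ID.key-server (102i)
by_device_id     = {"device-77": "ks1.internal"}             # third embodiment: ID.device (103i)
by_hash_250      = {"ab12cd": "ks2.internal"}                # fifth embodiment: hash 250 of PK.network
by_params_104a   = {"P-256": "ks1.internal",
                    "P-384": "ks2.internal"}                 # fourth embodiment: parameters 104a

def select_key_server_205b(msg_203, default_ks="ks1.internal"):
    """Try the selectors in a fixed (illustrative) order of preference, falling
    back to a single default key server, which corresponds to the first
    embodiment of selecting the same key server 102 for all devices."""
    if "id_key_server_102i" in msg_203:
        return by_key_server_id[msg_203["id_key_server_102i"]]
    if msg_203.get("id_device_103i") in by_device_id:
        return by_device_id[msg_203["id_device_103i"]]
    if msg_203.get("hash_250") in by_hash_250:
        return by_hash_250[msg_203["hash_250"]]
    if msg_203.get("params_104a") in by_params_104a:
        return by_params_104a[msg_203["params_104a"]]
    return default_ks
```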
Message206acan include an identity for server101comprising ID.server101i, the received device ephemeral public key Ed103a, and the subset of cryptographic parameters104a. For embodiments where device identity ID.device103iwas included in a message203, then ID.device103icould be included in a message206aas well. However, device identity ID.device103icould be omitted from a message203and for these embodiments then message206acan exclude device identity ID.device103ias well. Server identity ID.server101ican be useful for communications between key server102and server101for a system100and system200, since either (i) server101may communicate with a plurality of different key servers102, and/or (ii) key server102may communicate with a plurality of different servers101. Server101can then conduct a key pair generation step101xas depicted and described in connection withFIG.1aabove using the selected subset of cryptographic parameters104a. The parameters104acould specify a named curve and parameters to derive a server ephemeral private key e1101band a server ephemeral public key E1101a. The server ephemeral private key e1101bcan comprise a random number generated using a random number generator. The server ephemeral public key E1101acould be derived using (i) ECC point multiplication from a base point G for a named curve within cryptographic parameters104aand (ii) the server ephemeral private key e1101b. Although message206ais depicted inFIG.2aas transmitted or sent by server101to key server102before server101derives ephemeral server PKI keys in a step101x, a message206acould be sent by server101after server101conducts the step101x. Key pair generation step101xcan also confirm that the server ephemeral PKI key pair for server101is not reused, such as storing hash values for public keys E1101ain a database101dand then comparing the hash value for a new key E1101afrom a step101xwith the stored hash values.
If the derived new key E1101amatches a stored hash value from a database101d, then the new key E1101acould be discarded and a different key E1101aderived. Key server102can receive the message206avia the secure session221and conduct a series of steps to process the message and respond. A first step conducted by key server102can comprise a key validation step204, where the key validation step204conducted by key server102can be equivalent or compatible with the key validation step204conducted by a server101as described above. For a key validation step204, a node can reply with a failure or reject message if the key validation step204fails, such as if a received ECC public key fails to fall on the named elliptic curve as specified by a subset of cryptographic parameters104a. At step205c, key server102can use data from message206ain order to select a network static private key SK.network102bfor subsequent steps such as a step211. For embodiments where message206aincludes either (i) an identity for device103such as ID.device103i, or (ii) identifying information for SK.network102bfor key server102to utilize (such as hash250of the public key PK.network102afor SK.network102b), then key server102could use the identifying information in message206ato select the network static private key SK.network102bfrom a key server database102d, where an exemplary key server database102dis depicted and described in connection withFIG.1aabove. For some exemplary embodiments, the key server database102dcan record a network static private key SK.network102bfor each set of cryptographic parameters104a, and subsequently select the key102busing the parameters104areceived in a message206a. In other words, an identity for device103or hash250of PK.network102acould be omitted, and a key server102could use a step205cto select a network static private key SK.network102busing a set of cryptographic parameters104a.
Key server102can then conduct an ECDH key exchange step211using (i) the selected network static private key SK.network102b, (ii) the received device ephemeral public key Ed103a, and (iii) the set of cryptographic parameters104a. Exemplary details for an ECDH key exchange step211are depicted and described in connection withFIG.2bbelow. The output of an ECDH key exchange step211can comprise point X1211a. Key server102can then send server101a message206b, where the message206bincludes point X1211a, as well as an identity for key server102comprising ID.key-server102iand cryptographic parameters104aassociated with point X1211a. Message206bcan be transmitted through secure session221. If device identity103ior other identifying information such as hash250was included in message206a, then message206bcould also include device identity103ior the other identifying information for a device103. Or, both message206aand message206bcan include a transaction identity or session identity, such that server101can associate the received value X1211awith a received device ephemeral public key Ed103a. Server101can receive message206bwith point X1211aand conduct a series of steps in order to derive a mutually shared and authenticated key exchange with device103. As contemplated herein, the authentication performed by server101can comprise a “one-way” authentication with device103. Authentication of server101or network105can be provided by the depicted key exchange with steps211and213, since network105from system100with both server101and key server102conducts an ECDH key exchange using at least, in part, the network static private key SK.network102b. The “one-way” authentication from the ECDH key exchange is also not completed until both sides have successfully used a symmetric ciphering key derived from the ECDH key exchange.
In other words, a device that successfully mutually derives a symmetric ciphering key with a server101can authenticate that server101has secure access to the network static private key SK.network102b. One benefit of the system depicted inFIG.2ais that the network static private key SK.network102bdoes not need to be recorded by or operated with server101. Further authentication of both parties can be completed via other means including digital signatures in later steps, and the “one-way” authentication in this paragraph refers to the authentication that results from using the ECDH key exchange using at least network static private key SK.network102b. Note that the authenticated ECDH key exchange depicted inFIG.2a, with additional details in subsequent Figures, can solve problems in the art discussed in the Description of Related Art. Specifically, through the use of a PK.network102arecorded by a device and SK.network102brecorded by a network105, combined with the use of ephemeral PKI keys for both device103and server101, the depicted and described ECDH key exchange herein can simultaneously achieve both (i) authentication of a network105with device103and (ii) forward secrecy. As discussed in the Description of Related Art, a device103may not have full access to the Internet (such as other servers or networks besides those for a network105), or may have other resource limitations such as not storing (x) intermediate certificate authority certificates for servers or (y) compatible parameters or algorithms for intermediate certificate authority certificates for servers, and consequently device103may not be able to readily verify a certificate for server101such as cert.server101cwithout storing and using (x) and (y) above. The mutually authenticated ECDH key exchange with forward secrecy depicted inFIG.2aand subsequent Figures herein supports devices with those limitations.
Other benefits are possible as well, such as faster and less resource-intensive authentication of a network105with device103. After receiving message206b, server101can conduct a point validation step204afor received value or point X1211a. Note that point validation step204ais related to a key validation step204and can use several of the same sub-steps depicted and described for a key validation step204for server101above. A point validation step204ais different than a key validation step204since (i) the value X1211ais preferably not used as a public key to be shared with other parties, but rather (ii) represents a point on the ECC curve from parameters104athat will subsequently undergo a point addition operation in order to mutually derive a shared secret with device103. Further, point X1211acan be received through a secure session221with a trusted party comprising key server102, and thus the point X1211acan have a higher level of confidence or trust as being correct and properly formatted than a device ephemeral public key Ed103areceived potentially via the Public Internet. A point validation step204afor server101can comprise verifying that received point X1211ais on the ECC curve as specified in parameters104aand that the point is not the “point at infinity”. Other possibilities exist as well for conducting a point validation step204aon the received point X1211awithout departing from the scope of the present disclosure. After conducting a point validation step204a, server101can then conduct an ECDH key exchange step212, where a key exchange step212is depicted and described in connection withFIG.2bbelow. In summary, server101can input (i) the server derived ephemeral private key e1101bfrom a step101xand (ii) the received device ephemeral public key Ed103afrom message203into an ECDH key exchange algorithm220(inFIG.2b) in order to calculate a point X2212a. Server101can then conduct a key derivation step213as depicted and described in connection withFIG.2bbelow.
In summary, server101can conduct an ECC point addition step214(inFIG.2b) using both (i) point X1211afrom message206band (ii) point X2212afrom step212in order to mutually derive a shared secret X3213a. Shared secret X3213acan be input into a key derivation function in order to output a symmetric ciphering key K1216aand also optionally a MAC key. Server101can then conduct a step207ato create a digital signature101s, using an elliptic curve digital signature algorithm (ECDSA) over the values of at least, in part, random number random1202aand random number random2205r. The ECDSA could use (i) the private key corresponding to the public key in certificate cert.server101cas (ii) the private key for creating digital signature101sin a step207a. The ECDSA can be compatible with IETF RFC 6979, IETF RFC 4574, and also related FIPS standards or other standards for digital signatures using ECC PKI keys. Additional data to sign for signature101sin a step207acould comprise the cryptographic parameters104aand the certificate cert.server101c. In addition, other digital signature algorithms besides ECDSA could be used in a step207asuch as the use of RSA based digital signature algorithms, or even post-quantum cryptography algorithms. If other digital signature algorithms besides ECDSA are used in a step207a, then the public key in certificate cert.server101cand corresponding private key can support the other digital signature algorithms. In general, the digital signature algorithms used to create digital signature101scan support cryptographic algorithms and PKI keys that are different than the set of cryptographic algorithms104in order to conduct a mutually authenticated ECDH key exchange with forward secrecy as contemplated herein. Server101can then conduct an encryption step217(i) using the key K1216aoutput from key derivation step213in order to (ii) create a ciphertext1217b. 
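The key derivation function in a step213is not pinned to a specific construction in this passage; one common choice is HKDF per RFC 5869, which can be built from the standard library's HMAC as a sketch (the `info` labels and the placeholder shared secret are illustrative assumptions):

```python
import hashlib
import hmac

def hkdf_sha256(shared_secret, salt=b"", info=b"", length=32):
    """HKDF (RFC 5869) with SHA-256: extract a pseudorandom key from the ECDH
    shared secret, then expand it to the requested output length."""
    prk = hmac.new(salt or b"\x00" * 32, shared_secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder bytes standing in for an encoding of the shared secret X3 (213a);
# distinct info labels separate the ciphering key from the optional MAC key.
x3_213a = b"\x01" * 32
k1_216a = hkdf_sha256(x3_213a, info=b"key")  # symmetric ciphering key K1 (216a)
mac_key = hkdf_sha256(x3_213a, info=b"mac")  # optional MAC key
```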
Exemplary details for an encryption step217are depicted and described in connection withFIG.2cbelow, and an encryption step217can use a symmetric ciphering algorithm. The plaintext within ciphertext1217bcan comprise at least, in part, the random number random1202aand random number random2205r. Other data could be included in plaintext for ciphertext217bsuch as the certificate cert.server101c, digital signature101s, as well as parameters104a, without departing from the scope of the present disclosure. For some exemplary embodiments the use or inclusion of a certificate cert.server101cand digital signature101sfor plaintext in ciphertext217bcould be omitted, since the mutually derived symmetric ciphering key K1216acan be derived with authentication of server101and network105to device103. Server101can then send device103a message206c, where the destination IP address and port number of message206ccan comprise the source IP address and port number203areceived with message203and recorded in server database101d. Message206ccan include the server ephemeral public key E1101aand the ciphertext1217b, as depicted inFIG.2a. The value “K1216a” depicted inFIG.2ais shown to illustrate that the derived symmetric ciphering key216afrom a key derivation step213is used to encrypt ciphertext1217b(indicated by the brackets shown inFIG.2afor message206c), and the value K1216ais not normally transmitted as plaintext or ciphertext in message206c. Ciphertext1217bcan include plaintext values of random number random1202a, parameters104a, certificate cert.server101c, random number random2205r, and signature101s. Other data could be included as plaintext in ciphertext217bsuch as extensions for a TLS or DTLS handshake, data supporting an application for device103, and other possibilities exist as well.
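An encryption step217would normally use a standardized symmetric ciphering algorithm such as AES-GCM; to keep this sketch standard-library-only, a toy encrypt-then-MAC construction (SHA-256 keystream plus an HMAC tag) stands in for the real cipher and should not be used in production. The keys, nonce, and plaintext are placeholders:

```python
import hashlib
import hmac

def _keystream(key, nonce, length):
    # Toy keystream from SHA-256(key || nonce || counter); a stand-in for a
    # real cipher's keystream, for illustration only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_217(k1_216a, mac_key, nonce, plaintext):
    # Encrypt-then-MAC: ciphertext1 (217b) is the XOR-encrypted plaintext
    # followed by a 32-byte authentication tag over nonce and ciphertext.
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(k1_216a, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def decrypt_219(k1_216a, mac_key, nonce, ciphertext1_217b):
    # Verify the tag first, then recover the plaintext by XOR with the keystream.
    ct, tag = ciphertext1_217b[:-32], ciphertext1_217b[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(k1_216a, nonce, len(ct))))

k1_216a = b"\x11" * 32  # placeholder for the mutually derived key K1 (216a)
mac_key = b"\x22" * 32  # placeholder MAC key
nonce = b"\x00" * 12
plaintext = b"random1-202a||random2-205r"  # illustrative plaintext contents
ciphertext1_217b = encrypt_217(k1_216a, mac_key, nonce, plaintext)
```

Echoing random1202ainside the ciphertext lets device103confirm that the response is bound to its own message203once decryption step219succeeds.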
As depicted inFIG.2a, the series of steps and messages beginning with step201afor device103through the receipt of message206cby device103can comprise a step222, where the combined step222can be used in additional embodiments depicted below. As contemplated in the present disclosure, a message such as message206cand also other messages such as message203, message206a, etc. can be transmitted or sent in parts, where the data for the message can be transmitted and received in separate datagrams or portions over time. For these embodiments, the message can comprise the collection of separate datagrams or portions transmitted or sent separately. For example, with separate datagrams or portions for a message206cinFIG.2a, a first datagram or portion for message206ccould comprise server ephemeral public key E1101a, which could be sent (i) after a key pair generation step101x, and (ii) before receiving message206afrom key server102. A second datagram or portion for message206ccould comprise ciphertext1217b, which could be sent after server101receives message206afrom key server102. In this manner, by sending message206cas a first portion and a second portion, the overall speed of conducting a step223for device103could be increased. For example, by receiving the first portion of message206ccomprising key E1101a, device103could then (a) begin conducting steps204and218below, while (b) waiting for the second portion of message206ccomprising ciphertext1217bto be sent separately and after the first portion. By increasing the overall speed for conducting a step223for device103, electrical power consumption or battery usage for device103can be reduced. Other possibilities and benefits exist from sending a message in the present disclosure as a first portion and a second portion, without departing from the scope of the present disclosure.
Messages depicted and described herein may be sent and received as multiple portions over time, where the message can comprise the collection of the multiple portions. Device103can then receive message206cand conduct a series of steps in order to process the message. Device103can conduct a key validation step204in order to verify that server ephemeral public key E1101ain message206cis properly formatted and is a valid point on the named curve for parameters104a. Validation step204for device103can be equivalent to the validation step204for server101described above. Device103can then conduct an ephemeral ECDH (ECDHE) key exchange step218in order to mutually derive symmetric ciphering key K1216a. Details for an ECDHE key exchange step218are depicted and described in connection withFIG.2cbelow. In summary, device103, using parameters104a, can perform an elliptic curve point addition operation on (i) the server ephemeral public key E1101areceived in message206cand (ii) the recorded network static public key PK.network102a. Device103can input (i) the point derived from ECC point addition and (ii) the device ephemeral private key ed103binto an ECDH key exchange algorithm in order to mutually derive shared secret key X3215with server101. The mutual derivation of shared secret key X3215by server101is depicted and described in connection with key exchange step213for server101inFIG.2bbelow. Device103can input shared secret key X3215into a key derivation function in order to mutually derive symmetric ciphering key K1216a. Note that a MAC key could also be derived in step218. Device103can then perform a decryption step219in order to decrypt ciphertext1217bfrom message206cusing the derived symmetric ciphering key K1216afrom the key exchange step218, where symmetric ciphering key K1216awas derived as described in the paragraph above. A decryption step219is also depicted and described in connection withFIG.2cbelow.
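A minimal sketch of the key validation in a step204, assuming the named curve in parameters104ais secp256r1, could check that the received coordinates are within the field and satisfy the curve equation; full public-key validation per NIST SP 800-56A would additionally reject the point at infinity. The function name below is illustrative.

```python
# secp256r1 (NIST P-256) domain parameters: y^2 = x^3 + ax + b over GF(p)
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_valid_point(x: int, y: int) -> bool:
    """Return True if (x, y) is a properly ranged point on secp256r1."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The curve generator is a valid point; a corrupted Y coordinate is not.
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
```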
Device103can then read the plaintext within ciphertext1217b, as well as verify message integrity of ciphertext1217busing a MAC key derived in a step218. Device103in a decryption step219can read the plaintext values of random number random1202a, random number random2205r, and certificate cert.server101c, as well as a digital signature101s. Note that digital signature101scan be over at least the random number random1202athat device103sent in a message203. At step208, device103can conduct a verification step for the plaintext certificate cert.server101cin order to validate the certificate. Device103in a step208can verify a signature from a certificate authority for the server static public key PK.server101pin the certificate (plus any intermediate certificate signatures) using a root certificate for the certificate authority. The root certificate for the certificate authority could be recorded in a nonvolatile memory for device103. Device103can verify the certificate authority signature in cert.server101cusing an elliptic curve digital signature algorithm (ECDSA). The ECDSA could use a certificate authority public key from a root certificate for verifying the certificate authority signature in a certificate cert.server101c. The ECDSA can be compatible with IETF RFC 6979, IETF RFC 4754, and also FIPS 186-4 standards or related and subsequent standards for digital signatures using ECC PKI keys. Note that a certificate cert.server101ccould also specify parameters different than the use of an ECC algorithm, such as using RSA based signatures. For these embodiments using RSA based keys for digital signatures, device103could use a digital signature algorithm (DSA) and server static public key PK.server101pcan comprise an RSA-based key.
Note that in some exemplary embodiments, the use of a server certificate cert.server101ccould be omitted, since device103can authenticate server101using the authenticated ECDH key exchange step218(where successful decryption of ciphertext1217bproves to device103that server101has access to SK.network102b). Further, a server certificate cert.server101ccould be included in a message206cand ciphertext1217b, but device103could omit a separate certificate verification step208and still trust the server public key PK.server101pin a cert.server101c. In other words, successful decryption of the cert.server101cwith the symmetric ciphering key K1216acan signal or indicate that cert.server101ccan be trusted using the stored PK.network102a, since the cert.server101ccould only be encrypted by a server101with access to SK.network102b. After a step208to verify certificate cert.server101c, device103can conduct a signature verification step209ato verify signature101s. For a step209a, device103could use the server static public key PK.server101pfor server101from certificate cert.server101cand an ECDSA signature algorithm in order to verify signature101s. The signed data verified by a signature verification step209acan comprise at least, in part, both random number random1202afrom device103and random number random2205rfrom server101, as well as other data within message206csuch as certificate cert.server101c. If the signature verification step209afails, then device103can stop further processing of message206cand return an error message. Device103can conduct a signature creation step207bin order to create digital signature103sover data received in message206c. The data signed by a signature creation step207bfor signature103scan comprise at least, in part, random number random2205r.
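The signature creation steps (207a/207b) and verification steps (209a/209b) can be pictured with the textbook ECDSA sketch below over secp256r1 in plain Python. It is illustrative only: the affine curve arithmetic is unhardened, and the nonce k is passed in explicitly for reproducibility, whereas a production signer would generate k deterministically per RFC 6979 or from a strong random source. All function names are illustrative.

```python
import hashlib

# secp256r1 domain parameters and minimal affine point arithmetic
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def ecdsa_sign(priv: int, msg: bytes, k: int):
    """Textbook ECDSA; k must be unique and secret for every signature."""
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    r = ec_mul(k, G)[0] % N
    s = pow(k, -1, N) * (z + r * priv) % N
    return (r, s)

def ecdsa_verify(pub, msg: bytes, sig) -> bool:
    """Check a signature against the signer's public key point."""
    r, s = sig
    if not (0 < r < N and 0 < s < N):
        return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    w = pow(s, -1, N)
    point = ec_add(ec_mul(z * w % N, G), ec_mul(r * w % N, pub))
    return point is not None and point[0] % N == r
```

The signed data would be the concatenation of values such as random1202aand random2205r, hashed with the secure hash named in parameters104a.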
A set of parameters104acan specify values and settings to utilize with an ECDSA in a step209a, such as a secure hash algorithm to utilize, the use of a deterministic ECC signature algorithm (avoiding the need to include a unique random number from device103with the signature103s), padding rules, encoding rules, etc. Device103can use device private key d1103din order to create signature103s. Device103can then conduct an encryption step217c, where encryption step217ccan use the exemplary encryption step217depicted and described below inFIG.2cwith different plaintext data than the depicted data for a step217inFIG.2c. The encryption key for a step217ccan comprise the symmetric ciphering key K1216aderived by device103above in a step218, and a MAC key216b(fromFIG.2cbelow) can also be utilized. In some exemplary embodiments, the encryption step217ccan use a different symmetric ciphering key K1216athan key K1216aused by server101to encrypt ciphertext1217b. In other words, different symmetric ciphering keys could be used by (i) server101to encrypt ciphertext1217band (ii) device103to encrypt a ciphertext217d. However, both server101and device103can mutually derive the different symmetric ciphering keys using at least the mutually derived shared secret X3215. For some exemplary embodiments, the key K1216afrom a KDF216can comprise two portions, where (i) a first portion is used by server101to encrypt data and device103to decrypt data and (ii) a second portion is used by device103to encrypt data and server101to decrypt data. The plaintext data for an encryption step217ccan comprise at least, in part, an identity for device103of ID.device103i, and the random number random2205rfrom server101. Other data could be included in the plaintext for an encryption step217cwithout departing from the scope of the present disclosure, such as, but not limited to, data from a transducer connected to device103.
In addition, the device103static public key D1103c, or a certificate for device103with public key D1103ccould be included as plaintext data for an encryption step217c. The output of an encryption step217ccan comprise ciphertext2217d, as depicted inFIG.2a. As depicted and described in connection withFIG.2cbelow, the output of an encryption step217ccould also include an initialization vector and a MAC code, which could be included as metadata or plaintext along with ciphertext2217din a message210a. The initialization vector can be used to chain blocks in order to scramble data across the multiple blocks and the MAC code can be used to confirm message integrity using a MAC key output from key exchange algorithm218. For embodiments where server101could store or receive device static public key D1103cbefore receiving a message210a(such as receiving the key D1103cfrom a server associated with device103), then key D1103cand/or a certificate for device103could be omitted from ciphertext2217dand a message210a. After step217c, device103can send server101a message210a, where message210acan include ciphertext2217d. In exemplary embodiments, message210ais transmitted by device103using the same source IP address and port number as message203. In addition, message210ais transmitted by device103using the same destination IP address and port number for server101as message203. Although the signature103sis depicted inFIG.2aas being internal to ciphertext2217d, in some exemplary embodiments signature103scan be external to ciphertext2217d. Likewise, although a signature101sis depicted as within a ciphertext217bfrom server101, in some embodiments a signature101scould be external to ciphertext217bin a message206c. Server101can receive message210aby listening to the same local IP address and port number used to receive message203above. After server101receives message210a, server101can conduct a series of steps in order to process the message.
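The message layout of initialization vector plus ciphertext plus MAC code described above can be pictured with the encrypt-then-MAC sketch below. To stay within the Python standard library, a SHA-256 counter-mode keystream stands in for the symmetric cipher; a real step217cwould use the ciphering algorithm named in parameters104a(e.g. AES). The function names, the 16-byte IV, and the HMAC-SHA-256 tag are illustrative assumptions.

```python
import hashlib, hmac, os

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter-mode keystream (stand-in for a real cipher)."""
    out, block = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:length]

def encrypt_then_mac(key_k1: bytes, mac_key: bytes, plaintext: bytes):
    """Return (iv, ciphertext, mac) as they would travel together in a message."""
    iv = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key_k1, iv, len(plaintext))))
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return iv, ct, tag

def mac_then_decrypt(key_k1: bytes, mac_key: bytes, iv: bytes, ct: bytes, tag: bytes) -> bytes:
    """Verify the MAC code first, then recover the plaintext."""
    if not hmac.compare_digest(tag, hmac.new(mac_key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed: message integrity error")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key_k1, iv, len(ct))))
```

Any single flipped bit in the ciphertext changes the computed MAC, so the receiver rejects tampered messages before decrypting.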
Server101can conduct a decryption step219a, which can comprise a decryption step219depicted and described below in connection withFIG.2c, but with different ciphertext data. The ciphertext data for a decryption step219acan comprise the ciphertext2217dreceived by server101in message210a. A decryption step219acan also use an initialization vector and MAC code received along with ciphertext2217din message210a. After conducting a decryption step219a, server101can read the plaintext data within ciphertext2217d. In exemplary embodiments, the plaintext data can include an identity for device103of ID.device103i, the device static public key D1103c, and also the random number random2205r. Although not depicted inFIG.2a, ciphertext2217das received by server101can include input from a transducer or sensor operated by device103, such as, but not limited to, keyboard input, temperature data from a thermocouple or thermistor, pressure data from a transducer, the state of an actuator, the state of an electronic switch, gate, or relay, etc. operated by device103. Other possibilities exist as well for transducer data in ciphertext2217dwhich is decrypted into plaintext by server101in a decryption step219awithout departing from the scope of the present disclosure. At step210b, server101can process the plaintext data output from a decryption step219a. Server101can read and record the device identity ID.device103ifor use in subsequent messages. Server101can read the value for random number random2205rto confirm the value or number equals the random number random2205rsent above in message206c. In exemplary embodiments, server101can record the plaintext data decrypted from ciphertext2217din a server database101dalong with a timestamp, after completing the signature verification step209b.
Server101can conduct a signature verification step209bfor signature103susing the same signature verification algorithm and parameters as signature verification step209a, except using the device static public key D1103c. Parameters104can specify settings or values for conducting a signature verification step209b. In exemplary embodiments, signature verification step209bcomprises an ECDSA signature verification for digital signature103susing key D1103c. Note that signature103sis over data that includes at least random number random2205rsent by server101in message206c. Device static public key D1103ccould be recorded in nonvolatile memory or disk storage of server101as depicted inFIG.1babove. Upon successful completion of a signature verification step209bfor digital signature103s, server101and device103can conduct additional steps to securely transfer data106between the two nodes. Although not depicted inFIG.2a, server101could send device103commands, files, configuration data, or other data using ciphertext encrypted with derived symmetric ciphering keys. Server101and device103could also update key K1216aor rotate key K1216ausing a key derivation function (such as key derivation function216depicted inFIG.2bandFIG.2cbelow). As depicted inFIG.2a, after a step210band a step209b, server101can send key server102a message210b, where message210bcan include the device identity ID.device103iand an “OK” message, where the “OK” signals to key server102that server101and device103have successfully derived and used symmetric ciphering key216ausing PKI keys and an ECC point addition of shared secret X1211aand X2212a. As depicted inFIG.2a, the series of steps beginning with a step204for device103through the receipt of message210bcan collectively comprise a step223.
FIG.2b FIG.2bis a flow chart illustrating exemplary steps for conducting a key exchange using PKI keys in order to derive shared secrets, and for conducting a key derivation function using the derived shared secrets, in accordance with exemplary embodiments. Key server102can conduct a key exchange step211in order to derive a secret key X1211a. Server101can conduct a key exchange step212in order to derive a secret key X2212a. Server101can receive the secret key X1211ain a message206bfrom key server102inFIG.2aabove through a secure connection221. Server101can then conduct a key derivation function213using shared secrets X1211aand X2212ain order to derive a symmetric ciphering key K1216a. Using the methods and ECC PKI keys described in the present disclosure, a device103can also derive the same symmetric ciphering key K1216aas depicted and described below for a key exchange step218inFIG.2c. In other words, for exemplary embodiments (i) the corresponding key exchange step218(inFIG.2cbelow) for a device103by network105can be (ii) shared or distributed between a server101and key server102in order to secure or isolate network static private key SK.network102b. The processes and operations described below with respect to all of the logic flow diagrams and flow charts may include the manipulation of signals by a processor and the maintenance of these signals within data structures resident in one or more memory storage devices. For the purposes of this discussion, a process can be generally conceived to be a sequence of computer-executed steps leading to a desired result. These steps usually require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, or otherwise manipulated.
It is convention for those skilled in the art to refer to representations of these signals as bits, bytes, words, information, elements, symbols, characters, numbers, points, data, entries, objects, images, files, or the like. It should be kept in mind, however, that these and similar terms are associated with appropriate physical quantities for computer operations, and that these terms are merely conventional labels applied to physical quantities that exist within and during operation of the computer. It should also be understood that manipulations within the computer are often referred to in terms such as listing, creating, adding, calculating, comparing, moving, receiving, determining, configuring, identifying, populating, loading, performing, executing, storing etc. that are often associated with manual operations performed by a human operator. The operations described herein can be machine operations performed in conjunction with various input provided by a human operator or user that interacts with the device, wherein one function of the device can be a computer. In addition, it should be understood that the programs, processes, methods, etc. described herein are not related or limited to any particular computer or apparatus. Rather, various types of general purpose machines may be used with the following process in accordance with the teachings described herein. The present invention may comprise a computer program or hardware or a combination thereof which embodies the functions described herein and illustrated in the appended flow charts. However, it should be apparent that there could be many different ways of implementing the invention in computer programming or hardware design, and the invention should not be construed as limited to any one set of computer program instructions. 
Further, a skilled programmer would be able to write such a computer program or identify the appropriate hardware circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in the application text, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes will be explained in more detail in the following description in conjunction with the remaining Figures illustrating other process flows. Further, certain steps in the processes or process flow described in all of the logic flow diagrams below must naturally precede others for the present invention to function as described. However, the present invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the present invention. That is, it is recognized that some steps may be performed before, after, or in parallel with other steps without departing from the scope and spirit of the present invention. The processes, operations, and steps performed by the hardware and software described in this document usually include the manipulation of signals by a CPU or remote server and the maintenance of these signals within data structures resident in one or more of the local or remote memory storage devices. Such data structures impose a physical organization upon the collection of data stored within a memory storage device and represent specific electrical or magnetic elements. These symbolic representations are the means used by those skilled in the art of computer programming and computer construction to most effectively convey teachings and discoveries to others skilled in the art.
A key exchange step211for key server102to derive a secret key X1211acan utilize a selected set of cryptographic parameters104aas depicted and described in connection withFIG.1aandFIG.2aabove. As depicted inFIG.2b, a key exchange algorithm220in step211for key server102can receive input both of device ephemeral public key Ed103aand network static private key SK.network102b. The key exchange algorithm220could comprise a Diffie Hellman key exchange (DH), an Elliptic Curve Diffie Hellman key exchange (ECDH), and other possibilities exist as well without departing from the scope of the present invention. A key exchange algorithm220can support either PKI keys based on elliptic curves or RSA algorithms, although support of elliptic curves may be preferred in some exemplary embodiments due to their shorter key lengths and lower computational processing requirements. A summary of ECDH as a key exchange algorithm220is included in the Wikipedia article titled “Elliptic Curve Diffie-Hellman” from Mar. 9, 2018, which is herein incorporated by reference. An exemplary embodiment of key exchange algorithm220could comprise a “One-Pass Diffie-Hellman, C(1, 1, ECC CDH)” algorithm as described in section 6.2.2.2 on page 81 of the National Institute of Standards and Technology (NIST) document “NIST SP 800-56A, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography” from March, 2007, which is hereby incorporated by reference in its entirety. Other key exchange algorithms in NIST SP 800-56A could be utilized as well for a key exchange algorithm220inFIG.2aandFIG.2bwithout departing from the scope of the present disclosure. Example calculations for an ECDH key exchange for a key exchange algorithm220are shown below inFIG.2c. Other algorithms to derive secret keys using public keys and private keys may also be utilized in a key exchange algorithm220, such as, but not limited to, the American National Standards Institute (ANSI) standard X-9.63.
Cryptographic parameters104acan also include information, values, or settings for conducting (i) a key exchange algorithm220in step211and step212and (ii) a key derivation function216in order to derive a commonly shared symmetric encryption key K1216a. As contemplated herein, the terms “selected set of cryptographic parameters104a”, “cryptographic parameters104a”, and “parameters104a” can be equivalent, and can also comprise a subset of exemplary cryptographic parameters depicted and described in connection withFIG.1aandFIG.2dbelow. Parameters104ainput into a key exchange algorithm220can include a time-to-live for a key K1216athat is derived, as well as a supported point formats extension, where the supported point formats extension could comprise uncompressed, compressed prime, or “compressed char2” formats, as specified in ANSI X-9.62. In other words, (i) ECC keys input into a key exchange algorithm220and (ii) secret keys output from key exchange algorithm220may have several different formats, and a set of parameters104acan be useful to specify the format. As depicted inFIG.2b, the output of a key exchange algorithm220in a step211, such as an ECDH key exchange, can comprise a secret value X1211a. In exemplary embodiments, secret value X1211acan comprise a point on an elliptic curve, where the equation and values for the elliptic curve can be specified in parameters104a. As contemplated herein, the secret value X1211a(as well as X2212abelow) comprises both an X coordinate and a Y coordinate, in order to support subsequent ECC point addition operations. Key exchange step212for a server101depicted inFIG.2acan correspond to key exchange212inFIG.2b.
Key exchange step212can comprise inputting or using the device ephemeral public key Ed103a(from message203inFIG.2a) and the server ephemeral private key e1101b(from a key generation step101x) into a key exchange algorithm220, which can comprise the same or equivalent key exchange algorithm220depicted and described in connection with key exchange step211described above. Other elements or algorithms within a key exchange step212can be equivalent to a key exchange step211above, including the use of shared parameters104a. The output of a key exchange algorithm220in a step212can comprise a secret key or value X2212a. In exemplary embodiments, secret value X2212acan comprise a point on an elliptic curve, where the equation and values for the elliptic curve can be specified in parameters104a. Exemplary numeric values for using a key exchange algorithm220are depicted and described below, and key exchange algorithm220can utilize an ECC point multiplication of a public key by the scalar value of a private key. In exemplary embodiments, a server101can record the value X2212aderived from a step212and also the value X1211areceived in a message206bin a server database101d. The time the values are stored in a server database101dcan be minimized in order to increase security, and, for example, the recording of the values can be deleted before server101sends the “OK” message210bto key server102inFIG.2a. A key derivation step213for server101can (i) combine the output of key exchange steps211and212in order to calculate or derive the shared secret X3215and then (ii) perform a key derivation function step216on the derived or calculated shared secret X3215in order to determine or calculate shared secret key K1216a, which can comprise a symmetric ciphering key. Note that shared secret key K1216acan be also mutually derived by device103, where device103uses the key exchange step218depicted and described in connection withFIG.2cbelow.
In exemplary embodiments, a server101can conduct the key derivation step213using (i) the value X1211areceived from key server102(where receipt of X1211aby server101can be in a message206bas shown inFIG.2aabove), and (ii) the value or key X2212aoutput from a key exchange step212for server101in the paragraph above. As contemplated herein, the values of X1211a, X2212a, and X3215may be described as either “shared secrets” or “shared secret keys”. Although the values may not normally be used as a key directly with a symmetric ciphering algorithm, these values and the output of a key exchange algorithm220can comprise a secret or a key. Key derivation step213for server101can comprise two primary steps. A first step in key derivation213can comprise an ECC point addition214on the value X1211aand the value X2212a. The result of the ECC point addition will be equal to the value X3215. Note that device103can also derive the same value for value X3215(in step218below) without ECC point addition214using a step218. In other words, although (a) the related key exchange step218for device103may include a point addition for public keys, (b) the key exchange step218for device103will not use ECC point addition for points derived from two separate private keys in two separate servers (e.g. X1211auses private key SK.network102band X2212auses private key e1101b). Exemplary calculations for an ECC point addition214can comprise the calculations shown for point addition in the Wikipedia article for “Elliptic Curve Point Multiplication” dated May 15, 2018, which is herein incorporated by reference in its entirety. As depicted inFIG.2b, (a) the calculation of X3215by server101using an ECC point addition214over X1211aand X2212awill equal (b) the value for X3215calculated by device103using a key exchange algorithm220in a step218fromFIG.2cbelow. 
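The equality stated above follows from the distributivity of scalar multiplication: the device computes ed*(E1 + PK) = e1*(ed*G) + sk*(ed*G), which is exactly X2212a+ X1211a. The Python sketch below checks that property over secp256r1 with small fixed scalars standing in for ed103b, SK.network102b, and e1101b; the scalars and function names are illustrative only and are not the test vectors of the disclosure.

```python
# secp256r1 parameters and minimal affine point arithmetic
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Illustrative stand-ins for ed103b, SK.network102b, and e1101b
ed, sk_network, e1 = 0x1111AAAA2222BBBB, 0x3333CCCC4444DDDD, 0x5555EEEE6666FFFF
Ed = ec_mul(ed, G)            # device ephemeral public key Ed103a
PK = ec_mul(sk_network, G)    # network static public key PK.network102a
E1 = ec_mul(e1, G)            # server ephemeral public key E1101a

X1 = ec_mul(sk_network, Ed)   # key server step211
X2 = ec_mul(e1, Ed)           # server step212
X3_server = ec_add(X1, X2)    # server ECC point addition214
X3_device = ec_mul(ed, ec_add(E1, PK))  # device key exchange step218
```

Neither side ever needs both private keys at once: the key server uses only SK.network102b, the server uses only e1101b, and the device uses only ed103b, yet all arrive at the same point X3.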
A second step in key derivation step213as depicted inFIG.2bcan comprise a key derivation function step216using (a) input from ECC point addition step214(e.g. value X3215output from step214), where (b) the output of key derivation function step216can comprise key K1216aand also an associated MAC key216b. In exemplary embodiments, the X coordinate from shared secret X3215can be used with key derivation function216. By server101conducting a key derivation step213as depicted inFIG.2b(where key server102conducts the calculations for step211using the network static private key SK.network102b), server101can calculate symmetric ciphering key K1216awithout recording or operating on the network static private key SK.network102b. In this manner, the security of a system100or system200can be significantly enhanced, since the network static private key102bdoes not need to be recorded or operated on by server101, which can communicate with a plurality of devices103over an IP network. In other words, by server101(i) using the ECC point addition over key X1211ainstead of (ii) conducting a key exchange220directly with SK.network102b, then server101does not need to record or operate with the network static private key SK.network102b, thereby increasing security. Also, since (i) key X1211acan be the equivalent of an ECC public key as a point on an elliptic curve, and (ii) it is not computationally feasible to determine network static private key SK.network102bfrom key X1211a, then key X1211adoes not reveal meaningful information about network static private key SK.network102b. Many benefits can be achieved by server101conducting a key derivation step213using key X1211ainstead of recording and operating with network static private key SK.network102b.
As one example, the corresponding network static public key PK.network102acould potentially be both (i) recorded in millions of distributed devices connecting to server101through many different physical locations and networks, and (ii) used for a decade or longer. Keeping network static private key SK.network102bsecure for this embodiment could be economically essential, since a compromise of network static private key SK.network102bmay (i) render the devices103insecure (or unable to authenticate network105using an ECDHE key exchange), and (ii) require the secure distribution or re-installation of a new, different network static public key PK.network102ain the devices, which may not be economically feasible due to the prior distribution of devices. Exemplary data and numbers can be provided to demonstrate the calculations for (i) key exchange step211, (ii) key exchange step212, and (iii) key derivation step213using an ECC point addition214. The exemplary data can comprise decimal numbers for the example ECC PKI keys and exchanged keys listed in “Test vectors for DPP Authentication using P-256 for mutual authentication” on pages 88 and 89 of the DPP specification version 1.0. Parameters104acan comprise the elliptic curve of “secp256r1” with key lengths of 256 bit long keys.
The network static private key SK.network102bcan comprise the exemplary following number, and can be recorded in key server102:

38358416135251014160802731750427376395128366423455574545250035236739593908128

The server ephemeral private key e1101bcan comprise the exemplary following number, and can be recorded by server101:

111991471310604289774359152687306247761778388605764559848869154712980108827301

The device ephemeral public key Ed103acan comprise the following exemplary values with X and Y numbers (or “coordinates”) of:

X: 61831688504923817367484272103056848457721601106987911548515219119661140991966
Y: 436821274116052626307636850969789027573720854595612820926922498255090826944

Key exchange step211for an ECDH algorithm key exchange220by key server102can input the device ephemeral public key Ed103aand the network static private key SK.network102b(both with numbers above) in order to calculate a secret X1211a. An exemplary number or value for secret X1211afrom the values above using parameters104acan be:

X: 11490047198680522515311590962599671482029417064351337303313906642805743573119
Y: 27933966560238204731245097943399084523809481833434754409723604970366082021855

Key exchange step212for an ECDH algorithm key exchange220by server101can input the device ephemeral public key Ed103aand the server ephemeral private key e1101b(both with numbers above) in order to calculate a secret X2212a. An exemplary number or value for key X2212afrom the values above using parameters104acan be:

X: 78944719651774206698250588701582570633503182903415394243006529481189158194650
Y: 11227712702924684581834935828837489140201820424536062912051086382324589445237

An ECC point addition214for the above two derived points (or “keys”) X1211a(from keys Ed103aand SK.network102b) and X2212a(from keys Ed103aand e1101b) will result in the following point that also equals X3215.
X: 113734500629065545557893524064610113740858966831672649615565042035695230713090 Y: 68961429691307429166796760881095689348088875771334970644593306388375741965262 Note that the same numeric value for key X3215can also be derived by device103from a key exchange step218below using ECDH key exchange algorithm220a. For exemplary embodiments, although private key SK.network102band ephemeral private key e1101bare recorded and operated by physically separated devices, device103can record and operate on the corresponding public keys PK.network102aand ephemeral public key E1101aat the same physical location. After an ECC point addition214, for a key derivation step213inFIG.2b, server101can input the shared secret key X3215, where key X3215was output from the ECC point addition214, into a key derivation function216. The key derivation function216can comprise the same key derivation function216used by a device103in a step218below. The output of a key derivation function216can comprise both (i) a symmetric ciphering key K1216aand (ii) a MAC key216b. MAC key216bcan be used with a symmetric ciphering algorithm in order to generate a MAC code, such that the other party using the same key K1216aand MAC key216bcan process the ciphertext and calculate the same MAC code in order to verify message integrity. Key derivation function216can use a secure hash function such as, but not limited to, SHA-256, SHA-384, SHA-3, etc. and additional values such as a text string with secret X3215. The specification of a secure hash algorithm and the text string for use with a key derivation function216could be commonly shared between server101and device103by commonly shared parameters104a. In some exemplary embodiments, the text string for use with secret X3215can be from data, text, or values transmitted in (i) message203(for KDF216by server101in step213) and/or (ii) message206c(for KDF216by device103in step218).
The output of a secure hash algorithm within a key derivation function216could have a subset of bits selected, or the secure hash output could be expanded, in order to obtain the number of bits required for a symmetric key with a symmetric ciphering algorithm, such as key K1216a. A key derivation function (KDF)216could comprise a KDF compatible with or specified by ANSI standards for "X9.63 Key Derivation Function". Other possibilities exist for a key derivation function216to convert a secret X3215into a symmetric ciphering key K1216aand a MAC key216bwithout departing from the scope of the present disclosure. As contemplated in the present disclosure, although an ECC public key such as secret X3215can comprise a coordinate with an X value and a Y value, in exemplary embodiments a single number comprising the X value can be selected and input into a key derivation function216. In addition, the key K1216acan comprise two portions, where (i) a first portion can be a key for encrypting data by server101and decrypting the data by device103and (ii) a second portion can be a key for encrypting data by device103and decrypting the data by server101.
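To illustrate the key derivation described above, the following is a minimal sketch of an X9.63-style KDF built on SHA-256, splitting the output into a ciphering-key portion and a MAC-key portion. The label string, output lengths, and function name are illustrative assumptions, not values fixed by the disclosure.

```python
import hashlib

def x963_kdf(shared_secret_x: bytes, shared_info: bytes, out_len: int) -> bytes:
    """X9.63-style KDF sketch: hash the secret with a 32-bit big-endian
    counter and optional shared info, concatenating digests until
    out_len bytes are produced."""
    out = b""
    counter = 1
    while len(out) < out_len:
        out += hashlib.sha256(
            shared_secret_x + counter.to_bytes(4, "big") + shared_info
        ).digest()
        counter += 1
    return out[:out_len]

# Derive 32 bytes for symmetric key K1 216a and 32 bytes for MAC key 216b
# from the X coordinate of secret X3 215 (decimal value from the text).
x3_x = (113734500629065545557893524064610113740858966831672649615565042035695230713090).to_bytes(32, "big")
okm = x963_kdf(x3_x, b"example-kdf-label", 64)   # label is illustrative
k1, mac_key = okm[:32], okm[32:]
```

In practice the shared-info string would be the commonly shared text string from parameters 104a or message data, and the output lengths would follow the selected symmetric ciphering parameters.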
Note that the steps inFIG.2cand the steps inFIG.2bcan share some algorithms and values, and the descriptions for the algorithms and values inFIG.2bcan be applicable forFIG.2c. For example, the key exchange algorithm220acan comprise an ECDH key exchange equivalent to key exchange step220. The set of parameters104adepicted and described inFIG.2bcan also be used inFIG.2c. A device103can conduct a key exchange step218. At step218, (i) a combination of a recorded network static public key PK.network102aand received server ephemeral public key E1101a, and (ii) the derived device ephemeral private key ed103bcan be input into an ECDH key exchange algorithm220ain order to calculate the shared secret X3215. The recorded network static public key PK.network102aand received server ephemeral public key E1101acan be combined via elliptic curve point addition. Exemplary data and numbers can be provided to demonstrate the calculations for key exchange step218. The exemplary data can comprise decimal numbers for the example ECC PKI keys and exchanged keys listed in "Test vectors for DPP Authentication using P-256 for mutual authentication" on pages 88 and 89 of the DPP specification version 1.0. Parameters104acan comprise the elliptic curve of "secp256r1" with key lengths of 256 bit long keys.
The device ephemeral private key ed103bcan comprise the following exemplary number, and can be recorded in device103after a key pair generation step103xfromFIG.1aabove:9814229718244518553550958692061024829480281279450793086167684747145642004923 The network static public key PK.network102acan comprise the exemplary values with X and Y numbers (or "coordinates") of: X: 4419807000381358656111506147651622980270029110554119329493335953912822452287 Y: 37427159939572325965354914097696269740713866333885143374269952770772578794844 The server ephemeral public key E1101acan comprise the following exemplary values with X and Y numbers (or "coordinates") of: X: 42629956901026513598149966301519681371972968598637962756879877886841583606416 Y: 20486612594265388212565154850034967164732043090221075006612427172869133074917 An ECC point addition for the above two keys E1101aand PK.network102awill result in the following exemplary point, which (a) combines both E1101aand PK.network102afor a key exchange step218and is then (b) input into an ECDH key exchange algorithm220a: X: 2811461365732647553134637541685353169648905058941523144737599092152119800180 Y: 93903335977032690345879985966890561591048675256101157964834025539587687968435 The above combination of both E1101aand PK.network102afor a key exchange step218via an ECC point addition operation is depicted inFIG.2cwith the "+" symbol between the public keys. The output of the above ECC point addition for public keys E1101aand PK.network102acan be input into ECDH key exchange algorithm220ausing parameters104a. All of the exemplary calculations for a key exchange step218can use the exemplary subset of cryptographic parameters104a. An ECDH algorithm key exchange220ain key exchange step218can input (i) the exemplary point immediately above from the ECC point addition operation on the public keys101aand102aand (ii) the device ephemeral private key ed103binto the ECDH key exchange220a, and output the point X3215.
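The combination of point addition and ECDH above can be checked with a short pure-Python sketch of affine point arithmetic on secp256r1. The private keys below are small illustrative values rather than the DPP test vectors, the function names are illustrative, and the code is a readability sketch rather than a constant-time implementation:

```python
# secp256r1 (P-256) domain parameters
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
G = (GX, GY)

def ec_add(p1, p2):
    """Affine point addition (chord-and-tangent rule); None = point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                      # Q + (-Q) = infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# Illustrative (not secure) private keys standing in for e1 101b,
# SK.network 102b, and ed 103b:
e1, sk_net, ed = 1234567, 7654321, 1111111
E1 = ec_mul(e1, G)           # server ephemeral public key E1 101a
PK_net = ec_mul(sk_net, G)   # network static public key PK.network 102a
Ed = ec_mul(ed, G)           # device ephemeral public key Ed 103a

# Device side: X3 = [E1 + PK.network] * ed
x3_device = ec_mul(ed, ec_add(E1, PK_net))
# Device side, alternate form: X3 = [E1 * ed] + [PK.network * ed]
x3_alt = ec_add(ec_mul(ed, E1), ec_mul(ed, PK_net))
# Server/key-server side: X3 = X1 + X2 = [Ed * sk_net] + [Ed * e1]
x3_server = ec_add(ec_mul(sk_net, Ed), ec_mul(e1, Ed))
```

Because ed*(E1 + PK.network) equals ed*e1*G + ed*sk*G, the device-side result matches the sum of the two ECDH outputs X1 211a and X2 212a computed separately with the private keys held at server 101 and key server 102.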
Note that the secret X3215as derived by device103in a key exchange step218equals or is the same numeric value as the secret X3215derived by server101in a key derivation step213inFIG.2b. An exemplary number or value for secret X3215calculated by device103using a key exchange step218using the above exemplary numeric values for ed103b, PK.network102a, and E1101awould be: X: 113734500629065545557893524064610113740858966831672649615565042035695230713090 Y: 68961429691307429166796760881095689348088875771334970644593306388375741965262 AlthoughFIG.2cdepicts an ECC point addition operation over public keys E1101aand PK.network102a, the same shared secret value X3215could be generated or derived by conducting (i) a first ECC point multiplication operation with the server ephemeral public key E1101aand the device ephemeral private key ed103bto derive a first point, and (ii) a second ECC point multiplication operation with the network static public key PK.network102aand the device ephemeral private key ed103bto derive a second point, and (iii) an ECC point addition operation with the first point and the second point to derive the shared secret value X3215. In other words, the value X3215can be calculated as either: (i) X3 215=[E1 101a+PK.network102a]*ed103b, or (ii) X3 215=[E1 101a*ed103b]+[PK.network102a*ed103b] For a key derivation step218, derived shared secret key X3215can be input into a key derivation function216where the key derivation function216can be equivalent to the key derivation function216depicted and described in connection withFIG.2babove for a key derivation step213. Note that for key derivation steps in the present disclosure, the X coordinate of a derived shared secret can be taken or used as input into the key derivation function. The output of a key derivation function216can comprise both (i) a symmetric ciphering key K1216aand (ii) a MAC key216b.
MAC key216bcan be used with a symmetric ciphering algorithm in order to generate a MAC code, such that the other party using the same key K1216aand MAC key216bcan process the ciphertext and calculate the same MAC code in order to verify message integrity. The use of key K1216aand MAC key216bis described in connection with encryption step217and decryption step219. Server101can conduct an encryption step217, where the use for an encryption step217is depicted and described in connection withFIG.2aabove. Plaintext217ain a step217can comprise the first random number random1202afrom device103, the second random number random2205r, and the server certificate cert.server101c. Other or different exemplary data could be included as plaintext217ain an encryption step217, including extensions for a TLS or DTLS handshake. The symmetric ciphering key for encryption step217can comprise symmetric key K1216afrom a key derivation step213and a MAC key216bcan be input into a symmetric ciphering algorithm225as well. Encryption step217and decryption step219can use a common symmetric ciphering algorithm225, which could comprise the Advanced Encryption Standard with Synthetic Initialization Vectors (AES-SIV) (and deciphering algorithm) also with a common set of symmetric ciphering parameters104ffrom a set of cryptographic parameters104. Other or different symmetric ciphering algorithms225could be utilized as well, such as, but not limited to, AES, Triple Data Encryption Standard (3DES), Blowfish, or related algorithms. A mutually derived symmetric ciphering key K1216acan comprise two portions, where a first portion is used by server101for encryption and a second portion is used by device103for encryption. At least the first portion of key K1216acan be used in an encryption step217.
Symmetric ciphering parameters104fcan also specify the use of a block chaining mode such as cipher block chaining (CBC), counter mode (CTR), or Galois/Counter mode (GCM), and other possibilities exist as well. In addition, symmetric ciphering parameters104fcould specify a mode for message authentication, which could comprise a CMAC mode as specified in NIST publication SP-800-38B. In some exemplary embodiments, a symmetric ciphering algorithm225can comprise the AES-SIV algorithm as specified in IETF RFC 5297. The output from an encryption step217using a symmetric ciphering algorithm225and the depicted values input can be ciphertext217b, as depicted inFIG.2c. A decryption step219can be performed by device103. A decryption step219converts the ciphertext217breceived in a message206cfromFIG.2ainto plaintext217a. Decryption step219can utilize a symmetric decryption algorithm225, which could comprise the same algorithm used in symmetric encryption algorithm225except the algorithm being used for decryption instead of encryption. Note that the same values are input into symmetric decryption algorithm225as symmetric encryption algorithm225above, such as symmetric encryption key K1216a(or the first portion of key K1216aif a second portion of key K1216ais used by device103for encryption) and parameters104fin order to convert ciphertext217bback into plaintext217a. Additional data input into symmetric decryption algorithm225can comprise an initialization vector217iand MAC code216cwhich could be sent along with ciphertext217b. Device103can then read and process plaintext217aafter a decryption step219. The plaintext217aas read by device103can comprise the first random number random1202afrom device103, the second random number random2205r, and the server certificate cert.server101c.
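AES-SIV is not available in the Python standard library, so the following sketch illustrates only the MAC-code generation and verification flow described above, using HMAC-SHA256 as a stand-in for the mode's integrity mechanism; the key and message bytes are illustrative values, not data from the disclosure.

```python
import hmac
import hashlib

def tag_ciphertext(mac_key: bytes, ciphertext: bytes) -> bytes:
    """Sender: compute a MAC code over the ciphertext with MAC key 216b."""
    return hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def verify_ciphertext(mac_key: bytes, ciphertext: bytes, tag: bytes) -> bool:
    """Receiver: recompute the MAC with the same key and compare in
    constant time; only a party holding the mutually derived MAC key
    can produce a tag that verifies."""
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

mac_key = hashlib.sha256(b"illustrative MAC key 216b").digest()
ct = b"illustrative ciphertext 217b bytes"
tag = tag_ciphertext(mac_key, ct)
ok = verify_ciphertext(mac_key, ct, tag)            # True for intact data
tampered = verify_ciphertext(mac_key, ct + b"x", tag)  # False after modification
```

A receiver would check the MAC code before attempting decryption, so that a modified ciphertext 217b is rejected rather than decrypted into garbage plaintext.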
In exemplary embodiments, the successful decryption of a ciphertext into a plaintext using decryption algorithm225supports one-way authentication of the server101and/or network105, since successful decryption by device103can only take place when the server101has access to network static private key SK.network102b. In other words, the nodes could only mutually derive key K1216ainFIG.2bandFIG.2cby (i) device103recording PK.network102aand (ii) server101having access to SK.network102b(via key server102). Thus, data that is successfully encrypted by the server101and decrypted by the device103using key K1216awould confirm the server101is authenticated. As depicted and described in connection withFIG.2a, server101or device103can also conduct both an encryption step217and a decryption step219. The steps for server101to conduct a decryption step219can comprise step219aas depicted and described inFIG.2a. When server101conducts decryption step219ausing symmetric encryption key K1216a, the ciphertext and plaintext will comprise different values than those depicted inFIG.2c, where the ciphertext for a decryption step219acan comprise ciphertext2217d. Further, a device103can conduct an encryption step217cwith key K1216ain order to create ciphertext217d, as depicted inFIG.2a. FIG.2d FIG.2dis an illustration of an exemplary server database and an exemplary set of cryptographic parameters, in accordance with exemplary embodiments. A server database101ddepicted and described above in connection with system100and system200can record data for server101to work with a plurality of devices103and at least one key server102. A server database101dcould record at least one set of values, keys, and/or numbers for a plurality of devices103. Other possibilities exist as well for the organization, tables, and recorded data within a server database101dwithout departing from the scope of the present disclosure. Data within server database101dcould be encrypted using a symmetric key.
Although system100and system200depict a server database101das operating or recorded within a server101, a server database101dcould comprise a separate server within a network105and communicating with server101via a secure session221or a private network107a. Further, when a server database101doperates or is recorded in a separate server than server101, the server database101dcould contain electrical components equivalent to those of a server101depicted and described in connection withFIG.1b. Server database101dcan record values or numbers for a first random number random1202a, a received device ephemeral public key Ed103a, a selected set of cryptographic parameters104a, a source IP address and port number203areceived for message203, a secure hash value over PK.network102acomprising H(PK.network102a)250, an identity for key server102comprising ID.key-server102i, an ECC point value X1211a, a server ephemeral public key E1101a, a server ephemeral private key e1101b, an ECC point value X2212a, an ECC point value X3215, a derived symmetric ciphering key K1216a, and a second random number random2205r. In exemplary embodiments, the values depicted in the first row of server database101dcould comprise data recorded by a server101while conducting the series of steps for a step222and step223depicted and described in connection withFIG.2aabove with a first device103. The values depicted in the second row of server database101dcould comprise data recorded by a server101while conducting the series of steps for a step222and step223depicted and described in connection withFIG.2aabove with a second device103, etc. In exemplary embodiments for a server database101d, a first device103could send server101a first value for device ephemeral public key Ed103a, and the first value is depicted inFIG.2das "103a-1".
Since server101could communicate with a plurality of devices103, the second row in the depicted server database101dcould comprise data for the equivalent steps conducted with a second device103, such as recording a second value for device ephemeral public key Ed103afor the second device. The second value for device ephemeral public key Ed103awith the second device103is depicted inFIG.2das "103a-2". Equivalent notations for other keys or values are applicable as well, such as server database101drecording a first secret X1211adepicted as "211a-1" for a first device103, and then recording a second secret X1211adepicted as "211a-2". Thus, as depicted, a server database101dcan record and operate with a plurality of different values for a key, where each value is utilized by a different device. Although not depicted inFIG.2d, a server database could record device identity ID.device103ias well. For embodiments where a device identity103iis not available, server101could keep track of different devices103for conducting the steps inFIG.2aby the source IP:port number203a. In some exemplary embodiments, a message203can include a secure hash value H(PK.network102a)250, as described for a message203inFIG.2aabove. The receipt of a secure hash value H(PK.network102a)250could be mapped to or associated with a key server102via a key server identity ID.key-server102i, where the mapping of H250to ID.key-server102icould be recorded in a server database101dbefore device103sends a message203. For these embodiments and after receipt of message203, server101could conduct a query of server database101dusing the received H250in a message203in order to select a key server102with ID.key-server102iin order to send the message206ato key server102. In this manner, server101can communicate with a plurality of different key servers102, and the destination of a message206a(or key server102) can be selected by the value H250received in a message203.
In other words, for a plurality of different devices103communicating with a server101, a first subset of devices103could record and use a first network static public key PK.network102a, and a second subset of devices103could record and use a second network static public key PK.network102a. By receiving a value or identifier of the first or second key102ain message203(such as H(PK.network102a)250), server101could use the data depicted for a server database101dto select or identify the correct key server102in order to (i) send a message206aand (ii) receive the correct secret X1211afor the device103using a particular PK.network102a. Although the value H(PK.network102a)250is depicted as recorded in a server database101dinFIG.2d, a different value or identifier for the PK.network102acould be recorded and utilized as well. In an exemplary embodiment, server101could receive the plaintext PK.network102ain a message203and record the plaintext PK.network102ain a server database101d(instead of a hash value H250). In another exemplary embodiment, an identity for key server102(such as ID.key-server102i) could be selected or determined by server101using the selected set of cryptographic parameters104areceived in message203and recorded in a database101d. For these embodiments, a first selected set of cryptographic parameters104acould be associated with a first key server102(and first ID.key-server102i) and a second set of cryptographic parameters104acould be associated with a second key server102(and second ID.key-server102i). Other possibilities exist as well for a server database101dto record data in order to select a key server102for sending message206awith device ephemeral public key Ed103abased on data received in message203, without departing from the scope of the present disclosure. As one example, the identity for key server102of ID.key-server102icould be included in message203and the value for ID.key-server102icould be recorded in a server database101dby server101. 
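The selection of a key server 102 from the value H(PK.network 102a) 250 received in message 203 can be sketched as a simple table lookup; the byte strings and identity values below are illustrative stand-ins for entries in server database 101d.

```python
import hashlib

def h250(pk_network_bytes: bytes) -> str:
    """Secure hash value H(PK.network 102a) 250, as a device 103 could
    send it in a message 203."""
    return hashlib.sha256(pk_network_bytes).hexdigest()

# Illustrative rows of server database 101d: H 250 -> ID.key-server 102i.
pk_net_1 = b"PK.network bytes for a first subset of devices"
pk_net_2 = b"PK.network bytes for a second subset of devices"
key_server_table = {
    h250(pk_net_1): "102i-1",
    h250(pk_net_2): "102i-2",
}

def select_key_server(h_received: str) -> str:
    """Query the table with the H 250 value received in message 203 to
    select the key server 102 that should receive message 206a."""
    return key_server_table[h_received]
```

The same lookup shape would apply if the table were keyed by the plaintext PK.network 102a, by a selected set of cryptographic parameters 104a, or by a received ID.key-server 102i instead of the hash value.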
In a server database101d, although separate values are depicted for some data, such as values "102i-1" and "102i-2" for identities of key servers102, some of the exemplary values can comprise identical strings or numbers. For example, data for two different devices103in a server database101dcould record the same name or value of "102i-2" for a single key server102to be associated with the two different devices103. Likewise, two different devices103could share the same network static public key PK.network102a, and thus H250can be the same value of an exemplary "250-2" for the two different devices103. A server database101dcould also record additional data and values than those depicted inFIG.2dfor some exemplary embodiments. For example, server database101dcould record timestamps for when messages are transmitted or received, such that stale data, or data older than a specified range, could be purged. Server database101dcould also record data received from device103in a message210a, which could include data from a transducer operated by device103. Some data within a server database101dcould be recorded and operated on separately by server101, such as server101not recording secrets such as X1211aor X2212a, etc. in a database101d, but rather server101could record the values in volatile memory101fof server101. In exemplary embodiments, server database101dcould also operate in a distributed or "cloud" configuration, such that multiple different servers101could query and record data in server database101d, where data for server database101dis recorded in multiple, physically separated servers. Cryptographic parameters104can specify sets of cryptographic parameters that are supported by server101in order to process message203and send response message206cfromFIG.2a. Cryptographic parameters104can be recorded in a server database101d, or in other locations within a system100and system200.
As depicted inFIG.1a, each of device103, server101, and key server102can record and operate with a set of cryptographic parameters104. Cryptographic parameters104can record a collection of cryptographic algorithms or specifications such as a set identifier104a, a key length104b, an ECC curve name104c, a hash algorithm104d, symmetric ciphering key length104e, settings for a symmetric ciphering algorithm104f, and a random number length104g. As contemplated herein, the words or description "parameters104a" or "cryptographic parameters104a" can specify a selected row of parameters or values in a set of cryptographic parameters104, such that the collection of values in the row can be used with key pair generation functions101xand103x, ECDH key exchange220, and other cryptographic operations and steps as contemplated herein. Set identifier104acan be an identity for a row or set of values for cryptographic parameters104. For example, set "A" can comprise cryptographic suite 1 as specified in section 3.2.3 of DPP specification version 1.0. Key length104bcan be the length of keys in bits for PKI keys used in system100and system200. ECC Curve name104ccan be a name for an ECC curve used with PKI keys and key exchange algorithms in system100and system200. Hash algorithm104din cryptographic parameters104can be the name of a secure hash algorithm, such as the exemplary SHA-256 algorithm depicted, which may also be referred to as "SHA-2". Hash algorithm104dcan also be used in a key derivation function (e.g. KDF216above inFIG.2bandFIG.2c) and also with digital signature steps207aand209a. Settings for a symmetric ciphering algorithm104fcan specify the identity or name of a symmetric ciphering algorithm225such as "AES", "AES-SIV", 3DES, Blowfish, etc.
Random number length104gcan specify the length in bits for random numbers or "nonces" generated by both device103and server101, where the nonces can be used to prevent replay attacks and require messages transmitted and received to be unique. Other possibilities exist as well for data within cryptographic parameters104, such as the specification of point compression, encoding rules such as distinguished encoding rules (DER), ASN or CSN syntax notation, padding rules, etc. FIG.2e FIG.2eis a flow chart illustrating exemplary steps for conducting a key exchange using PKI keys in order to derive a shared secret key using ECC point multiplication, in accordance with exemplary embodiments. An ECDH key exchange step218ncan be conducted by a device103, and use the steps for an ECDH key exchange step218, with the additional steps of conducting an ECC point multiplication using numbers N1298and N2299. A key derivation step213ncan be conducted by a server101, and use the steps for a key derivation step213, with the additional steps of conducting an ECC point multiplication using the same numbers N1298and N2299. In other words, (i) ECDH key exchange step218can comprise the depicted ECDH key exchange step218nwhere the numbers for N1298and N2299are equal to the value of "1", and (ii) key derivation step213can comprise the depicted key derivation step213nwhere the numbers for N1298and N2299are also equal to the value of "1". In some exemplary embodiments, (i) an ECDH key exchange step218depicted and described in connection withFIG.2afor device103can comprise the ECDH key exchange step218nwith point multiplication, and (ii) key derivation step213for server101can comprise the key derivation step213nwith point multiplication. The set of parameters104afrom figures above, such as withFIG.2a, can be used with both ECDH key exchange step218nand key derivation step213n. A device103can conduct a key exchange step218n.
At step218n, a device103can conduct a first ECDH key exchange step220and a second ECDH key exchange step220. For a step218n, a first ECDH key exchange step220can be conducted by device103with (i) the server ephemeral public key E1101areceived in a message206cfromFIG.2aand (ii) the recorded device ephemeral private key ed103b, and the resulting point multiplied by the number N1298. Note that the ECC point resulting from the first ECDH key exchange220in the previous sentence will also equal the point X2212amultiplied by the number N1298, where the calculation of point X2212ais depicted and described in connection with a key exchange step212inFIG.2b. Continuing with step218n, a device103can conduct the second ECDH key exchange step220with (i) the network static public key PK.network102arecorded in device103and (ii) the recorded device ephemeral private key ed103b, and the resulting point multiplied by the number N2299. Note that the ECC point resulting from the second ECDH key exchange220in the previous sentence will also be equal to the point X1211amultiplied by the number N2299, where the calculation of point X1211ais depicted and described in connection with key exchange step211inFIG.2b. Continuing with step218n, a device103can conduct an ECC point addition operation on the two points resulting from (i) the first ECDH key exchange step220multiplied by N1298and (ii) the second ECDH key exchange step multiplied by N2299. In other words, a device103can conduct an ECC point addition operation with (i) the value X2212amultiplied by N1298and (ii) the value X1211amultiplied by the value N2299, in order to derive a secret X3′215athat is mutually shared with server101. Exemplary data and numbers can be provided to demonstrate the calculations for (i) key exchange step218nand (ii) key derivation step213n. The exemplary data can comprise decimal numbers for the example ECC PKI keys and exchanged keys described above inFIG.2b.
The first ECDH key exchange220for device103in a step218nusing (i) the exemplary numerical value for device ephemeral private key ed103binFIG.2cand (ii) the exemplary numerical value for server ephemeral public key E1101ainFIG.2c, using parameters104a, will result in the exemplary number or value for secret X2212a, where parameters104acan comprise the elliptic curve of "secp256r1" with key lengths of 256 bit long keys: X: 11490047198680522515311590962599671482029417064351337303313906642805743573119 Y: 27933966560238204731245097943399084523809481833434754409723604970366082021855 For an exemplary value of "3" for N1298, the resulting ECC point multiplication of X2212aby N1298with the value of "3" will result in the following point "3×X2": X: 60742753813277956134086722801387134015749233649228884236187651653814176225536 Y: 58611335288463132268275870174894337145888786863441350683708443176926328298969 The second ECDH key exchange220for device103in a step218nusing (i) the exemplary numerical value for device ephemeral private key ed103binFIG.2cand (ii) the exemplary numerical value for network static public key PK.network102ainFIG.2c, using parameters104a, will result in the exemplary number or value for secret X1211a: X: 78944719651774206698250588701582570633503182903415394243006529481189158194650 Y: 11227712702924684581834935828837489140201820424536062912051086382324589445237 For an exemplary value of "7" for N2299, the resulting ECC point multiplication of X1211aby N2299with the value of "7" will result in the following point "7×X1": X: 97872096638582215727304642389226702208575594850473136075994007337240867556563 Y: 30901113762050629628611789412759390525616003079040872429940997779854500728255 An ECC point addition for the two points "3×X2" and "7×X1" will result in the following point, which can equal the shared secret X3′215afor a key exchange step218n: X: 107460308686621111684900795619695874701132258776388121688297958325813410507748 Y:
104797039912644919810998853512360434930336867141382017165496514798694755489900 The above values for N1298and N2299are exemplary, and any numeric value less than the large prime number p for a named elliptic curve could be selected for both N1298and N2299. Continuing with step218n, derived shared secret key X3′215acan be input into a key derivation function216where the key derivation function216can be equivalent to the key derivation function216depicted and described in connection withFIG.2babove for a key derivation step213. Note that for key derivation steps in the present disclosure, the X coordinate of a derived shared secret can be taken or used as input into the key derivation function. The output of a key derivation function216can comprise both (i) a symmetric ciphering key K1216aand (ii) a MAC key216b. The use of key K1216aand MAC key216bare described in connection with encryption step217and decryption step219inFIG.2c. For a key derivation step213nby server101, server101can conduct the equivalent steps as key derivation step213inFIG.2b, with the point multiplication operations depicted inFIG.2e. Server101can perform an ECC point addition and point multiplication step214ausing the values X1211aand X2212a, as well as the numbers N1298and N2299. The value X1211acould be received by server101from key server102in message206a. Note that the value X1211ais derived by key server102using an ECDH key exchange step211as depicted and described in connection withFIG.2b. A server101could calculate the value for X2212ausing an ECDH key exchange step212inFIG.2b. The value or point X1211acan be multiplied by number N2299. The value or point X2212acan be multiplied by the number N1298. An ECC point addition can be performed on the two ECC points obtained in each of the previous two sentences in order to calculate a value X3′215a. The exemplary calculations for point multiplication on X1211a(with N2299) and X2212a(with N1298) by device103would also be calculated by server101.
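The algebra behind key exchange step 218n and key derivation step 213n can be illustrated compactly in any commutative group. The sketch below uses modular exponentiation as a stand-in for the ECC operations (point addition corresponds to multiplication, and point multiplication by N1 298 or N2 299 corresponds to exponentiation); the modulus, generator, and private values are small illustrative numbers, not secure parameters.

```python
# Stand-in group: integers modulo a Mersenne prime. Illustrative only.
p = 2**127 - 1
g = 3

# Stand-ins for ed 103b, e1 101b, and SK.network 102b:
ed, e1, sk_net = 987654321, 123456789, 555555555
N1, N2 = 3, 7      # the numbers N1 298 and N2 299 from the text

Ed = pow(g, ed, p)          # device ephemeral public key Ed 103a
E1 = pow(g, e1, p)          # server ephemeral public key E1 101a
PK_net = pow(g, sk_net, p)  # network static public key PK.network 102a

# Key server 102 and server 101 side (key derivation step 213n):
X1 = pow(Ed, sk_net, p)     # secret X1 211a, from key server 102
X2 = pow(Ed, e1, p)         # secret X2 212a, from server 101
x3_server = pow(X2, N1, p) * pow(X1, N2, p) % p   # "N1 x X2 + N2 x X1"

# Device 103 side (key exchange step 218n):
first = pow(E1, ed, p)      # equals X2 212a
second = pow(PK_net, ed, p) # equals X1 211a
x3_device = pow(first, N1, p) * pow(second, N2, p) % p
```

Both sides compute g^(ed·e1·N1 + ed·sk·N2), so the blinded combination yields the same X3′ 215a whether it is formed from (X1, X2) at server 101 or from the two ECDH results at device 103.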
In other words, the exemplary data and numbers depicted above for the calculations by device103could also be calculated by server101in order to mutually derive the same value for X3′215a. The mutually derived value for X3′215acan be input into key derivation function216in order to calculate a symmetric ciphering key K1216aand a MAC key216b, which can comprise the same numbers as calculated by device103in a step218n. The source of values for N1298and N2299for both device103and server101could be mutually obtained in several ways. N1298and N2299could be recorded and shared with a set of cryptographic parameters104, such that selecting a subset of the cryptographic parameters104acould determine the values or numbers to use for N1298and N2299. In another exemplary embodiment, N1298and/or N2299could comprise pre-shared secret values or keys, such that device103receives the values in a secure manner before sending message203, such as, but not limited to, recording the values at functionally the same time network static public key PK.network102ais recorded in device103. Server101could receive the values N1298and N2299in a secure manner, such as from key server102in a secure session221. Other possibilities exist as well for a device103and a server101to obtain the numbers N1298and N2299without departing from the scope of the present disclosure. In exemplary embodiments, the numbers N1298and N2299can either be equal, or the numbers could comprise different values. A device103and a server101could also conduct a number derivation step297in order to obtain the numbers N1298and N2299, which is also depicted inFIG.2e. For a number derivation step297, a static public key can be input into a secure hash algorithm291, such as SHA-256. The static public key can be any public key shared between a device103and server101(e.g. where one node records the public key and the other node records the corresponding private key).
In exemplary embodiments depicted inFIG.2e, the public key for a number derivation step297can comprise the network static public key PK.network102a, where a server101can derive or calculate the network static public key can be derived from the network static private key SK.network102busing parameters104. Other exemplary public keys shared between device103and server101can comprise any of public keys Ed103a, E1101a, D1103c, etc. The node recording the corresponding private key can calculate the public key using the parameters. The output of the secure hash algorithm291can be input into a select digits function292. The select digits function292could take a subset of the hash value resulting from hash291, such as leading digits for N1298and trailing digits for N2299. Or, a number N1298could be derived from a select digits function292over a hash291of the X coordinate of a public key and the number N2299could be derived by a select digits function292over a hash291of the Y coordinate of the same public key. Other subsets or logic for the select digits function292using the hash value from hash algorithm291can be used as well, without departing from the scope of the present disclosure. The output of the select digits function292can comprise the value N1298and N2299. Since both device103and server101and/or network105can securely share PK.network102a, then the same calculations for a number derivation step297can be performed by the nodes in order to mutually obtain the numbers N1298and N2299. The values for N1298and N2299can be used by (i) device101when conducting the key exchange step218nand (ii) server101when conducting the key derivation step213n. FIG.3a FIG.3ais a simplified message flow diagram illustrating an exemplary system with exemplary data sent and received by a mobile device, a g node b, and a key server, in accordance with exemplary embodiments. System301can include a mobile device103′, a “next generation node b”101′, and a key server102. 
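Returning briefly to the number derivation step 297 above, one of the described variants (hash 291 over the X coordinate for N1 298, over the Y coordinate for N2 299, followed by a select digits function 292) might be sketched as follows; the subset lengths are assumptions:

```python
import hashlib

def select_digits(hash_value: bytes, leading: bool, length: int = 16) -> int:
    """Select digits function 292: take a subset (leading or trailing bytes)
    of the hash 291 output. The 16-byte subset length is an assumption."""
    part = hash_value[:length] if leading else hash_value[-length:]
    return int.from_bytes(part, "big")

def derive_numbers(pub_x: int, pub_y: int):
    """Number derivation step 297 over a shared static public key: N1 298
    from a hash of the X coordinate, N2 299 from a hash of the Y coordinate."""
    hx = hashlib.sha256(pub_x.to_bytes(32, "big")).digest()   # hash 291
    hy = hashlib.sha256(pub_y.to_bytes(32, "big")).digest()
    return select_digits(hx, leading=True), select_digits(hy, leading=False)

# Device 103 and server 101 hash the same PK.network 102a coordinates, so
# each node independently obtains the same pair (N1 298, N2 299).
nums_device = derive_numbers(0xAA55, 0x1234)   # illustrative coordinates
nums_server = derive_numbers(0xAA55, 0x1234)
assert nums_device == nums_server
```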
Mobile device103′ can comprise a smart phone, a device for the "Internet of Things" (IoT), a tablet with a modem, or possibly a fixed station device that connects with a 5G or 6G network. Mobile device103′ can operate similar to a device103, with the additional functionality of connecting to a wireless network, where the network supports 3GPP standards and can also comprise a wide area network such as a public land mobile network. A "next generation node b"101′ (depicted as gNb101′) can contain the equivalent electrical components as those depicted for a server101inFIG.1b, except gNb101′ can also operate as a base transceiver station to send and receive data wirelessly with mobile device103′. The key server102could operate as part of an Authentication Server Function (AUSF) or equivalent functionality. Note that the distributed nature of the ECDH key exchanges as depicted inFIG.2aandFIG.2bandFIG.2chas benefits for the wireless WAN architecture inFIG.3a, since the key SK.network102bfor a mobile device103′ does not need to be recorded or operated by a gNb101′. In exemplary embodiments, a mobile device103′, a gNb101′, and a key server102can conduct a step222′, where a step222′ can comprise primarily the step222as depicted and described inFIG.2a. There can be some differences between a step222and a step222′. Note that before the steps222′ depicted inFIG.3a, a mobile device103′ and a gNb101′ could conduct steps to establish communications between the nodes, such as recording parameters for RF communications by the mobile device103′ in a SIM card or eUICC. A mobile device103′ could also conduct steps to authenticate the network105operating a gNb101′. For a step222′, a mobile device103′ can send message203with the device ephemeral public key Ed103aand also an obfuscated identity for device103′, where the obfuscated identity can also comprise a temporary identity for device103.
A gNb101′ can use the obfuscated identity to track the device103from a potential plurality of devices103communicating over a wireless network. The gNb101′ can forward the device identity and the received device ephemeral public key to the key server102. The key server102can look up a unique key102vfor device103for the network static private key102bcorresponding to the network static public key102arecorded by the device103. The key server102can calculate value X1211aas depicted inFIG.2b, and send the gNb101′ the value X1211aover a secure session. The gNb101′ can conduct an ECDH key exchange step212and calculate value X2212a, using the received device ephemeral public key Ed103aand the derived server ephemeral private key e1101b. The gNb101′ can calculate the value X3215via ECC point addition over X1211aand X2212a. The gNb101′ can calculate a symmetric ciphering key K1216ausing the value X3215and a KDF216. The gNb101′ can send the mobile device103′ the derived server ephemeral public key E1101in a message206cfrom a step222. Note that some data within ciphertext217bcan be omitted from a message206cin a step222′, where step222′ is depicted inFIG.3aand comprises equivalent steps as a step222inFIG.2a. The mobile device103′ can receive the message206cfrom a step222′. The mobile device103′, gNb101′, and key server102can conduct a step223, where a step223was depicted and described in connection withFIG.2aabove. The mobile device103′ can send gNb101′ a message210awith ciphertext217d, where ciphertext217dcan include a device identity ID.device103ias plaintext encrypted in the ciphertext217d. The ciphertext217dcan be encrypted with the derived symmetric ciphering key K1216aand a symmetric ciphering algorithm225, where key K1216awas derived by mobile device103′ in a step222′. 
The identity for the mobile device103ican comprise a subscription permanent identifier (SUPI), and by transmitting the SUPI within a ciphertext217d, the SUPI can remain confidential and not transmitted in the clear through a wireless network. Other possibilities for the use of a step222′ and a step223between a mobile device103′ and gNb101′ exist without departing from the scope of the present disclosure. FIG.3b FIG.3bis a simplified message flow diagram illustrating an exemplary system with exemplary data sent and received by a client, a server, and a key server, in accordance with exemplary embodiments. System302can include a client103′, a server comprising server101, and a key server102. In exemplary embodiments, client103′ can comprise a client using security steps as described by transport layer security (TLS) sessions version 1.3 and also subsequent and related versions of IETF RFC standards. Client103′ can also comprise a client using security steps as described in datagram transport layer security (DTLS) RFC 6347 and subsequent versions that incorporate ECDH key exchanges. Although depicted inFIG.3bas a client103′, the client103′ could also comprise a device103, where the device103can conduct the steps of a client103′ at the networking, transport, and application layer of the traditional Open Systems Interconnection (OSI) model. Client103′ can comprise a computing device that records a network static public key PK.network102a. Note that TLS version 1.3 and DTLS version 1.3 contemplate that the client and a server can use ephemeral ECDH key exchanges (one on the client and one on the server) in order to establish a mutually derived secret shared key for a symmetric ciphering algorithm. The difference between (i) a client103′ (which can comprise a device103supporting TLS or DTLS standards) and (ii) a client for TLS or DTLS standards can be that client103′ can record a network static public key PK.network102a.
As depicted inFIG.1c, the network static public key PK.network102acould comprise either (i) a shared key102zacross a plurality of different devices103(or clients103′), or (ii) a unique key102v, where the network static public key PK.network102ais a unique number or string or point for client103′. The key PK.network102acould be received by client103′ in a secure manner before a client103′ conducts a step222with server101. In exemplary embodiments, PK.network102acould be received in the form of a certificate with PK.network102afrom a prior TLS or DTLS session before client103′ begins the TLS or DTLS session depicted inFIG.3b. Or, PK.network102acould be recorded with a set of certificate authority certificates stored with installation of an operating system for device103. The use of a network static public key PK.network102aby client103′ in a step222to conduct an ECDHE key exchange with server101can have many benefits. The standard handshake as currently proposed for TLS version 1.3 as of June 2018 assumes that a client103′ and a server101have no prior relationship. However, for many instances of communication between a client103′ and a server101, the client103′ may have previously communicated with another server on a network105other than server101. For example, with web browsing a web browser client such as a client103′ will often revisit the same web sites over time, such as a first web site for social networking, a second web site for a search engine, a third web site for news, etc. A TLS or a DTLS session could utilize the fact that the same sites are often re-visited in order to increase security, using the depicted steps of222and223for a client103′, server101, and key server102. Steps222inFIG.3bcan comprise the set of steps222depicted and described in connection withFIG.2a, and steps223inFIG.3bcan also comprise the set of steps223depicted and described in connection withFIG.2a. 
Before conducting step222inFIG.3b, a client103′ could receive key PK.network102afrom another server in network105, such as a different web server providing functionality equivalent to server101. PK.network102acould also be stored or recorded by a client103′ along with a set of certificate authority certificates (including root certificates) for an operating system of a device operating the client103′. Or, PK.network102acould be securely received in a previous TLS or DTLS session, such as receiving PK.network102ain a certificate verified by client103′ before client103′ conducts a step222inFIG.3b. The certificate could be verified by client103′ using a certificate authority root certificate, including verification through any intermediate certificate authority certificates. The client103′ could record the network static public key PK.network102ain a table103talong with parameters104aassociated with PK.network102a. In exemplary embodiments, a table103tcould include certificates such as X.509 v3 certificates for the network static public keys PK.network102a, where the certificates include digital signatures from a certificate authority. The key PK.network102acould also be recorded with a URL or domain name (e.g. a server name indication), such that the client103′ would use the key PK.network102awhen establishing a subsequent TLS or DTLS session with server101, where server101uses the recorded URL or domain name. Further, server101could be configured so that any key Ed103areceived from IP network107on an IP address and/or port number used by server101would be forwarded to key server102, where key server102could record and operate with the SK.network102bcorresponding to the public key for PK.network102arecorded by client103′. Server101could also operate such that a URL is associated with a key server102and/or PK.network102a, such that a call or request of the URL could be used to select the key server102and/or PK.network102a. 
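The client-side bookkeeping described above — a table 103t mapping a server name indication to the recorded PK.network 102a, its parameter set 104a, and an optional certificate — can be sketched minimally. All field names and values below are illustrative assumptions, not from the specification:

```python
# Sketch of table 103t: the client 103' looks up a recorded network static
# public key by server name before starting a TLS/DTLS session.
table_103t = {
    "api.example.com": {                       # URL / server name indication
        "pk_network": "pk-network-102a-bytes", # PK.network 102a (placeholder)
        "params": "secp256r1",                 # parameters 104a
        "certificate": "x509-v3-certificate",  # optional CA-signed certificate
    },
}

def key_for_session(server_name: str):
    """Return the recorded PK.network 102a for a server, or None so the
    client can fall back to an ordinary (unauthenticated-ECDHE) handshake."""
    entry = table_103t.get(server_name)
    return entry["pk_network"] if entry else None

assert key_for_session("api.example.com") is not None
assert key_for_session("unknown.example.org") is None
```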
For a step222, a client103′ can (i) derive a device ephemeral public key Ed103aand private key ed103busing parameters104astored with PK.network102aand (ii) send server101a message203. The message203can include the key Ed103aand the set of cryptographic parameters104aassociated with Ed103a. In some exemplary embodiments client103′ implements TLS or DTLS, and message203can optionally omit a device identity ID.device103i. Server101could operate in a manner such that (i) Ed103ais forwarded to key server102, and (ii) server101derives an ephemeral PKI key pair. Key server102can conduct an ECDHE key exchange as depicted for a step222inFIG.2ausing a step211in order to calculate the secret value X1211a. Key server102can send server101the value X1211a. Server101can use the value X1211a, along with the derivation of a second secret X2212ain order to calculate a symmetric ciphering key K1216a, using the key derivation step213with ECC point addition214over X1211aand X2212a. Thus, by using the embodiment depicted inFIG.3b, a transport layer security session can have security increased, where (a) the ECDHE key exchange contemplated by TLS v1.3 (which would be key exchange212inFIG.2b) can also add (b) the additional key exchange step211aby a key server102. Note that the mutual derivation of symmetric ciphering key K1216aby client103′ and server101can comprise a one-way authentication of server101, since server101can only derive the key K1216aif server101operates in a network105that also records and operates with key SK.network102b. The server101can send the client103′ the derived server ephemeral public key E1101ain a message206cfrom a step222. Key E1101acould be derived by a step (ii) in the above paragraph. Message206ccould comprise a “Server Hello” according to TLS v1.3 in the document “draft-ietf-tls-tls13-28”. 
The ciphertext in the Server Hello can be ciphertext217bas depicted inFIG.2a, where the ciphertext217bis encrypted with the mutually derived symmetric ciphering key K1216a. Note that a step222forFIG.3bincreases security for a TLS session, since an active attacker could operate as a "man in the middle" between a real client or "true client" and the server101, where the "man in the middle" could derive its own key Ed103aand substitute that for the real key Ed103afrom the real client or "true client". Without use of a PK.network102a, a "man in the middle" (deriving and substituting a key Ed103a) could (a) mutually derive a symmetric ciphering key similar to K1216awith server101and then (b) receive and decrypt the ciphertext217b. However, the use of PK.network102acan stop a "man in the middle" attack since a "man in the middle" cannot derive key K1216awithout also recording the SK.network102b, which can remain secret and not available to the "man in the middle". The client103′ can receive the message206cfrom a step222from a server101. The client103′, server101, and key server102can conduct a step223, where a step223was depicted and described in connection withFIG.2aabove. The client103′ can derive the same key K1216ausing a step218and the PK.network102a. The client103′ can decrypt ciphertext217busing key K1216a. The client103′ can process the plaintext data, such as recording a certificate for server101(e.g. cert.server101cfromFIG.2a), and verifying a signature101sfrom server101. The client can also read a random number transmitted in the ciphertext217band create a digital signature over the random number. The client can encrypt a ciphertext217dwith data to respond to server101. The ciphertext217dcan be encrypted with the derived symmetric ciphering key K1216aand a symmetric ciphering algorithm225, where key K1216awas derived by client103′ in a step223.
Other possibilities exist for the use of a step222and a step223between a client103′ and server101without departing from the scope of the present disclosure. For the exemplary embodiment depicted inFIG.3bfor support of TLS and DTLS secured data sessions, a message203can comprise a "client hello" message, a message206ccan comprise a "server hello" message, and message210acan comprise a "finished" message from the client103′. For exemplary embodiments, message203as a "client hello" message can omit a device identity103i(such as a permanent identifier for client103′ or device103), but the "client hello" message could include other identifying information for client103′ such as (i) an originating IP address and source port number for message203, (ii) an obfuscated and/or temporary identity such as a random number for a session, and other possibilities exist as well without departing from the scope of the present disclosure. In addition, embodiments depicted inFIG.3bsolve a significant challenge for resource constrained devices to fully authenticate a certificate cert.server101c. There could be many layers of intermediate certificates between cert.server101cand a certificate authority root certificate stored in device103. Checking for certificate validity for all intermediate certificates and for revocation or OCSP signatures and/or stapling could add many levels of signature verifications. ARM reported a Cortex M4 processor with 32 bits and operating at 84 MHz requires ~420 ms for a single ECDSA signature verification (secp521r1) ("Performance of State-of-the-Art Cryptography on ARM-based Microprocessors", Jul. 21, 2015). There could be 8 or more signatures to be verified for a full certificate chain verification of cert.server101cand related OCSP signatures. A device could conduct the single authenticated key exchange step218in less than 15% of the time and power required for the full, traditional certificate chain verification.
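The arithmetic behind that comparison can be checked directly (the per-verification figure is from the cited ARM measurement; the eight-signature chain depth is the text's own example):

```python
# Back-of-envelope check of the timing claim above.
ms_per_ecdsa_verify = 420             # secp521r1 verify, Cortex M4 @ 84 MHz
signatures_in_chain = 8               # example full-chain + OCSP depth
chain_ms = ms_per_ecdsa_verify * signatures_in_chain   # full chain: 3360 ms
key_exchange_ms = chain_ms * 15 // 100                 # "<15%" bound: 504 ms
assert chain_ms == 3360 and key_exchange_ms == 504
```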
Also, there are reduced chances for errors due to unsupported parameters for (x) a single authenticated ECDH key exchange step compared to (y) multiple certificate verification steps with OCSP verification. Consequently, the communications for a TLS session or DTLS session can remain secured more efficiently using a step222and step223, while recording and using (i) SK.network102bwith network105and (ii) PK.network102awith client103′, compared to traditional TLS or DTLS implementations with multiple layers of certificate authorities through root certificates. FIG.3c FIG.3cis a simplified message flow diagram illustrating an exemplary system with exemplary data sent and received by an initiator, a responder, and a key server, in accordance with exemplary embodiments. System303can include an initiator103′, a responder101′ and a key server102. Initiator103′ can comprise a computing device103, with the specific additional functionality of an initiator according to the DPP Specification Version 1.0 from the WiFi Alliance. Responder101′ can comprise a device with (i) electrical components similar or equivalent to a server101depicted inFIG.1babove, and (ii) the specific additional functionality of a responder according to the DPP Specification Version 1.0 of the WiFi Alliance. For example, initiator103′ and responder101′ can communicate via a WiFi network on a LAN between the two devices, which could also comprise the IP network107. Responder101′ can operate in a networked configuration to communicate with key server102via a private network107aor a secure session221as depicted inFIG.2a. In some embodiments, responder101′ can communicate with key server102via an IP network107, where the use of secure session221can create a private network107abetween responder101′ and key server102. An initiator103′, responder101′ and a key server102can conduct a step222, where a step222is depicted and described in connection withFIG.2aabove.
An initiator103′, responder101′ and a key server102can then conduct a step223, where a step223is depicted and described in connection withFIG.2aabove. As depicted inFIG.3c, several PKI keys within a DPP specification version 1.0 can have corresponding keys for a step222and step223. Note that additional steps in addition to those depicted inFIG.3ccan be conducted by an initiator103′ and a responder101′, such as responder101′ deriving PKI keys in a step101xfromFIG.1aand also conducting additional ECDH key exchanges in order to derive a symmetric ciphering key ke in addition to symmetric ciphering key K1216a. In other words, initiator103′ and responder101′ could perform additional ciphering than that depicted for a step222inFIG.2a, but for exemplary embodiments such as that depicted inFIG.3cthe initiator103′ and responder101′ could conduct at least the steps depicted in order to mutually derive a symmetric ciphering key K1216aand use the key to create a ciphertext217bby responder101′ and decrypt the ciphertext217bby initiator103′. As depicted inFIG.3c, the device ephemeral public key Ed103acan comprise the initiator protocol public key Pi303a. The device ephemeral private key ed103bcan comprise the initiator protocol private key pi303b. The server ephemeral public key E1101acan comprise the responder protocol public key Pr301a. The server ephemeral private key e1101bcan comprise the responder protocol private key pr301b. The network static public key PK.network102acan comprise the responder bootstrap public key302a. The network static private key SK.network102bcan comprise the responder bootstrap private key302b. As described below, other steps fromFIG.2acan be equivalent to those depicted inFIG.3c. For a message203sent from initiator103′ to responder101′, the message203with the key Pi303acan also include a ciphertext. The message203in a step222can comprise a “DPP Authentication Request” message from the DPP v1.0 standard. 
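For reference, the PKI key correspondences listed above can be gathered into a single lookup (this disclosure's reference names on the left, DPP v1.0 names on the right):

```python
# Key correspondences between this disclosure and DPP Specification v1.0.
dpp_key_map = {
    "Ed 103a (device ephemeral public)":       "Pi 303a (initiator protocol public)",
    "ed 103b (device ephemeral private)":      "pi 303b (initiator protocol private)",
    "E1 101a (server ephemeral public)":       "Pr 301a (responder protocol public)",
    "e1 101b (server ephemeral private)":      "pr 301b (responder protocol private)",
    "PK.network 102a (network static public)": "Br 302a (responder bootstrap public)",
    "SK.network 102b (network static private)": "br 302b (responder bootstrap private)",
}
assert len(dpp_key_map) == 6
```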
Responder101′ can communicate with key server102and receive the value X1211a. Responder101′ can also derive the server ephemeral public key E1101(comprising the responder protocol public key Pr301a) and the server ephemeral private key e1101b(comprising the responder protocol private key pr301b). The Responder101′ can use KDF216to convert X1211ainto a symmetric encryption key (which can be different than key K1216afrom Figures above). Responder101′ can use the symmetric encryption key from X1211ato decrypt the ciphertext with a message203. Responder101′ can then conduct the key exchange step212and step213, along with modified versions of KDF216in order to derive a key ke. Responder101′ can encrypt data with the key ke and send initiator103′ a message206cwith the encrypted data. The message206ccan comprise a “DPP Authentication Response” message from the DPP v1.0 standard. Initiator103′ can then send responder101′ a “DPP Configuration Request” message, which could comprise message210ain a step223as depicted inFIG.2a. A benefit for the use of a step222and step223for an initiator103′ and a responder101′ is that the responder bootstrap private key br302bcan remain securely recorded in a network105and does not need to be recorded and operated by responder101. In this manner, the responder bootstrap public key Br302acan be freely shared with multiple different initiators103′, including recording the key Br302ain a plurality of initiators103′ in the form of a shared key102zas depicted inFIG.1c. The use of a shared key102zwith multiple different initiators103′ (while keeping SK.network102bor key br302bsecurely recorded in a key server102) simplifies the distribution of key Br302ato multiple different initiators103′. For exemplary embodiments, the initiators103′ could have a key Br302arecorded during manufacturing or distribution of the computing device operating initiator103′. 
In other words, a device manufacturer upon device manufacturing with initiator103′ may not know which responder101′ may communicate with initiator103′ during a subsequent DPP session. However, a manufacturer of a device with initiator103′ could record a plurality of different keys Br302afor different networks105(similar to different keys PK.network102ain a table103tinFIG.1c), and in this manner initiator103′ can have a higher probability of successfully using a pre-recorded key Br302a(or key PK.network102a) in order to conduct a DPP session without requiring a separate or different additional step of acquiring the key Br302a"out of band". Thus, the use of the embodiment for an initiator103′ and a responder101′ can simplify the use and deployment of DPP sessions, while simultaneously increasing the security of the session, since the responder bootstrap private key br302b(in the form of SK.network102b) can remain securely recorded within a network105on a key server102. CONCLUSION Various exemplary embodiments have been described above. Those skilled in the art will understand, however, that changes and modifications may be made to those examples without departing from the scope of the claims.
DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
A system for committing event data is disclosed. The system comprises an interface and a processor. The interface is configured to receive input data and a client key. The processor is configured to generate an Nth sequence number; determine an Nth event hash using the input data, an N-1signature, and the Nth sequence number; encrypt the Nth event hash with the client key to generate an Nth signature; generate an Nth event from the input data, the N-1signature, the Nth sequence number, and the Nth signature; in response to an aggregate N-1of one or more prior events being valid, apply Nth event onto aggregate N-1. In some embodiments, the aggregate N-1comprises an aggregate that has N-1events, where N is an integer. A system for querying a state of aggregate N is disclosed. The system comprises an interface and a processor. The interface is configured to receive request to query the state of the aggregate N and receive a client key. The processor is configured to rehash each event input data of the aggregate N with its corresponding sequence number and a prior event signature to generate a hash value; reencrypt the hash value using the client key to create a check signature; determine whether the check signature is equal to the prior event signature; in response to each check signature being equal to the prior event signature, replay the events of the aggregate N to generate and provide the state of the aggregate N; and in response to a check signature not being equal to the prior event signature, indicate that the aggregate N is not valid. In some embodiments, the aggregate N comprises an aggregate that has N events, where N is an integer. A system for creating a projection is disclosed. The system comprises an interface and a processor. The interface is configured to receive request to create a projection up to a target event in an aggregate N and receive a client key. 
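The commit flow above — generate an Nth sequence number, hash the input data together with the N-1 signature and the sequence number, then protect that hash with the client key to form the Nth signature — can be sketched as follows. HMAC-SHA256 stands in for "encrypt the Nth event hash with the client key"; the real key operation would use a key held in a separate key management service, so this substitution is an assumption:

```python
import hashlib, hmac, json

def commit_event(aggregate: list, input_data: dict, client_key: bytes) -> list:
    """Append event N onto aggregate N-1, per the commit flow above."""
    seq = len(aggregate) + 1                                   # Nth sequence number
    prev_sig = aggregate[-1]["signature"] if aggregate else ""  # N-1 signature
    payload = json.dumps([input_data, prev_sig, seq], sort_keys=True).encode()
    event_hash = hashlib.sha256(payload).hexdigest()           # Nth event hash
    signature = hmac.new(client_key, event_hash.encode(),
                         hashlib.sha256).hexdigest()           # Nth signature
    # Generate the Nth event and apply it onto aggregate N-1 (append-only).
    aggregate.append({"data": input_data, "seq": seq,
                      "prev_sig": prev_sig, "signature": signature})
    return aggregate

events = commit_event([], {"type": "AccountBalanceUpdated", "delta": 100},
                      b"client-key")
commit_event(events, {"type": "ClientNotified"}, b"client-key")
assert events[1]["prev_sig"] == events[0]["signature"]   # chain is linked
```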
The processor is configured to rehash each event input data of the aggregate N with its corresponding sequence number and a prior event signature to generate a hash value; reencrypt the hash value using the client key to create a check signature; determine whether the check signature is equal to the prior event signature; in response to each check signature being equal to the prior event signature, replay the events of the aggregate N to generate and provide the projection; and in response to a check signature not being equal to the prior event signature, indicate that the aggregate N is not valid. In some embodiments, the aggregate N comprises an aggregate that has N events, where N is an integer. A system for auditing event data is disclosed. The system comprises an interface and a processor. The interface is configured to receive an audit query request and a client key. The processor is configured to determine whether the audit query request is valid; determine whether a chain of events is stored in an audit store, wherein the chain of events is associated with the audit query request; and provide data for the audit query request in response to determining that the chain of events is stored in the audit store. The disclosed system provides proof that the events of a particular object have not been altered or tampered with in anyway using cryptography. To ensure the data integrity of a large database, data can be stored as a sequential ledger of immutable events (e.g., as in blockchain technology). Before exposing this data to an end user (i.e., a client), the ledger of events is verified (e.g., using cryptography) to provide proof that the data comprising a particular event or series of events (i.e., an object), have not been altered or tampered with in anyway in order to resolve the object's true and current application state (i.e., the entire history of the events comprising a particular object). 
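The query and projection flows above share the same validation core: rehash each event's input data with its sequence number and the prior event signature, re-protect the hash with the client key to form a check signature, compare against the stored signature, and only then replay the events. A self-contained sketch (HMAC-SHA256 again stands in for the "reencrypt" step, and the replay logic is a deliberately trivial last-write-wins example):

```python
import hashlib, hmac, json

def check_signature(event: dict, client_key: bytes) -> str:
    """Rehash one event's input data with the prior signature and sequence
    number, then re-protect the hash with the client key."""
    payload = json.dumps([event["data"], event["prev_sig"], event["seq"]],
                         sort_keys=True).encode()
    rehash = hashlib.sha256(payload).hexdigest()
    return hmac.new(client_key, rehash.encode(), hashlib.sha256).hexdigest()

def replay_if_valid(aggregate: list, client_key: bytes):
    """Validate every link of the chain; if valid, replay events in sequence
    to resolve the current state, else return None (aggregate not valid)."""
    prev_sig = ""
    for event in aggregate:
        if event["prev_sig"] != prev_sig:
            return None
        if check_signature(event, client_key) != event["signature"]:
            return None
        prev_sig = event["signature"]
    state = {}
    for event in aggregate:            # trivial replay: last write per field wins
        state[event["data"]["field"]] = event["data"]["value"]
    return state

# Build a small two-event chain, then demonstrate tamper detection.
key, events = b"client-key", []
for data in ({"field": "balance", "value": 100},
             {"field": "balance", "value": 250}):
    ev = {"data": data, "seq": len(events) + 1,
          "prev_sig": events[-1]["signature"] if events else ""}
    ev["signature"] = check_signature(ev, key)
    events.append(ev)

assert replay_if_valid(events, key) == {"balance": 250}
events[0]["data"]["value"] = 999       # a bad actor edits a past event
assert replay_if_valid(events, key) is None
```

A projection up to a target event would run the same validation and simply stop the replay loop at the target sequence number.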
An event is a data entry that represents a change in an application state (i.e., a mutation). In a financial system, examples of events include TradeOrderFilled, ClientNotified, AccountBalanceUpdated, or any other appropriate type of event. An application state does not change until a proposed mutation is ‘persisted’ into a database (e.g., stored in an event store). In some embodiments, the database comprises an append-only write database. In computing, a persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not update the object in-place, but instead always yield a new updated object. In event sourcing, every change in state of an object is stored as an event. Events are stored in sequence and can be “replayed” in order to derive the current state of an application. The capabilities that this persistence model affords are:Auditability: In some embodiments, events can be stored with other metadata about the change, such as who performed the action and when it was performed. This, effectively, becomes a robust audit log that cannot be circumvented without affecting the active state of the application.Complete Rebuild: In various embodiments, the database system (i.e., the system for committing event data) may decide to cache or pre-compute the current state of the application for traditional query purposes. We call this a ‘projection’ of the event store. These projections are “copies” of the current state and should not be treated as a source of truth. A litmus test to see if event sourcing is being done correctly is the ability to discard these projections without any loss of data. This allows data to be represented in different ways (e.g., row/column, graph, key/value, etc.) to optimize performance for different access patterns, all without touching the actual source of truth. 
In some embodiments, the projection includes incorporating modification and/or deletions as of an effective moment/date of an entry in the database system even if the entry is later in the entry log (e.g., the entry or actual date of the entry is after the effective moment/date).Temporal Queries: In some embodiments, certain use cases will want to execute based on a past state of the system's data (e.g., re-running a report as of a date in the past). For example, the persistence model allows rebuilding the state as of an arbitrary date in the past and running business logic against it.Event Replay: In some embodiments, event replay provides for looking at a past event, modifying it, and replaying after to see the consequences of that change. For example, this can be helpful for hypothetical scenario modeling or to understand the impact of a bug or flaw in the system. To ensure that an application state has not been tampered with (e.g., by a malicious actor injecting a false event within the event chain), the disclosed system uses a combination of hashing and encryption. The characteristics of a good hashing algorithm are that it is deterministic, fixed length, and sufficiently difficult to calculate to prevent brute force attack (e.g., SHA-1, SHA-2, SHA-256, SHA5, etc.). In some embodiments, the hash is salted with a random number to provide additional security against precomputation attacks, such as Rainbow Table attacks. Once an event is hashed, the resultant fixed-length hash value represents all of the details about that event. For a next event, the hash value from the previous N-1event is inserted into the new event (N) and a new hash is generated for the event. The new hash implicitly includes the hash from the prior event. That means if a malicious hacker wanted to alter one of the events within an event chain, they would also have to alter every event after that to keep the hashing chain consistent. 
This adds a layer of complexity that can slow down a bad actor. For added security, the disclosed system adds an additional step by using an encryption algorithm to encrypt the hash that is stored with the event. The encryption algorithm uses a private key which is stored in a separate secured system (e.g., a key management service). In some embodiments, the key is accessible only by the client. No other key can be used to get the encrypted hashes to match. To perform an attack, a malicious hacker would have to get the underlying non-encrypted hashes to match AND successfully hack into the key management service. In some embodiments, the key management service provides access logging that is sent to a secure account and/or any other appropriate level of security. As legacy systems are becoming increasingly difficult to protect from bad actors, internal or external, the disclosed system improves upon the current art by providing both a high level of data granularity and a fully-auditable data model, proving that the events of a particular object have not been altered or tampered with in any way. For example, compliance with regulations (e.g., within the financial services industry) will be easier to demonstrate by showing a higher level of data integrity. In some embodiments, the system improves the computer by providing security for stored entries in a database. The data in a stored entry is secured using the system to prevent alteration and tampering of the stored entry by linking the stored entry to other entries. This makes the computer better by enabling proof of the integrity of the data used for processing. In various embodiments, the disclosed system is used to build one or more Application Programming Interfaces (APIs). In some embodiments, the APIs are focused on a particular domain (e.g., the financial services domain).
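The keyed step described above can be sketched as follows. The disclosed system encrypts each event hash with a client-held private key from a key management service; as a self-contained standard-library stand-in, this sketch uses an HMAC, which gives the same property relevant here: without the secret key, an attacker cannot produce matching signatures even if the underlying hashes match. The key value and function names are illustrative.

```python
import hashlib
import hmac

# Illustrative only: in the described system, the key lives in a separate
# key management service and is accessible only by the client.
CLIENT_KEY = b"client-secret-key"

def sign_hash(event_hash: str, key: bytes = CLIENT_KEY) -> str:
    """Produce a keyed signature over an event hash (HMAC-SHA256 stand-in
    for encrypting the hash with the client's private key)."""
    return hmac.new(key, event_hash.encode(), hashlib.sha256).hexdigest()

def verify_signature(event_hash: str, signature: str, key: bytes = CLIENT_KEY) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_hash(event_hash, key), signature)

h = hashlib.sha256(b"event payload").hexdigest()
sig = sign_hash(h)
assert verify_signature(h, sig)
# A signature made with any other key will not verify:
assert not verify_signature(h, sign_hash(h, b"wrong-key"))
```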
A domain (or ‘application domain’) defines a set of common requirements, terminology, and functionality for any software program constructed to solve a problem in that area (e.g., how to securely track and query financial transactions). In some embodiments, the disclosed system utilizes Domain-Driven Design (DDD) to design and generate one or more APIs (e.g., for a client). DDD is an approach to software development for complex needs by connecting the implementation to an evolving model. DDD is predicated on: (i) placing the primary focus on the core domain and domain logic; (ii) basing complex designs on a model of the domain; and (iii) initiating a collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems. A key outcome of practicing DDD is a common framework for discussing a domain between domain experts, product managers, and developers. A common framework allows product engineers to spend most of their time building and documenting a domain model (e.g., an object-oriented Python data model), and as little time as possible converting that model into code. In some embodiments, the common framework utilizes a data modeling library. In some embodiments, the data modeling library is used to help translate rudimentary concepts and client requirements (i.e., an ‘offline domain model’) into API code (e.g., Python code). In various embodiments, the data modeling library is used to enrich the domain model with metadata, both general and domain-specific (e.g., additional information about the field types, user interface hints, validation information, etc.). In some embodiments, a persistence library is used to simplify generating API code (e.g., by concealing the complexity of event sourcing).
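A hedged sketch of how a data modeling library might enrich an object-oriented Python domain model with general and domain-specific metadata (field types, user interface hints, validation information). The class, field names, and metadata keys are hypothetical illustrations, not the actual library's API.

```python
from dataclasses import dataclass, field, fields

@dataclass
class TradeOrder:
    # Per-field metadata carries UI hints and validation information that a
    # data modeling library could use when generating API code.
    symbol: str = field(metadata={"ui_hint": "ticker", "max_length": 8})
    quantity: int = field(metadata={"min": 1, "ui_hint": "integer spinner"})
    price: float = field(metadata={"min": 0.0, "unit": "USD"})

def validate(order: TradeOrder) -> list:
    """Produce validation errors by consulting each field's metadata."""
    errors = []
    for f in fields(order):
        value = getattr(order, f.name)
        meta = f.metadata
        if "min" in meta and value < meta["min"]:
            errors.append(f"{f.name} below minimum {meta['min']}")
        if "max_length" in meta and len(value) > meta["max_length"]:
            errors.append(f"{f.name} exceeds {meta['max_length']} characters")
    return errors
```

Keeping validation rules in field metadata rather than in hand-written code is one way a common framework lets engineers spend their time on the model rather than on converting it into code.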
A persistence library together with a request processing scheme (i.e., a persistence system) allows developers to write additional persistence-time logic into one place, instead of across a multiplicity of services. In various embodiments, the persistence system is also used to store logs, metrics, and/or encrypt data, or any other appropriate data. FIG.1is a block diagram illustrating an embodiment of a system for committing event data, for querying the state of an aggregate, and for creating a projection. In the example shown, Database System100is connected to Client104and Application Development System106via Network102. In some embodiments, Network102comprises a communication network. In various embodiments, Network102comprises wired and/or wireless communication networks comprising standard, hybrid, and/or proprietary networks (e.g., a local area network, a wide area network, a virtual private network, etc.), proxy servers, and data centers. In some embodiments, Network102comprises a Content Distribution Network. In some embodiments, Network102comprises components of a cloud computing platform—for example, comprising a front end platform (e.g., a fat client, a thin client, etc.), a back end platform (e.g., servers, data storage, etc.), a cloud based delivery, and a network (e.g., Internet, Intranet, Intercloud, etc.). In the example shown, Application Development System106comprises Processor108, Memory110, Interface112, Data Storage114, I/O Interface116, User Input118, and Display120. In various embodiments, Application Development System106is used to generate Application Programming Interfaces (APIs) for responding to commands and/or queries (e.g., from a user using Client104). 
In various embodiments, Application Development System106is used to generate APIs for responding to commands and/or queries in a secure manner—for example, using cryptography (e.g., to provide proof that the data comprising a particular event or series of events has not been altered or tampered with in any way). In some embodiments, Application Development System106is used to generate APIs for responding to a command to commit new event data to an existing aggregate of events, to read event data in an existing aggregate of events, to create new aggregate(s) of events, to modify events in an existing aggregate of events, and/or to delete events or aggregates of events. In some embodiments, another API is generated that is used to construct an API (e.g., for a client) within a particular domain (e.g., the financial services domain). In some embodiments, the other API comprises a domain model—for example, an object-oriented data model (e.g., written in the Python programming language). In some embodiments, a user (e.g., a computer programmer) inputs code (e.g., via User Input118and I/O Interface116) to run on Processor108(e.g., to generate and/or test one or more domain models). In some embodiments, code is stored in Memory110(e.g., temporarily, long term, short term, etc.) for use by Processor108, and/or stored in Data Storage114(e.g., for later retrieval and continued development). In various embodiments, APIs generated within Application Development System106are transmitted to Database System100(e.g., via Interface112). In some embodiments, Interface112is used to transmit requests and responses (e.g., using the hypertext transfer protocol (HTTP)) to Application Development System106—for example, requests and responses related to the use of Database System100(e.g., by a client). In various embodiments, a dynamic data query and manipulation language (e.g., the GraphQL language) is used to request and/or respond to the generation of one or more APIs.
A dynamic API language allows clients to ask for data (e.g., referential data) while specifying what shape the data can take. This allows for a more flexible API model and one that is easier to maintain (e.g., compared to architectural API protocols such as REST, XML, and SOAP). In some embodiments, a user using Client104provides a command to Database System100. Database System100receives the command at an API (e.g., an API generated using Application Development System106) and processes the command. In various embodiments, the command adds to data stored by Database System100, reads data stored by Database System100, determines the application state of data stored by Database System100, determines projections of data stored by Database System100, and/or modifies copies of data stored by Database System100, or any other appropriate action relating to data and/or metadata in Database System100. In some embodiments, to add data to stored data in Database System100, the input data and/or command are validated. In response to the input data and/or command being valid, a new event data set is formed (i.e., a proposed mutation) to apply to an existing aggregate of events in Database System100. Prior to applying the proposed mutation, the existing aggregate of events is retrieved (e.g., from a storage device within Database System100) and validated. For example, event data is committed to Database System100by receiving input data and a client key from Client104. A processor in Database System100is then used to generate an Nth sequence number; determine an Nth event hash using the input data, an N-1 signature, and the Nth sequence number; encrypt the Nth event hash with the client key to generate an Nth signature; generate an Nth event from the input data, the N-1 signature, the Nth sequence number, and the Nth signature; and in response to an aggregate N-1 of one or more prior events being valid, apply the Nth event onto the aggregate N-1 for storage within Database System100.
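The commit flow just described can be sketched end to end as follows. This is an illustrative sketch under assumptions: HMAC-SHA256 stands in for "encrypt the hash with the client key", an empty string stands in for the missing N-1 signature on the first event, and all function and field names are hypothetical.

```python
import hashlib
import hmac
import json

def event_hash(input_data, prev_signature, seq):
    """Nth event hash over the input data, the N-1 signature, and the
    Nth sequence number."""
    payload = json.dumps(
        {"data": input_data, "prev": prev_signature, "seq": seq}, sort_keys=True
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def aggregate_valid(aggregate, client_key):
    """Rehash and re-sign every stored event; compare check signatures."""
    prev_sig = ""
    for event in aggregate:
        h = event_hash(event["data"], prev_sig, event["seq"])
        check = hmac.new(client_key, h.encode(), hashlib.sha256).hexdigest()
        if check != event["signature"]:
            return False
        prev_sig = event["signature"]
    return True

def commit_event(aggregate, input_data, client_key):
    seq = len(aggregate)                                   # generate Nth sequence number
    prev_sig = aggregate[-1]["signature"] if aggregate else ""
    h = event_hash(input_data, prev_sig, seq)              # Nth event hash
    signature = hmac.new(client_key, h.encode(),           # "encrypt" with the client key
                         hashlib.sha256).hexdigest()
    event = {"seq": seq, "data": input_data,
             "prev_signature": prev_sig, "signature": signature}
    if not aggregate_valid(aggregate, client_key):         # validate aggregate N-1 first
        raise ValueError("aggregate N-1 failed validation; refusing to commit")
    aggregate.append(event)                                # apply the Nth event
    return event

agg = []
commit_event(agg, {"type": "AccountBalanceUpdated", "delta": 500}, b"key")
commit_event(agg, {"type": "ClientNotified"}, b"key")
```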
In some embodiments, the aggregate N-1 comprises an aggregate that has N-1 events, where N is an integer. In some embodiments, the sequence number starts with 0. In various embodiments, event data is added including one or more items of metadata regarding the event—for example, time/date of submission, effective time/date (e.g., the effective moment) that the data should be considered effectively having been submitted, submission user, user system location, client name, user name, or any other appropriate metadata. In the disclosed system, modifying or deleting data from a source of truth within Database System100is not allowed (e.g., input data from an event within an aggregate of events). Any modifications to existing data are treated as a new event in the event chain. Any deletions are generated as a new event marking the deletion or invalidation of the aggregate. In some embodiments, the effective time/date of an event entry indicates a back dating of submitted event data or a forward dating of submitted data—for example, a purchase price of an item can be corrected if erroneously previously entered or a purchase price of an item can be set to be purchased in the future. In some embodiments, to provide for corrections to data, an effective moment or time/date is stored for each event (e.g., as event metadata). The effective moment is the timestamp of when the event is desired to be thought of as having been or will be executed. This can be the same or different from the time/date that the event is recorded in the database. In the case where the event needs to act as if it was produced earlier (e.g., when determining the state of one or more events within an aggregate), the time stamp is back-dated (e.g., to the position within the event chain that contains the incorrect data).
When events are replayed to determine the corrected application state, the effective moment is used to rebuild the projections (e.g., by replaying or positioning the corrected event in its effective time/date rather than the submitted time). This allows the hashed event chain to maintain its integrity while being able to correct data. In some embodiments, the system is able to show the event data with and without modification/deletion as provided by the effective moment mechanism. In some embodiments, displaying or reporting of data later modified/deleted is marked with an indication providing the user with transparency of the modification/deletion using the effective moment mechanism. In some embodiments, to read data stored in Database System100, a query request is validated. In response to the query request being valid, a query-related existing aggregate of events in Database System100is retrieved (e.g., from a storage device within Database System100) and validated (e.g., by a processor within Database System100). In various embodiments, input data from one or more events is provided (e.g., a list of one or more financial transactions and their dates, etc.). In various embodiments, the validated aggregate of events is replayed, in part or in its entirety, to generate the queried state of the aggregate (e.g., by a processor in Database System100). The state of an aggregate differs from its contained input data in that it considers the cumulative impact of events on a particular set of tracked data (e.g., as in providing the cash or share balance of a financial account at a given date). In various embodiments, the queried state of the aggregate is determined from a projection of a prior or current state of the aggregate (e.g., retrieved from a storage device or computer memory within Database System100).
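The effective-moment replay described above can be sketched as follows: events are appended in submission order (so the hash chain is never rewritten), but projections are rebuilt by replaying in effective-moment order, which lets a back-dated correction take effect at its effective date. The event fields and tie-breaking rule are illustrative assumptions.

```python
from datetime import date

events = [
    {"seq": 0, "effective": date(2023, 1, 5), "price": 100},
    {"seq": 1, "effective": date(2023, 2, 1), "price": 110},
    # Appended later, but back-dated to correct the Jan 5 entry:
    {"seq": 2, "effective": date(2023, 1, 5), "price": 105},
]

def project_prices(events):
    """Replay in effective-moment order (ties broken by sequence number, so
    the later-submitted correction wins) to build a date -> price projection."""
    projection = {}
    for ev in sorted(events, key=lambda e: (e["effective"], e["seq"])):
        projection[ev["effective"]] = ev["price"]
    return projection
```

The stored sequence 0 event is untouched, so its hash and signature remain valid; only the projection reflects the correction.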
In various embodiments, the queried state of the aggregate is generated by replaying one or more new events using a projection of a prior state as the initial state (e.g., to save computational time and/or resources). In response to the queried state of the aggregate being generated, it is provided—for example, via an interface within Database System100(e.g., to Client104via Network102). For example, a system for querying a state of an aggregate N (e.g., an aggregate that has N events) comprises an interface and a processor. The interface is configured to receive a request to query the state of the aggregate N and receive a client key. The processor is configured to rehash each event input data of the aggregate N with its corresponding sequence number and a prior event signature to generate a hash value; reencrypt the hash value using the client key to create a check signature; determine whether the check signature is equal to the prior event signature; in response to each check signature being equal to the prior event signature, replay the events of the aggregate N to generate and provide the state of the aggregate N; and in response to a check signature not being equal to the prior event signature, indicate that the aggregate N is not valid. In some embodiments, to generate a prior or current state of an aggregate of events stored in Database System100, a projection request is validated. In response to the projection request being valid, a projection-related aggregate of events in Database System100is retrieved and validated. In response to the aggregate of events being valid, the events are replayed, in part or entirety, to generate the aggregate's state at the requested point of projection (i.e., corresponding to a particular event within the aggregate's event chain). In various embodiments, the aggregate's state at the requested point of projection is cached or stored in Database System100(e.g., in computer memory or in a storage device).
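The query-with-verification flow can be sketched as follows: every event is rehashed and re-signed with the client key to produce a check signature, and only if every check signature matches the persisted signature are the events replayed into a state. HMAC-SHA256 again stands in for reencrypting with the client key, and the event shape (a `delta` field summed into a balance) is a hypothetical example.

```python
import hashlib
import hmac
import json

def check_signature(event, prev_sig, client_key):
    """Rehash the event input data with its sequence number and the prior
    event signature, then re-sign to create the check signature."""
    payload = json.dumps(
        {"data": event["data"], "prev": prev_sig, "seq": event["seq"]},
        sort_keys=True,
    )
    h = hashlib.sha256(payload.encode()).hexdigest()
    return hmac.new(client_key, h.encode(), hashlib.sha256).hexdigest()

def query_state(aggregate, client_key):
    """Verify every event, then replay to produce the queried state."""
    prev_sig = ""
    balance = 0
    for event in aggregate:
        if check_signature(event, prev_sig, client_key) != event["signature"]:
            return {"valid": False, "failed_at": event["seq"]}
        prev_sig = event["signature"]
        balance += event["data"].get("delta", 0)   # the replay step (illustrative)
    return {"valid": True, "balance": balance}

def make_aggregate(deltas, client_key):
    """Helper to build a correctly signed aggregate for demonstration."""
    agg, prev_sig = [], ""
    for seq, delta in enumerate(deltas):
        event = {"seq": seq, "data": {"delta": delta}}
        event["signature"] = check_signature(event, prev_sig, client_key)
        agg.append(event)
        prev_sig = event["signature"]
    return agg
```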
In various embodiments, the aggregate's state is provided—for example, via an interface within Database System100(e.g., via Network102to Client104or the user of Application Development System106). For example, a system for creating a projection comprises an interface and a processor. The interface is configured to receive a request to create a projection up to a target event in an aggregate N and receive a client key. The processor is configured to rehash each event input data of the aggregate N with its corresponding sequence number and a prior event signature to generate a hash value; reencrypt the hash value using the client key to create a check signature; determine whether the check signature is equal to the prior event signature; in response to each check signature being equal to the prior event signature, replay the events of the aggregate N to generate and provide the projection; and in response to a check signature not being equal to the prior event signature, indicate that the aggregate N is not valid. In some embodiments, to modify the input data of one or more events within an aggregate of events stored in Database System100, a modification request for modeling is validated. In response to the modification request for modeling being valid, a modification-related aggregate of events for modeling in Database System100is retrieved and validated. A modification-related aggregate of events for modeling comprises a validated copy of the aggregate of interest. In response to the aggregate of interest being valid, the one or more events requested to be modified are replaced with modified input data for modeling. In various embodiments, the modified aggregate of events is replayed, in part or entirety, to generate one or more new aggregate states (e.g., to see the consequences of the one or more changes). This can be helpful for hypothetical scenario modeling or to understand the impact of a bug or flaw in the system.
In various embodiments, the one or more new aggregate states are provided—for example, via an interface within Database System100(e.g., via Network102to Client104or the user of Application Development System106). For example, a system for querying a state of an aggregate N comprises an interface and a processor. The interface is configured to receive a request to query the state of the aggregate N and receive a client key. The processor is configured to rehash each event input data of the aggregate N with its corresponding sequence number and a prior event signature to generate a hash value; reencrypt the hash value using the client key to create a check signature; determine whether the check signature is equal to the prior event signature; in response to each check signature being equal to the prior event signature, replay the events of the aggregate N to generate the aggregate N and insert any appropriate modifications and provide the state of the modified aggregate N; and in response to a check signature not being equal to the prior event signature, indicate that aggregate N is not valid. In some embodiments, the modifications are placed using an effective moment associated with the modification data. In some embodiments, modifications to stored events are entered into the database by adding an event that is appended to the database chain and includes effective moment metadata that indicates where in the sequence of the data the modification is to be applied. In some embodiments, deletions of stored events are entered into the database by adding an event that is appended to the database chain and includes effective moment metadata that indicates where in the sequence of the data the deletion is to be applied.
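The append-only modification and deletion scheme described above can be sketched as follows: the source of truth is never changed in place; corrections and deletions are appended as new events whose effective-moment metadata indicates where in the sequence they apply, and the projection overlays them. Event types and field names are illustrative assumptions.

```python
from datetime import date

def append_modification(chain, target_seq, new_data, effective):
    """Record a correction as a new appended event rather than an in-place edit."""
    chain.append({"seq": len(chain), "type": "modify",
                  "target": target_seq, "data": new_data,
                  "effective": effective})

def append_deletion(chain, target_seq, effective):
    """Record a deletion as a new appended event marking the target invalid."""
    chain.append({"seq": len(chain), "type": "delete",
                  "target": target_seq, "effective": effective})

def project(chain):
    """Apply base events, then overlay appended modifications/deletions."""
    visible = {}
    for ev in chain:
        if ev["type"] == "base":
            visible[ev["seq"]] = ev["data"]
        elif ev["type"] == "modify" and ev["target"] in visible:
            visible[ev["target"]] = ev["data"]
        elif ev["type"] == "delete":
            visible.pop(ev["target"], None)
    return visible

chain = [{"seq": 0, "type": "base", "data": {"price": 100}},
         {"seq": 1, "type": "base", "data": {"price": 200}}]
append_modification(chain, 0, {"price": 105}, date(2023, 1, 5))
append_deletion(chain, 1, date(2023, 1, 6))
# All four events remain in the chain; only the projection reflects the
# modification and deletion, so the hash chain's integrity is preserved.
```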
In some embodiments, in response to the check signature not being equal to the prior or persisted event signature, a verification status (e.g., verified: TRUE or verified: FALSE) is associated with each event of an aggregate as well as an overall verification status for the aggregate as a whole (e.g., all events verified: TRUE or any event verified: FALSE). In some embodiments, a service is informed by the query and mutation process that events have been deemed to be invalid. The service then updates a flag to indicate that a particular event is invalid, and thus the Aggregate by definition is also invalid. In some embodiments, the service provides messages to a notification service to notify an administrator and/or user that an invalid event/aggregate has been detected. In various embodiments, after review, the service provides a convenient set of remediation features such as rolling back the aggregate to the last valid state or performing a manual adjustment to “correct” the affected aggregate. In some embodiments, the service provides an audit log of all events for visual reporting purposes to an administrator and/or user. In various embodiments, in response to the check signature not being equal to the prior or persisted event signature, an indication is provided that there was an error (e.g., that the aggregate is compromised), an alert is provided (e.g., an alert via email), an entry is made in a log (e.g., a SEVERE log of an internal monitoring system), or any other appropriate indication or notification is given. In some embodiments, in response to the check signature not being equal to the prior or persisted event signature, the system reverts data back to a state that is not compromised (e.g., a state before any tampering is detected). For example, if the 7th event in an aggregate fails the check signature test, the system reverts the data back to the 6th event state.
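The rollback remediation above amounts to keeping only the longest valid prefix of the aggregate. A minimal sketch, assuming the per-event verification statuses have already been computed as described:

```python
def roll_back_to_last_valid(events, statuses):
    """Return the longest prefix of events whose verification status is True.

    E.g., if the 7th event (index 6) failed its check signature, everything
    from that event onward is discarded, reverting to the 6th event's state.
    """
    for i, ok in enumerate(statuses):
        if not ok:
            return events[:i]
    return events

events = [f"event-{n}" for n in range(1, 9)]   # events 1..8 (illustrative)
statuses = [True] * 6 + [False, True]          # the 7th event fails verification
reverted = roll_back_to_last_valid(events, statuses)
```

Note that a later event marked valid (the 8th here) is still discarded, since its prior-signature link runs through the compromised event.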
In some embodiments, the system regenerates the 7th event using a stored copy of the input data for the 7th event. FIG.2is a block diagram illustrating an embodiment of a database system. In some embodiments, Database System200corresponds to Database System100ofFIG.1. In the example shown, Database System200comprises Interface202, Processor204, Storage206, Memory208, and Random Number Generator209. In various embodiments, Interface202is used to transmit requests (e.g., from a client) and responses (e.g., as generated by Processor204) to users of a Client or an Application Development System. In various embodiments, requests and responses (e.g., adding data, reading data, determining an application state, determining a projection, and/or modifying copies of data) are stored in Storage206(e.g., for long term storage) and/or Memory208(e.g., to provide instructions to Processor204). In the example shown, Processor204comprises API Gateway210. In various embodiments, API Gateway210manages communications to and from Domain Model API212, Client APIs214, and/or Projection Generator API216. In some embodiments, Domain Model API212is used to generate one or more Client APIs (e.g., according to directions transmitted by an Application Development System). For example, Client APIs214comprise various tracking and query functions associated with a particular domain (e.g., tracking and responding to account balance queries within a financial domain). In various embodiments, Domain Model API212and/or Client APIs214serve one or more Clients. In some embodiments, Projection Generator API216is used to generate an aggregate's state (i.e., corresponding to a particular event within the aggregate's event chain). In various embodiments, the aggregate's state at the requested point of projection is cached (e.g., in Memory208) and/or stored (e.g., in Projection Store218).
In some embodiments, the aggregate's state is provided to a user, for example, via Interface202(e.g., to a user of a Client or an Application Development System). In some embodiments, a specific event's data is provided to a user in response to a request instead of a rolled up aggregate, in which case the event data is still validated within the chain (e.g., by checking that either prior or all of the chain signatures are appropriately linked and correct). In some embodiments, Client Keys220are used (e.g., by Processor204) to encrypt data (e.g., to encrypt a hash stored with an event to generate an event signature). An event signature is uniquely representative of the base event. In some embodiments, a random number (e.g., generated by Random Number Generator209) is used to ‘salt’ the hash used to generate an event signature (e.g., to provide additional security). In some embodiments, Random Number Generator209is a device that generates a sequence of numbers or symbols that cannot reasonably be predicted better than by random chance. In various embodiments, Random Number Generator209is a true random-number generator, a pseudo-random number generator, or a cryptographically secure pseudo-random number generator. In various embodiments, Random Number Generator209is hardware-based, software-based, or any appropriate combination of hardware and software. In some embodiments, Client Keys220are used to reencrypt data—for example, to generate a check signature. A check signature is the reencrypted hash value from a prior event used to check that an event has not been tampered with by comparing the check signature with the original prior event signature. In some embodiments, Client Keys220comprise one or more private keys as used within asymmetric cryptography (i.e., public-key cryptography).
Common asymmetric key encryption algorithms include Diffie-Hellman, RSA (Rivest-Shamir-Adleman), ElGamal, DSA (Digital Signature Algorithm), ECC (Elliptic Curve Cryptography), and PKCS (Public Key Cryptography Standards). A private key ideally is a long, random (i.e., non-deterministic) number (e.g., 1024 bits, 2048 bits, 3072 bits, etc.) that cannot easily be guessed. The length and randomness of the key depend on the algorithm being used, the strength of security required, the amount of data being processed with the key, and the crypto-period of the key (i.e., the time between key activation and key deactivation). The choice of algorithm, key length, degree of randomness, and crypto-period is made by those skilled in the art of cryptography and includes assessing the sensitivity of the data, the risk of key compromise, system implementation cost, allowable processing times, etc. In some embodiments, Client Keys220are stored and managed in a secured system (e.g., a key management service). In some embodiments, Client Keys220are located in a separate secured system (i.e., not within Database System200). In some embodiments, Client Keys220are accessible only by the Client (i.e., they are kept secret). In various embodiments, Client Keys220are referenced or retrieved via use of an identifier (i.e., a Client Key ID). In some embodiments, Client IDs are stored in Storage206, or any other appropriate location or storage device. In the example shown, Event Store222is used to store events (e.g., an aggregate of events comprising an event chain). An event encapsulates all data associated with a transaction as well as tracking an integer value representing the sequence number of the event for the aggregate. In some embodiments, the sequence number is gapless, sequential, and unique. In various embodiments, individual events are referenced or retrieved via the sequence number.
Events are stored in sequence and can be “replayed” in order to derive the current state of an application. In some embodiments, events are stored with the corresponding event signature and a prior event signature (e.g., to be used in validating the event data). In some embodiments, wherein a random number is used to ‘salt’ the hash used to generate an event signature, the salt value is stored with the event. In some embodiments, events are stored with metadata related to the event (e.g., who added the event, when it was performed, etc.). FIG.3is a block diagram illustrating an embodiment of a system for generating a client API (Application Programming Interface). In some embodiments, Client API306corresponds to one of the one or more Client APIs214ofFIG.2. In the example shown, Data Modeling Framework300is used to generate Client API306in response to Client Request302. Data Modeling Framework300comprises Domain Model API308, Code310, Data Modeling Library312, and Persistence Library314. An API is an interface or communication protocol between different parts of a computer program to simplify the implementation and maintenance of software. An API specification includes specifications for routines, data structures, object classes, variables, and/or remote calls. API is also used herein to refer to a specific kind of interface between a client and a server, which can be described as a “contract” between both—such that if the client makes a request in a specific format, it will always get a response in a specific format or initiate a defined action. This specialized form of API is termed a Web API. In some embodiments, Client API306comprises a Web API. Client Request302is used to initiate the generation of Client API306. In various embodiments, Client Request302comprises a verbal request or a written request from a client (e.g., to one or more users of an Application Development System). 
In some embodiments, the Application Development System corresponds to Application Development System106ofFIG.1. In various embodiments, Client Request302takes the form of an extended discussion with one or more users of the Application Development System. In some embodiments, the extended discussion relates to specifying an overall system architecture related to a particular domain (e.g., one or more question and answer sessions, design meetings, review meetings, etc.). In some embodiments, Client Request302is a clearly specified request to add to, or upgrade, an existing Client API (e.g., to track a particular piece of event data or metadata that has not been previously tracked). In the example shown, Client Request302is an iterative process with Offline Domain Model304. Offline Domain Model304is used (e.g., by one or more users of an Application Development System) (i) for the early capturing and formation of rudimentary concepts and client requirements; and (ii) to develop and formalize a specification for Domain Model API308. In some embodiments, Client Request302directly specifies Offline Domain Model304without need for iteration. In the example shown, Offline Domain Model304is coded (e.g., using Python code) to generate Domain Model API308. Domain Model API308is used to generate Code310that satisfies Client Request302for Client API306. In some embodiments, Code310is an object-oriented programming language (e.g., Python). In some embodiments, Client API306uses a data query and manipulation language (e.g., GraphQL). In some embodiments, Code310is used to generate Client API306. In some embodiments, Client API306is generated directly by Domain Model API308without need for Code310. In the example shown, Domain Model API308utilizes Data Modeling Library312. In some embodiments, Data Modeling Library312is used to help translate Offline Domain Model304into code for Domain Model API308.
In various embodiments, Data Modeling Library312is used to enrich the domain model with metadata, both general and domain-specific (e.g., additional information about the field types, user interface hints, validation information, etc.). In the example shown, Persistence Library314is used to simplify generating API code (e.g., by concealing the complexity of event sourcing). In some embodiments, Persistence Library314provides for developers (e.g., one or more users of an Application Development System) to write additional persistence-time logic into one place, instead of across a multiplicity of services. In various embodiments, Persistence Library314is also used to store logs, metrics, or any other appropriate data or information useful in developing Client API306. FIG.4Ais a block diagram illustrating an embodiment of a framework for committing a proposed mutation. In some embodiments, Event Store406corresponds to Event Store222ofFIG.2. In the example shown, the events of Aggregate N-1400(i.e., Event1408athrough Event N-2408band Event N-1408c) are validated prior to being applied to Aggregate N402. In the example shown, Validation410aresults in validated Event1412a, Validation410bresults in validated Event N-2412b, and Validation410cresults in validated Event N-1412c. In response to the events of Aggregate N-1400being validated, a proposed mutation (i.e., Event N404) is accepted as Event N414into Aggregate N402and Event N414is then stored in (i.e., committed to) Event Store406. 
In some embodiments, a processor (e.g., a processor within a database system) is configured to determine whether the events of Aggregate N-1400are valid by determining whether all events from Event1408athrough Event N-1408care valid, wherein determining whether an event is valid comprises rehashing each event input data of Aggregate N-1400with its corresponding sequence number and prior event signature to generate a hash value; reencrypting the hash value using a client key to create a check signature; and determining whether the check signature is equal to the prior event signature. In some embodiments, the system checks each event with its prior event in turn by rehashing each event input data with its corresponding sequence number and a prior event signature to generate a hash value (e.g., generating an N-M hash value for an N-M event input data within the aggregate N comprises generating the N-M hash value by hashing the N-M event input data with an N-M sequence number and an N-M-1 signature, where N-M event indicates the event with sequence number N-M where N and M are integers and N is the most recent or highest sequence number and M is a prior sequence number/event/hash value back from the most recent or highest sequence number/event/hash value). In some embodiments, the system reencrypts the hash value using the client key to create a check signature (e.g., creating an N-M check signature using the N-M hash value comprises creating the N-M check signature by reencrypting the N-M hash value using the client key to create the N-M check signature). In some embodiments, the system determines whether the check signature is equal to the prior event signature (e.g., determining that the check signature is equal to the prior event signature comprises determining that the N-M check signature and an N-M signature are equal for M from 1 to N-2).
In some embodiments, the system indicates that the N-L event is not valid in response to the N-L check signature not being equal to the N-L signature for a given L, where L is an integer, where N-L event indicates the event with sequence number N-L where N and L are integers and N is the most recent or highest sequence number and L is a number of check signatures/events back from the most recent or highest check signature/event. FIG.4Bis a block diagram illustrating an embodiment of a framework for creating a proposed mutation. In some embodiments, the proposed mutation Event N454corresponds to the proposed mutation Event N404ofFIG.4A. In some embodiments, Aggregate N-1420corresponds to Aggregate N-1400ofFIG.4A. Aggregate N-1420comprises Event1438athrough Event N-2438band Event N-1438c. In some embodiments, Event1438athrough Event N-2438band Event N-1438ccorrespond to Event1408athrough Event N-2408band Event N-1408cofFIG.4A. In some embodiments, Aggregate N-1comprises an aggregate with N-1events. In the example shown, Input Data422is received for proposed application to Aggregate N-1420in the form of Proposed Event Data Structure434. In some embodiments, a command is included with (i.e., accompanies) Input Data422—for example, to indicate the Aggregate (e.g., an Aggregate in an Event Store) to which Input Data422is to be applied. Input Data422is validated by Validation424. In the example shown, a processor (not shown) (e.g., a processor within a database system) is configured to receive a command accompanying Input Data422and performs Validation424, wherein determining whether the command is valid comprises determining whether the command is valid syntactically and/or whether the data of Input Data422are appropriate for the command including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters.
In response to Input Data422being valid, Input Data422is applied to Proposed Event Data Structure434. In some embodiments, Proposed Event Data Structure434is an object-oriented data structure (e.g., referenced by an identifier). In some embodiments, the data structure comprises a format to receive input data, an identifier, a proposed event signature, and a prior event signature. In the example shown, Proposed Event Data Structure434comprises a data structure configured to receive Input Data422, Sequence Number N428, Event Signature N-1430, and Event Signature N446. An identifier of Aggregate N-1420is used by Sequence Generator436to generate Sequence Number N428. In some embodiments, Sequence Generator436is a computer program (e.g., running on a processor). In some embodiments, Sequence Generator436generates a sequence number that is gapless, sequential, and unique. In some embodiments, the sequence number is an object-oriented data structure (e.g., configured to act as an identifier). In some embodiments, the data structure comprises a format to receive one or more sequences. In the example shown, Sequence Number N428and Event Signature N-1430(i.e., the event signature accompanying Event N-1438c) are applied to Proposed Event Data Structure434. Prior to receiving Event Signature N446into Proposed Event Data Structure434, Input Data422, Sequence Number N428, and Event Signature N-1430are hashed (in440). The resultant hash value from440is encrypted in442using Client Key444to generate Event Signature N446. Event Signature N446is then applied to Proposed Event Data Structure434in the space Reserved for Event Signature432to generate Proposed Mutation448. FIGS.5A and5Bare a flow diagram illustrating an embodiment of a method for committing a proposed mutation. In some embodiments, the process ofFIGS.5A and5Bis executed using the processor of Database System200ofFIG.2. 
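The population of the Proposed Event Data Structure can be sketched as follows. This is an illustrative Python sketch, not the claimed implementation: SHA-256 is assumed for the hash step (440) and an HMAC keyed with the client key stands in for the encryption step (442); the field names are assumptions.

```python
import hashlib
import hmac

def make_signature(client_key: bytes, input_data: bytes, seq: int, prev_sig: bytes) -> bytes:
    # Hash input data with the sequence number and prior event signature (440),
    # then bind the digest to the client key (442).
    digest = hashlib.sha256(input_data + seq.to_bytes(8, "big") + prev_sig).digest()
    return hmac.new(client_key, digest, hashlib.sha256).digest()

def propose_mutation(client_key: bytes, input_data: bytes, aggregate: list) -> dict:
    # Sequence Generator 436: gapless, sequential, and unique -- one past the tail.
    seq = aggregate[-1]["seq"] + 1 if aggregate else 1
    # Event Signature N-1 accompanies the most recent event in the aggregate.
    prev_sig = aggregate[-1]["sig"] if aggregate else b""
    return {
        "seq": seq,                 # Sequence Number N
        "input_data": input_data,
        "prev_sig": prev_sig,       # Event Signature N-1
        "sig": make_signature(client_key, input_data, seq, prev_sig),  # Event Signature N
    }
```

Filling the space reserved for the event signature last mirrors the figure: the signature cannot be computed until the input data, sequence number, and prior signature are all in place.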
In various embodiments, the process ofFIGS.5A and5Bis executed in whole or in part using any appropriate combination of one or more processors. In various embodiments, the process ofFIGS.5A and5Butilizes one or more of Interface202, Storage206, and Memory208of Database System200ofFIG.2. The process flow ofFIG.5Adescribes a method for validating an aggregate of events prior to committing a proposed mutation to the aggregate. In500, Input Data is received with an Included Command. For example, the Included Command comprises a request to apply the proposed mutation, which includes Input Data, to a related aggregate of events (e.g., stored within a Database System). In502, it is determined whether the Included Command is valid. In some embodiments, determining whether the command is valid comprises determining whether the command is valid syntactically and/or whether the accompanying data are appropriate for the command including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters. In response to the Included Command being determined invalid, in504, an exception is returned, and the process ends. In response to the Included Command being determined valid, in506, Aggregate N-1is located within the Database System. In the example shown, Aggregate N-1is the aggregate of events to which the proposed mutation will be applied. In508, a Client Key ID for Aggregate N-1is obtained. For example, the Client Key ID for Aggregate N-1is obtained from a storage device or unit within a database system. In510, a Client Key is retrieved using the Client Key ID. For example, a Client Key is retrieved using the Client Key ID from Storage within a Database System or from a separate secured system (e.g., a key management service). In512, a list of Events is retrieved from Aggregate N-1. For example, Event1through Event N-1are retrieved from Aggregate N-1.
In514, an Empty Aggregate N is instantiated. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In516, a Next Event is selected from Aggregate N-1for validation. In518, the Contents of Next Event are retrieved. For example, the contents of a Next Event such as input data, an identifier, a proposed event signature, and a prior event signature are retrieved. In520, the Input Data and Sequence Number of Next Event are hashed with the Event Signature from Prior Event. For example, the Input Data and Sequence Number of Next Event are hashed with the Event Signature from Prior Event, wherein the prior event comprises the event immediately prior to the selected Next Event from the list of Events retrieved from Aggregate N-1in512. In522, the Hash Value Output of520is encrypted using the Client Key to generate a Check Signature. In524, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, in504, an exception is returned, and the process ends. For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In response to the Check Signature being determined valid, in526, the Next Event is applied onto Aggregate N. In528, it is determined whether all Events have been validated (i.e., all the Events on the list of Events retrieved from Aggregate N-1in512). In response to determining that all Events have not been validated, the process flows to516to select a Next Event from Aggregate N-1for validation. 
In response to determining that all Events have been validated, control passes to530. For example, in response to all the Events on the list retrieved from Aggregate N-1in512having been validated, control passes to530. In530, Event Signature N-1is tracked. For example, the Event Signature N-1is stored in processor memory. In532, the validated Aggregate N is provided, and control passes to (A) ofFIG.5B. For example, the validated Aggregate N is provided to processor memory for use by a computer program running on a processor in a Database System. In the example shown inFIG.5B, in540, the validated Aggregate N, Client Key, and Event Signature N-1are received. For example, the validated Aggregate N, Client Key and Event Signature N-1are received from a memory coupled to a processor. In542, a Proposed Event Data Structure is instantiated. In some embodiments, the Proposed Event Data Structure is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive input data, an identifier, a proposed event signature, and a prior event signature. In544, Input Data and Event Signature N-1are applied onto the Proposed Event Data Structure. In546, Sequence Number N is generated and applied onto the Proposed Event Data Structure. For example, a Sequence Generator generates Sequence Number N, wherein Sequence Number N is gapless, sequential to Aggregate N-1, and unique. In548, a Hash Value is generated by hashing Input Data and Sequence Number N with Event Signature N-1. In550, the Hash Value Output of548is encrypted using the Client Key to generate Event Signature N. In552, Event Signature N is applied onto the Proposed Event Data Structure to generate the Proposed Mutation, wherein the Proposed Mutation comprises the Input Data, Sequence Number N, Event Signature N-1, and Event Signature N. In554, the Proposed Mutation is accepted into Aggregate N. In556, the Proposed Mutation is stored in an Event Store, and the process ends.
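Taken together, the flows of FIGS.5A and5B amount to "validate the chain, then sign and append." A condensed, hypothetical Python sketch follows; SHA-256 and an HMAC keyed with the client key are assumed stand-ins for the unspecified hash and encryption steps, and the field names are illustrative.

```python
import hashlib
import hmac

class InvalidEventError(Exception):
    """Raised when a stored event fails the signature check (step 504)."""

def make_signature(client_key, input_data, seq, prev_sig):
    digest = hashlib.sha256(input_data + seq.to_bytes(8, "big") + prev_sig).digest()
    return hmac.new(client_key, digest, hashlib.sha256).digest()

def commit(client_key, input_data, aggregate, event_store):
    # FIG. 5A (516-528): re-derive every signature before trusting the aggregate.
    prev_sig = b""
    for ev in aggregate:
        check_sig = make_signature(client_key, ev["input_data"], ev["seq"], prev_sig)
        if not hmac.compare_digest(check_sig, ev["sig"]):
            raise InvalidEventError("event %d failed validation" % ev["seq"])
        prev_sig = ev["sig"]
    # FIG. 5B (542-556): build the proposed mutation, sign it, accept it into
    # the aggregate, and store it in the event store.
    seq = aggregate[-1]["seq"] + 1 if aggregate else 1
    mutation = {"seq": seq, "input_data": input_data, "prev_sig": prev_sig,
                "sig": make_signature(client_key, input_data, seq, prev_sig)}
    aggregate.append(mutation)
    event_store.append(mutation)
    return mutation
```

Validating the whole chain before every append is what prevents a new event from being signed on top of tampered history.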
In some embodiments, the Event Store is stored within a Database System storage device. FIG.6is a flow diagram illustrating an embodiment of a method for generating the state of an aggregate. In some embodiments, the process ofFIG.6is executed using the processor of Database System200ofFIG.2. In various embodiments, the process ofFIG.6is executed in whole or in part using any appropriate combination of one or more processors. In various embodiments, the process ofFIG.6utilizes one or more of Interface202, Storage206, and Memory208of Database System200ofFIG.2. In600, a Query is received. For example, a query is received to determine the state of an aggregate at a desired event that occurred at a particular date or time. In some embodiments, a Query is received to read input data from one or more events within an aggregate of events. In602, it is determined whether the Query is valid. In various embodiments, determining whether the query is valid comprises determining whether the query is valid syntactically and/or whether the accompanying data are appropriate for the query including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters, or any other appropriate manner of determining whether a query is valid. In response to the Query being determined invalid, in604, an exception is returned, and the process ends. In response to the Query being determined valid, in606, the Desired Aggregate is located within a Database System. In608, a Client Key ID for the Desired Aggregate is obtained. For example, a Client Key ID for the Desired Aggregate is obtained from Storage within a Database System. In610, a Client Key is retrieved using the Client Key ID. For example, a Client Key is retrieved using the Client Key ID from Storage within a Database System or from a separate secured system such as a key management service. 
In612, the Desired Events are retrieved from the Desired Aggregate. For example, Event1through Event N-1or any other desired set of events are retrieved from the desired Aggregate. In614, an Empty Aggregate N is instantiated. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In616, a Next Event is selected from the Desired Events for validation. In618, the Contents of Next Event are retrieved. For example, the Contents of the Next Event such as input data, an identifier, a proposed event signature, and a prior event signature are retrieved. In620, a Hash Value is generated by hashing Input Data and Sequence Number of Next Event with Event Signature from Prior Event. For example, the Input Data and Sequence Number of Next Event are hashed with the Event Signature of the event immediately prior to the selected Next Event. In622, the Hash Value Output of620is encrypted using the Client Key to generate a Check Signature. In624, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, in604, an exception is returned, and the process ends. For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In response to the Check Signature being determined valid, in626, the Next Event is applied onto Aggregate N. In628, it is determined whether all Desired Events have been validated. 
In response to determining that all Desired Events have not been validated, the process flows to616to select a Next Event from the Desired Events for validation. In response to determining that all Desired Events have been validated, the process flows to630. In630, the Desired Events are replayed to generate the state of Aggregate N. In some embodiments, wherein a Query comprises a request to read input data from one or more events within an aggregate of events,630is omitted. In632, the state of Aggregate N is provided, and the process ends. For example, the state of Aggregate N is provided to a user of a Client or an Application Development System. FIG.7is a flow diagram illustrating an embodiment of a method for generating a projection. In some embodiments, the process ofFIG.7is executed using the processor of Database System200ofFIG.2. In various embodiments, the process ofFIG.7is executed in whole or in part using any appropriate combination of one or more processors. In various embodiments, the process ofFIG.7utilizes one or more of Interface202, Storage206, and Memory208of Database System200ofFIG.2. In700, a Projection Request is received. For example, a projection request is received to determine the state of an aggregate at a desired event that occurred at a particular date or time. In702, it is determined whether the Projection Request is valid. In some embodiments, determining whether the projection request is valid comprises determining whether the projection request is valid syntactically and/or whether the accompanying data are appropriate for the projection request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters.
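The replay in630amounts to folding each validated event's input data into a running state, in sequence order. A minimal Python sketch, assuming for illustration only that each event's input data carries a signed balance delta (the domain logic here is hypothetical, not taken from the text):

```python
def replay(events, apply_event, initial_state=None):
    # Step 630: replay the (already validated) events in sequence order to
    # regenerate the aggregate's state at the desired point.
    state = {} if initial_state is None else dict(initial_state)
    for ev in sorted(events, key=lambda e: e["seq"]):
        state = apply_event(state, ev["input_data"])
    return state

def apply_delta(state, input_data):
    # Hypothetical domain logic: each event carries a signed balance delta.
    new_state = dict(state)
    new_state["balance"] = new_state.get("balance", 0) + input_data["delta"]
    return new_state
```

Replaying a prefix of the events yields the aggregate's state as of any earlier event, which is what makes point-in-time queries possible without storing every intermediate state.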
In response to the Projection Request being determined invalid, in704, an exception is returned, and the process ends. In response to the Projection Request being determined valid, in706, the Desired Aggregate is located within a Database System. In708, a Client Key ID for the Desired Aggregate is obtained. For example, a Client Key ID for the desired aggregate is obtained from Storage within a Database System. In710, a Client Key is retrieved using the Client Key ID. For example, a Client Key is retrieved using the Client Key ID from Storage within a Database System or from a separate secured system such as a key management service. In712, the Desired Events are retrieved from the Desired Aggregate. For example, Event1through Event N-1, or any other desired set of events are retrieved from the Desired Aggregate. In714, an Empty Aggregate N is instantiated. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In716, a Next Event is selected from the Desired Events for validation. In718, the Contents of Next Event are retrieved. For example, contents of a Next Event such as input data, an identifier, a proposed event signature, and a prior event signature are retrieved. In720, a Hash Value is generated by hashing Input Data and Sequence Number of Next Event with Event Signature from Prior Event. For example, the Input Data and Sequence Number of the Next Event are hashed with the Event Signature of the event immediately prior to the selected Next Event. In722, the Hash Value Output of720is encrypted using the Client Key to generate a Check Signature. In724, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, in704, an exception is returned, and the process ends.
For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In response to the Check Signature being determined valid, in726, the Next Event is applied onto Aggregate N. In728, it is determined whether all Desired Events have been validated. In response to determining that all Desired Events have not been validated, the process flows to716to select a Next Event from the Desired Events for validation. In response to determining that all Desired Events have been validated, the process flows to730. In730, the Desired Events are replayed to generate the state of Aggregate N (i.e., the state of Aggregate N up to the point of the Desired Event). In732, the state of Aggregate N is stored in a Projection Store, and the process ends. FIGS.8A and8Bare a flow diagram illustrating an embodiment of a method for modifying an event. In some embodiments, the process ofFIGS.8A and8Bis executed using the processor of Database System200ofFIG.2. In various embodiments, the process ofFIGS.8A and8Bis executed in whole or in part using any appropriate combination of one or more processors. In various embodiments, the process ofFIGS.8A and8Butilizes one or more of Interface202, Storage206, and Memory208of Database System200ofFIG.2. The process flow ofFIG.8Adescribes a method for validating an aggregate of events prior to modifying an event. In800, Modified Input Data and an Event Modification Request are received. For example, Modified Input Data and an Event Modification Request are received to modify the state of an aggregate at a desired event (e.g., an event that occurred at a particular date or time). 
This can be helpful for hypothetical scenario modeling or to understand the impact of a bug or flaw in the system. In802, it is determined whether the Event Modification Request is valid. In some embodiments, determining whether the event modification request is valid comprises determining whether the event modification request is valid syntactically and/or whether the accompanying data are appropriate for the event modification request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters. In response to the Event Modification Request being determined invalid, in804, an exception is returned, and the process ends. In response to the Event Modification Request being determined valid, in806, the Desired Aggregate is located within a Database System. In808, a Client Key ID for the Desired Aggregate is obtained. In some embodiments, the Client Key ID is obtained from Storage within a Database System. In810, a Client Key is retrieved using the Client Key ID. For example, the Client Key is retrieved using the Client Key ID from Storage within a Database System or from a separate secured system (e.g., a key management service). In812, the Desired Events are retrieved from the Desired Aggregate. For example, Event1through Event N-1, or any other desired set of events are retrieved. In814, an Empty Aggregate N is instantiated. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In816, a Next Event is selected from the Desired Events for validation. In818, the Contents of Next Event are retrieved. For example, content comprising input data, an identifier, a proposed event signature, and a prior event signature are retrieved.
In820, a Hash Value is generated by hashing Input Data and Sequence Number of Next Event with Event Signature from Prior Event. In some embodiments, the prior event comprises the event immediately prior to the selected Next Event. In822, the Hash Value Output of820is encrypted using the Client Key to generate a Check Signature. In824, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, in804, an exception is returned, and the process ends. For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In response to the Check Signature being determined valid, in826, the Next Event is applied onto Aggregate N. In828, it is determined whether all Desired Events have been validated. In response to determining that all Desired Events have not been validated, the process flows to816to select a Next Event from the Desired Events for validation. In response to determining that all Desired Events have been validated, the process flows to830. In830, the validated Aggregate N is provided, and control passes to840. For example, the validated Aggregate N is provided to processor memory for use by a computer program. The process flow ofFIG.8Bdescribes a method for modifying an event within the validated Aggregate N. In840, the validated Aggregate N is received. For example, the validated Aggregate N is received from storage in a memory. In842, the requested Event to Modify is selected from Aggregate N. In844, the Input Data of the Requested Event is replaced with Modified Input Data. 
For example, Modified Input Data comprises a different financial transaction than what was actually transacted in order to assess the impact of the different transaction on the aggregate's state (e.g., as part of a ‘what-if’ analysis). In846, the Events of Modified Aggregate N are replayed. For example, the events of the Modified Aggregate N are replayed to generate a New State (i.e., the state of Aggregate N comprising the Desired Events but with the Modified Input Data replacing the original input data of the requested Event to Modify). In848, the New State of Aggregate N is provided, and the process ends. For example, the New State of Aggregate N is provided to a user of a Client or an Application Development System. FIG.9is a flow diagram illustrating an embodiment of a method for generating the state of an aggregate using a projection. In some embodiments, the process ofFIG.9is executed using the processor of Database System200ofFIG.2. In various embodiments, the process ofFIG.9is executed in whole or in part using any appropriate combination of one or more processors. In various embodiments, the process ofFIG.9utilizes one or more of Interface202, Storage206, and Memory208of Database System200ofFIG.2. In900, a Query is received for the State corresponding to an N-qthEvent, wherein q is an integer value ranging from 0 to N-1. For example, a query is received to determine the state of an aggregate at a desired event that corresponds to the N-qthEvent (e.g., an event that occurred at a particular date or time). In902, it is determined whether the Query is valid. In some embodiments, determining whether the query is valid comprises determining whether the query is valid syntactically and/or whether the accompanying data are appropriate for the query including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches command input parameters. 
In response to the Query being determined invalid, in904, an exception is returned, and the process ends. In response to the Query being determined valid, in906, the Projection of a pthEvent within a Projection Store that is nearest to the N-qthEvent is located, wherein p is an integer value ranging from 1 to N. For example, a processor (e.g., a processor within a Database System) runs a computer program that searches the Projection Store for an event that is closest in date and time to the N-qthEvent. In some embodiments, the computer program searches using sequence number identifiers associated with each projection in the Projection Store. In some embodiments, the computer program references a look-up table (e.g., stored in the Projection Store, stored in another storage device within a Database System, or stored in any other appropriate location) to determine which sequence number corresponds to a projection that is closest to the N-qthEvent. In various embodiments, the look-up table is updated each time a projection is added to the Projection Store, or at any other appropriate interval. For example, a record of the sequence number corresponding to the projected event is added to the look-up table when a projected state of an aggregate is added to the Projection Store. In response to locating the pthEvent within the Projection Store, in908, it is determined whether p equals N-q. For example, it is determined whether the projection of the aggregate's state at the pthEvent corresponds to the requested N-qthEvent. In response to determining that p equals N-q, in918, the N-qthState is provided, and the process ends. For example, the N-qthState is provided to a user of a Client or an Application Development System. In response to determining that p does not equal N-q, in910, the Aggregate N that was used to generate the Projection of the pthEvent is located within the Database System.
For example, a processor (e.g., a processor within a Database System) runs a computer program that searches the Database System for an aggregate that corresponds to the N-qthEvent. In some embodiments, the computer program searches using sequence number identifiers associated with each aggregate in the Database System. In some embodiments, the computer program references a look-up table (e.g., stored within the Database System, or in any other appropriate location) to determine which sequence number corresponds to the aggregate that was used to generate the Projection of the pthEvent (i.e., Aggregate N). In response to locating Aggregate N, in912, Events p+1 through Event N-q from Aggregate N are retrieved. In914, the Projection of the pthEvent is received. For example, the Projection of the pthEvent is received in processor memory for use by a computer program (e.g., a computer program running on a processor in a Database System). In916, Events starting from the Projection of pthEvent to the N-qthEvent are replayed to generate the N-qthState. For example, the Events are replayed by a computer program running on a processor in a Database System. In918, the N-qthState is provided, and the process ends. For example, the N-qthState of Aggregate N is provided to a user of a Client or an Application Development System. FIG.10is a block diagram illustrating an embodiment of a system for aggregating, auditing, and committing event data; for querying the state of an aggregate; and for creating and writing a projection. In various embodiments, Database System1000, Network1002, and/or Client1004correspond respectively to Database System100, Network102, and/or Client104ofFIG.1. In various embodiments, Database System1000comprises an alternate implementation of Database System200ofFIG.2.
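The projection-based query ofFIG.9 can be condensed to: reuse a stored projection when one matches the requested event, otherwise replay only the events between the nearest projection and the target. The Python sketch below is a hypothetical illustration; the projection store is modeled as a mapping from sequence number to projected state, and the balance-delta domain logic is an assumption for demonstration.

```python
def state_via_projection(events, apply_event, target_seq, projection_store):
    # projection_store maps a sequence number p to the projected state at the
    # pth event (e.g., as written by the FIG. 7 flow).
    if target_seq in projection_store:
        # 908/918: the stored projection already matches the requested event.
        return dict(projection_store[target_seq])
    # 906: locate the nearest projection at or before the target event.
    base_seq = max((p for p in projection_store if p <= target_seq), default=0)
    state = dict(projection_store.get(base_seq, {}))
    # 912-916: replay only Events p+1 through N-q on top of the projection.
    for ev in sorted(events, key=lambda e: e["seq"]):
        if base_seq < ev["seq"] <= target_seq:
            state = apply_event(state, ev["input_data"])
    return state

def apply_delta(state, input_data):
    # Hypothetical domain logic: each event carries a signed balance delta.
    new_state = dict(state)
    new_state["balance"] = new_state.get("balance", 0) + input_data["delta"]
    return new_state
```

Starting from the nearest projection keeps the replay cost proportional to the gap between the projection and the requested event rather than to the full length of the aggregate.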
In some embodiments, Database System1000is curated by one or more third parties that provide services to the Client (i.e., one or more service providers) comprising the capabilities enabled by the system ofFIG.10. In the example shown, Database System1000is connected to Client1004via DNS Resolver1006and Network1002. DNS Resolver1006is a Domain Name System (DNS) (e.g., a server) used to resolve and route HTTP requests (e.g., from Client1004) to the appropriate API Gateway (e.g., API Gateway1008). In some embodiments, DNS Resolver1006resides within Network1002(e.g., as part of a cloud-based service provider). In some embodiments, DNS Resolver1006is hosted by a server (e.g., a virtual server) external to Network1002(e.g., as provided by an external service provider). In various embodiments, DNS Resolver1006comprises a recursive resolver, a root nameserver, a top-level domain nameserver, an authoritative nameserver, and/or any other appropriate type of nameserver. Network1002comprises a communication network. In various embodiments, Network1002comprises wired and/or wireless communication networks comprising standard, hybrid, and/or proprietary networks (e.g., a local area network, a wide area network, a virtual private network, etc.), proxy servers, and data centers. In some embodiments, Network1002comprises a Content Distribution Network. In some embodiments, Network1002comprises components of a cloud computing platform—for example, comprising a front end platform (e.g., a fat client, a thin client, etc.), a back end platform (e.g., servers, data storage, etc.), a cloud based delivery, and a network (e.g., Internet, Intranet, Intercloud, etc.). 
In the example shown, Database System1000comprises API Gateway1008, Request Handler1010, User Management Service1014, Permissions Service1016, Key ID Management Service1018, Key Management Service1020, Process Gateway1022, Query Process1024, Mutation Process1026, Audit Process1028, Projection Store1030, Event Store1032, and Audit Store1034. In various embodiments, computer programs used to execute the services provided by the system ofFIG.10are executed in whole or in part using one or more processors (e.g., one or more processors of API Gateway1008, Request Handler1010, User Management Service1014, Permissions Service1016, Key ID Management Service1018, Key Management Service1020, Process Gateway1022, Query Process1024, Mutation Process1026, and Audit Process1028). API Gateway1008manages communications between Request Handler1010and Client1004. For example, API Gateway1008comprises a server that acts as an API front-end, receives API requests, enforces throttling and security policies, passes requests to Request Handler1010, and passes a response from Query Process1024, Mutation Process1026, and/or Audit Process1028via Process Gateway1022and Request Handler1010back to Client1004. In various embodiments, API Gateway1008supports authentication, authorization, security, audit, and/or regulatory compliance. The system ofFIG.10supports requests from Client1004(e.g., HTTP requests) comprising query requests (e.g., for one or more events, for an aggregate of events at the moment, for an aggregate of events at some time in the past, for the state of an aggregate, etc.), mutation requests (e.g., to persist an event, to append an event to an existing aggregate, to assess the impact of a different transaction on the aggregate's state, etc.), and/or writing, checking and/or reading one or more projections. The system provides responses to Client1004by utilizing Query Process1024, Mutation Process1026, and/or Audit Process1028. 
Query Process1024, Mutation Process1026, and Audit Process1028comprise software programs executing on a processor—for example, Client APIs generated by an Application Development System (e.g., Application Development System106ofFIG.1). Request Handler1010routes the Client Request to the applicable process via Process Gateway1022. In various embodiments, Query Process1024, Mutation Process1026, and Audit Process1028run on one or more processors within or external to Database System1000, or any appropriate combination of one or more processors. In some embodiments, the Client Request includes an access token (e.g., a primary token or an impersonation token) within a request header. In various embodiments, the access token includes security credentials for a login session and identifies the user, the user's groups, and/or the user's privileges. Request Handler1010makes a request to User Management Service1014to validate the access token. In response to the token being valid, User Management Service1014responds with the Requestor's identity information. Request Handler1010uses the identity information to obtain permission levels associated with the Requestor from Permissions Service1016. In response to valid permission levels, Request Handler1010requests a Key ID (e.g., the Client Key) from Key ID Management Service1018. In response to receiving the Key ID, Request Handler1010requests the Client Key from Key Management Service1020using the obtained Key ID. Validator1012performs basic validation of the Client Request, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. 
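The credential-resolution chain described above (validate token, obtain identity, obtain permissions, obtain Key ID, obtain Client Key) can be sketched as follows. This is a minimal illustrative sketch only; every class and method name here (RequestHandler, resolve_client_key, the four stand-in services) is an assumption for the example, not the actual implementation of the system of FIG. 10.

```python
# Hypothetical sketch of the Request Handler's credential-resolution chain.
# All service classes and method names are illustrative assumptions.

class InvalidRequestError(Exception):
    pass

class RequestHandler:
    def __init__(self, user_service, permissions_service, key_id_service, key_service):
        self.user_service = user_service
        self.permissions_service = permissions_service
        self.key_id_service = key_id_service
        self.key_service = key_service

    def resolve_client_key(self, access_token):
        # 1. Validate the access token and obtain the requestor's identity.
        identity = self.user_service.validate(access_token)
        if identity is None:
            raise InvalidRequestError("access token failed to validate")
        # 2. Look up permission levels for that identity.
        permissions = self.permissions_service.levels_for(identity)
        if not permissions:
            raise InvalidRequestError("requestor has no valid permissions")
        # 3. Exchange the identity for a Key ID, then the Key ID for the Client Key.
        key_id = self.key_id_service.key_id_for(identity)
        return self.key_service.key_for(key_id)

# Minimal in-memory stand-ins for the four services of FIG. 10.
class Users:
    def validate(self, token):
        return {"user": "alice"} if token == "good-token" else None

class Permissions:
    def levels_for(self, identity):
        return ["query", "mutate", "audit"]

class KeyIds:
    def key_id_for(self, identity):
        return "key-id-1"

class Keys:
    def key_for(self, key_id):
        return b"client-key-bytes"

handler = RequestHandler(Users(), Permissions(), KeyIds(), Keys())
client_key = handler.resolve_client_key("good-token")
```

In this sketch each lookup short-circuits with an exception on failure, mirroring the exception path of the validation flow.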
In some embodiments, the validations of validator1012, query process1024, mutation process1026, and audit process1028serve the same purpose: to check whether the query/mutation/audit request is appropriate before continuing to do any logic. The system can immediately short-circuit the process when it is determined that there is an invalid request, rather than trying to continue the request and realize an issue later. These validations can be thought of in two ways: (1) generic request checking and (2) custom business logic checking. The first is something that can be determined across ALL requests, like if the request is well-formed and parsable. The latter is something that each implementation of this system will customize to fit its specific needs, like whether the criteria for a query request make sense in the business context (e.g., the request tries to set a price to a negative value, the query is for data in a date range that is too wide, etc.). The distinction between the checking at validator1012and the checking at query process1024, mutation process1026, and audit process1028is solely that, for a query, the system is checking that the query is valid; for a mutation, the system is checking that what is about to change is valid; and for an audit, the system is checking that what is being audited is valid. In some embodiments, Validator1012performs sanitization of the input data, wherein sanitization comprises removing data (e.g., hypertext markup language (HTML) tags or special character sequences) that might be misinterpreted as computer instructions or database queries, and/or removing any other unnecessary data that may leave the system vulnerable to unauthorized access and/or manipulation of data—for example, via an SQL (Structured Query Language) injection attack.
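The two kinds of checking described above, generic request validation and sanitization of potentially dangerous input, can be sketched as follows. The field names and the particular rules (a non-negative price, stripping HTML tags and SQL comment/terminator sequences) are assumptions chosen for illustration, not the system's actual rule set.

```python
import re

# Illustrative sketch of generic request checking and input sanitization.
# Field names and rules are assumptions for the example.

def validate_request(request):
    """Generic checking that can be determined across ALL requests."""
    if not isinstance(request, dict):
        return False
    price = request.get("price")
    # Example of a custom business-logic rule: a price may not be negative.
    if price is not None and (not isinstance(price, (int, float)) or price < 0):
        return False
    return True

def sanitize(value):
    """Remove data that might be misinterpreted as instructions or queries."""
    value = re.sub(r"<[^>]*>", "", value)             # strip HTML tags
    value = value.replace("--", "").replace(";", "")  # strip SQL comment/terminator
    return value
```

For example, `sanitize("Robert'); DROP TABLE users;--")` returns `"Robert') DROP TABLE users"`, removing the character sequences that could terminate or comment out a SQL statement in an injection attack.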
In response to a valid and/or sanitized request, Request Handler1010routes the Request, along with the Client Key to the specific Process (i.e., Query Process1024, Mutation Process1026, or Audit Process1028), based on the operations required to fulfill the Client Request. Query Process1024provides the requested information to Client1004using data stored in Projection Store1030and/or Event Store1032. In some embodiments, Query Process1024performs query-specific validation based on input data from Client1004's request, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the query-specific validation failing, the process indicates that the request is invalid and the query process is ended. Mutation Process1026applies applicable mutation business logic to the requested aggregate to produce a new Event N and new Aggregate N (e.g., as part of a ‘what-if’ analysis), persists Event N to Event Store1032, serializes Aggregate N to a storage-writable format (e.g., binary, JSON (JavaScript Object Notation), etc.), and writes Aggregate N, along with related identifying and encryption data, to Projection Store1030. In some embodiments, Mutation Process1026performs mutation-specific validation based on input data from Client1004's request, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. 
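The mutation path above (apply business logic to produce Event N and Aggregate N, persist the event to the Event Store, serialize the aggregate, and write it to the Projection Store) can be sketched as follows. The store layout, field names, and the use of HMAC-SHA256 as a stand-in for "encrypting the hash value using the Client Key" are all assumptions for illustration.

```python
import hashlib
import hmac
import json

# Hedged sketch of the Mutation Process. Store layout, field names, and the
# HMAC-based signing scheme are illustrative assumptions.

def mutate(aggregate, event, client_key, event_store, projection_store):
    # Sign the event: hash its data and sequence number with the prior
    # event's signature, keyed by the Client Key.
    prior_sig = aggregate["events"][-1]["sig"] if aggregate["events"] else ""
    payload = json.dumps(event["data"], sort_keys=True) + str(event["seq"]) + prior_sig
    event["sig"] = hmac.new(client_key, payload.encode(), hashlib.sha256).hexdigest()

    # Persist Event N and apply it to produce Aggregate N.
    event_store.append(event)
    aggregate["events"].append(event)
    aggregate["state"].update(event["data"])

    # Serialize Aggregate N to a storage-writable format (JSON here) and
    # write it to the Projection Store.
    projection_store[aggregate["id"]] = json.dumps(aggregate["state"])
    return aggregate

event_store, projection_store = [], {}
agg = {"id": "agg-1", "events": [], "state": {}}
mutate(agg, {"seq": 1, "data": {"price": 42}}, b"client-key", event_store, projection_store)
```

The same signing step is what the later verification passes recompute when checking a chain of events.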
In response to the mutation-specific validation failing, the process indicates that the request is invalid and the mutation process is ended. Audit Process1028aggregates event data from Event Store1032and/or Audit Store1034, as applicable, and responds with the requested information back to Client1004. In some embodiments, Audit Process1028performs audit-specific validation based on input data from Client1004's request, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the audit-specific validation failing, the process indicates that the request is invalid and the audit process is ended. In response to discovering a problem with the requested data or action (e.g., a corrupted aggregate/event pair, an audit that fails validation, a command that is syntactically invalid, etc.), the system in various embodiments provides an alert to Client1004, provides an alert to a curator (not shown) of Database System1000(e.g., via Network1002), and/or persists a new database record that documents the issue into Audit Store1034. FIG.11is a flow diagram illustrating an embodiment of a method for receiving and validating a client request. In some embodiments, the process ofFIG.11is executed by the system ofFIG.10. In the example shown, in1100, a Client Request and Access Token are received. For example, a client makes a request via a Network (e.g., using HTTP) that is routed (e.g., via a DNS Resolver and an API Gateway) to an appropriate tenanted endpoint (e.g., the Database System curated by the Client's service provider). The Access Token is used to uniquely identify a user and look up their entitlements based on request context. 
The Access Token is created when the User itself is created and is stored in the system. Upon the User authenticating with their username/password, the system replies to a successful authentication with the Access Token. The Access Token is then stored on the client for use in future API calls. When an API Request is made, the system will validate the Access Token, look up the User, look up the User's Entitlements, and verify that the User's Entitlements include the Permission that secures the API Request itself. In various embodiments, the Access Token includes security credentials for a login session and identifies the user (i.e., the requestor), the user's groups, and/or the user's privileges. In various embodiments, the Access Token comprises a delegation or impersonation token that is validated by a security program running on a processor (e.g., an API Gateway server processor and/or a Request Handler processor). In some embodiments, the Client Request includes the Access Token as supplemental data placed at the beginning of the Client Request (i.e., as a header to one or more data blocks comprising the Client Request). In1102, it is determined whether the Access Token is valid. For example, in response to the API Gateway server validating the API Token portion of the Access Token, the API Gateway server routes the Client Request to an appropriate Request Handler. The Request Handler uses the Client Request and Access Token to make a request to a User Management Service to further validate the Access Token. In response to the Access Token being further validated, the User Management Service obtains Client identity information (e.g., requestor or user identity information) based on data within the Client Request and/or Access Token. In response to any portion of the Access Token failing to validate, control passes to1104. In1104, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.)
and an unsuccessful response is returned to the Client (e.g., a message stating that the Access Token failed to validate, and/or including an error code, or any other appropriate information). In response to the Access Token being fully validated, control passes to1106. In1106, identity information for the requestor is received. For example, identity information for the requestor is received by the Request Handler from the User Management Service. In some embodiments, the requestor is one of multiple possible users or requestors that work for the Client. In1108, Permission Levels are obtained for the Requestor. For example, the Request Handler uses identity information received from the User Management Service to obtain user Permission Levels from a Permissions Service. User Permission Levels are claims that allow or disallow user access to performing certain activities (e.g., retrieving keys from a key management service, accessing various databases, etc.). In1110, it is determined whether the Permission Levels are valid. For example, the Request Handler determines whether Permission Levels for the requestor are valid (e.g., by determining if the requestor has the appropriate level of authority to access the information requested in the Client Request). In response to the Permission Levels failing to validate, control passes to1104. In1104, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the requestor does not have permission to access the requested information, and/or including an error code, or any other appropriate information). In response to the Permission Levels being determined as valid, control passes to1112. In1112, the Client Key ID is retrieved. For example, the Request Handler requests and retrieves the Client Key ID from a Key ID Management Service.
In some embodiments, the Key ID Management Service provides a secure look-up for a Key given a Key ID. In1114, the Client Key is retrieved using the Client Key ID. For example, the Request Handler requests and retrieves the Client Key from a Key Management Service using the obtained Client Key ID. In1116, it is determined whether the Client Request is valid. For example, the Request Handler performs basic input validation based on data within the Client Request. In various embodiments, the Request Handler relies on a dedicated Validator process or processor (e.g., a sub-process and/or a sub-processor) to perform the basic input validation, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the Client Request failing to validate, control passes to1104. In1104, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Client Request failed to validate, and/or including an error code, or any other appropriate information). In response to the Client Request being determined as valid, control passes to1118. In1118, the Client Request is sanitized. For example, the Request Handler performs basic sanitization of the input data within the Client Request.
In various embodiments, the Request Handler relies on a dedicated Validator process or processor (e.g., a sub-process and/or a sub-processor) to sanitize the input data, wherein sanitization comprises removing data (e.g., hypertext markup language (HTML) tags or special character sequences) that might be misinterpreted as computer instructions or database queries, and/or removing any other unnecessary data that may leave the system vulnerable to unauthorized access and/or manipulation of data—for example, via an SQL (Structured Query Language) injection attack. In1120, the Client Request and Client Key are provided to the Appropriate Process, and the process ends. For example, the Request Handler routes the Client Request and Client Key to a specific processor (e.g., one or more processors supporting a Query Process, Mutation Process, and/or Audit Process), based on the nature of the requested operation within the Client Request (e.g., a query request, a mutation request, an audit request, etc.). In some embodiments, the Request Handler routes the Client Request and Client Key to the appropriate processor via a Process Gateway. In some embodiments, the Process Gateway is an internal security interface that a request must pass through in order to communicate with a given service. Gateways are used by the system to organize services so that the services stay modular. In some embodiments, a gateway (e.g., the process gateway) acts as a throttle point or an additional security boundary. In some embodiments, the gateways are “private” because they are not exposed to the internet and are only accessible once you get past the system's public gateway. FIG.12is a flow diagram illustrating an embodiment of a method for fulfilling an audit query request. In some embodiments, the process ofFIG.12is executed by the system ofFIG.10. 
The Audit Process is used to access and deliver validated information contained in an Audit Store, and/or Event Store, as required to respond to the Audit Query Request. The Audit Store comprises a database used to store events, aggregates of events, and/or other data related to the contents of the Audit Store. In various embodiments, other data related to the contents of the Audit Store comprises (i) a listing (e.g., a lookup table) of the Audit Store contents (e.g., as listed by Aggregate ID, Sequence Number, timestamp of entry, and/or any other appropriate type of content identifier), (ii) content status (e.g., active, inactive, valid, corrupt, etc.), and/or (iii) access records (e.g., an access record that identifies what data was accessed, by what user and when, or any other appropriate type of access information). The Event Store comprises a database used to store events and/or other data related to the contents of the Event Store. In various embodiments, other data related to the contents of the Event Store comprises (i) a listing (e.g., a lookup table) of the Event Store contents (e.g., as listed by unique Sequence Number, or any other appropriate type of content identifier), (ii) content status (e.g., active, inactive, valid, corrupt, etc.), and/or (iii) access records (e.g., an access record that identifies what event data was accessed, by what user and when, or any other appropriate type of access information). In some embodiments, the Audit Process comprises a service provider that enables auditors (e.g., human auditors, software audit programs, or any appropriate combination of human auditors and software audit programs) to inspect and audit the contents of the Audit Store (e.g., validating an event chain of an existing aggregate). For example, the Audit Process is used to aggregate and validate a chain of N events prior to responding with the requested information. 
In some embodiments, a field added to each event of the chain of N events is used to indicate the verification status of each event (e.g., a “Verified: True” field). In some embodiments, a verification field at the aggregate-level indicates the verification status of the aggregate comprising the chain of N events. In the event that the requested information is indicated as previously verified, the Audit Process returns the requested information without further validation (e.g., by accessing and returning the requested information from the Audit Store). In the event that any of the requested information was not previously verified (e.g., a verification field either does not exist or is empty, or a “Verified: False” field is present), the Audit Process validates any unverified events using the Client Key prior to responding with the requested information (e.g., by validating each unverified event within an aggregate comprising a chain of N events). In the event that one or more requested events are not listed as available in the Audit Store, the Audit Process accesses and retrieves the requested information from the Event Store. In the event that one or more requested events are not available in either the Audit Store or the Event Store, the Audit Process returns an exception (e.g., by sending a message to the requestor and/or the curator of the Database System). In the example shown, in1200, an Audit Query Request and Client Key are received. For example, an Audit Query Request and Client Key are received by the Audit Process via a Request Handler and/or a Process Gateway. In1202, it is determined whether the Audit Query Request is valid.
For example, the Audit Process determines whether the Audit Query Request is valid, wherein determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the Audit Query Request failing to validate, control passes to1204. In1204, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Client Request failed to validate, and/or including an error code, or any other appropriate information). In response to the Audit Query Request being validated, control passes to1206. In1206, it is determined whether the chain of events is stored in the Audit Store. For example, it is determined whether the data associated with the Audit Query Request is stored in the Audit Store. In response to determining that the chain of events is not stored in the Audit Store, control passes to1208. In1208, the chain of events is retrieved from the Event Store and stored, and control passes to1212. For example, the data associated with the Audit Query Request is located, retrieved, verified, and/or aggregated from the Event Store by checking that the chain of events passes its signature checks. For example, verifying the chain comprises selecting an event, retrieving contents of an event, generating a hash value by hashing data and a sequence number of an event with an event signature from a prior event, encrypting the hash value using the client key to generate a check signature, and determining whether the check signature is equivalent to a next event signature.
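The chain verification just described (hash each event's data and sequence number with the prior event's signature, key the hash with the Client Key, and compare against the stored signature) can be sketched as follows. HMAC-SHA256 is used here as a stand-in for "encrypting the hash value using the client key", and the event field names are assumptions for the example.

```python
import hashlib
import hmac
import json

# Sketch of event-chain verification. HMAC-SHA256 stands in for
# "encrypting the hash value using the client key"; field names are assumed.

def sign(data, seq, prior_sig, client_key):
    # Hash the event data and sequence number with the prior event's signature.
    payload = json.dumps(data, sort_keys=True) + str(seq) + prior_sig
    return hmac.new(client_key, payload.encode(), hashlib.sha256).hexdigest()

def verify_chain(events, client_key):
    prior_sig = ""
    for event in events:
        check_sig = sign(event["data"], event["seq"], prior_sig, client_key)
        if not hmac.compare_digest(check_sig, event["sig"]):
            return False  # corrupt event: a record would go to the Audit Store
        prior_sig = event["sig"]
    return True

# Build a small two-event chain, then tamper with it.
key = b"client-key"
events, prior = [], ""
for seq, data in enumerate([{"a": 1}, {"a": 2}], start=1):
    sig = sign(data, seq, prior, key)
    events.append({"seq": seq, "data": data, "sig": sig})
    prior = sig
```

Because each signature folds in the prior event's signature, altering any event's data invalidates every signature from that event onward.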
In some embodiments, the Audit Process retrieves the Requested Data using an Aggregate ID (e.g., a Sequence Number that uniquely identifies a particular aggregate) that is included in the Audit Query Request (e.g., as a header to one or more data blocks comprising the Audit Query Request). In response to determining that the chain of events is stored in the Audit Store in1206, control passes to1210. In1210, it is determined whether the chain of events is verified as true. In response to determining that the chain of events is verified as true, control passes to1212. In1212, the requested data is provided to the client, and the process ends. For example, the data requested in the Audit Query Request is provided to the client (e.g., the user associated with the client request). In some embodiments, the Audit Process responds with an aggregate, and/or the state of the aggregate, comprising a validated chain of N events. In response to determining that the chain of events is not verified as true, control passes to1214. In1214, the chain of events is verified, and control passes to1212. For example, the chain of events stored in the Audit Store is verified by checking that the chain of events passes its signature checks. For example, verifying the chain comprises selecting an event, retrieving contents of an event, generating a hash value by hashing data and a sequence number of an event with an event signature from a prior event, encrypting the hash value using the client key to generate a check signature, and determining whether the check signature is equivalent to a next event signature. In some embodiments, the Audit Process is used by the curator of the Database System to access other information related to the contents of the Audit Store (e.g., user access records, corrupt data, exception notifications, etc.). 
For example, the curator of the Database System may need to investigate causes of corrupt data, improper software operation, improper database access; to generate related reports; and/or access any other appropriate information required to maintain and support the Database System. FIGS.13A and13Bare a flow diagram illustrating an embodiment of a method for fulfilling a query request. In some embodiments, the process ofFIGS.13A and13Bis executed by the system ofFIG.10. The Query Process is used to access and deliver validated information contained in an Event Store, and/or Projection Store, as required to respond to the Query Request. The Event Store comprises a database used to store events and/or other data related to the contents of the Event Store. In various embodiments, other data related to the contents of the Event Store comprises (i) a listing (e.g., a lookup table) of the Event Store contents (e.g., as listed by unique Sequence Number, or any other appropriate type of content identifier), (ii) content status (e.g., active, inactive, valid, corrupt, etc.), and/or (iii) access records (e.g., an access record that identifies what event data was accessed, by what user and when, or any other appropriate type of access information). The Projection Store comprises a database used to store, cache, and/or pre-compute current and/or frequently accessed forms of an aggregate useful for performance boosting. A projection is a copy of an aggregate's current state. In some embodiments, the projection includes incorporating modifications and/or deletions as of an effective moment/date of an entry in the database system even if the entry is later in the entry log (e.g., the entry or actual date of the entry is after the effective moment/date). For example, the Query Process is used to retrieve events, and/or the state of an aggregate comprising a chain of N events, at a particular date and time (i.e., an ‘As-of’ date) or as of the current moment.
In some embodiments, the state of an aggregate comprising a chain of N events is stored in the Projection Store to provide an improved Client response time and reduced computational complexity and cost. In the example shown, in1300, a Query Request and Client Key are received. For example, a Query Request and Client Key are received by the Query Process via a Request Handler and/or a Process Gateway. In1302, it is determined whether the Query Request is valid. For example, the Query Process determines whether the Query Request is valid. In some embodiments, determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the Query Request failing to validate, control passes to1304. In1304, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Query Request failed to validate, and/or including an error code, or any other appropriate information), and the process ends. In response to the Query Request being valid, control passes to1306. In1306, it is determined whether the Query Request is for the Current Moment. For example, the Query Process determines whether the Query Request is for the Current Moment (e.g., via a field indicator in the Query Request that indicates the requested information is to be provided as of the time that the Query Request is processed by the Query Process). In response to the Query Request being for the Current Moment, the process flows to (C) inFIG.13A. In response to Query Request not being for the Current Moment, the control passes to1307. 
In1307, it is determined whether the Query Request includes an As-of Value. In various embodiments, determining whether the Query Request includes an As-of Value comprises determining whether an As-of field in the Client Request (e.g., incorporated into a header of one or more data blocks comprising the Query Request) is non-empty, and/or whether the As-of Value indicates a date/time outside the range of data stored in the Event Store, and/or Projection Store. In response to the Query Request not including an As-of Value, the control passes to1304. In1304, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Query Request is invalid, and/or including an error code, or any other appropriate information), and the process ends. In response to the Query Request including an As-of Value, the process flows to (B) inFIG.13B. In the example shown, in response to the Query Request being for the Current Moment, the control passes to1308. In1308, the Desired Aggregate is located in the Event Store using the Aggregate ID included in the Client Request. For example, the Query Process locates the requested Aggregate ID (e.g., a Sequence Number that uniquely identifies a particular aggregate) by examining a look-up table within the Event Store. In1310, the Events for the Desired Aggregate are retrieved. For example, the Query Process retrieves the Events for the Desired Aggregate from the Event Store. In1312, an Empty Aggregate N is instantiated. For example, the Query Process instantiates an Empty Aggregate N. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In1314, a Next Event is selected from the Desired Aggregate. 
For example, the Query Process selects the Next Event from the Desired Aggregate comprising a chain of N events. In1316, the Contents of the Next Event are retrieved and held in memory. For example, the Query Process retrieves the Contents of the Next Event from the Desired Aggregate and holds them in memory (e.g., computer memory associated with the Query Process processor or any appropriate memory allocated to the Database System). In1318, it is determined whether the timestamp of the Next Event is greater than the As-of Value. For example, the Query Process determines whether the timestamp of the Next Event is greater than the As-of Value. In response to the timestamp of the Next Event not being greater than the As-of Value, the process flows to1314. In response to the timestamp of the Next Event being greater than the As-of Value, control passes to1320. In1320, a Next Event is selected from Memory for validation. In1322, the Contents of the Next Event are retrieved. For example, the contents of a Next Event such as input data, an identifier, a proposed event signature, and a prior event signature are retrieved. In1324, a Hash Value is generated by hashing the Input Data and Sequence Number of the Next Event with the Event Signature from the Prior Event. For example, the Input Data and Sequence Number of the Next Event are hashed with the Event Signature from the Prior Event, wherein the prior event comprises the event immediately prior to the selected Next Event from the Events retrieved in1310. In1326, the Hash Value Output of1324is encrypted using the Client Key to generate a Check Signature. In1328, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, control passes to1330. In1330, a record of the Corrupt Event is stored in the Audit Store. 
For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In1304, an exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Client Request failed to validate, and/or including an error code, or any other appropriate information). In response to the Check Signature being equivalent to the Next Event Signature, control passes to1332. In1332, the Next Event is applied onto Aggregate N. In1334, it is determined whether all Desired Events have been validated (i.e., all the Events on the list of Events retrieved from the Desired Aggregate in1310). In response to determining that all Desired Events have not been validated, the process flows to1320to select a Next Event from Memory for validation. In response to determining that all Desired Events have been validated, the control passes to1336. For example, in response to all the Events retrieved from the Desired Aggregate in1310having been validated, the control passes to1336. In1336, the Requested Data is provided to the Client, and the process ends. For example, the Query Process responds with an aggregate of events, and/or the state of the aggregate, comprising a validated chain of N events as of a requested date and time. In the example shown inFIG.13B, in1340, an Aggregate ID is received. For example, the Aggregate ID is received from a memory coupled to a processor. In1342, it is determined whether a Projection of the Aggregate associated with the Aggregate ID exists in the Projection Store. 
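The As-of replay loop of 1312 through 1336 (instantiate an empty Aggregate N, keep events timestamped at or before the As-of Value, verify each event's signature, and apply the survivors onto the aggregate) can be sketched as follows. HMAC-SHA256 again stands in for "encrypting the hash value using the Client Key", and the event field names are assumptions.

```python
import hashlib
import hmac
import json

# Sketch of the As-of query path. HMAC stands in for the keyed signature;
# field names are illustrative assumptions.

def sign(data, seq, prior_sig, key):
    payload = json.dumps(data, sort_keys=True) + str(seq) + prior_sig
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def replay_as_of(events, as_of, key):
    aggregate = {}                       # 1312: instantiate an empty Aggregate N
    prior_sig = ""
    for event in events:                 # events arrive in sequence order
        if event["ts"] > as_of:          # 1318: stop at events past the As-of Value
            break
        check = sign(event["data"], event["seq"], prior_sig, key)
        if check != event["sig"]:        # 1328/1330: corrupt event, abort
            raise ValueError("corrupt event %d" % event["seq"])
        aggregate.update(event["data"])  # 1332: apply the event onto Aggregate N
        prior_sig = event["sig"]
    return aggregate                     # 1336: requested state as of the date

# Build a three-event chain with timestamps, then replay as of ts=250.
key = b"client-key"
raw = [(1, 100, {"price": 10}), (2, 200, {"price": 20}), (3, 300, {"price": 30})]
events, prior = [], ""
for seq, ts, data in raw:
    sig = sign(data, seq, prior, key)
    events.append({"seq": seq, "ts": ts, "data": data, "sig": sig})
    prior = sig

state = replay_as_of(events, as_of=250, key=key)
```

Replaying as of timestamp 250 applies only the first two events, so the reconstructed state reflects the aggregate as it stood at that moment.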
In response to a Projection of the Aggregate not existing in the Projection Store, control passes to (C) ofFIG.13A. In response to a Projection of the Aggregate existing in the Projection Store, control passes to1344, and the Projection of the Aggregate associated with the Aggregate ID is obtained from the Projection Store. For example, a Projection of the Aggregate associated with the Aggregate ID is obtained for a requested date/time (i.e., the As-of Value). In1346, the Aggregate associated with the Aggregate ID is obtained from the Event Store. For example, an Aggregate comprising a chain of N events is obtained from the Event Store by the Query Process. In1348, the Event N Signature and Aggregate Parent Signature are retrieved from the Aggregate. For example, the signature from the Nth event within the Aggregate comprising a chain of N events, and the parent signature of the Aggregate, are retrieved by the Query Process. A Parent Signature comprises an encrypted and hashed concatenation of a Projection Signature with the latest signature of an event from an aggregate comprising a chain of N events (i.e., the Event N Signature). In various embodiments, the concatenation process is indicated by a standard, unambiguous delimiter (e.g., a vertical bar ‘|’, a ‘+’ glyph, or any other appropriate delimiter). A Projection Signature comprises an encrypted hash value of the input data included in a Projection, wherein the hash value is encrypted using a Client Key. In1350, a Hash Value is generated by hashing the Projection Input Data. In some embodiments, the hash is salted with a random number to provide additional security against precomputation attacks, such as Rainbow Table attacks. In1352, the Hash Value Output is encrypted using the Client Key to generate a Check Projection Signature. In1354, the Check Projection Signature is concatenated with the Event N Signature. In1356, a Hash Value is generated by hashing the concatenated Check Projection Signature and Event N Signature.
In1358, the Hash Value Output is encrypted using the Client Key to generate a Check Parent Signature. In1360, it is determined whether the Check Parent Signature is equivalent to the Aggregate Parent Signature. In response to determining that the Check Parent Signature is not equivalent to the Aggregate Parent Signature, control passes to1362, and a Record of the Corrupt Projection is stored in the Audit Store. For example, an indication or status is associated with the Projection indicating that the Projection is not valid, an indication is sent or logged indicating that the Projection is not valid, the system reverts the Projection to a valid state prior to the invalid event, the system is flagged for a manual control process for review prior to the Projection being reverted, the system regenerates the Projection using stored input data in the Event Store, or any other appropriate action following an invalid determination. In response to determining that the Check Parent Signature is equivalent to the Aggregate Parent Signature, control passes to1364, and the Projection Data is deserialized. Deserialization transforms the Projection Data to a representation of the object that is executable. A serialized object used for communication cannot be directly processed by a computer program; an unmarshalling interface (e.g., within the Query Process) takes the serialized object and transforms it into an executable form. In1366, the Projection is hydrated into memory. For example, the Projection is hydrated into computer memory associated with the Query Process processor or any appropriate memory allocated to the Database System, wherein hydration comprises the extraction of the deserialized Projection Data to populate the data into a data object for communication to the Client. In1368, the Requested Data is provided to the Client, and the process ends.
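The signature arithmetic in 1348 through 1360 can be sketched as follows. This is a minimal illustration rather than the claimed implementation: HMAC-SHA256 stands in for the "hash, then encrypt the hash with the Client Key" operation, and all function and variable names are assumptions.

```python
import hashlib
import hmac

DELIM = b"|"  # the standard, unambiguous delimiter described above

def sign(client_key: bytes, data: bytes) -> bytes:
    # Stand-in for "hash the data, then encrypt the hash with the Client Key":
    # the data is hashed with SHA-256 and the digest is keyed with HMAC-SHA256.
    return hmac.new(client_key, hashlib.sha256(data).digest(), hashlib.sha256).digest()

def check_parent_signature(client_key: bytes, projection_input_data: bytes,
                           event_n_signature: bytes,
                           stored_parent_signature: bytes) -> bool:
    # 1350-1352: recompute the Check Projection Signature from the input data
    check_projection_sig = sign(client_key, projection_input_data)
    # 1354-1358: concatenate with the Event N Signature, hash, and sign again
    check_parent_sig = sign(client_key, check_projection_sig + DELIM + event_n_signature)
    # 1360: constant-time comparison against the stored Aggregate Parent Signature
    return hmac.compare_digest(check_parent_sig, stored_parent_signature)
```

A tampered Projection changes the recomputed Check Parent Signature, so the comparison in 1360 fails and a Record of the Corrupt Projection can be stored in the Audit Store.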
FIGS.14A and14Bare flow diagrams illustrating an embodiment of a method for fulfilling a mutation request and writing a projection. In some embodiments, the process ofFIGS.14A and14Bis executed by the system ofFIG.10. The Mutation Process is used to access, mutate, and deliver validated information contained in an Event Store, and/or Projection Store, as required to respond to the Mutation Request, wherein a mutation comprises persisting an event, appending an event to an existing aggregate, or assessing the impact of a different transaction on the aggregate's state. In some embodiments, the Mutation Process is used to generate and store a Projection. The Event Store comprises a database used to store events and/or other data related to the contents of the Event Store. In various embodiments, other data related to the contents of the Event Store comprises (i) a listing (e.g., a lookup table) of the Event Store contents (e.g., as listed by unique Sequence Number, or any other appropriate type of content identifier), (ii) content status (e.g., active, inactive, valid, corrupt, etc.), and/or (iii) access records (e.g., an access record that identifies what event data was accessed, by what user and when, or any other appropriate type of access information). The Projection Store comprises a database used to store, cache, and/or pre-compute current and/or frequently accessed forms of an aggregate useful for performance boosting. A projection is a copy of an aggregate's current state. In some embodiments, the projection includes incorporating modifications and/or deletions as of an effective moment/date of an entry in the database system even if the entry is later in the entry log (e.g., the entry or actual date of the entry is after the effective moment/date). In1400, a Mutation Request and Client Key are received. For example, a Mutation Request and Client Key are received by the Mutation Process via a Request Handler and/or a Process Gateway.
In1402, it is determined whether the Mutation Request is valid. For example, the Mutation Process determines whether the Mutation Request is valid. In some embodiments, determining whether the request is valid comprises determining whether the request is valid syntactically and/or whether data included with the request is appropriate for the request including determining one or more of the following: whether the data is within range, whether the data is of appropriate data type, and/or whether the data matches the requested input parameters. In response to the Mutation Request failing to validate, control passes to1404. In1404, an Exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Mutation Request failed to validate, and/or including an error code, or any other appropriate information). In response to the Mutation Request being valid, control passes to1406. In1406, the Desired Aggregate is located within the Event Store using the Aggregate ID included in the Mutation Request. For example, the Mutation Process locates the Desired Aggregate using an Aggregate ID (e.g., a Sequence Number that uniquely identifies a particular aggregate) by examining a look-up table within the Event Store. In1408, the Events for the Desired Aggregate are retrieved. For example, the Mutation Process retrieves the Events for the Desired Aggregate from the Event Store. In1410, an Empty Aggregate N is instantiated. For example, the Mutation Process instantiates an Empty Aggregate N. In some embodiments, Empty Aggregate N is an object-oriented data structure. In some embodiments, the data structure comprises a format to receive one or more events. In1412, a Next Event is selected from the Desired Aggregate for validation. In1414, the Contents of the Next Event are retrieved. 
For example, the contents of a Next Event such as input data, an identifier, a proposed event signature, and a prior event signature are retrieved. In1416, a Hash Value is generated by hashing the Input Data and Sequence Number of the Next Event with the Event Signature from the Prior Event. For example, the Input Data and Sequence Number of the Next Event are hashed with the Event Signature from the Prior Event, wherein the prior event comprises the event immediately prior to the selected Next Event from the Events retrieved in1408. In1418, the Hash Value Output of1416is encrypted using the Client Key to generate a Check Signature. In1420, it is determined whether the Check Signature is equivalent to the Next Event Signature. In response to the Check Signature being determined invalid, control passes to1422. In1422, a record of the Corrupt Event is stored in the Audit Store. For example, an indication or status is associated with the Next Event indicating that the Next Event is not valid, an indication is sent or logged indicating that the Next Event is not valid, the system reverts the Event data in the Aggregate to a valid state prior to the invalid event, the system regenerates the Next Event using stored input data for the Next Event, or any other appropriate action following an invalid determination. In1404, an exception is returned, and the process ends. For example, the service provider is notified (e.g., via log, email, etc.) and an unsuccessful response is returned to the Client (e.g., a message stating that the Mutation Request failed to validate, and/or including an error code, or any other appropriate information). In response to the Check Signature being equivalent to the Next Event Signature, control passes to1424. In1424, the Next Event is applied onto Aggregate N. In1426, it is determined whether all Desired Events have been validated (i.e., all the Events on the list of Events retrieved from the Desired Aggregate in1408).
In response to determining that all Desired Events have not been validated, the process flows to1412to select a Next Event from the Desired Aggregate for validation. In response to determining that all Desired Events have been validated, the control passes to1428. For example, in response to all the Events retrieved from the Desired Aggregate in1408having been validated, the control passes to1428. In1428, Event Signature N-1from the Last Validated Event is tracked. For example, the Event Signature N-1is stored in processor memory. In1430, business logic is applied to Aggregate N to generate a New Event. For example, business logic comprises a logical algorithm (e.g., a client-specific API) used to create a new event or to modify a copy (e.g., a projection) of an existing event. An event is a data entry that represents a change in an application state (i.e., a mutation). In a financial system, examples of events include TradeOrderFilled, ClientNotified, AccountBalanceUpdated, or any other appropriate type of event. An application state does not change until a proposed mutation is ‘persisted’ into a database (e.g., stored in an event store). In1432, a Sequence Number N is generated and applied onto the New Event. For example, a Sequence Number N comprising an integer value representing the next logical sequence number for the New Event (e.g., based on the existing event sequence numbers within the Desired Aggregate) is applied onto the New Event. In1434, a Hash Value is generated by hashing Input Data and Sequence Number N of the New Event with Event Signature N-1. For example, the Input Data and Sequence Number N of the New Event are hashed with the Event Signature N-1tracked in1428. In1436, the Hash Value Output of1434is encrypted using the Client Key to generate Check Signature N. In1438, Check Signature N and Event Signature N-1are applied onto the New Event to generate a Mutation. In1440, the Mutation is stored in the Event Store.
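The validation loop (1412 through 1426) and the creation of a signed New Event (1428 through 1440) can be sketched together as below. The Event fields, the genesis value used as the first event's prior signature, and the HMAC-based stand-in for the hash-then-encrypt-with-Client-Key step are all illustrative assumptions, not the claimed implementation.

```python
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class Event:
    sequence: int
    input_data: bytes
    prior_signature: bytes   # Event Signature N-1
    signature: bytes         # Check Signature N

def event_signature(client_key: bytes, input_data: bytes,
                    sequence: int, prior_signature: bytes) -> bytes:
    # 1416/1434: hash the Input Data and Sequence Number with the prior Event
    # Signature; 1418/1436: "encrypt the hash with the Client Key", modeled
    # here with HMAC-SHA256 as a keyed stand-in.
    digest = hashlib.sha256(
        input_data + sequence.to_bytes(8, "big") + prior_signature).digest()
    return hmac.new(client_key, digest, hashlib.sha256).digest()

GENESIS = b"\x00" * 32  # assumed prior signature for the first event in a chain

def validate_chain(client_key: bytes, events: list) -> bool:
    # 1412-1426: walk the aggregate, recomputing each Check Signature
    prior = GENESIS
    for ev in events:
        check = event_signature(client_key, ev.input_data, ev.sequence, prior)
        if not hmac.compare_digest(check, ev.signature):
            return False  # corrupt event: record in the Audit Store
        prior = ev.signature
    return True

def append_event(client_key: bytes, events: list, input_data: bytes) -> list:
    # 1428-1440: track Event Signature N-1, apply Sequence Number N, sign, persist
    prior = events[-1].signature if events else GENESIS
    seq = events[-1].sequence + 1 if events else 1
    sig = event_signature(client_key, input_data, seq, prior)
    events.append(Event(seq, input_data, prior, sig))
    return events
```

Because each Check Signature folds in the prior event's signature, altering any stored event invalidates every later signature in the chain.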
In1442, it is determined whether a Projection is desired. For example, it is determined whether a copy of the Mutation is desired to be created and stored in the Projection Store (e.g., for performance boosting by shortcutting the Aggregate-hydration process when only concerned with the latest state of the Aggregate). In response to determining that a Projection is not desired, the control passes to1444. In1444, a Notice of Success and/or the Requested Data is provided to the Client, and the process ends. For example, the Mutation Process provides to the Client a confirmation that the New Event (i.e., the proposed Mutation) has been successfully stored in the Event Store, and/or provides to the Client a modified (i.e., mutated) aggregate of events, and/or the state of the modified aggregate, as based on the Mutation Request, and the process ends. In response to determining that a Projection is desired, the control passes to (D) ofFIG.14B. In1460, the Aggregate, Aggregate ID, and Client Key are received. For example, the Aggregate, Aggregate ID, and Client Key are received from processor memory for the process of fulfilling a mutation request as inFIG.14A. In1462, the Aggregate is serialized to generate Projection Data. For example, the Mutation Process serializes the Aggregate to a storage-writable format (e.g., binary, JSON, etc.) to generate Projection Data. In1464, a Hash Value is generated by hashing the Projection Data. In1466, the Hash Value Output is encrypted using the Client Key to generate a Projection Signature. In1468, the Projection Signature is concatenated with the Check Signature from the Latest Event. For example, the Projection Signature is concatenated with the Check Signature from the Nth Event of the Aggregate comprising a chain of N events received in1460. In1470, a Hash Value is generated by hashing the Concatenated Projection Signature and Check Signature.
In1472, the Hash Value Output is encrypted using the Client Key to generate a Parent Signature. In1474, the Aggregate ID, Projection Data, Projection Signature, and Parent Signature are stored in the Projection Store. In1476, it is determined whether a Request for Data was associated with the Projection Data. For example, the Mutation Request of1400may have requested simply to mutate and persist a new event to an existing aggregate without requiring more than notice of success. In response to determining that a Request for Data was not associated with the Projection Data, the process ends. In response to determining that a Request for Data was associated with the Projection Data, the control passes to1478. In1478, the Requested Data is provided to the Client, and the process ends. For example, the Mutation Process responds with a mutated aggregate of events, and/or the state of the mutated aggregate (e.g., comprising a validated chain of one or more mutated events), and the process ends. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
11943345 | DETAILED DESCRIPTION In order to make the objects, the technical solutions and the advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings of the embodiments of the present application. Clearly, the described embodiments are merely certain embodiments of the present application, rather than all of the embodiments. All other embodiments that a person skilled in the art obtains on the basis of the embodiments of the present application without creative effort fall within the protection scope of the present application. The terms “include”, “comprise”, “have” and any variations thereof mentioned in the embodiments of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but may further include other unlisted steps or units, or other steps or units inherent to the process, method, product or device.
By referring toFIG.1, a key management method provided by the embodiments of this application adopts a three-level access control verification policy; according to the above key management method, a user is subjected to authority verification step by step, including identity verification, a role-based access control policy, and an attribute-based access control policy, the method including:
S101, receiving key generation request information;
S102, generating attribute access policy information on the basis of the key generation request information, the attribute access policy information being an attribute set for encrypting a data key;
S103, encrypting the data key on the basis of the attribute set for encrypting the data key;
Exemplarily, on the basis of a first-level identity verification module and a second-level RBAC-based access control module, the coarse-grained access control on the key to be generated is completed; the key is generated according to the preset attribute access policy information, wherein the attribute access policy information is an attribute set that allows encryption of the key to be generated; a specific user is authorized, and corresponding attributes are set for the corresponding user, so that the specific user has corresponding access authority to the key to be generated, so as to complete fine-grained access control on the key to be generated. Exemplarily, the preset attribute access policy information is bound to each key of a key management system.
S104, receiving key acquisition request information;
S105, on the basis of the attribute set for encrypting the data key, verifying whether attribute information of the key acquisition request information is included in the attribute set for encrypting the data key;
Exemplarily, in order to distinguish attribute levels, each attribute of the user is classified as either a self-authorization attribute or an authorized attribute; the authorized attribute may only be updated and revoked by the user who
created the key; and the self-authorization attribute may be updated and revoked by the user itself. Exemplarily, when a user creates a key, he may authorize himself as the only user able to operate on the attribute information, so as to prevent other users from accessing it. The other users may also be authorized to only read the attribute information. The user who created the key has the authority to revoke the access attribute authority of an authorized user, and meanwhile also has the authority to update the attribute access policy in the key management system. On the basis of the preset attribute access policy information, it is verified whether the attribute information of the key acquisition request information is included in the attribute access policy information, that is, whether the attribute information of the key acquisition request information is included in the attribute set that allowed encryption of the key when the key was generated according to the key generation request information and the preset attribute access policy information.
S106, in response to the attribute information of the key acquisition request information being included in the attribute set for encrypting the data key, acquiring a destination data key on the basis of the attribute information of the key acquisition request information.
Receive key generation request information; generate attribute access policy information on the basis of the key generation request information, wherein the attribute access policy information is an attribute set for encrypting a data key; encrypt the data key on the basis of the attribute set for encrypting the data key; receive key acquisition request information; on the basis of the attribute set for encrypting the data key, verify whether attribute information of the key acquisition request information is included in the attribute set for encrypting the data key; and in response to the attribute information of the key acquisition request information being included in the attribute set for encrypting the data key, acquire a destination data key on the basis of the attribute information of the key acquisition request information. By generating a key according to the key generation request information and the attribute set that allows encryption of the key to be generated, when the key acquisition request information requests to acquire the corresponding key, the key acquisition request information is verified through the attribute set that allows encryption of the key to be generated, so as to implement access control for each key. In a possible implementation, before the step of generating attribute access policy information on the basis of the key generation request information, the method further includes:determining whether the key generation request information is correct according to role information and identity information of a user requesting to generate a key. 
Exemplarily, verify whether the key generation request information is legal according to the identity information of the user requesting to generate the key and first preset identity information, wherein the first preset identity information is identity information capable of generating the key;
in response to the key generation request information being legal, determine whether the key generation request information is correct according to the role information of the user requesting to generate the key and first preset role access control information, wherein the first preset role access control information is role authority information capable of generating the key;
in response to the key generation request information being correct, perform the step of generating the key; and
in response to the key generation request information being incorrect, do not perform the step of generating the key.
Verifying whether the key generation request information is correct according to preset legal information and first preset authority information ensures that the key generated according to the key generation request information and the preset attribute access policy information is a reasonable key, thereby preventing keys from being created by malicious users, which would affect the server and other users. In a possible implementation, before the step of encrypting the data key on the basis of the attribute set for encrypting the data key, the method further includes:
generating an initial data key according to the key generation request information;
generating a project key according to the key generation request information, wherein in response to the project key already existing, it is used directly; the project key is encrypted with a system root key; and
encrypting the initial data key according to the project key to obtain a first encrypted data key.
By completing coarse-grained encryption of the initial data key, the coarse-grained protection of the initial data key is realized.
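The layered protection of the initial data key can be illustrated as below. The SHA-256 counter-mode keystream is only a standard-library stand-in for a real cipher (e.g., AES-GCM), not suitable for production use, and the key names are illustrative assumptions.

```python
import hashlib
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Illustrative SHA-256 counter-mode keystream standing in for a real
    # authenticated cipher; encryption and decryption are the same XOR.
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[block:block + 32], pad))
    return bytes(out)

def wrap(key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    return nonce, keystream_xor(key, nonce, plaintext)

def unwrap(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return keystream_xor(key, nonce, ciphertext)

# Coarse-grained protection of the initial data key:
root_key = os.urandom(32)       # system root key
project_key = os.urandom(32)    # project key
data_key = os.urandom(32)       # initial data key
n1, enc_project_key = wrap(root_key, project_key)     # project key under root key
n2, first_enc_data_key = wrap(project_key, data_key)  # first encrypted data key
```

Recovery mirrors the decryption chain: the root key unwraps the project key, and the project key unwraps the first encrypted data key to recover the initial data key.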
In a possible implementation, the step of encrypting the data key on the basis of the attribute set for encrypting the data key includes:
converting a character string attribute of the attribute set for encrypting the data key into an access control policy matrix by using a Boolean function; and
encrypting the first encrypted data key according to the access control policy matrix to obtain a second encrypted data key.
Exemplarily, by adopting a linear secret sharing scheme to generate the ABE-based access attribute structure policy, and using the Boolean function to realize the automatic conversion of the character string attribute to the linear secret sharing access control policy, the access control policy matrix is generated. Exemplarily, referring toFIG.2, the key management system component generates a data key of a corresponding type, and encrypts the data key with the project key to obtain a first encrypted data key. The first encrypted data key is encrypted by using the access control policy matrix to obtain a second encrypted data key. The second encrypted data key is stored, and the data key is returned, wherein the data key is the above key.
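The automatic conversion of a Boolean policy into a linear secret sharing (LSSS) access control policy matrix can be sketched with the well-known recursive labeling technique (often attributed to Lewko and Waters); the tuple-based policy representation is an assumption for illustration, not the patent's own encoding.

```python
def boolean_to_lsss(policy):
    """Convert a policy tree -- an attribute string, or ("AND"|"OR", left, right)
    tuples -- into a list of (attribute, row) pairs forming the LSSS matrix."""
    rows = []
    counter = 1  # current vector length

    def walk(node, vec):
        nonlocal counter
        if isinstance(node, str):      # leaf: its vector becomes a matrix row
            rows.append((node, vec))
            return
        op, left, right = node
        if op == "OR":                 # OR duplicates the parent vector
            walk(left, vec)
            walk(right, vec)
        else:                          # AND splits the secret across children
            v = vec + [0] * (counter - len(vec))
            left_vec, right_vec = v + [1], [0] * counter + [-1]
            counter += 1
            walk(left, left_vec)
            walk(right, right_vec)

    walk(policy, [1])
    width = counter
    return [(attr, row + [0] * (width - len(row))) for attr, row in rows]
```

An attribute set satisfies the policy exactly when the rows labeled by its attributes span the vector (1, 0, ..., 0); for ("AND", "A", ("OR", "B", "C")), the rows for A and B sum to [1, 0], while B and C alone cannot reach it.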
In a possible implementation, the step of verifying whether attribute information of the key acquisition request information is included in the attribute set for encrypting the data key on the basis of the attribute set for encrypting the data key includes:
verifying whether the key acquisition request information is legal according to the identity information of the user requesting to acquire the key and second preset identity information, wherein the second preset identity information is the identity information capable of acquiring the key;
in response to the key acquisition request information being legal, verifying whether the key acquisition request information is correct according to the role information of the user requesting to acquire the key and second preset role access control information, wherein the second preset role access control information is the role authority information capable of acquiring the key; and
in response to the key acquisition request information being correct, verifying whether the attribute information of the key acquisition request information is included in the attribute set for encrypting the data key.
In response to the attribute information of the request information being included in the attribute access policy information, construct an access control policy matrix of the minimum attribute set.
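The three-level check described above can be sketched as a single gatekeeping function; the request shape, the permission name, and the set-based attribute test are illustrative assumptions.

```python
def three_level_check(request, valid_identities, role_permissions, key_attribute_set):
    """Return (allowed, reason) for a key acquisition request."""
    # First level: identity verification against the preset identity information
    if request["identity"] not in valid_identities:
        return False, "illegal request: unknown identity"
    # Second level: coarse-grained RBAC check against the role authority information
    if "acquire_key" not in role_permissions.get(request["role"], set()):
        return False, "role lacks authority to acquire the key"
    # Third level: fine-grained ABE check -- as described above, the request's
    # attribute information must be included in the attribute set used to
    # encrypt the data key
    if not set(request["attributes"]) <= key_attribute_set:
        return False, "attributes not included in the key's attribute set"
    return True, "ok"
```

Only a request that passes all three levels proceeds to construction of the access control policy matrix of the minimum attribute set and decryption of the key.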
In a possible implementation, the step of acquiring a destination data key on the basis of the attribute information of the key acquisition request information includes:
acquiring the second encrypted data key corresponding to the key acquisition request information;
decrypting the second encrypted data key according to the access control policy matrix of the minimum attribute set corresponding to the attribute set for encrypting the data key to obtain the first encrypted data key;
decrypting the encrypted project key corresponding to the first encrypted data key with the system root key to obtain the project key; and
decrypting the first encrypted data key according to the project key to obtain the initial data key corresponding to the key acquisition request information, wherein the initial data key is used as the destination data key.
Exemplarily, referring toFIGS.2and3, the first encrypted data key is recovered from the second encrypted data key by using the linear secret sharing scheme and the access control policy matrix of the minimum attribute set, the encrypted project key corresponding to the key is decrypted with the system root key to obtain the project key, the first encrypted data key is decrypted with the project key to obtain the data key, and the data key is returned to the user. Referring toFIG.4, the key management system includes an identity authentication and access control component, a key management system component, and a resource service component; wherein, when a user logs into the system for the first time, the identity authentication and access control component registers the user and assigns the user to the corresponding project, and at the same time grants the corresponding role authority. Finally, the identity authentication and access control component verifies the attribute information submitted by the user and assigns the corresponding user attributes.
When a registered user logs into the system, the identity authentication and access control component authenticates the user and authorizes an access token. The user may securely and legally access other components in the cloud computing platform through the access token. The access token specifically includes: a user's role authority, accessible service components, and a user's attribute set. The key management system component is responsible for the generation, storage and distribution of the key, and provides a key management function for other resource service components in the cloud computing platform. The key management system component includes a first-level identity verification module, a second-level RBAC-based access control module, a third-level ABE-based access control module, and a key management module. The key management system component mainly includes three functions: a three-level access control function when the user accesses the key management component; a key generation and storage function; and a key use function. The resource service component is used for users to access resources. The user obtains the authority to call the resource service component through the identity authentication and access control component, obtains a key through the key management system component on the basis of the authority to call the resource service component, and accesses the resource service component according to the key. When a user needs to access the key management system, he first obtains an authorization token at the identity authentication and access control component. The first-level identity verification module of the key management system verifies the user's authorization token to confirm the legitimacy of the user. Then the second-level RBAC-based access control module performs coarse-grained role authority verification on the user's authorization token to ensure that the user has the authority to access the key management system.
Then the third-level ABE-based access control module verifies the attribute of the key that the user wants to access. In a possible implementation, referring toFIG.5, the embodiments of the present application provide a key management apparatus including:
a data acquisition module201, configured to receive key generation request information;
an attribute access policy information generation module202, configured to generate attribute access policy information on the basis of the key generation request information, wherein the attribute access policy information is an attribute set for encrypting a data key;
a data key generation module203, configured to encrypt the data key on the basis of the attribute set for encrypting the data key;
a data receiving module204, configured to receive key acquisition request information;
a verification module205, configured to, on the basis of the attribute set for encrypting the data key, verify whether the attribute information of the key acquisition request information is included in the attribute set for encrypting the data key; and
a key acquisition module206, configured to, in response to the attribute information of the key acquisition request information being included in the attribute set for encrypting the data key, acquire a destination data key on the basis of the attribute information of the key acquisition request information.
In a possible implementation, referring toFIG.6, the embodiments of the present application provide an electronic device, including a memory310, a processor320, and a computer program311stored in the memory310and operable on the processor320. When the processor320executes the computer program311, the steps of the above key management method are implemented. In a possible implementation, referring toFIG.7, a computer-readable storage medium400is provided, on which a computer program411is stored.
When the computer program411is executed by a processor, the steps of the above key management method are implemented. In several embodiments provided by the embodiments of the present application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are only illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the architecture, functions and operations of possible implementations of apparatuses, methods and computer program products according to multiple embodiments in the embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of codes that includes one or more executable instructions for realizing specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs specified functions or actions, or may be implemented by a combination of dedicated hardware and computer instructions. For another example, the division of the above units is only a logical function division, and there may be another division method in actual implementation. For another example, multiple units or components may be combined or integrated into another system, or some features may be ignored, or not performed.
Further, the mutual coupling or direct coupling or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of apparatuses or units may be in electrical, mechanical, or other forms. The units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed across multiple network units. Part or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, the various functional units in the embodiments provided by the embodiments of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit. When the above functions are realized in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the above-mentioned methods in the various embodiments of the present application. The aforementioned storage media include various media capable of storing program code, such as a USB flash disk (U disk), a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not require further definition and explanation in subsequent drawings. In addition, the terms “first”, “second”, “third”, etc. are only used for distinguishing descriptions and should not be construed as indicating or implying relative importance. Finally, it should be noted that the above-mentioned embodiments are only specific implementations of the embodiments of the present application, used to illustrate the technical solutions of the embodiments of the present application rather than to limit them; the protection scope of the embodiments of the present application is not limited thereto. Although the embodiments of the present application have been described in detail with reference to the foregoing embodiments, a person skilled in the art should understand that, within the technical scope disclosed in the embodiments of the present application, a person familiar with the technical field can still modify the technical solutions set forth by the foregoing embodiments, can easily think of changes, or can make equivalent substitutions to some of the technical features; and these modifications, changes, or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application. All should be covered within the scope of protection of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application should be determined by the protection scope of the claims.
11943346 | DETAILED DESCRIPTION

It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views. The various disclosed embodiments include methods and systems for single-round multi-party computation (MPC) of digital signatures. More specifically, data is partially signed by an offline device during a single round of signing operations. One or more other devices each partially sign the data, thereby completing the signature and generating signed data which can be, for example, uploaded. Thus, the amount of interaction with the offline device involved in the signing process is minimized. Further, some of the disclosed embodiments can support the use of stateless protocols by the offline device. In an embodiment, the digital signing includes a key generation stage and a signing stage. During the key generation stage, parties to the digital signing interact to generate a new public key. During the signing stage, the parties interact in order to sign the data using their respective shares. In a further embodiment, the digital signing further includes a preprocessing stage. During the preprocessing stage, the parties interact to generate correlated data to be used in the signing stage (one package of information is stored for each future sign operation). This preprocessing stage can be executed as many times as required.
In an embodiment, one or more of the parties to the data signing is an offline device which is offline with respect to one or more networks, i.e., the offline device is not connected to the Internet or another network (e.g., an air-gapped device). More specifically, the offline device is offline at least with respect to networks accessible to at least some of the other participants in the shares generation and digital signing. The interactions with the offline device during the signing stage occur during a single round of interactions. In the single round of interactions, data is sent to the offline device by another participating device, calculations are performed on the offline device based on the sent data in order to partially sign the data, and the partially signed data is sent from the offline device back to the other participating device. In accordance with the disclosed embodiments, a device is offline with respect to a network when the device is not connected, either directly or indirectly through other systems or devices, to that network. A first device is offline with respect to a second device when the first device and the second device are not communicatively connected through any network or combination of networks. In an example implementation, the offline device is offline with respect to the Internet and with respect to any networks accessible to devices that are connected to the Internet. In another example implementation, multiple offline devices may be connected to each other, and each of the offline devices is offline with respect to other online devices. In yet another example implementation, all devices participating in the MPC may be offline with respect to the Internet, and at least some of the devices are offline with respect to each other.
In various embodiments, the disclosed embodiments may be applied to a keyless multi-party computation protocol such as, but not limited to, the protocol for securing digital signatures using multi-party digital signatures described in U.S. patent application Ser. No. 16/404,218, assigned to the common assignee, the contents of which are hereby incorporated by reference. Such embodiments have improved security as compared to key-based solutions while maximizing flexibility. The disclosed embodiments provide the benefits typically associated with the use of “cold” wallets, or otherwise keeping the keys of digital assets offline, without at least some of the usual drawbacks of such approaches. Because the disclosed embodiments are compatible with multi-party computation approaches to digital signing, the disclosed embodiments can provide availability while minimizing exposure to security risks. More specifically, the disclosed single-round protocol allows for digitally signing assets offline, thereby protecting digital assets from unauthorized access attempts. Further, assets may be signed on the premises of a user, thereby removing any need to use a third-party service for signing. Further, the disclosed embodiments provide a multi-party computation protocol that minimizes the number of rounds needed to securely sign data. As noted above, implementations involving “cold” devices (i.e., devices storing keys which are not directly or indirectly accessible via the Internet) require manual interactions with the cold devices. This can be time-consuming and labor-intensive, which makes frequent involvement of cold devices undesirable. Accordingly, the disclosed embodiments mitigate this challenge by reducing the number of times in which an offline device needs to be involved in the multi-party computation. FIG. 1A shows an example network diagram 100A utilized to describe the various disclosed embodiments.
In the example network diagram 100A, an online device 140 communicates with an online platform 150 via a network 110. The network 110 may be, but is not limited to, a wireless, cellular, or wired network, a local area network (LAN), a wide area network (WAN), a metro area network (MAN), the Internet, the worldwide web (WWW), similar networks, and any combination thereof. Each of the online device 140 and the offline device 120 may be, but is not limited to, a personal computer, a laptop, a tablet computer, a smartphone, a wearable computing device, or any other device capable of receiving and displaying notifications. The offline device 120 may be configured with any kind of software for participating in digital signatures such as, but not limited to, server side, client side, and the like. Data stored by the offline device 120 may be stored in a hardware security module (not shown), i.e., a storage module without a central processing unit. In order to facilitate transfer to the offline device 120, in an example implementation, data to be signed is transferred from the online device 140 to an intermediary data storage (DS) device 130. The intermediary data storage 130 stores information and allows for transferring data to or from the offline device 120. Thus, the intermediary data storage 130 is any data transfer mechanism that does not need to connect to the network 110 in order to transfer such data. The intermediary data storage device 130 may be, but is not limited to, an external hard drive, a memory card, a quick response (QR) code, a universal serial bus (USB) key, a uniform resource locator (URL), a producer of a sound (e.g., a modem configured to make sounds), a one-way link (e.g., an optical, digital, or analog one-way link), a printed paper and a camera or scanner, combinations thereof, and the like.
The intermediary data storage 130 is not otherwise communicatively connected to the network 110, and is only communicatively connected to the online device 140 when not in communication with the offline device 120. In some implementations, multiple intermediary data storages (not shown) may be utilized. In various implementations, the intermediary data storage 130 is configured for one-way communication with a receiving device (i.e., a device to which data is transferred). As a non-limiting example, such one-way communication may be realized using a half-duplex system (not shown). Alternatively or collectively, the intermediary data storage 130 may be configured for bi-directional communication such as, but not limited to, using Bluetooth or other wireless technology standards. In particular, the intermediary data storage 130 may utilize one-way communication when communicating with an offline device. In some implementations, all data being transferred from the online device 140 to the offline device 120 pursuant to the disclosed embodiments is partially signed or encrypted using a public key known to the offline device 120. To this end, in an embodiment, the offline device 120 is configured to attempt to decrypt or validate incoming communications using the public key. In another embodiment, incoming data transferred from the online device 140 can include a hash of the last message received from the offline device 120. To this end, the offline device 120 may be configured to check the hash in the incoming data against the hash of the most recent data transferred to the online device 140 in order to confirm that the incoming data is being received from the same device. Including hashes of previous messages allows for verifying that the messages are being received from the same sender and are, therefore, secure. In particular, this may be performed when key generation involves multiple rounds of communication.
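The hash-chaining check described above can be sketched as follows. This is a minimal illustration with assumed message handling (the class and method names are hypothetical): the offline device remembers the hash of the last message it sent, and accepts an incoming message only if it carries that hash.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class OfflineEndpoint:
    """Toy model of the offline device's send/accept logic."""
    def __init__(self):
        self.last_sent_hash = None

    def send(self, payload: bytes) -> bytes:
        # remember the hash of the most recent outgoing message
        self.last_sent_hash = h(payload)
        return payload

    def accept(self, payload: bytes, claimed_prev_hash: bytes) -> bool:
        # reject any message that does not chain to our last outgoing message
        return claimed_prev_hash == self.last_sent_hash

dev = OfflineEndpoint()
msg1 = dev.send(b"partial-signature-round-1")
assert dev.accept(b"next-round-data", h(msg1))           # chains correctly
assert not dev.accept(b"next-round-data", h(b"forged"))  # wrong chain hash
```

A production protocol would typically include the chained hash inside an authenticated message rather than as a separate field, but the continuity check is the same.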
It should be noted that, in some embodiments, all devices participating in the signing protocol may be offline with respect to networks such as the Internet. In such an embodiment, the signed transaction data may be uploaded by transferring the signed transaction data from one of the offline devices to an online device, for example via an intermediary storage device. FIG. 1B shows an example network diagram 100B that illustrates a potential network configuration that may be utilized with multiple offline devices. In the example network diagram 100B, the offline device 120 is offline with respect to one or more participant devices 160, and the online device 140 is configured to upload signed data to the online platform 150 but is not necessarily a participant in the signing. The participant devices 160 are other participants in the signing. To this end, data is transferred between the offline device 120 and one or more of the participant devices 160 via the intermediary data storage 130. Any or all of the participant devices 160 may be offline with respect to the network 110. It should be noted that a single intermediary data storage 130 is shown in FIG. 1B for simplicity, but in other implementations, data may be transferred between the offline device 120 and each participant device 160 using a different intermediary data storage, the participant devices 160 may transfer signed data to the online device 140 via a different intermediary data storage, or both. It should also be noted that the arrangements of components depicted in FIGS. 1A-B are merely examples used for illustrative purposes, and the disclosed embodiments may be equally applicable to other arrangements of components. For example, in some implementations, the offline device 120 may be connected to some networks (e.g., private local networks) but not others (e.g., the Internet).
In particular, in accordance with the disclosed embodiments, the offline device 120 is not connected to any networks accessible to at least some of the other devices participating in the signing protocol. Thus, in some implementations, the device 120 does not need to be an air-gapped device. In various disclosed embodiments, the offline device 120 is not directly or indirectly connected to the Internet or otherwise exposed to the network(s) accessible to the online device 140, but the disclosed embodiments are not necessarily limited to this implementation. Further, the offline device communicates with at least some of the other device(s) participating in the signing indirectly, i.e., without establishing direct communications or otherwise establishing a connection to those online devices. To this end, in various disclosed embodiments, data transferred to and from the offline device is transmitted using one or more intermediary data storages. Additionally, it should be noted that multiple online devices, multiple offline devices, or both, may be participants in the disclosed digital signing processes in accordance with the disclosed embodiments. Likewise, data may be transferred between the offline devices and the online devices via multiple intermediary data storages without departing from the scope of the disclosure. A single online device, a single offline device, and a single intermediary data storage are shown merely for simplicity purposes. The interactions between the components described above with respect to FIG. 1A are now further explained with respect to FIG. 2. FIG. 2 is a sequence diagram illustrating communications among the offline device 120, the intermediary data storage device 130, the online device 140, and the online platform 150. At 210, the offline device 120 is configured to perform the relevant portion of the disclosed embodiments.
In particular, the offline device 120 is configured such that it can participate in secret generation (e.g., keys, shares, or both) and the single round of digital signing described herein. In an example implementation, the configuration at 210 may include installing software on the offline device 120. At 220, a process requiring a digital signature is initiated between the online device 140 and the online platform 150. As a non-limiting example, the process may be a transaction, such as a transaction to be added to a blockchain, and the data may be transaction data. It is checked whether the data meets one or more signing policies. The process may further require determining whether the signature meets one or more signing policies. At 230, when the data meets all applicable signing policies, the data is transferred to the intermediary data storage device 130. At 240, the data is transferred from the intermediary data storage device 130 to the offline device 120. At 250, the data is validated and partially signed on the offline device 120. At 260, the partially signed data is transferred to the intermediary data storage device 130. At 270, the partially signed data is transferred from the intermediary data storage device 130 to the online device 140. At 280, the partially signed data is uploaded to the online platform 150 by the online device 140. At 290, the partially signed data is stored, recorded, or otherwise finalized via the online platform 150. In some embodiments, 290 includes adding the partially signed data to a decentralized ledger such as a blockchain (not shown). It should be noted that FIG. 2 is merely an example, and communications among devices are not limited to the specific communications shown in FIG. 2. As noted above, the disclosed signing process may be among all offline devices (e.g., all devices that are offline with respect to the Internet or other networks).
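The FIG. 2 sequence can be simulated in a few lines. All names and the partial-signing stand-in below are illustrative assumptions, not the patent's actual MPC math; the sketch only traces the data path: online device to intermediary storage (230), storage to offline device (240), partial signing offline (250), and the return trip (260-280).

```python
import hashlib

class IntermediaryStorage:
    """130: holds one payload while the devices are detached from each other."""
    def __init__(self):
        self.slot = None
    def write(self, data):
        self.slot = data
    def read(self):
        data, self.slot = self.slot, None
        return data

def meets_signing_policy(data):
    # placeholder policy check (step 220); a real policy would inspect the data
    return bool(data)

def offline_partial_sign(data, share):
    # step 250: stand-in for the offline device's partial-signature computation
    return data + b"|" + hashlib.sha256(share + data).hexdigest().encode()

ds = IntermediaryStorage()
tx = b"transaction-data"
assert meets_signing_policy(tx)
ds.write(tx)                                  # 230: online -> intermediary storage
received = ds.read()                          # 240: storage -> offline device
partially_signed = offline_partial_sign(received, b"offline-secret-share")
ds.write(partially_signed)                    # 260: offline device -> storage
uploaded = ds.read()                          # 270/280: back to online device, then upload
assert uploaded.startswith(tx + b"|")
```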
In these and other implementations, the device which initiates the process requiring the digital signature may not actually participate in the signing process, and may instead simply be a conduit for uploading signed data. FIG. 3 is an example flowchart 300 illustrating a method for signing data using a single-round multi-party computation protocol according to an embodiment. In an embodiment, the method is performed by a device participating in the signing, such as the offline device 120, the online device 140, or one of the participant devices 160, FIGS. 1A-B. At S310, a first device secret key skA is chosen. In an embodiment, S310 may further include sending a commitment on a value skA·G calculated based on the first device secret key skA, a non-interactive zero-knowledge proof of knowledge (NIZKPoK) of the first device secret key skA, or both, to the other participants in the signing. Each of the other participants chooses a secret key. In an embodiment, the secret keys chosen by the participants are all chosen from among the same set. A unique “session ID” for the shares generation is chosen by the parties to the signing. The session ID may be represented as a function of random strings chosen by the participant devices participating in the signing. When a random oracle is needed by any of the participants during the shares generation, the random oracle may be instantiated using a function such as, but not limited to, an HMAC function. The function used for instantiating the random oracle may be instantiated based on the session ID and a string that is unique to the protocol being used. In an embodiment, at least one participant in the signing is offline with respect to other participants in the signing. As a non-limiting example, such a participant may be the offline device 120, FIGS. 1A-B.
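The session-ID and HMAC-based random-oracle instantiation described above can be sketched as follows. The exact construction is an assumption for illustration (the patent only says the oracle is instantiated from the session ID and a protocol-unique string): here the session ID is a hash over the parties' random strings, and the oracle is an HMAC keyed on the session ID concatenated with the protocol tag.

```python
import hmac
import hashlib
import os

def derive_session_id(*random_strings: bytes) -> bytes:
    # order-independent combination of each participant's random string
    return hashlib.sha256(b"".join(sorted(random_strings))).digest()

def random_oracle(session_id: bytes, protocol_tag: bytes, query: bytes) -> bytes:
    # instantiate the oracle as an HMAC keyed on session ID + protocol-unique string
    return hmac.new(session_id + protocol_tag, query, hashlib.sha256).digest()

rA, rB = os.urandom(16), os.urandom(16)
sid = derive_session_id(rA, rB)

# both parties derive the same oracle output for the same query
assert random_oracle(sid, b"proto-v1", b"q") == random_oracle(sid, b"proto-v1", b"q")
# a different protocol tag yields an independent oracle
assert random_oracle(sid, b"proto-v1", b"q") != random_oracle(sid, b"proto-v2", b"q")
```

Keying the oracle on a per-session value plus a protocol tag is what prevents transcripts from one session or protocol from being replayed in another.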
The offline device 120 is offline with respect to other participants in the signing such that the offline device 120 cannot communicate with the other participants in the signing via one or more networks such as, but not limited to, the Internet. To this end, in an embodiment, the commitment is sent via, for example, one or more intermediary storage devices. At S320, a commitment on a value calculated based on a second secret key skB chosen by another participant in the signing, as well as a NIZKPoK on the second secret key skB, are received from the other participant in the data signing. As a non-limiting example, such a NIZKPoK may be created using the Fiat-Shamir heuristic. The participants interact to set up a secure multiplication protocol, for example, based on oblivious transfer or homomorphic encryption. In an embodiment, the commitment is received via, for example, one or more intermediary storage devices. At S330, a decommitment on the value calculated based on the first secret key skA, as well as a NIZKPoK of the first secret key skA, are sent to the other participant in the data signing. The other participant stores the result of the secure multiplication protocol setup. In an embodiment, the decommitment and the NIZKPoK are sent via, for example, one or more intermediary storage devices. At S340, a public key and a secret share are stored. The public key is the same as a public key generated by the other participants in the data signing and can be calculated as, for example, skA·skB·G or (skA+skB)·G. The secret share includes the offline device secret key skA and a secret output of the secure multiplication protocol setup. Each public key is generated by one of the parties based on the party's respective secret shares such that data is partially signed using the secret shares corresponding to the public key.
Due to this correspondence, the secret shares have the property that they could be validated using the public key if the secret shares were to be assembled by a single system (which is not possible in accordance with various disclosed embodiments since the shares are not revealed among systems). At S350, data is partially signed using the secret share generated at S340. In an embodiment, the signing may follow the process described further below with respect to FIG. 4. The data is partially signed in that the offline device(s) provide a portion of the whole signature, where the whole signature includes portions of signatures corresponding to the respective secret shares of the parties participating in the signing. In another embodiment, the signature protocol may further allow for supporting a deterministic key derivation process such as BIP32 non-hardened derivation. To this end, in such an embodiment, the signature generated by each party to the digital signing is generated based further on a value δ determined via the deterministic key derivation, where the value δ is known to all parties. In some implementations, S350 may include checking the resulting signature with a derived public key that is a predetermined function of the original public key in order to verify that the public key was generated using data from one or more offline devices in accordance with the disclosed embodiments. Any or all other parties to the signing may perform similar checks to independently verify the public key. The derived public key may be calculated, for example, as skA·skB·G+δ·G or (skA+skB)·G+δ·G, where δ is defined according to the BIP32 standard. It should also be noted that, in an embodiment, secret shares may be rotated in a way that is consistent with the public key. Such rotation may be performed between any two or more parties that wish to refresh their secret shares.
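The derived-key relation above (public key plus δ·G, in the additive-share case) can be checked numerically. As a toy stand-in for the elliptic-curve group, the sketch below works multiplicatively in Z_p*, so g**x plays the role of x·G; the parameters are purely illustrative and carry no security.

```python
# Toy parameters: a well-known prime used only as a modulus, and a small base.
p = 2**255 - 19
g = 2
n = p - 1  # exponents are reduced modulo p-1 (valid by Fermat's little theorem)

skA, skB, delta = 0x1234, 0x5678, 0x9ABC

pk_additive = pow(g, (skA + skB) % n, p)            # stands in for (skA+skB)·G
derived_pk = (pk_additive * pow(g, delta, p)) % p   # stands in for pk + δ·G

# any party holding only the public key and the public value δ computes the
# same derived key as a party holding the full secret (skA + skB + δ)
assert derived_pk == pow(g, (skA + skB + delta) % n, p)
```

This is exactly why BIP32 non-hardened derivation composes with the shared key: the derivation only shifts the public key by a publicly computable δ·G, so the secret shares need not change.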
The rotation includes creating, by each of the parties rotating shares, new shares that collectively form the same private key as the previous shares created by the party, and may include, but is not limited to, generating derived shares or independently generating new shares. It should also be noted that the steps of FIG. 3 are depicted as a single process merely for simplicity purposes, but at least some steps may be performed remotely in time from other steps. In particular, the data signing at S350 may occur at a time remote from the other steps S310 through S340. Additionally, in some implementations, at least some of the steps of FIG. 3 may be performed multiple times in series or in parallel. For example, when there are multiple other participants in the digital signing, the steps for determining keys and secret shares may be performed multiple times, and multiple secret shares may be determined. In this regard, it should also be noted that each party generates its respective secret shares and that none of the parties knows the entire set of shares that will be used to sign the data. As a non-limiting example, if a customer SDK and an MPC service provider system will store the shares, at least two shares are created. As another non-limiting example, if four systems will store the shares, at least four shares are created. The system of each party creates one or more of the shares separately and independently from the other systems. FIG. 4 is an example flowchart 400 illustrating a method for securing digital signatures via MPC according to an embodiment. In an embodiment, one of the systems participating in and performing the method of FIG. 4 is the offline device 120. At S410, a request for a signature on a message is received. The request may be received from, for example, a user device. The request at least includes data to be signed (for example, Bitcoin transaction data).
At S420, it is determined whether the requirements of a signing policy have been met and, if so, execution continues with S430; otherwise, execution either terminates (not shown) or continues with S410 (shown). The signing policy includes rules for validating the authenticity of the request and, if the signature is approved, the system of the validating party (for example, a service provider) will use its respective shares in order to sign a transaction. The signing policy may further include additional requirements as described herein above. To this end, S420 may include communicating with other systems to, for example, prompt other users to provide authentication and approval for the transaction. At S430, secret shares are used to sign the data. In an embodiment, the secret shares are created as described above with respect to FIG. 3. The signing includes running an MPC protocol using the shares as part of an interactive signing process. The interactive signing process includes each system running an MPC protocol using its respective shares. In an embodiment, the data is partially signed by each system using a distributed implementation of a digital signature algorithm (for example, ECDSA, EdDSA, or Schnorr), such that the full private key is never reconstructed on any system. In a further embodiment, during the signing, no portion of each system's shares is revealed to the other systems. As non-limiting examples, such a digital signature algorithm may be ECDSA, the Edwards-curve Digital Signature Algorithm (EdDSA), Schnorr, and the like. The security of the protocol may be proven based on cryptographic assumptions (e.g., the discrete logarithm assumption). To this end, the digital signature algorithm may utilize an additive homomorphic encryption scheme and efficient zero-knowledge proofs for proving statements about elements from different finite groups without conveying other information about the elements.
In another embodiment, S430 further includes performing a key derivation as part of the signature step without deriving the actual keys separately. In that case, the parties may sign a message with shares of a derived key directly using the shares of the original key. In an embodiment, one or more of the systems participating in the signing are offline at least with respect to the other systems (i.e., not connected to a network or, more specifically, not connected to the network(s) used by the other systems), and only one round of interaction is required during the part of the signing including the offline systems. It should be noted that a share may be stored offline even if the system storing the share is online. For example, a share may be stored locally on the system in a location that is not accessible over networks, such that the share is not accessible over a network. To this end, in a further embodiment, S430 includes receiving a message including the data needed by the offline device for signing, signing the data using the offline device's secret shares, and sending the message to one of the other participants for completion of the signature (e.g., by adding further signatures of other parties, uploading the signed data to an online platform, both, and the like). This signing by the offline device is performed in a single round, i.e., one message transferred to the offline device and one message transferred from the offline device. As mentioned above, transfers of data to and from the offline device may be accomplished via the use of intermediary data storages. Performing the partial signing in a single round with one or more offline parties allows for minimizing interactions by the offline devices, minimizing exposure of the offline shares, or both.
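The one-message-in, one-message-out partial signing can be illustrated with a Schnorr-style sketch over additive shares. This is not the patent's actual protocol: it is a toy written multiplicatively in Z_p* (g**x standing in for x·G) with hard-coded shares and nonces and no security, intended only to show how the offline party's single outgoing value combines with the online party's contribution into one verifiable signature.

```python
import hashlib

p = 2**255 - 19  # illustrative modulus, used only as a toy group
g = 2
n = p - 1        # exponent arithmetic modulo p-1

def h_int(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % n

x1, x2 = 0x1111, 0x2222           # secret key shares (offline, online)
pk = pow(g, (x1 + x2) % n, p)     # joint public key, (x1+x2)·G

k1, k2 = 0x3333, 0x4444           # per-signature nonce shares
R = pow(g, (k1 + k2) % n, p)      # joint nonce commitment
msg = b"transaction-data"
e = h_int(R.to_bytes(32, "big"), msg)

# single message out of the offline device: its partial signature
s_offline = (k1 + e * x1) % n
# online party completes the signature with its own share
s = (s_offline + k2 + e * x2) % n

# standard Schnorr verification equation: g^s == R * pk^e
assert pow(g, s, p) == (R * pow(pk, e, p)) % p
```

Neither party ever holds x1 + x2; each contributes k_i + e·x_i, and only the sum verifies against the joint public key.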
Further, by utilizing intermediary storage devices for communications with the offline devices, data needed for the interactive signing process can be transferred without exposing the offline devices to the same network as the other devices. In some embodiments, one of the systems is offline except during a small number of rounds of interaction (e.g., below a predetermined threshold). In those rounds of interaction, the offline system receives aggregated messages from the other systems. Each aggregated message includes information used for the distributed implementation of the digital signature algorithm provided by all of the other systems. The result of the interactive rounds may be used for multiple signing operations, each requiring only a single round of interaction with the offline device. In other words, by sending aggregated messages containing all of the information required by the offline system, the offline system may participate in signing without requiring multiple rounds of interaction. As noted above, such rounds of interaction are particularly cumbersome when dealing with an offline device, as data must be transferred indirectly (e.g., through an intermediary data storage). The disclosed embodiments may also be utilized to provide a verified encrypted backup. To this end, in an embodiment, each party encrypts its secret share using a public key encryption scheme (e.g., RSA) and proves with a zero-knowledge proof that it encrypted the right value. As a non-limiting example, the parties may use additive shares to generate the signature keys as follows: party i chooses a number xi and publishes xi·G, where G is a generator in the elliptic curve used for the digital signature. Note that x is hidden given x·G based on the security of the digital signature in use. The public key of the digital signature will be x1·G+x2·G+ . . . =(x1+x2+ . . . )·G.
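The additive-share identity x1·G + x2·G + ... = (x1 + x2 + ...)·G can be checked directly. As before, the sketch uses the multiplicative group Z_p* as a toy stand-in for the elliptic-curve group, so "adding points" becomes multiplying the published values g**x_i; the parameters are illustrative only.

```python
p = 2**255 - 19  # toy modulus standing in for the curve group
g = 2

shares = [0xA1, 0xB2, 0xC3]                 # each party's private x_i
published = [pow(g, x, p) for x in shares]  # each party publishes x_i·G

# combine the published values: the product of g**x_i is g**(sum of x_i)
pk_from_published = 1
for point in published:
    pk_from_published = (pk_from_published * point) % p

# x1·G + x2·G + ... = (x1 + x2 + ...)·G
assert pk_from_published == pow(g, sum(shares), p)
```

Because the combination uses only the published values, any party (or the backup verifier) can compute the joint public key without learning any individual x_i.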
In addition, party i encrypts xi under the public key of the backup encryption and proves with a zero-knowledge proof that the same xi is used in xi·G and the encrypted backup. It should be noted that, in an embodiment, no portion of any of the shares is revealed at any time to any system outside of the system that generated those shares. In other words, each system maintains its own shares, and shares are not provided or otherwise revealed to the other participating systems or other external systems. Since shares are not revealed to other systems, the latent private key cannot be reconstructed and used to fraudulently sign transactions. It should be noted that the method may be repeated with the same keys because the same keys can be used for signing multiple messages. FIG. 5 is an example schematic diagram of a participant device 500 according to an embodiment. The participant device 500 includes a processing circuitry 510 coupled to a memory 520 and a storage 530. In an embodiment, the components of the participant device 500 may be communicatively connected via a bus 540. The offline device 120 of FIG. 1, the online device 140 of FIG. 1, or both, may be configured as the participant device 500 in accordance with the disclosed embodiments. The processing circuitry 510 may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), graphics processing units (GPUs), tensor processing units (TPUs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
The memory 520 may be volatile (e.g., random access memory, etc.), non-volatile (e.g., read only memory, flash memory, etc.), or a combination thereof. In one configuration, software for implementing one or more embodiments disclosed herein may be stored in the storage 530. In another configuration, the memory 520 is configured to store such software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the processing circuitry 510, cause the processing circuitry 510 to perform the various processes described herein. The storage 530 may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, compact disk-read only memory (CD-ROM), Digital Versatile Disks (DVDs), or any other medium which can be used to store the desired information. It should be understood that the embodiments described herein are not limited to the specific architecture illustrated in FIG. 5, and other architectures may be equally used without departing from the scope of the disclosed embodiments. In particular, the offline device 120 shown in FIG. 1 is an air-gapped device lacking any network interface. However, in some implementations, the offline device 120 may include a network interface. The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal. All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. 
Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements. As used herein, the phrase "at least one of" followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including "at least one of A, B, and C," the system can include A alone; B alone; C alone; 2A; 2B; 2C; 3A; A and B in combination; B and C in combination; A and C in combination; A, B, and C in combination; 2A and C in combination; A, 3B, and 2C in combination; and the like. | 34,488 |
11943347 | DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts. Automatic Generation of Initial Network Credentials at an Integrated Tamper Resistant Device FIG. 1 illustrates an exemplary integrated tamper resistant device generating initial network credentials in accordance with various aspects of the disclosure. As shown in FIG. 1, a client device 102 produced by a client device manufacturer 100 may contain a system on chip (SoC) device 104. The term client device as used herein broadly refers to a diverse array of devices and technologies. For example, some non-limiting examples of a client device include a user equipment (UE), a mobile, a cellular (cell) phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal computer (PC), a notebook, a tablet, a personal digital assistant (PDA), wearable devices (e.g., smart watches, glasses, etc.), and a broad array of embedded systems, e.g., corresponding to an "Internet of things" (IoT). As further shown in FIG. 1, the system on chip (SoC) device 104 may contain an integrated tamper resistant device 106 (also referred to as an integrated tamper resistant element). In some examples, the integrated tamper resistant device 106 may be a dedicated device that provides a secure environment in which sensitive operations (e.g., cryptographic functions) may be performed.
In some aspects of the disclosure, the integrated tamper resistant device 106 may be a subsystem included in the system on chip (SoC) device 104. For example, the integrated tamper resistant device 106 may include a secure processing device 108 (also referred to as a secure processor or a secure processing circuit) that may not be accessed by other processing devices (e.g., a central processing unit (CPU)) included in the system on chip (SoC) device 104. In some examples, the integrated tamper resistant device 106 may further include a memory device (not shown in FIG. 1 for simplicity). For example, the memory device may be a volatile memory or a non-volatile memory, such as a read only memory (ROM) or a one time programmable (OTP) memory. As shown in FIG. 1, the integrated tamper resistant device 106 may generate initial network credentials 112 (also referred to as network credentials, bootstrapping network credentials, bootstrapping network authentication data, or a bootstrapping profile). In some examples, the initial network credentials 112 may include data (e.g., subscriber identity data) that the client device 102 may use to perform cryptographic authentication with a network (e.g., a cellular network). The integrated tamper resistant device 106 may implement the network credentials generating device 110 to generate the initial network credentials 112. In some examples, the network credentials generating device 110 may be a random number generator (RNG) circuit included in the integrated tamper resistant device 106. In other examples, the network credentials generating device 110 may be at least a portion of the secure processing device 108 configured to execute instructions of a random number generating operation. The network credentials 112 may include an initial network identity that the client device 102 may use to access a network (e.g., a cellular network).
In the aspects described herein, the initial network credentials 112 may include unique data (also referred to as diversified data or personalized data) that may serve as the initial network identity. In some aspects of the disclosure, the unique data may be a random number generated by the network credentials generating device 110 or a number derived from the generated random number. The unique data may be unknown to the client device manufacturer 100 and the portions of the client device 102 external to the integrated tamper resistant device 106. In some examples, the initial network credentials 112 may include a cryptographic key (e.g., a public key) provided by the integrated tamper resistant device 106 in addition to the unique data. In some examples, the initial network credentials 112 may further include information that is specific to the client device 102, such as an international mobile equipment identity (IMEI) assigned by the client device manufacturer 100. As shown in FIG. 1, the integrated tamper resistant device 106 may encrypt the initial network credentials 112. The integrated tamper resistant device 106 may implement the encrypting device 114 to encrypt the initial network credentials 112. In some examples, the encrypting device 114 may be an encryption circuit included in the integrated tamper resistant device 106. In other examples, the encrypting device 114 may be at least a portion of the secure processing device 108 configured to execute one or more instructions of an encryption procedure. The encrypting device 114 may include a pre-set cryptographic key (e.g., a public key) provided by the network solution provider 126. In some aspects of the disclosure, the encrypting device 114 may receive input data (e.g., the initial network credentials 112 in plain text or clear text form), perform an encryption operation on the input data (e.g., using a public key from the network solution provider 126), and output encrypted data (also referred to as ciphertext).
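The contents of the initial network credentials described above (random unique data, an optional device-provided key, and device-specific information such as an IMEI) can be sketched as a simple record. All field names, sizes, and the sample IMEI below are illustrative assumptions, not taken from the disclosure.

```python
import secrets

def generate_initial_credentials(imei: str) -> dict:
    # Hypothetical layout of initial network credentials 112; field names
    # and sizes are illustrative only.
    return {
        "unique_data": secrets.token_hex(16),        # random initial identity
        "device_public_key": secrets.token_hex(32),  # stand-in key value
        "imei": imei,                                # device-specific data
    }

creds = generate_initial_credentials("490154203237518")
```

In this sketch the unique data comes from a cryptographically secure random source, mirroring the role of the RNG circuit in the network credentials generating device 110.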
As shown in FIG. 1, the integrated tamper resistant device 106 may sign the encrypted initial network credentials 116 to generate encrypted and signed initial network credentials 120. The integrated tamper resistant device 106 may implement the signing device 118 to digitally sign the encrypted initial network credentials 116. In some aspects of the disclosure, the signing device 118 may be a cryptographic signing circuit included in the integrated tamper resistant device 106. In other examples, the signing device 118 may be at least a portion of the secure processing device 108 configured to execute one or more instructions of a cryptographic signature generating procedure. The integrated tamper resistant device 106 may sign the encrypted initial network credentials 116 to provide proof of the authenticity of the encrypted initial network credentials 116. This may enable the receiver of the encrypted data (e.g., the network solution provider 126) to verify that the encrypted data was produced by a genuine integrated tamper resistant device (e.g., the integrated tamper resistant device 106). In some examples, the signing device 118 may sign the encrypted initial network credentials 116 using signing keys provisioned by a manufacturer of the integrated tamper resistant device 106. The signing keys may be based on a public key infrastructure (PKI) and may be authenticated based on a certificate from a certificate authority. In some examples, the signing device 118 may sign the encrypted initial network credentials 116 using a secret value that is shared between the integrated tamper resistant device 106 and the network solution provider 126. As shown in FIG. 1, the integrated tamper resistant device 106 may be configured to output the encrypted and signed initial network credentials 120, which may be stored by the client device 102 in the storage device 122. The storage device 122 may be a non-secure storage device.
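The encrypt-then-sign flow above can be sketched as follows. This is a toy illustration of the pattern only: an HMAC over a shared secret stands in for the signature (one of the options the text mentions), and a hash-based XOR keystream stands in for the public-key encryption; a real device would use schemes such as RSA or an authenticated cipher.

```python
import hashlib
import hmac
import os
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256 counter keystream. Stands in for
    # the public-key encryption performed by the encrypting device 114.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

enc_key = os.urandom(32)    # stand-in for the network solution provider's key
sign_key = os.urandom(32)   # stand-in for the device's provisioned signing key

credentials = secrets.token_bytes(16)             # random initial identity data
ciphertext = keystream_xor(enc_key, credentials)  # encrypt for delivery
signature = hmac.new(sign_key, ciphertext, hashlib.sha256).digest()  # then sign

# Receiver side: authenticate the ciphertext first, then decrypt it.
expected = hmac.new(sign_key, ciphertext, hashlib.sha256).digest()
assert hmac.compare_digest(signature, expected)
assert keystream_xor(enc_key, ciphertext) == credentials
```

Signing the ciphertext rather than the plaintext lets an intermediary such as the manufacturer relay and verify the blob without ever seeing the credentials.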
The client device manufacturer 100 may obtain the encrypted and signed initial network credentials 120 from the client device 102 and may store the encrypted and signed initial network credentials 120 in a storage device 124. In some examples, the storage device 124 may include encrypted and signed initial network credentials from a number of other client devices. As shown in FIG. 1, the client device manufacturer 100 may deliver 128 the encrypted and signed initial network credentials 120 to the network solution provider 126. In some aspects of the disclosure, the client device manufacturer 100 may deliver 128 some or all of the encrypted and signed initial network credentials collected in the storage device 124. In some examples, the network solution provider 126 may be a mobile network operator, a SIM card vendor, a corporate entity, or any other entity configured to honor the initial network credentials 112. If the network solution provider 126 is a SIM card vendor, for example, the SIM card vendor may process (e.g., decrypt) the encrypted and signed initial network credentials 120 and may securely deliver the initial network credentials to a mobile network operator. The network solution provider 126 may implement a decryption and authentication device 130 to decrypt the encrypted and signed initial network credentials 120 of the client device 102 to recover the initial network credentials 112. In some aspects of the disclosure, the decryption and authentication device 130 may be a decryption and authentication circuit. In other examples, the decryption and authentication device 130 may be a processor configured to execute one or more instructions of a decryption operation and an authentication operation.
The network solution provider 126 may verify the authenticity of the encrypted and signed initial network credentials 120 (e.g., using a public key infrastructure (PKI) certificate or by authenticating a digital signature appended to the encrypted and signed initial network credentials 120) to ensure that the encrypted and signed initial network credentials 120 were produced by a genuine integrated tamper resistant device (e.g., the integrated tamper resistant device 106). The network solution provider 126 may store the initial network credentials in a storage device 132. In some examples, the network solution provider 126 may also use the storage device 132 to store the initial network credentials of other client devices. The network solution provider 126 may implement the decryption and authentication device 130 and the storage device 132 in a secure environment to preserve the confidentiality of the initial network credentials (e.g., to prevent an unauthorized party from viewing, obtaining, or accessing the initial network credentials). In some aspects of the disclosure, the initial network credentials (e.g., initial network credentials 112) described herein may include initial network identity data (also referred to as temporary subscriber identity data) that may be used by the network solution provider 126 to enable network authentication and connectivity for the client device 102. For example, upon an initial connection established between the client device 102 and the network supported by the network solution provider 126 based on the initial network credentials 112, the network solution provider 126 may provide the integrated tamper resistant device 106 operational network credentials (e.g., including permanent subscriber identity data) that are intended to replace the initial network credentials 112.
The client device 102 may then implement the integrated tamper resistant device 106 to replace the initial network credentials 112 with the operational network credentials and to use the operational network credentials for subsequent connections with the network. In some scenarios, depending on the algorithm and information used by the integrated tamper resistant device 106 to generate the initial network credentials 112, the initial network credentials 112 may be the same as initial network credentials sent from one or more other integrated tamper resistant devices. This scenario may be referred to as an initial network credential collision. The network solution provider 126 may determine to ignore such collisions or may apply rules for handling or mitigating such initial network credential collisions. In some aspects of the disclosure, the network credentials generating device 110 may be configured to generate a secret seed, which may be used to deterministically derive one or more secondary initial network credentials. In some examples, the initial network credentials 112 may include the secret seed. In other examples, the secure processing device 108 of the integrated tamper resistant device 106 may generate the secret seed and may encrypt and sign the secret seed for delivery to the network solution provider 126. In some aspects of the disclosure, the encrypted and signed secret seed may be delivered to the network solution provider 126 instead of the encrypted and signed initial network credentials 120. For example, if the encrypted and signed initial network credentials 120 include the secret seed, the network solution provider 126 may implement the decryption and authentication device 130 to decrypt the encrypted and signed initial network credentials 120 to recover the secret seed. The network solution provider 126 may use the secret seed to deterministically derive one or more secondary initial network credentials.
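The likelihood of the initial network credential collision scenario described above can be estimated with the standard birthday approximation for n devices drawing uniform random identities. The formula and the example numbers below are illustrative, not part of the disclosure.

```python
import math

def collision_probability(n: int, bits: int) -> float:
    # Birthday approximation: P(collision) ~= 1 - exp(-n*(n-1) / 2^(bits+1))
    # for n devices drawing uniform bits-bit random identities.
    return 1.0 - math.exp(-n * (n - 1) / 2.0 ** (bits + 1))

# With 128-bit random identities, even a billion devices make a collision
# vanishingly unlikely, so a provider could reasonably ignore collisions.
p = collision_probability(10**9, 128)
assert p < 1e-18
```

Shorter identities change the picture quickly: with 32-bit identities, collisions become likely well before a hundred thousand devices, which is one reason a provider might need explicit collision-handling rules.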
For example, each of the one or more secondary initial network credentials may be associated with a different network. In some examples, the network solution provider 126 may use the authenticated and decrypted initial network credentials as a secret seed value to deterministically derive one or more secondary initial network credentials. In some aspects of the disclosure, components of the client device 102 external to the integrated tamper resistant device 106 may not have access to the initial network credentials generated by the integrated tamper resistant device 106. For example, and as described in detail herein, if the integrated tamper resistant device 106 is implemented as a part of a system on chip (SoC) device, portions of the SoC device external to the integrated tamper resistant device 106 may not have access to the initial network credentials generated by the integrated tamper resistant device 106. Moreover, the network solution provider 126 may ensure the security of the initial network credentials obtained from the decryption and authentication device 130. Therefore, in some examples, the initial network credentials (e.g., the initial network credentials 112) generated by the integrated tamper resistant device 106 may serve as temporary bootstrapping network credentials that enable the client device 102 to access a mobile network (e.g., a cellular network supported by the network solution provider 126) without first establishing a local area network connection (e.g., a Wi-Fi connection) to obtain operational network credentials (e.g., cellular or mobile profiles that include permanent subscriber data). This may provide a more convenient out-of-the-box experience for a user of the client device and may facilitate manufacturing of the client devices. In some examples, the client devices described herein may use the network connection established with the bootstrapping network credentials to obtain (e.g., over the air) operational and more permanent subscriber data.
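The deterministic derivation of per-network secondary credentials from a secret seed can be sketched with a keyed hash. HMAC-SHA256 and the derivation labels here are illustrative assumptions; the disclosure does not specify a particular derivation function.

```python
import hashlib
import hmac
import os

# The seed would be generated inside the integrated tamper resistant device.
seed = os.urandom(32)

def derive_credential(seed: bytes, network_id: bytes) -> bytes:
    # Deterministic derivation: the same seed and network label always yield
    # the same secondary credential, so device and provider stay in sync.
    return hmac.new(seed, b"initial-credential|" + network_id, hashlib.sha256).digest()

a = derive_credential(seed, b"network-A")
b = derive_credential(seed, b"network-B")
assert a != b                                      # per-network credentials differ
assert a == derive_credential(seed, b"network-A")  # and are reproducible
```

Because both sides can recompute the same derivation, only the seed needs to be delivered once; each new network gets its own credential without another exchange.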
The aspects described herein may allow a client device manufacturer to assemble client devices containing integrated tamper resistant devices without having to know the specific characteristics (e.g., subscriber data that may be tied to the integrated tamper resistant devices) of each integrated tamper resistant device. Since the initial network credentials generated by the integrated tamper resistant devices included in the client devices are encrypted, the initial network credentials may be opaque to the manufacturer of the client devices. Accordingly, the aspects described herein may reduce the amount of security needed during assembly of the client devices containing the integrated tamper resistant devices. FIG. 2 illustrates an example assembly of a client device including installation of a system on chip (SoC) device containing an integrated tamper resistant device in accordance with various aspects of the disclosure. With reference to FIG. 2, the client device manufacturer 100 may obtain system on chip (SoC) devices 200 (e.g., SoC devices 104, 204, 206) containing integrated tamper resistant devices from a supplier. As shown in FIG. 2, prior to being assembled in a client device, each of the system on chip (SoC) devices 200 may not include a secure storage device and/or may not have access to any storage devices (e.g., a flash memory device). Therefore, since each integrated tamper resistant device may not have access to a storage device, the manufacturer of the integrated tamper resistant devices may not be able to provision network credentials (e.g., subscriber identity data) for each integrated tamper resistant device. Continuing with FIG. 2, the client device manufacturer 100 may install each of the SoC devices 200 in a different client device. For example, the client device manufacturer 100 may install the SoC device 104 in the client device 102.
As shown in FIG. 2, after the client device 102 is assembled, the system on chip (SoC) device 104 may be in communication with the storage device 122 via the data path 202. FIG. 3 shows the client device 102 containing the integrated tamper resistant device 106 in accordance with various aspects of the disclosure. The integrated tamper resistant device 106 may automatically generate initial network credentials (e.g., initial network credentials 112) and may encrypt and sign the initial network credentials. For example, the integrated tamper resistant device 106 may be configured to automatically generate the initial network credentials when the client device 102 is turned on for the first time, or in response to an instruction or command. In some aspects of the disclosure, the integrated tamper resistant device 106 may be configured to encrypt and sign the initial network credentials for delivery to a network solution provider and to encrypt and sign the initial network credentials for local use. For example, the integrated tamper resistant device 106 may encrypt the initial network credentials (e.g., initial network credentials 112) with a public key of the network solution provider and may sign the encrypted initial network credentials to obtain the encrypted and signed initial network credentials 120. The integrated tamper resistant device 106 may then output the encrypted and signed initial network credentials 120 to the storage device 122 as shown in FIG. 3. Continuing with this example, the integrated tamper resistant device 106 may also encrypt the initial network credentials (e.g., initial network credentials 112) with a key specific to the integrated tamper resistant device 106 (e.g., a key that is only known to the integrated tamper resistant device 106). The integrated tamper resistant device 106 may sign the encrypted initial network credentials to obtain the encrypted and signed initial network credentials 320.
The integrated tamper resistant device 106 may then output the encrypted and signed initial network credentials 320 to the storage device 122 as shown in FIG. 3. Therefore, by storing the encrypted and signed initial network credentials 320 in the storage device 122, the integrated tamper resistant device 106 may be able to access the initial network credentials 112 after the encrypted and signed initial network credentials 120 have been delivered to a network solution provider. It should be noted that the integrated tamper resistant device 106 may store the encrypted and signed initial network credentials 120 (and the encrypted and signed initial network credentials 320) on the storage device 122, even if the storage device 122 is considered a non-secure storage device (e.g., external to the integrated tamper resistant device 106). Although the client device 102 has access to the storage device 122 and may read, write and modify the ciphertext of the encrypted and signed initial network credentials 120 stored in the storage device 122, it may be difficult or intrusive for the client device manufacturer 100 to decipher the initial network credentials and obtain the cleartext of the encrypted and signed initial network credentials 120. Additionally, the client device manufacturer 100 may not have the time or resources (e.g., network connectivity and support) to provision the network credentials to the client device 102. Moreover, since the network credentials may include secure information such as subscriber identity data, the client device manufacturer 100 may not have the requisite security in place to securely provision the network credentials to the client device 102. With reference to FIG. 4, the client device manufacturer 100 may assemble a number of client devices 400 (e.g., client devices 102, 404, 406). Each of the client devices 400 may generate and store their own encrypted and signed initial network credentials.
The client device manufacturer 100 may obtain 408 the encrypted and signed initial network credentials from each of the client devices 400. In some examples, the client device manufacturer 100 may store the encrypted and signed initial network credentials from each of the client devices 400 in a storage device 124. For example, the storage device 124 may include the encrypted and signed initial network credentials 120, 412, 414 obtained from the client devices 102, 404, and 406. As shown in FIG. 4, the client device manufacturer 100 may deliver 416 one or more of the encrypted and signed initial network credentials stored in the storage device 124 to the network solution provider 126. The network solution provider 126 may decrypt the encrypted and signed initial network credentials of the client devices 400 to recover the initial network credentials of each client device. The network solution provider 126 may verify the authenticity of the initial network credentials and may store the initial network credentials in a storage device. In accordance with the aspects described herein, the initial network credentials may include initial network identity data that may be used by the network solution provider 126 to enable network authentication and connectivity for the client devices 400. Upon an initial connection established between a client device (e.g., client device 102, 404, 406) and the network supported by the network solution provider 126 based on the initial network credentials, the network solution provider 126, or another entity, may provide the client device operational network credentials (e.g., including permanent subscriber identity data) that are intended to replace the initial network credentials recovered from the encrypted and signed initial network credentials. The client device (e.g., client device 102, 404, 406) may then use the operational network credentials for subsequent connections with the network.
FIG. 5 (including FIGS. 5A and 5B) illustrates a signal flow diagram 500 in accordance with various aspects of the disclosure. FIG. 5 includes the integrated tamper resistant device 106, the storage device 122, the client device manufacturer 100, and the network solution provider 126. In FIG. 5, the areas within the dashed lines 502, 504 represent secure environments. For example, the operations 506, 508, 510, 512, 514, 518, and 520 depicted within the dashed lines 502 may be performed in a secure environment of the integrated tamper resistant device 106, and the operations 526, 528, and 530 depicted within the dashed lines 504 may be performed in a secure environment at the network solution provider 126. As shown in FIG. 5A, the integrated tamper resistant device 106 may obtain one or more cryptographic keys 506. For example, the integrated tamper resistant device 106 may obtain a public cryptographic key of the network solution provider 126 and a cryptographic key specific to the integrated tamper resistant device 106. The one or more cryptographic keys may enable the integrated tamper resistant device 106 to prove its authenticity to a network. In some examples, a cryptographic key and optionally its certificate may be provisioned by a manufacturer of the integrated tamper resistant device 106. As further shown in FIG. 5A, the integrated tamper resistant device 106 may obtain configuration data 508. For example, the configuration data may be non-diversified data (e.g., standard or not unique data), such as a public key from the network solution provider 126 or a subscriber identity range. The integrated tamper resistant device 106 may generate 510 initial network credentials (e.g., initial network credentials 112). The initial network credentials 112 may include a random number as discussed herein. In some examples, the initial network credentials may include the random number and shared secrets, used for symmetric key cryptography.
In other examples, the initial network credentials 112 may include the random number and keys for public key cryptography, such as public keys signed by certificate authorities. In still other examples, the initial network credentials 112 may include the random number and a combination of symmetric and asymmetric keys. In some aspects of the disclosure, the integrated tamper resistant device 106 may use the initial network credentials 112 to provide a cryptographic proof as to its identity, and the network solution provider 126 may use the initial network credentials 112 to verify the cryptographic proof provided by the integrated tamper resistant device 106. In some examples, the cryptographic proof based on the initial network credentials 112 may include a digital signature or a message authentication code generated by the integrated tamper resistant device 106. In some examples, the initial network credentials 112 may include symmetric or asymmetric key material that enables secure provisioning of operational network credentials from the network solution provider 126 to the integrated tamper resistant device 106. The integrated tamper resistant device 106 may encrypt 512 the generated initial network credentials for local usage using a key specific to the integrated tamper resistant device 106. The integrated tamper resistant device 106 may sign 514 the encrypted initial network credentials and may output 515 (e.g., transfer) the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 320) to the storage device 122. The storage device 122 may store 516 the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 320). The integrated tamper resistant device 106 may encrypt 518 the generated initial network credentials for delivery to the network solution provider 126.
For example, the integrated tamper resistant device 106 may encrypt the generated initial network credentials using a public key of the network solution provider 126. The integrated tamper resistant device 106 may sign 520 the encrypted initial network credentials and may output 521 (e.g., transfer) the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 120) to the storage device 122. As shown in FIG. 5B, the storage device 122 may store 522 the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 120). As shown in FIG. 5B, the client device manufacturer 100 may obtain 523 the encrypted and signed initial network credentials intended for delivery to the network solution provider 126 (e.g., the encrypted and signed initial network credentials 120) from the storage device 122. In some examples, the client device manufacturer 100 may maintain a database of encrypted and signed initial network credentials from a number of client devices. As further shown in FIG. 5B, the client device manufacturer 100 may provide 525 the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 120) to the network solution provider 126. In some examples, the client device manufacturer 100 may provide the encrypted and signed initial network credentials along with a number of other encrypted and signed initial network credentials from different integrated tamper resistant devices. As shown in FIG. 5B, the network solution provider 126 may determine 526 whether the encrypted and signed initial network credentials (e.g., the encrypted and signed initial network credentials 120) have a valid signature. The network solution provider 126 may decrypt 528 the encrypted and signed initial network credentials if the signature is valid. The network solution provider 126 may store 530 the initial network credentials resulting from the decryption.
FIG.6illustrates a signal flow diagram600in accordance with various aspects of the disclosure.FIG.6includes the client device102and the network solution provider126. The client device102and the network solution provider126may have already performed the operations previously described with reference toFIG.5. The client device102and the network solution provider126may establish a connection (e.g., an over-the-air connection) based on the initial network credentials generated by an integrated tamper resistant device (e.g., the integrated tamper resistant device106) of the client device102. The network solution provider126may generate operational network credentials (e.g., permanent subscriber identity data) and may transmit606the operational network credentials to the client device102. The client device102may implement the integrated tamper resistant device (e.g., the integrated tamper resistant device106) of the client device102to replace the initial network credentials with the operational network credentials. Exemplary Apparatus and Method Thereon FIG.7is an illustration of a client device700according to one or more aspects of the disclosure. The client device700includes a communication interface (e.g., at least one transceiver)702, an apparatus708, a user interface710, a memory device712, and a storage medium760. These components can be coupled to and/or placed in electrical communication with one another via a signaling bus or other suitable component, represented generally by the connection lines inFIG.7. In some aspects of the disclosure, the apparatus708may be a system on chip (SoC) device. The signaling bus may include any number of interconnecting buses and bridges depending on the specific application of the apparatus708and the overall design constraints. The signaling bus links together the communication interface702, the apparatus708, the user interface710, the memory device712, and the storage medium760. 
The signaling bus may also link various other circuits (not shown) such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further. The communication interface702may be adapted to facilitate wireless communication of the client device700. For example, the communication interface702may include circuitry and/or code (e.g., instructions) adapted to facilitate the communication of information bi-directionally with respect to one or more communication devices in a network. The communication interface702may be coupled to one or more antennas714for wireless communication within a wireless communication system. The communication interface702can be configured with one or more standalone receivers and/or transmitters, as well as one or more transceivers. In the illustrated example, the communication interface702includes a receiver704and a transmitter706. The storage medium760may represent one or more computer-readable, machine-readable, and/or processor-readable devices for storing code, such as processor executable code or instructions (e.g., software, firmware), electronic data, databases, or other digital information. For example, the storage medium760may be used for storing data that is manipulated by the secure processing circuit730(also referred to as a secure processing device) of the integrated tamper resistant device720when executing code. The storage medium760may be any available media that can be accessed by a general purpose or special purpose processor, including portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying code. 
By way of example and not limitation, the storage medium760may include a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a register, a configuration of one or more fuses, and/or any other suitable medium for storing code that may be accessed and read by a computer. The storage medium760may be embodied in an article of manufacture (e.g., a computer program product). By way of example, a computer program product may include a computer-readable medium in packaging materials. In view of the above, in some implementations, the storage medium760may be a non-transitory (e.g., tangible) storage medium. The storage medium760may be coupled to the secure processing circuit730of the integrated tamper resistant device720, such that the secure processing circuit730can read information from, and write information to, the storage medium760. Code and/or instructions stored by the storage medium760, when executed by the secure processing circuit730of the integrated tamper resistant device720, cause the secure processing circuit730to perform one or more of the various functions and/or process operations described herein. The secure processing circuit730of the integrated tamper resistant device720is generally adapted for processing, including the execution of such code/instructions stored on the storage medium760. As used herein, the term “code” or “instructions” shall be construed broadly to include without limitation programming, instructions, instruction sets, data, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
The secure processing circuit730of the integrated tamper resistant device720is arranged to obtain, process and/or send data, control data access and storage, issue commands, and control other desired operations. The secure processing circuit730may include circuitry configured to implement desired code provided by appropriate media in at least one example. For example, the secure processing circuit730may be implemented as one or more processors, one or more controllers, and/or other structure configured to execute executable code. Examples of the secure processing circuit730may include a general purpose processor, a secure processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may include a microprocessor, as well as any conventional processor, controller, microcontroller, or state machine. The secure processing circuit730may also be implemented as a combination of computing components, such as a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, an ASIC and a microprocessor, or any other number of varying configurations. These examples of the secure processing circuit730are for illustration and other suitable configurations within the scope of the disclosure are also contemplated. According to one or more aspects of the disclosure, the secure processing circuit730may be adapted to perform any or all of the features, processes, functions, operations and/or routines for any or all of the apparatuses described herein. 
As used herein, the term “adapted” in relation to the secure processing circuit730may refer to the secure processing circuit730being one or more of configured, employed, implemented, and/or programmed to perform a particular process, function, operation and/or routine according to various features described herein. According to at least one example of the client device700, the secure processing circuit730may include one or more of a network credentials generating circuit/module732, an encrypting circuit/module734, a cryptographic signing circuit/module736, an outputting circuit/module738, and a network credentials replacing circuit/module740that are adapted to perform any or all of the features, processes, functions, operations and/or routines described herein (e.g., features, processes, functions, operations and/or routines described with respect toFIG.7). The network credentials generating circuit/module732may include circuitry and/or instructions (e.g., network credentials generating instructions762stored on the storage medium760) adapted to perform functions relating to, for example, generating initial network credentials for accessing a network. The initial network credentials enable a secure environment (e.g., the integrated tamper resistant device720) of the processing device (e.g., the apparatus708) to be authenticated by a network solution provider before operational network credentials are provided securely by the network solution provider. The encrypting circuit/module734may include circuitry and/or instructions (e.g., encrypting instructions764stored on the storage medium760) adapted to perform functions relating to, for example, encrypting the initial network credentials. The cryptographic signing circuit/module736may include circuitry and/or instructions (e.g., cryptographic signing instructions766stored on the storage medium760) adapted to perform functions relating to, for example, cryptographically signing the encrypted initial network credentials.
The outputting circuit/module738may include circuitry and/or instructions (e.g., outputting instructions768stored on the storage medium760) adapted to perform functions relating to, for example, outputting the encrypted and signed initial network credentials for delivery to the network solution provider. The network credentials replacing circuit/module740may include circuitry and/or instructions (e.g., network credentials replacing instructions770stored on the storage medium760) adapted to perform functions relating to, for example, replacing the initial network credentials with operational network credentials from the network solution provider. The processing circuit750of the apparatus708is generally adapted for processing, including the execution of such code/instructions stored on the storage medium760. In some aspects, the processing circuit750may not be able to view, modify, or otherwise access code/instructions that are to be executed by the secure processing circuit730, such as the network credentials generating instructions762, the encrypting instructions764, the cryptographic signing instructions766, the outputting instructions768, and the network credentials replacing instructions770. According to at least one example of the client device700, the processing circuit750may include one or more of a network connection establishing circuit/module752and a receiving circuit/module754that are adapted to perform some of the features, processes, functions, operations and/or routines described herein (e.g., blocks910and912inFIG.9). The network connection establishing circuit/module752may include circuitry and/or instructions (e.g., network connection establishing instructions772stored on the storage medium760) adapted to perform functions relating to, for example, establishing a connection with the network solution provider based on the network credentials. 
The receiving circuit/module754may include circuitry and/or instructions (e.g., receiving instructions774stored on the storage medium760) adapted to perform functions relating to, for example, receiving operational network credentials including operational subscriber identity data from the network solution provider. As mentioned above, instructions stored by the storage medium760, when executed by the secure processing circuit730of the integrated tamper resistant device720, cause the secure processing circuit730to perform one or more of the various functions and/or process operations described herein. For example, the storage medium760may include one or more of the network credentials generating instructions762, encrypting instructions764, cryptographic signing instructions766, outputting instructions768, and network credentials replacing instructions770. In some aspects of the disclosure, the client device700shown inFIG.7may be the previously described client device102. In these aspects, the apparatus708may be the previously described system on chip device104, the integrated tamper resistant device720may be the previously described integrated tamper resistant device106, the secure processing circuit730may be the previously described secure processing device108, and the memory device712may be the previously described storage device122. In some examples, the network credentials generating circuit/module732may be the network credentials generating device110shown inFIG.1, the encrypting circuit/module734may be the encrypting device114shown inFIG.1, and the cryptographic signing circuit/module736may be the signing device118shown inFIG.1. FIG.8illustrates a method800operational in a processing circuit of an integrated tamper resistant device (e.g., the integrated tamper resistant device720) in accordance with various aspects of the present disclosure.
In an aspect of the disclosure, the integrated tamper resistant device720generates initial network credentials for accessing a network802. The initial network credentials enable the integrated tamper resistant device720to be authenticated by a network solution provider before operational network credentials are provided securely by the network solution provider. The integrated tamper resistant device720encrypts the initial network credentials804. The integrated tamper resistant device720cryptographically signs the encrypted initial network credentials806. The integrated tamper resistant device720outputs the encrypted and signed initial network credentials808for delivery to the network solution provider. FIG.9illustrates a method900operational in a client device (e.g., the client device102) that includes a processing device (e.g., the system on chip device (SoC)104). The processing device may include an integrated tamper resistant device (e.g., the integrated tamper resistant device720) in accordance with various aspects of the present disclosure. In an aspect of the disclosure, the integrated tamper resistant device720of the client device102generates initial network credentials for accessing a network902. The initial network credentials enable the integrated tamper resistant device720to be authenticated by a network solution provider before operational network credentials are provided securely by the network solution provider. The integrated tamper resistant device720encrypts the initial network credentials904. The integrated tamper resistant device720cryptographically signs the encrypted initial network credentials906. The integrated tamper resistant device720outputs the encrypted and signed initial network credentials908for delivery to the network solution provider. The client device102establishes a connection with the network solution provider based on the initial network credentials910. 
The client device102receives operational network credentials including operational subscriber identity data from the network solution provider912. The integrated tamper resistant device720of the client device102replaces the initial network credentials with the operational network credentials. Those of ordinary skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the implementations disclosed herein may be implemented as hardware, software, firmware, middleware, microcode, or any combination thereof. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Within the disclosure, the word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any implementation or aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term “aspects” does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term “coupled” is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another—even if they do not directly physically touch each other. For instance, a first die may be coupled to a second die in a package even though the first die is never directly physically in contact with the second die. 
The terms “circuit” and “circuitry” are used broadly, and are intended to include both hardware implementations of electrical devices and conductors that, when connected and configured, enable the performance of the functions described in the disclosure, without limitation as to the type of electronic circuits, as well as software implementations of information and instructions that, when executed by a processor, enable the performance of the functions described in the disclosure. As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining, and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” Accordingly, the various features associated with the examples described herein and shown in the accompanying drawings can be implemented in different examples and implementations without departing from the scope of the disclosure. Therefore, although certain specific constructions and arrangements have been described and shown in the accompanying drawings, such implementations are merely illustrative and not restrictive of the scope of the disclosure, since various other additions and modifications to, and deletions from, the described implementations will be apparent to one of ordinary skill in the art. Thus, the scope of the disclosure is only determined by the literal language, and legal equivalents, of the claims which follow.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS The 5PP approaches described herein provide techniques for securely exchanging a secret data matrix between a source computer system and a destination computer system in a manner that is designed to more reliably maintain the secrecy of the secret data matrix as to other computer systems. In this fashion, the secret data matrix can support operations such as key services (e.g., key generation) by the source and destination computer systems (such as for keys that would be used in symmetric encryption/decryption (e.g., AES encryption/decryption)). First Example 5PP Embodiment (FIG.3) FIG.3shows an example five-pass protocol (5PP) for securely exchanging a secret data matrix between a source computer system and a destination computer system. The 5PP ofFIG.3obscures the secret data matrix x using 5 parameters, where the first and second parameters (P1and A, respectively) are known by the source computer system but not shared in unobscured form with the destination computer system, and where the third, fourth, and fifth parameters (b, P2, and c, respectively) are known by the destination computer system but not shared in unobscured form with the source computer system. Each pass is shown inFIG.3by an arrow between the source and destination, and the sequence of passes is numbered1through5byFIG.3. FIG.4Ashows an example networked system400in which the 5PP cryptographic approaches can be implemented, where the source takes the form of a source computer system402that is in communication with the destination computer system404over a network406. Source and destination computer systems402,404may each comprise any electronics system that includes one or more processors with one or more associated memories that cooperate with each other to implement the processing logic discussed herein. The one or more processors may take the form of any computational resource capable of performing the operations described herein.
For example, the processor may be a general purpose processor (GPP) such as a CPU that executes software to carry out the operations described herein. The software may take the form of processor-executable instructions that are resident on a non-transitory computer-readable storage medium such as computer memory, and the processor may fetch and execute the software instructions to carry out the operations described herein. As another example, the processor may be a special purpose or fixed purpose processor that is tailored for implementing the 5PP approaches described herein (e.g., an application-specific integrated circuit (ASIC)). As yet another example, the processor may take the form of a reconfigurable logic device, such as a field programmable gate array (FPGA). The FPGA can be loaded with a bitfile or the like that serves as firmware to configure the configurable logic gates of the FPGA so that the FPGA becomes effectively hard-wired to carry out the operations described herein. Accordingly, the FPGA can implement a hardware logic circuit that applies massive parallelism and pipelining to hardware accelerate the processing operations described herein. As yet another example, the processor may take the form of a graphics processor unit (GPU) which can be very suitable for performing the bit level manipulations described herein. The source and destination computer systems402,404may each also include network connectivity for sending messages over the network406to each other. The network406can take the form of any communications network or combination of communications networks, whether wired or wireless, capable of communicating data between a source computer system and a destination computer system, including but not limited to wide area networks such as the Internet, local area networks, etc. 
For the purpose of analysis below, we will assume that network406provides a reliable, high data-rate connection between the source computer system402and destination computer system404(and that the 5PP approaches discussed herein need not be sensitive to latencies of the order of a network roundtrip time). FIG.3also shows different message matrices v1, v2, v3, v4, and v5that are sent between the source and destination for each pass. As shown byFIG.4B, (1) the matrix v1can be included in message410that is sent from the source computer system402to the destination computer system404in the first pass, (2) the matrix v2can be included in message412that is sent from the destination computer system404to the source computer system402in the second pass, (3) the matrix v3can be included in message414that is sent from the source computer system402to the destination computer system404in the third pass, (4) the matrix v4can be included in message416that is sent from the destination computer system404to the source computer system402in the fourth pass, and (5) the matrix v5can be included in message418that is sent from the source computer system402to the destination computer system404in the fifth pass. Each message410,412,414,416, and418may be broken up over multiple packets transmitted across network406if necessary. In each pass,FIG.3shows that obscuration of the secret data matrix x can be provided over the series of five passes via reversible logic operations that commute such as permutations and modulo additions/subtractions with respect to the first, second, third, fourth, and fifth parameters. Thus, withFIG.3, the confidentiality of each pass is protected by both a permutation and a modulo addition (or subtraction). The modulo 2 additions/subtractions can be implemented as XOR operations. 
The permutations can be two permutation operations that are commutative: permuting among words (word-wise permutation—see the example ofFIG.9Cdiscussed below) and permuting the bits of each word (bit-wise (or within-word) permutation—see the example ofFIG.9Bdiscussed below). These permutations alone are not secure because an observer can observe the before and after of each permutation; and with these two observations, it is possible to reveal the permutation itself by means of a permutation recovery algorithm (PRA) (see Appendix 1). Therefore, we want to avoid allowing an observer to see the before and after of a permutation. This can be accomplished by applying an additional reversible logic operation such as a modular arithmetic operation in connection with each permutation. XOR operations are particularly simple and effective for this. With the example ofFIG.3, within-word permutations (P1) are XORed with a single random constant and among-words permutations (P2) are XORed with a random number for each word. A constant can be used when permuting bits of each word (P1) because, while all words are permuted and XORed in the same way, it is impossible to determine which changes are from which operation. The attacker is left with only one possibility, a brute-force attack with complexity of O(2^m) presuming the attacker picks the values for m and n correctly. To maintain this level of security it is desirable to use n distinct random numbers to obscure the among-words permutation (P2). Moreover, it is desirable to use a high entropy Random Bit Generator (RBG) to generate the values for the private parameters in order to avoid the weakening of the estimated attack complexities discussed below. Otherwise, correlations in the numbers generated could potentially be used to aid an attacker.
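A minimal sketch of why a bare permutation is not secure, as noted above: given the before and after of a word-wise permutation (with distinct words), an observer can recover the permutation directly. The matching-based recovery below is a simplified stand-in for the PRA referenced in Appendix 1, and the permutation convention (`perm[j]` names the source index of output word j) is an illustrative assumption.

```python
def apply_word_perm(perm, words):
    # Word-wise permutation: output word j is the input word at index perm[j].
    return [words[i] for i in perm]

def recover_word_perm(before, after):
    # Simplified PRA for the distinct-word case: match each output word
    # back to the position it occupied before the permutation.
    return [before.index(w) for w in after]

before = [0b0011, 0b1101, 0b0111, 0b1000, 0b1011]  # toy 4-bit words
perm   = [1, 4, 0, 2, 3]                           # secret permutation
after  = apply_word_perm(perm, before)

# Two observations (before, after) suffice to reveal the permutation.
assert recover_word_perm(before, after) == perm
```

This is exactly the attack the XOR steps are meant to defeat: once each pass also XORs in unknown key material, the observer no longer sees a clean before/after pair.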
But, some practitioners may choose to employ other techniques for generating the private parameters, such as pseudo-random bit generation or other algorithmic techniques capable of producing the bit values for the private parameters which the practitioner deems as providing effective resistance against certain levels of attack. The permuting of bits within each word and the permuting among words are two functions that commute. This is true because the permuting among the words does not affect the bits within the words and the permuting of bits within each word has an identical effect throughout all the words. However, the addition of the XOR steps defeats the ability to commute the two permutations. This observation eliminates the possibility of using the XOR steps to achieve a confidential, commutative and capable three-pass protocol. Without commutability, the source cannot fully remove its influence on the key from the first pass leaving only the destination's influence on the key in the third pass. To make a protocol using permutations and XORs in this manner capable, more information is needed on the destination side than is available with a 3PP. At least one more roundtrip between the source and the destination is needed. Thus, if we use two permutations as the primary mechanism for encryption, a minimum of five passes will be required for the destination to be able to extract the secret data matrix x. The secret data matrix x can comprise n words of m bits each.FIG.8shows an example secret data matrix x (800) where each word Wi(8021,8022, . . .802n) of x has m bits (Bit1, Bit2, . . . , Bit m). While a practitioner may choose any values for n and m that are greater than 1 if desired, it should be understood that larger values of n and m will enhance the security of the system. The value of m determines the brute-force effort required to guess A. The expected number of guesses required for this brute-force effort is 2^(m-1).
To make the number of guesses for an attack on P1equivalent, the value of n should be such that n!≥2^m. Example values of m and n that require the same brute-force effort to break AES 256 are m=256 and n=64. Accordingly, a practitioner may find that using m=256 and n=64 is suitable for the system. However, as noted above it should be understood that practitioners may choose different values of m and n if desired. For example, values of m between 64 and 1024 (or higher) may be desirable for some practitioners. Also, values of n between 16 and 170 (or higher) may be desirable for some practitioners. The parameter A can take the form of a matrix of n words of m bits each, where each word of A shares the same value (or equivalently an m-bit word that is used n times to obscure the bits within each word of v1and v2). Thus, as a simple example where n is 3 and m is 4, an example value for A can be the three-word combination {1011, 1011, 1011} (or thought of as the single word {1011} that affects each of the n words the same). The parameter b can take the form of a data matrix whose size is m×n bits in length, where b can include n words of m bits each, where each word of b may exhibit a different value. Thus, as a simple example where n is 3 and m is 4, an example value for b can be the three-word combination {0100, 1011, 0010}. The parameter c can take the form of a data matrix whose size is m×n bits in length, where c can include n words of m bits each, where each word of c may exhibit a different value. Thus, as a simple example where n is 3 and m is 4, an example value for c can be the three-word combination {0011, 1000, 0100}. The permutation matrix P1can take the form of an m×m array, where each row and column of P1will have only a single “1”/“true” value while all other values will be “0”/“false”, as shown byFIG.9A.
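The sizing rule n! ≥ 2^m can be checked directly for the suggested AES-256-equivalent parameters (m=256, n=64), since log2(64!) is roughly 296 bits of permutation entropy:

```python
import math

# Verify the sizing rule n! >= 2^m for the parameters suggested above.
m, n = 256, 64
permutation_bits = math.log2(math.factorial(n))  # entropy of a word-wise permutation
assert permutation_bits >= m  # 64! provides at least 2^256 distinct permutations
```

This confirms that guessing P2 by brute force (n! candidates) is at least as hard as guessing the m-bit constant A (2^(m-1) expected guesses).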
P1serves as a bit-wise permutation agent that permutes the positions of bits within each word of a matrix made up of words such as x.FIG.9Bshows an example (where m=4) of how bit-wise permutation can be performed on the bits of a word such as a simple example of a 4-bit word802of x. As shown byFIG.9B, (1) the true value within the first row of P1identifies the bit position to which the first bit of word802will be permuted, (2) the true value within the second row of P1identifies the bit position to which the second bit of word802will be permuted, (3) the true value within the third row of P1identifies the bit position to which the third bit of word802will be permuted, and (4) the true value within the fourth row of P1identifies the bit position to which the fourth bit of word802will be permuted. Thus, the P1shown byFIG.9Bwill operate on word802of {0010} to produce a bit-wise permuted word902of {0100}. This permutation pattern would be repeated over each word of the subject data matrix such as x.
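The FIG. 9B walk-through can be reproduced with a small routine. The permutation `[1, 2, 3, 0]` (a cyclic shift, with output bit j taking input bit `perm[j]`, bits indexed most-significant first) is one mapping consistent with the stated {0010} → {0100} example; the actual P1 in the figure may differ.

```python
def bit_perm(perm, word, m):
    # Bit-wise permutation of one m-bit word: output bit position j
    # receives the input bit at position perm[j] (0 = most significant bit).
    bits = [(word >> (m - 1 - i)) & 1 for i in range(m)]
    return sum(bits[perm[j]] << (m - 1 - j) for j in range(m))

# Reproduce the FIG. 9B example: {0010} -> {0100} for m = 4.
perm = [1, 2, 3, 0]  # assumed permutation consistent with the example
assert bit_perm(perm, 0b0010, 4) == 0b0100
```

Applying the same `perm` to every word of a matrix models how a single P1 acts identically on all n words.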
As shown by FIG. 9C, (1) the true value within the first row of P2 identifies the word position to which the first word of matrix 904 will be permuted, (2) the true value within the second row of P2 identifies the word position to which the second word of matrix 904 will be permuted, (3) the true value within the third row of P2 identifies the word position to which the third word of matrix 904 will be permuted, (4) the true value within the fourth row of P2 identifies the word position to which the fourth word of matrix 904 will be permuted, and (5) the true value within the fifth row of P2 identifies the word position to which the fifth word of matrix 904 will be permuted. Thus, the P2 shown by FIG. 9C will operate on matrix 904 of {0011, 1101, 0111, 1000, 1011} to produce a word-wise permuted matrix 906 of {1101, 1011, 0011, 0111, 1000}. FIGS. 9B and 9C show examples where P1 and P2 are read and applied row-wise. It should be understood that P1 and/or P2 can also be read and applied column-wise if desired by a practitioner. The source computer system 402 can randomly generate the parameters P1 and A. The source computer system 402 can also compute the transpose of P1 (P1t) after P1 has been generated. Like P1, the source computer system 402 will not share P1t with the destination computer system 404 in unobscured form. It should be understood that the random generation with respect to A can be a random generation of an m-bit word of A (where this randomly generated m-bit word is used n times to create v1). Furthermore, the destination computer system 404 can randomly generate the parameters b, P2, and c. The destination computer system 404 can also compute the transpose of P2 (P2t) after P2 has been generated. Like P2, the destination computer system 404 will not share P2t with the source computer system 402 in unobscured form. With the 5PP approach of FIGS. 3, 4A, and 4B, the matrices v1, v2, v3, v4, and v5 can have a bit length of m×n bits, which can be arranged as n words of m bits each.
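A row-wise reading of P1 or P2 can be modeled with an index list in which entry i gives the position to which bit (or word) i moves, i.e., the column holding the single "1" in row i. The sketch below reproduces the FIG. 9C word-wise example exactly; since FIG. 9B's full matrix is not reproduced here, the 4-bit permutation shown is one hypothetical choice consistent with the stated mapping of {0010} to {0100}:

```python
def permute_bits(bits, perm):
    """Bit-wise permutation (the P1 role): bit i of `bits` moves to
    position perm[i], where perm[i] is the column of the 1 in row i of P1."""
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        out[perm[i]] = b
    return out

def permute_words(words, perm):
    """Word-wise permutation (the P2 role): word i moves to position perm[i]."""
    out = [None] * len(words)
    for i, w in enumerate(words):
        out[perm[i]] = w
    return out

# A hypothetical P1 (m = 4) consistent with FIG. 9B's {0010} -> {0100} example:
p1 = [0, 2, 1, 3]
print(permute_bits([0, 0, 1, 0], p1))  # [0, 1, 0, 0]

# The P2 (n = 5) implied by the FIG. 9C example:
p2 = [2, 0, 3, 4, 1]
y = ["0011", "1101", "0111", "1000", "1011"]
print(permute_words(y, p2))  # ['1101', '1011', '0011', '0111', '1000']
```

Representing a permutation matrix this way also makes the transpose easy: P1t corresponds to the inverse index list.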
An eavesdropper has only five observations for each of the n words within v1, v2, v3, v4, and v5. Given the existence of 6 unknown variables (x, P1, A, b, P2, and c) and 5 observations, this makes 5PP highly resistant to eavesdropping attacks, as discussed in greater detail below. Returning to FIG. 3, the source computer system 402 can compute v1 by (1) bit-wise permuting x according to P1 to yield a bit-wise permutation of x, and (2) then adding A to the bit-wise permutation of x. FIG. 3 shows an example formula for computing v1 in this fashion. After v1 has been communicated to the destination computer system 404 via message 410, the destination computer system can extract v1 from this message and compute v2 by (1) adding b to v1 to yield a summation of v1 with b, and (2) then word-wise permuting the summation of v1 with b according to P2 to yield a word-wise permutation of the summation of v1 with b. FIG. 3 shows an example formula for computing v2 in this fashion. After v2 has been communicated to the source computer system 402 via message 412, the source computer system can extract v2 from this message and compute v3 by (1) subtracting A from v2 to yield the difference between v2 and A, and (2) then bit-wise permuting the difference between v2 and A according to the transposed version of P1 (P1t). FIG. 3 shows an example formula for computing v3 in this fashion. It is worth noting that because the same A is applied to all n words, the word-wise permutation of A via P2 during the second pass will not affect how A interacts with the other components of v2. Accordingly, the subtraction of A at the third pass (prior to the bit-wise permutation according to P1t) operates to completely remove A from v3. Furthermore, the application of P1t operates to remove the effect of P1 on the secret data matrix x.
After v3 has been communicated to the destination computer system 404 via message 414, the destination computer system can extract v3 from this message and compute v4 by (1) word-wise permuting v3 according to the transposed version of P2 (P2t) to yield a word-wise permutation of v3, and (2) then adding c to the word-wise permutation of v3. FIG. 3 shows an example formula for computing v4 in this fashion. It is worth noting that the use of P2t operates to remove the effect of P2 on the secret data matrix x. After v4 has been communicated to the source computer system 402 via message 416, the source computer system can extract v4 from this message and compute v5 by (1) subtracting x from v4 to yield the difference between v4 and x, and (2) then bit-wise permuting the difference between v4 and x according to P1. FIG. 3 shows an example formula for computing v5 in this fashion. It is worth noting that the use of P1 operates to remove the effect of P1t on the parameter b. After v5 has been communicated to the destination computer system 404 via message 418, the destination computer system can extract v5 from this message, and the destination computer system 404 now has the information it needs to derive the secret data matrix x. To do so, the destination computer system 404 needs to remove the effect of P1 on c. Accordingly, the destination computer system 404 needs to first derive P1t based on its knowledge of v4, v5, b, and c. Once P1t is derived, the destination computer system can readily compute x as shown by FIG. 3. To accomplish derivation of P1t, the destination computer system 404 uses its knowledge of b and v5 to compute the bit-wise permutation of c according to P1 (P1c). That is, P1c = v5 − b. The destination computer system 404 can then derive P1 according to the permutation recovery algorithm (PRA) described below in Appendix 1. Once P1 has been derived by the destination computer system 404 using the PRA, the destination computer system 404 can readily derive P1t as the transpose of derived P1.
With knowledge of the derived P1t, the destination computer system 404 can compute a bit-wise permutation of v5 according to P1t. The secret data matrix x can then be derived by the destination computer system 404 as the difference between v4 and the bit-wise permutation of v5 according to P1t (see FIG. 3). FIG. 5 shows an example process flow for execution by the source computer system 402 to implement its operations as part of the 5PP approach of FIG. 3. At step 500, a processor of the source computer system 402 generates the secret data matrix x and the parameters A, P1, and P1t. As noted, the source computer system 402 can randomly generate x, A, and P1. It should be understood that P1t need not necessarily be computed at step 500, and some practitioners may choose to wait to compute P1t until later during the process flow (e.g., between or as part of steps 502-508). Furthermore, it should be understood that the order in which A and P1 are generated is immaterial. At step 502, a processor of the source computer system 402 computes the matrix v1 using logical operations as shown by FIG. 3 for the first pass. This matrix v1 is then sent to the destination computer system 404 via message 410 (see step 504). At step 506, the source computer system 402 receives the matrix v2 in message 412 from the destination computer system. At step 508, a processor of the source computer system 402 then uses logical operations as shown by FIG. 3 for the third pass to compute the matrix v3. This matrix v3 is then sent to the destination computer system 404 via message 414 (see step 510). At step 512, the source computer system 402 receives the matrix v4 in message 416 from the destination computer system. At step 514, a processor of the source computer system 402 then uses logical operations as shown by FIG. 3 for the fifth pass to compute the matrix v5. This matrix v5 is then sent to the destination computer system 404 via message 418 (see step 516).
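The five passes and the final recovery can be sketched end to end at toy sizes (m=8, n=4). This is our own illustration under two stated substitutions: XOR stands in for the modular addition/subtraction (a substitution the text itself permits), and a simple column-matching recovery stands in for the Appendix 1 PRA, which is not reproduced here. Permutation matrices are represented as index lists, with the transpose as the inverse permutation.

```python
import random

def pbits(words, perm):
    """Apply bit-wise permutation `perm` to every m-bit word (bit i -> perm[i])."""
    out = []
    for w in words:
        r = 0
        for i, j in enumerate(perm):
            r |= ((w >> i) & 1) << j
        out.append(r)
    return out

def pwords(words, perm):
    """Apply word-wise permutation `perm` to the list of words (word i -> perm[i])."""
    out = [0] * len(words)
    for i, w in enumerate(words):
        out[perm[i]] = w
    return out

def transpose(perm):
    """Transpose of a permutation matrix = the inverse permutation."""
    inv = [0] * len(perm)
    for i, j in enumerate(perm):
        inv[j] = i
    return inv

def column(words, i):
    return tuple((w >> i) & 1 for w in words)

def recover_p1(c, p1c, m):
    """Stand-in for the Appendix 1 PRA: match bit-columns of c to those of P1*c.
    Unambiguous only when all m columns of c are distinct (cf. FIG. 7)."""
    src = {column(c, i): i for i in range(m)}
    perm = [0] * m
    for j in range(m):
        perm[src[column(p1c, j)]] = j
    return perm

random.seed(1)
m, n = 8, 4
x = [random.getrandbits(m) for _ in range(n)]     # the secret data matrix

# Source's private parameters: bit permutation P1 and the repeated word A.
p1 = list(range(m)); random.shuffle(p1)
p1t = transpose(p1)
Aw = random.getrandbits(m)

# Destination's private parameters: b, word permutation P2, and a c whose
# bit-columns are distinct (re-drawn as needed, mirroring FIG. 7).
b = [random.getrandbits(m) for _ in range(n)]
p2 = list(range(n)); random.shuffle(p2)
p2t = transpose(p2)
while True:
    c = [random.getrandbits(m) for _ in range(n)]
    if len({column(c, i) for i in range(m)}) == m:
        break

# The five passes (XOR in place of modular addition/subtraction).
v1 = [w ^ Aw for w in pbits(x, p1)]                    # pass 1 (source)
v2 = pwords([v ^ bw for v, bw in zip(v1, b)], p2)      # pass 2 (destination)
v3 = pbits([v ^ Aw for v in v2], p1t)                  # pass 3 (source)
v4 = [v ^ cw for v, cw in zip(pwords(v3, p2t), c)]     # pass 4 (destination)
v5 = pbits([v ^ xw for v, xw in zip(v4, x)], p1)       # pass 5 (source)

# Destination recovers P1 from P1*c = v5 XOR b, then derives x from v4 and v5.
p1c = [v ^ bw for v, bw in zip(v5, b)]
p1_rec = recover_p1(c, p1c, m)
x_rec = [v ^ w for v, w in zip(v4, pbits(v5, transpose(p1_rec)))]
print(x_rec == x)  # True: the destination has derived the secret x
```

Because XOR is carry-free, the algebra of FIG. 3 goes through exactly: v5 XOR b equals the bit-wise permutation of c under P1, and x is recovered as v4 XOR P1t(v5).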
FIG. 6 shows an example process flow for execution by the destination computer system 404 to implement its operations as part of the 5PP approach of FIG. 3. At step 600, a processor of the destination computer system 404 generates the parameters b, P2 (and its transpose P2t), and c. As noted, the destination computer system 404 can randomly generate b, P2, and c, and can compute P2t from P2. It should be understood that P2t need not necessarily be computed at step 600, and some practitioners may choose to wait to compute P2t until later during the process flow (e.g., between or as part of steps 602-610). Similarly, it should be understood that c need not necessarily be generated at step 600, and some practitioners may choose to wait to generate c until later during the process flow (e.g., between or as part of steps 602-610). Furthermore, it should be understood that the order in which b, P2, and c are generated is immaterial. Further still, it should be understood that the destination computer system 404 may potentially perform step 600 before the source computer system 402 performs step 500 (and that the source computer system 402 may potentially perform step 500 before the destination computer system 404 performs step 600). The inventors note that the value of c can be evaluated and re-generated if necessary in order to prevent the possibility that the subsequent derivation of P1t (see step 616 below) will produce an ambiguous result. Accordingly, a practitioner may want to include a process flow as shown by FIG. 7 as part of step 600 in order to reduce the risk of ambiguity in the derivation of P1t. As shown by FIG. 7, at step 700, a processor of the destination computer system 404 randomly generates a candidate for c. At step 702, the processor determines whether the c candidate will lead to an ambiguous permutation recovery with respect to the PRA described in Appendix 1.
This test can rely on the observation that if a value of c produces an ambiguous derivation of P1 for any value of P1, it will produce an ambiguous derivation for all values of P1; likewise, if a value of c produces an unambiguous derivation of P1 for a known value of P1, it will produce an unambiguous derivation of P1 for any value of P1. Accordingly, for step 702, a test value of P1 can be arbitrarily chosen, and the PRA of Appendix 1 can be applied to the c candidate and this test value of P1. If this application of the PRA at step 702 yields an unambiguous derivation of the test P1, then a conclusion can be reached that the c candidate will produce an unambiguous derivation of the actual P1 used in the 5PP. Likewise, if this application of the PRA at step 702 yields an ambiguous derivation of the test P1, then a conclusion can be reached that the c candidate will produce an ambiguous derivation of the actual P1 used in the 5PP. While the test P1 can be any arbitrary value for P1, the inventors believe that the use of an identity matrix for the test P1 will provide a simple and efficient manner for testing the c candidate. Appendix 2 included herewith explains techniques that can be used to compute an approximation to the probability of ambiguity for a c candidate. This determined probability can be used to estimate the frequency of failure that would arise if a process flow like that shown by FIG. 7 were not employed (or similarly, the frequency with which FIG. 7 would need to compute new values for c as discussed below). As explained in Appendix 2, these probabilities are extremely low for relatively large values of m and n (such as 256 and 64 respectively), thereby ensuring that a value of c which produces a unique derivation of P1 can always be found rapidly. If step 702 results in a determination that the c candidate permits unambiguous derivation of P1, then the destination computer system 404 will use the c candidate as the value for c (step 704).
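The FIG. 7 candidate loop (steps 700-704) can be sketched as below. Since the Appendix 1 PRA is not reproduced here, a column-distinctness criterion serves as a plausible stand-in for "unambiguous recovery": under column matching, two identical bit-columns of c are exactly what make two bit positions of P1 indistinguishable. The birthday-style failure estimate at the end is our own rough bound, not the Appendix 2 analysis.

```python
import random

def columns_distinct(c, m):
    """Stand-in ambiguity test: recovery of P1 from (c, P1*c) by column
    matching is unique iff all m bit-columns of the n words of c differ."""
    cols = {tuple((w >> i) & 1 for w in c) for i in range(m)}
    return len(cols) == m

def generate_c(m, n, rng):
    """FIG. 7 loop: draw candidates for c until one passes the test."""
    while True:
        c = [rng.getrandbits(m) for _ in range(n)]
        if columns_distinct(c, m):
            return c

rng = random.Random(7)
c = generate_c(8, 4, rng)
print(columns_distinct(c, 8))  # True

# Rough birthday-bound estimate (our own, not Appendix 2) of the chance a
# random c fails the test: about m^2 / 2^(n+1).
m, n = 256, 64
print(m * m / 2 ** (n + 1))  # on the order of 1e-15: re-draws are very rare
```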
Otherwise, the process flow returns to step 700 where a new c candidate can be generated and evaluated again by step 702. At step 602, the destination computer system 404 receives the matrix v1 in message 410 from the source computer system. At step 604, a processor of the destination computer system 404 computes the matrix v2 using logical operations as shown by FIG. 3 for the second pass. This matrix v2 is then sent to the source computer system 402 via message 412 (see step 606). At step 608, the destination computer system 404 receives the matrix v3 in message 414 from the source computer system. At step 610, a processor of the destination computer system 404 then uses logical operations as shown by FIG. 3 for the fourth pass to compute the matrix v4. This matrix v4 is then sent to the source computer system 402 via message 416 (see step 612). At step 614, the destination computer system 404 receives the matrix v5 in message 418 from the source computer system. At this point, the destination computer system now has the information it needs to derive P1 and P1t. To derive P1 and P1t, at step 616, a processor of the destination computer system 404 can perform a permutation recovery algorithm (PRA). An example PRA to derive the value for P1 is described in Appendix 1. As noted above, this computation will always produce a unique result if the procedure described in FIG. 7 is followed. Once P1 has been derived, the transpose of P1 can be readily computed to thereby derive P1t. If a practitioner chose not to implement the procedure of FIG. 7 to guarantee a value for c that will produce an unambiguously derived P1, it is possible that the PRA will produce ambiguity in the derivation of P1 at step 616. That is, the PRA may not resolve to a single value for the derived P1. If this ambiguity results at step 616, the destination computer system 404 can halt the protocol and notify the source computer system 402 of this error.
In such an event, the source computer system 402 can restart the five-pass protocol from the beginning (or the process flow can go back to the fourth-pass computation of v4 using a new value of c). With the re-started process, the values of one or more of the private parameters can be changed by the source computer system 402 and/or the destination computer system 404, although it is preferred that the re-started 5PP regenerate all of the private parameters to more reliably ensure security. The likelihood of consecutive derivations of P1 leading to an ambiguous result will be exceedingly small. But, as noted, if the procedure of FIG. 7 is employed, there would be no need for this type of exception handling. Once P1 and P1t have been unambiguously derived at step 616, the process flow can proceed to step 618. At step 618, a processor of the destination computer system 404 derives the secret data matrix x by computing the value for x based on v4, v5, and P1t as shown by FIG. 3. The destination computer system 404 then knows the secret data matrix x. At this point, the destination computer system 404 can send a message to the source computer system 402 acknowledging that it was able to successfully derive the secret data matrix x. After the destination computer system 404 has successfully derived the secret data matrix x, x can then be used by the destination computer system 404 as a basis to derive a key, such as a random symmetric key, to be used for encryption operations. Similarly, the source computer system 402 can also use x to derive a key to be used for decryption operations. Any of a number of techniques can be used to derive a key from x. For example, x itself can be used as the key. As another example, a defined subset of bits within x can be used as the key. As yet another example, a logical operation such as a hashing function can be applied to x to generate the key.
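The three derivation options just described can be sketched as follows. SHA-256 is our illustrative choice of hashing function, and the 16-byte "subset" is an arbitrary example of a defined subset of bits; neither is mandated by the text.

```python
import hashlib

def key_from_x(x_words, m, method="hash"):
    """Derive a symmetric key from the shared secret x (n words of m bits).
    The three methods mirror the options in the text; SHA-256 and the
    16-byte subset are illustrative choices only."""
    raw = b"".join(w.to_bytes((m + 7) // 8, "big") for w in x_words)
    if method == "raw":      # option 1: use x itself as the key
        return raw
    if method == "subset":   # option 2: use a defined subset of bits of x
        return raw[:16]
    return hashlib.sha256(raw).digest()  # option 3: hash x to form the key

x = [0x0123456789ABCDEF] * 4             # toy 4-word secret, m = 64
k1 = key_from_x(x, 64)
k2 = key_from_x(x, 64)
print(k1 == k2, len(k1))  # True 32: both ends derive the same 32-byte key
```

Both endpoints must apply the same method, which is exactly the point made in the following paragraph.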
The source and destination computer systems 402, 404 can employ the same key generation techniques for deriving the key from x. The message acknowledging the success in the sharing of the secret x may also contain one or more keys to be used for symmetric encryption and decryption of a future message. These additional keys would be encrypted with the key derived from the secret x. Accordingly, the system 400 of FIGS. 4A and 4B can be put to beneficial use by both the operator of the source computer system 402 and the operator of the destination computer system 404 because the system 400 provides a mechanism for both operators to securely communicate with each other. That is, based on the generated keys, the source and destination computer systems 402 and 404 can communicate data to each other in encrypted formats; and the keys can be used to decrypt the encrypted data at the receiving end. For example, the source and destination computer systems 402, 404 can take the form of computers that operate websites and browsers where HTTP data is exchanged. The 5PP approach to key exchange can then be used to support the encryption/decryption of such HTTP-based data exchanges. Accordingly, it should be understood that the 5PP approaches described herein can be used to replace existing key exchange techniques that underlie Hypertext Transfer Protocol Secure (HTTPS) security. Beyond just HTTPS, the inventors believe that the 5PP approaches described herein can be used as the basis for secure key exchange in any system that employs Public Key Infrastructure (PKI). Practitioners can choose how frequently the source and destination computer systems 402, 404 will perform the 5PP with respect to new secret data matrices in order to generate new keys. For example, the key exchange can be performed at the time of installation of new endpoint equipment in a network and only rarely thereafter whenever there are equipment or network failures.
However, in a web traffic context, a new key exchange can happen each time there is a new session between a browser and a new server. This is likely to occur only every few minutes (whereas the 5PP can be completed within milliseconds). Another use of the 5PP involves the registration of security devices such as the Q-Net I/O (QIO) units marketed by Q-Net Security. Registration is the process of sharing a secret that enables the sharing of keys between a QIO unit and a Q-Net Policy Manager (QPM). In applications involving a few QIO units, it is possible to register them all with the QPM at a single secure location. However, if there are a large number of QIO units, such a procedure becomes logistically problematic. As a more flexible procedure, a small handheld device capable of executing the 5PP can perform the registration function even in insecure locations (e.g., outside of a secure environment for the QPM) while making eavesdropping-based attacks on security computationally infeasible (as discussed below). It should also be understood that the secret data matrix x need not be limited to use for supporting key generation. The secret data matrix x could be any type of data matrix that a practitioner wants to transfer between source and destination in an encrypted format for decryption at the destination. While the example 5PP of FIG. 3 shows the use of modular addition and subtraction operations as reversible logic operations at various stages of the 5PP, it should be understood that other reversible logic operations could be employed, examples of which are discussed above. For example, the addition and subtraction operations could be replaced by XOR operations. As noted above, the source and destination computer systems 402, 404 can implement the logic for carrying out the 5 stages of operations shown in the example of FIG. 3 using software and/or hardware.
For example, software in the form of a plurality of instructions executable by a processor of the source computer system 402 and/or a processor of the destination computer system 404 can be used to carry out the operations of FIG. 3. However, for improved performance, these operations could also be implemented in hardware logic, such as state machine-controlled hardware logic, rather than in stored program-enabled software logic. By hardwiring a hardware logic circuit to carry out the logical operations, massive parallelism can be applied to the bit operations to further improve performance. As examples, hardware such as ASICs and/or FPGAs could be used as the resource for a hardware implementation of the logical operations shown by FIG. 3. As another option, GPUs could be used as the compute resource for carrying out the logical operations of FIG. 3. The computational complexity needed by the source and destination computer systems 402, 404 is quite modest (e.g., modular addition/subtraction and permutations) except for the destination's recovery of P1 from knowledge of c and P1·c. But the O(m·n) algorithm described in Appendix 1 for the PRA is not particularly onerous since it is comparable in complexity to the transmission of the m·n bits that make up each of the five passes. Also notable is the ease of computation for the XOR operations. On most processors, only a single instruction is required for the XOR operation on two machine words. For software implementations, perhaps a handful of such instructions would be required for the 2m bits of two five-pass words. In hardware implementations, the XOR operation for all m bits of the n words (m×n bits) can take place in a single clock cycle.
Furthermore, while FIGS. 5 and 6 show example process flows for execution by the source and destination computer systems respectively, it should be understood that a practitioner may want to configure a computer system to be capable of playing the role of both a source and a destination based on the context of its situation at a given time. Accordingly, a computer system can be configured to carry out the process flows of both FIGS. 5 and 6, where the computer system can selectively switch between the FIG. 5 process flow and the FIG. 6 process flow based on whether it is playing a role as a source or destination. Moreover, it should be understood that the source computer system 402 and/or the destination computer system 404 could perform its role in the 5PP as a service on behalf of one or more other computer systems. For example, performing 5PP as a service can be done where the providing computer is in confidential communication with its client, e.g., where it has already established a shared secret for confidential communication between provider and client.

Resistance to Attacks: We will now elaborate on the difficulty of breaching the 5PP approach of FIG. 3 using brute force attacks. However, before diving into this issue, there are several issues worth noting:
- The validity of the third-pass matrix operation: P1t(P1x + b) = x + P1t·b (see FIG. 3)
- The limitation of arithmetic operations to m bits per word

The validity of the third-pass matrix operation and the limitation on arithmetic operations are tied together. With no limit on the number of bits representing the variables in the third pass, the matrix operation is clearly valid. However, there should be a limit on the number of bits to be transmitted. For calculations modulo k, where m/k is a positive integer, it can easily be shown that:

Mod_k[P1t(P1x + b)] = Mod_k[x + P1t·b]

In particular, for k=m, all such modular computations will never require more than m bits. For the case of k=2, the addition becomes an exclusive-or operation (XOR).
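The mod-2 (XOR) case of the third-pass identity is easy to check numerically: a bit-wise permutation distributes over XOR, and no carries propagate, so P1t(P1x ⊕ b) = x ⊕ P1t(b). The sketch below is our own verification on a random 16-bit word:

```python
import random

def pbits(w, perm):
    """Bit-wise permutation of one m-bit word: bit i moves to position perm[i]."""
    r = 0
    for i, j in enumerate(perm):
        r |= ((w >> i) & 1) << j
    return r

def inverse(perm):
    """Inverse permutation, playing the role of the transpose P1t."""
    inv = [0] * len(perm)
    for i, j in enumerate(perm):
        inv[j] = i
    return inv

random.seed(0)
m = 16
perm = list(range(m)); random.shuffle(perm)
inv = inverse(perm)
x, b = random.getrandbits(m), random.getrandbits(m)

# XOR version of the third-pass identity: P1t(P1x + b) = x + P1t(b).
lhs = pbits(pbits(x, perm) ^ b, inv)
rhs = x ^ pbits(b, inv)
print(lhs == rhs)  # True
```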
With respect to possible attacks, since at least two parameters are changed on each pass, it is not possible for an eavesdropper to discover a parameter by comparing two consecutive passes. However, guessing any one of the private parameters P1, P2, A, b, c is sufficient to allow the eavesdropper to discover x. The private parameters b and c are more difficult to guess since each has n random, m-bit values, as does the secret x. The expected number of guesses required is shown in the table below.

Parameter    Number of Guesses
P1           m!
P2           n!
A            2^m
b, c, x      2^(m·n)

If we choose m=256 and n=64, the eavesdropper will typically require 2^256 guesses for A and about 2^296 guesses (which approximates 64!) for P2. This is an effort that is comparable to or greater than that required to break AES 256. Attempting to guess A seems the best course for the eavesdropper since knowledge of a single parameter is all that is required to breach the encryption (as compared to the other parameters which would require significantly more guesses). But as noted, A is still highly resistant to brute force attacks at a level comparable to AES 256. As such, the 5PP approach of FIG. 3 is believed to be highly secure against brute force attacks. Considering the number of guesses required, one can see that attempting to guess any other parameter would require more effort. The PQC methods advanced so far (such as SIDH and RLWE) rely upon mathematically hard problems that have no known efficient algorithmic solution. This is analogous to the situation three decades ago when RSA was introduced: there was no known efficient algorithm for factoring large integers. In contrast, 5PP depends on numerical complexity requiring an attacker to use brute force to discover the secret.
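The per-parameter brute-force work in the table can be compared in bits (log2 of the guess count), confirming that A, at 2^256, is the cheapest target for m=256 and n=64. This is our own tabulation of the table's entries:

```python
import math

def log2_guesses(expr_bits=None, factorial_of=None):
    """Brute-force work in bits: either log2(k!) or a direct exponent."""
    if factorial_of is not None:
        return math.lgamma(factorial_of + 1) / math.log(2)
    return expr_bits

m, n = 256, 64
work = {
    "P1": log2_guesses(factorial_of=m),        # m! guesses
    "P2": log2_guesses(factorial_of=n),        # n! guesses
    "A": log2_guesses(expr_bits=m),            # 2^m guesses
    "b, c, x": log2_guesses(expr_bits=m * n),  # 2^(m*n) guesses
}
for p, bits in work.items():
    print(f"{p}: ~2^{bits:.0f} guesses")
# A is the cheapest target at 2^256, already AES-256-level effort.
```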
In certain applications there are, however, several burdens that a 5PP user must bear:
- A practitioner may conclude that the length of time to complete the five-pass protocol is too long.
- A practitioner may find that using 5PP in channels limited to low link-rates is overly tedious.
- Like PKI, 5PP is at risk with respect to Man-in-the-Middle (MITM) attacks.

The widespread availability of broadband service diminishes the first two burdens. Consider two examples of the use of the 5PP of FIG. 3, one operating over a 100 Mb/s link and the other over a 1 Gb/s link. The table below indicates the total time that would elapse over a link with a 1 ms roundtrip time, including the time required for computation, data transfer, and 2.5 ms for 2.5 roundtrips.

Link Rate    Words per Pass    Bits per Word    Time per Pass    Five Passes    2.5 Roundtrips    Total Time
100 Mb/s     64                256              164 μs           820 μs         2.5 ms            3.32 ms
1 Gb/s       64                256              16.4 μs          82 μs          2.5 ms            2.58 ms

Clearly, the roundtrip time is the dominating component of the total time for the covert transmission of a secret at the two data rates shown above. Lower speed links will take longer, of course, but such links will become less common in the future. Furthermore, custom hardware as discussed above can assure that 5PP computations do not compromise performance. To mitigate the risks arising from MITM attacks, certificate authorities and/or multi-factor authentication can be employed. For example, the 5PP approaches described herein may be combined with a Certificate of Authority (CA) that authenticates each party's identity. As another example, the 5PP approaches described herein may be combined with multi-factor authentication (MFA) where out-of-band communications are used to provide an additional channel for verifying each party's identity.
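The timing figures in the table can be reproduced with a simple model, assuming transfer time is bits divided by link rate, a fixed 2.5 ms for the 2.5 roundtrips, and negligible computation time (as the table itself implies):

```python
def total_time_ms(words, bits_per_word, link_bps, roundtrip_ms=2.5):
    """Transfer time for five passes plus 2.5 roundtrips; computation time
    is treated as negligible, matching the table's assumptions."""
    bits_per_pass = words * bits_per_word        # 64 * 256 = 16384 bits
    per_pass_ms = bits_per_pass / link_bps * 1e3
    return 5 * per_pass_ms + roundtrip_ms

print(round(total_time_ms(64, 256, 100e6), 2))  # 3.32 (ms at 100 Mb/s)
print(round(total_time_ms(64, 256, 1e9), 2))    # 2.58 (ms at 1 Gb/s)
```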
The 5PP approaches as described herein have several advantages over other quantum-resistant approaches:
- Computational complexity for 5PP is substantially lower than elliptic-curve methods. For the 5PP approach of FIG. 3, the computations only need five O(m·n) XOR operations and the O(m·n) PRA.
- Future quantum algorithms are unlikely to provide an effective attack on 5PP. 5PP word-size grows only linearly with quantum-computer qubit size. Grover's algorithm (see Wikipedia, "Grover's Algorithm", edit date Nov. 13, 2019) is ineffective with a 256 qubit size and O(2^128) complexity.
- The 5PP approach is sufficiently transparent to allow rapid identification of weaknesses. The individual computations for 5PP are fast and easy to understand.
- Quantum key distribution (QKD) methods are safe but more limited than 5PP. QKD networks are expensive, distance-limited, and have data rates under 1 Mbps.
- Total time taken for the 5PP approach is believed to often be less than for competing approaches. This is particularly important for Internet of Things (IoT) endpoints with limited computational power.
- Independent random numbers can be used for each instance of the 5PP approach. As a result, replay and other classic attacks can be blocked.

In summary, the 5PP approaches for cryptographic data exchange are quite different than other proposed post-quantum algorithms. For example, 5PP depends on numerical complexity instead of computational complexity to obscure the secret passed from source to destination. Furthermore, future discovery of applicable quantum-computer algorithms is unlikely to threaten 5PP. The effort required by an attacker using a classical computer is O(2^256) and is hypothesized to be O(2^128) for future 256-qubit quantum computers. Also, the time to execute 5PP is comparable to network roundtrip times when high-speed links are used.
Second Example 5PP Embodiment (FIG. 12)

FIG. 12 shows another example 5PP approach for securely exchanging a secret data matrix between a source computer system and a destination computer system. While the 5PP approach of FIG. 3 uses a series of 5 messages over 5 passes to securely exchange the secret data matrix, the 5PP approach of FIG. 12 uses a series of 8 messages over 5 or more passes. Also, while the 5PP approach of FIG. 3 uses a secret data matrix x that is effectively its own independent parameter, the 5PP approach of FIG. 12 uses a secret data matrix that is a combination of a plurality of the source computer system's parameters. In the example of FIG. 12, the secret data matrix is computed from 3 of the source computer system's parameters as P5P3A. This FIG. 12 approach to expressing the secret data matrix is believed to better enhance the security of 5PP by effectively burying the secret data matrix more deeply within the message exchanges. The 5PP of FIG. 12 obscures a secret data matrix using 9 private parameters, of which 6 parameters (A, B, D, P1, P3, and P5) are known by the source computer system but not shared in unobscured form with the destination computer system, and of which 3 parameters (C, P2, and P4) are known by the destination computer system but not shared in unobscured form with the source computer system. Each pass is shown in FIG. 12 by an arrow between the source and destination, and the sequence of passes is numbered 1 through 5 by FIG. 12. The example 5PP embodiment of FIG. 12 can be implemented using a networked system 400 as discussed above in connection with FIGS. 4A and 4B, albeit where the message exchanges can include 3 additional messages (V1*, V1**, and V2* as discussed below). As discussed above, the source and/or destination computer systems may leverage compute resources such as FPGAs, ASICs, GPUs, and the like to accelerate their processing operations.
FIG. 12 shows the different messages (V1, V1*, V1**, V2, V2*, V3, V4, and V5) that are sent between the source and destination for each pass, where each message V can take the form of an X by X matrix. While the 5PP approach of FIG. 12 requires the computer systems to communicate V1-V5 in sequence (as the destination computer system needs to have V1 before it can compute V2, the source computer system needs to have V2 before it can compute V3, and so on), it should be understood that there is some flexibility with respect to when the additional messages V1*, V1**, and V2* are communicated over the network. For example, V1* is needed for the destination computer system to compute V4, which means that V1* can be generated and communicated by the source computer system as part of the first or third passes (or even in a separate pass that occurs before what is shown as the fourth pass by FIG. 12). As another example, V1** is needed for the destination computer system to compute the secret data matrix after the fifth pass, which means that V1** can be generated and communicated by the source computer system as part of the first, third, or fifth passes (or even in a separate pass that occurs before the destination computer system computes the secret data matrix after what is shown as the fifth pass by FIG. 12). As yet another example, V2* is needed for the source computer system to compute V5, which means that V2* can be generated and communicated by the destination computer system as part of the second or fourth passes (or even in a separate pass that occurs before what is shown as the fifth pass by FIG. 12). In each pass, FIG. 12 shows that obscuration of the secret data matrix can be provided over the series of five passes via reversible logic operations such as permutations and modulo additions. Thus, with FIG. 12, the confidentiality of each pass is protected by both a permutation and a modulo addition. The modulo 2 additions can be implemented as XOR operations.
In an example embodiment, the additions shown by FIG. 12 can be modulo 2 additions. However, this need not be the case (for example, straight additions/subtractions could be employed if desired by a practitioner). The parameters A, B, C, and D shown by FIG. 12 can be X by X matrices populated with random values. Accordingly, these can be referred to as variable matrices. The parameters P1, P3, and P5 shown by FIG. 12 can be X by X matrices that operate to change the order of the columns of a variable matrix. The parameters P2 and P4 shown by FIG. 12 can be X by X matrices that operate to change the order of the rows of a variable matrix. Accordingly, these matrices P1, P2, P3, P4, and P5 can be referred to as permutation matrices. A permutation matrix is a matrix where each row has only one entry of "1" (and all other entries in that row are "0") and where each column has only one entry of "1" (and all other entries in that column are "0"). Whether a given permutation matrix operates as a column-wise permutation matrix or a row-wise permutation matrix on the subject matrix with which it is combined will depend on the order of multiplication for the two matrices. That is, multiplying permutation matrix P by variable matrix A in the order PA would produce a row-wise permutation of A. By contrast, multiplying permutation matrix P by variable matrix A in the order AP would produce a column-wise permutation of A. The nature of the column permutation and row permutation within the permutation matrices can be randomized through randomization of where the "1"s are located in each permutation matrix. The secret data matrix can be an X by X matrix that is computed as P5P3A. Accordingly, the secret data matrix can be the variable matrix A as permuted by the (columnar) permutation matrices P3 and P5. The value of X can be defined by a practitioner to be a number of bits that balances security versus computation time/efficiency.
Setting the value of X low will sacrifice security (e.g., making the cryptography more susceptible to cracking), while setting the value of X high will increase the computation time and resources required for the cryptographic operations. Practitioners can choose to balance these competing interests in a manner deemed appropriate for their particular use cases, but the inventors note that a value of X=64 bits can provide a nice balance of security and computational time/efficiency. However, it should be understood that other values for X could be employed (for example, 128 bits, 256 bits, 512 bits, 1024 bits, or even higher if desired by a practitioner). The source computer system 402 can randomly generate the parameters A, B, D, P1, P3, and P5. The source computer system 402 can also compute the transpose of P1 (P1T) after P1 has been generated. Like P1, the source computer system 402 will not share P1T with the destination computer system 404 in unobscured form. Furthermore, the destination computer system 404 can randomly generate the parameters C, P2, and P4. The destination computer system 404 can also (1) compute the transpose of P2 (P2T) after P2 has been generated and (2) compute the transpose of P4 (P4T) after P4 has been generated. Like P2 and P4, the destination computer system 404 will not share P2T or P4T with the source computer system 402 in unobscured form. With reference to FIG. 12, the source computer system 402 can compute V1 by (1) column-wise permuting A according to P1 to yield a column-wise permutation of A, and (2) then adding B to the column-wise permutation of A. FIG. 12 shows an example formula for computing V1 in this fashion. The source computer system can also compute the additional messages V1* and V1** using the formulas shown by FIG. 12. V1* can be computed by (1) column-wise permuting B according to P1T, and (2) then column-wise permuting P1TB according to P3. V1** can be computed by column-wise permuting D according to P5.
After V1 has been communicated to the destination computer system 404 via message 410, the destination computer system can extract V1 from this message and compute V2 by (1) row-wise permuting V1 according to P2, and (2) then adding C to this row-wise permutation of V1. FIG. 12 shows an example formula for computing V2 in this fashion in terms of its constituent parameters. The destination computer system can also compute the additional message V2* using the formula shown by FIG. 12. V2* can be computed by (1) row-wise permuting C according to P2T, and (2) then row-wise permuting P2TC according to P4. After V2 has been communicated to the source computer system 402 via message 412, the source computer system can extract V2 from this message and compute V3 by (1) column-wise permuting V2 according to P1T, (2) then column-wise permuting P1TV2 according to P3, and (3) then adding D to P3P1TV2. FIG. 12 shows an example formula for computing V3 in this fashion in terms of its constituent parameters. After V3 has been communicated to the destination computer system 404 via message 414 and V1* has been communicated to the destination computer system (via message 410 or 414 (or via its own message)), the destination computer system can extract V1* and V3 from the message(s) and compute V4 by (1) row-wise permuting V1* according to P2, (2) then adding this permutation to V3, (3) then row-wise permuting this sum according to P2T, and (4) then row-wise permuting this row-wise permutation according to P4. FIG. 12 shows an example formula for computing V4 in this fashion in terms of its constituent parameters.
After V4 has been communicated to the source computer system 402 via message 416 and V2* has been communicated to the source computer system (via message 412 or 416 (or via its own message)), the source computer system can extract V2* and V4 from the message(s) and compute V5 by (1) column-wise permuting V2* according to P1T, (2) then column-wise permuting P1TV2* according to P3, (3) then adding P3P1TV2* to V4, and (4) then column-wise permuting this sum according to P5. FIG. 12 shows an example formula for computing V5 in this fashion in terms of its constituent parameters. After V5 has been communicated to the destination computer system 404 via message 418 and V1** has been communicated to the destination computer system (via message 410, 414, or 418 (or via its own message)), the destination computer system can extract V1** and V5 from the message(s), and it will then have the information it needs to derive the secret data matrix. To do so, the destination computer system 404 can derive the secret data matrix (now the shared secret data matrix) according to the formula shown by FIG. 12. That is, by (1) row-wise permuting V5 according to P4T, (2) row-wise permuting V1** according to P2T, and (3) summing P4TV5 with P2TV1**, the destination computer system is able to derive the shared secret matrix P5P3A. It should be understood that this example formula for computing the shared secret matrix is for when the system is employing modulo 2 addition (where addition of equivalent matrices operates to produce a zero value). If the system employs straight addition, then the computation of the secret data matrix could be expressed as P5P3A=P4TV5−P2TV1**. FIG. 13 shows an example process flow for execution by the source computer system 402 to implement its operations as part of the 5PP approach of FIG. 12.
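The five passes and the final recovery can be simulated end to end over GF(2). The sketch below is our illustration in Python/NumPy, not the patent's own code: following the PA/AP convention stated earlier, row-wise permutations are implemented as left-multiplications and column-wise permutations as right-multiplications, with XOR serving as modulo 2 addition. Under those conventions, the masking terms cancel pass by pass and the destination recovers A as column-permuted by P3 and then P5 (the matrix the text denotes P5P3A):

```python
import numpy as np

rng = np.random.default_rng(7)
X = 8  # tiny for illustration; the text suggests X = 64 for real use

def rand_perm():
    # random X-by-X permutation matrix: a single "1" per row and per column
    P = np.zeros((X, X), dtype=int)
    P[np.arange(X), rng.permutation(X)] = 1
    return P

def rand_bits():
    # variable matrix populated with random bit values
    return rng.integers(0, 2, size=(X, X))

xor = np.bitwise_xor  # modulo-2 addition of bit matrices

# Source's private parameters
A, B, D = rand_bits(), rand_bits(), rand_bits()
P1, P3, P5 = rand_perm(), rand_perm(), rand_perm()
# Destination's private parameters
C = rand_bits()
P2, P4 = rand_perm(), rand_perm()

# Row permutation = left-multiply; column permutation = right-multiply.
V1   = xor(A @ P1, B)                  # pass 1 (source -> destination)
V1s  = B @ P1.T @ P3                   # V1*
V1ss = D @ P5                          # V1**
V2   = xor(P2 @ V1, C)                 # pass 2 (destination -> source)
V2s  = P4 @ P2.T @ C                   # V2*
V3   = xor(V2 @ P1.T @ P3, D)          # pass 3
V4   = P4 @ P2.T @ xor(V3, P2 @ V1s)   # pass 4 (V1* cancels the B term)
V5   = xor(V4, V2s @ P1.T @ P3) @ P5   # pass 5 (V2* cancels the C term)

# Destination combines V5 and V1** to strip the remaining D term and
# recover the shared secret matrix.
secret = xor(P4.T @ V5, P2.T @ V1ss)
assert np.array_equal(secret, A @ P3 @ P5)
print("shared secret recovered")
```

Each transmitted message remains masked by at least one parameter the other side never sees in the clear, which is the obscuration property the text attributes to the five passes.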
At step 1300, a processor of the source computer system 402 generates the parameters A, B, D, P1, P1T, P3, and P5 and the secret data matrix (P5P3A). As noted, the source computer system 402 can randomly generate A, B, D, P1, P3, and P5. It should be understood that P1T need not necessarily be computed at step 1300, and some practitioners may choose to wait to compute P1T until later during the process flow (e.g., between or as part of steps 1302-1308). Similarly, other parameters could be generated later in the process flow (e.g., just before they are needed in the message computations). Furthermore, it should be understood that the order in which the randomized parameters are generated is immaterial. At step 1302, a processor of the source computer system 402 computes the matrix V1 using logical operations as shown by FIG. 12 for the first pass. The processor may also generate V1* and V1** at step 1302, although it should be understood that this need not be the case as discussed above. This matrix V1 is sent to the destination computer system 404 via message 410 (see step 1304). This message 410 may also include V1* and/or V1**, although it should be understood that V1* and/or V1** could be sent later during the process flow as discussed above if desired. At step 1306, the source computer system 402 receives the matrix V2 in message 412 from the destination computer system. This message 412 may also include V2* (although the destination computer system may send V2* in subsequent messages if desired, as discussed above). At step 1308, a processor of the source computer system 402 then uses logical operations as shown by FIG. 12 for the third pass to compute the matrix V3. This matrix V3 is then sent to the destination computer system 404 via message 414 (see step 1310). At step 1312, the source computer system 402 receives the matrix V4 in message 416 from the destination computer system.
At step 1314, a processor of the source computer system 402 then uses logical operations as shown by FIG. 12 for the fifth pass to compute the matrix V5. This matrix V5 is then sent to the destination computer system 404 via message 418 (see step 1316). FIG. 14 shows an example process flow for execution by the destination computer system 404 to implement its operations as part of the 5PP approach of FIG. 12. At step 1400, a processor of the destination computer system 404 generates the parameters C, P2, P2T, P4, and P4T. As noted, the destination computer system 404 can randomly generate C, P2, and P4. It should be understood that P2T and P4T need not necessarily be computed at step 1400, and some practitioners may choose to wait to compute P2T and/or P4T until later during the process flow (e.g., between or as part of steps 1402-1410). Similarly, other parameters could be generated later in the process flow (e.g., just before they are needed in the message computations). Furthermore, it should be understood that the order in which the randomized parameters are generated is immaterial. Further still, it should be understood that the destination computer system 404 may potentially perform step 1400 before the source computer system 402 performs step 1300 (and that the source computer system 402 may potentially perform step 1300 before the destination computer system 404 performs step 1400). At step 1402, the destination computer system 404 receives the matrix V1 in message 410 from the source computer system. This message may also include the matrices V1* and V1** as discussed above. At step 1404, a processor of the destination computer system 404 computes the matrix V2 using logical operations as shown by FIG. 12 for the second pass. The processor may also generate V2* at step 1404, although it should be understood that this need not be the case as discussed above. The matrix V2 is sent to the source computer system 402 via message 412 (see step 1406).
This message 412 may also include V2*, although it should be understood that V2* could be sent later during the process flow as discussed above if desired. At step 1408, the destination computer system 404 receives the matrix V3 in message 414 from the source computer system. At step 1410, a processor of the destination computer system 404 then uses logical operations as shown by FIG. 12 for the fourth pass to compute the matrix V4. This matrix V4 is then sent to the source computer system 402 via message 416 (see step 1412). At step 1414, the destination computer system 404 receives the matrix V5 in message 418 from the source computer system. At this point, the destination computer system now has the information it needs to derive the secret data matrix, and the process flow can proceed to step 1416. At step 1416, a processor of the destination computer system 404 derives the secret data matrix as shown by FIG. 12. At this point, the destination computer system 404 can send a message to the source computer system 402 acknowledging that it was able to successfully derive the secret data matrix. As discussed above in connection with the FIG. 3 embodiment, after the destination computer system 404 has successfully derived the secret data matrix, the secret data matrix can then be used by the destination computer system 404 as a basis to derive a key, such as a random symmetric key, to be used for encryption operations. Similarly, the source computer system 402 can also use the secret data matrix to derive a key to be used for decryption operations. Further still, as noted above in connection with the FIG. 3 embodiment, the 5PP approach of FIG. 12 can be used to support the registration of security devices such as the Q-Net I/O (QIO) units marketed by Q-Net Security, where the registration can include a process of sharing a secret that enables the sharing of keys between a QIO unit and a Q-Net Policy Manager (QPM).
Moreover, as with the 5PP approach of FIG. 3, with the 5PP approach of FIG. 12, practitioners can choose how frequently the source and destination computer systems 402, 404 will perform the 5PP with respect to new secret data matrices in order to generate new keys. While the invention has been described above in relation to its example embodiments, various modifications may be made thereto that still fall within the invention's scope. Such modifications to the invention will be recognizable upon review of the teachings herein. APPENDIX 1—PERMUTATION RECOVERY ALGORITHM (PRA) FIG. 10 shows example pseudocode for operational steps that reveal the permutation P1 when the destination computer system 404 has knowledge of c and P1·c ≙ d. Assume there are m bits in each of the n words of these two known quantities, c={c1, c2, . . . , cn} and d={d1, d2, . . . , dn}. Let ci={ci1, ci2, . . . , cim} and di={di1, di2, . . . , dim}, where cij=0 or 1 and dij=0 or 1. Let ⊕ represent the XOR operation and define the following XOR operation: dij ⊕ ci ≙ {dij ⊕ ci1, dij ⊕ ci2, . . . , dij ⊕ cim}. In this instance, the complexity of the PRA of FIG. 10 is thus O(n·m). If the eavesdropper has knowledge of c and P1·c, the computational complexity of the PRA is insufficient to deter the discovery of the permutation P1 and the subsequent discovery of x. However, as noted, c is a private parameter known only to the destination computer system, and P1·c is sent to the Destination in ciphertext (within the matrix V5). As noted above, it is possible for the PRA to return ambiguous results for derivations of P1. FIG. 11A shows an example of values for c and d that return an unambiguous derivation, and FIG. 11B shows an example of values for c and d that return an ambiguous derivation. For both of these examples, m=4 and n=4. Row 1100 in FIGS. 11A and 11B shows summations from a recovery table, and the locations of the value "4" in the vectors of row 1100 identify the bit positions of the "1"/"true" values in each row of the recovered P1.
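The tallying behind the recovery table can be sketched in code. The following is our illustration in Python/NumPy (it is not the FIG. 10 pseudocode itself): for each output bit position j it counts, across the n words, how often bit d[i][j] agrees with each input bit position k; a count of n marks a candidate "1" entry of P1, and more than one such candidate for the same j signals an ambiguous derivation:

```python
import numpy as np

def recover_permutation(c, d):
    """PRA sketch: c and d are n-by-m 0/1 arrays with d[i] = P1 applied
    to c[i]. Returns G, where G[j] lists the candidate (1-based) source
    positions for output bit position j; multiple candidates for one j
    indicate an ambiguous result."""
    c, d = np.asarray(c), np.asarray(d)
    n, m = c.shape
    # S[j, k] = number of words i for which d[i, j] == c[i, k]
    S = sum(np.equal.outer(d[i], c[i]).astype(int) for i in range(n))
    return [list(np.flatnonzero(S[j] == n) + 1) for j in range(m)]

# The 4x4 identity-permutation example worked in Appendix 2 (c = d)
# yields the ambiguous result G = {1, 2, {3,4}, {3,4}}:
c = [[1, 0, 1, 1], [0, 1, 1, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
print(recover_permutation(c, c))  # [[1], [2], [3, 4], [3, 4]]
```

The bottom line of the printed result corresponds to reading off the positions where the recovery-table totals equal n (here 4).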
If each vector has only a single value "4", then the P1 derivation is unambiguous. However, if one of the vectors has multiple instances of "4", then the P1 derivation is ambiguous. It should be understood that the use of "4" in this instance arises from the use of m=4. For other implementations where m equals a value q, it should be understood that the value q is what would govern the identification of bit positions for "1"/"true" values in P1. With FIG. 11A, the recovery table gives G={1, 3, 2, 4}, a unique result correctly identifying the permutation P1 as shown by FIG. 11A. With knowledge of derived P1, the computation of derived P1T is straightforward (see FIG. 11A). FIG. 11B shows an example with a slight variation in c, but which produces an ambiguous derivation. In the example shown in FIG. 11B, the recovery table will yield a different G={1, {3,4}, 2, {3,4}}, an ambiguous result. The PRA could yield G={1, 3, 2, 4} or G′={1, 4, 2, 3}, permutations which when expressed in matrix form give:

P1 =
[1 0 0 0]
[0 0 1 0]
[0 1 0 0]
[0 0 0 1]

or P1′ =

[1 0 0 0]
[0 0 0 1]
[0 1 0 0]
[0 0 1 0]

APPENDIX 2—DEVELOPMENT AND ANALYSIS OF PERMUTATION FORMULA FOR IDENTIFYING PROBABILITY OF AMBIGUITY IN PERMUTATION RECOVERY The permutation formula is quite complex, so it is useful to review some properties of permutations and how they can manipulate the values of c ≙ {c1, c2, . . . , cn}. PRA Ambiguity. Consider an arbitrary permutation P1a that is among the set of all possible permutations, 𝒫 ≙ {P11, P12, . . . , P1m!}. These permutations are each represented by an m×m square matrix. In a particular instance of the 5PP, the destination knows both P1a·ci and ci for n different values of i, but does not know and must calculate P1a. The ci ∈ c are random m-bit binary words. The permutation recovery algorithm (PRA) may not yield a unique solution for P1a since a second permutation matrix P1b may yield the same result: P1b·ci=P1a·ci. Multiplying by the transpose of P1a we get the result, P1aT·P1b·ci=ci.
Since P1aT·P1a=Im, the m-bit identity matrix, and since a permutation of a permutation is a permutation, all cases of the product matrix P1aT·P1b ∈ 𝒫. For brevity, let P1aT·P1b ≙ P1c, so that solutions of the equation P1c·ci=ci include unique solutions, when there is a single permutation solution P1c for a given ci, and ambiguous solutions, when there are two or more permutations each satisfying the equation P1c·ci=ci. Symmetry. Let c̄i represent the complement of ci, with all ones replaced by zeros and all zeros replaced by ones. If and only if P1c·ci=ci is true, then P1c·c̄i=c̄i is also true. This is easy to show by addition: P1c·(ci+c̄i)=ci+c̄i must be true since ci+c̄i={1,1, . . . ,1}. Note that any permutation of ones in this result leaves it unchanged. If P1c·ci≠ci, then as again seen by addition, P1c·c̄i≠c̄i. Thus, there is symmetry between corresponding pairs (ci, c̄i) when ci≤{0,1,1, . . . ,1}, c̄i≥{1,0,0, . . . ,0} and ci+c̄i={1,1,1, . . . ,1}. Universality of Identity Permutation. It is possible, but tedious, to evaluate P1c·ci=ci for all values of P1c ∈ 𝒫 and ci ∈ c, a total of m!×2^(m·n) results, particularly since m! and 2^(m·n) may be very large numbers. However, we note that for a given P1c ∈ 𝒫 and for n values of ci ∈ c, the PRA will produce either a unique result, permutation P1c, or an ambiguous result with two or more permutations, including P1c. If we choose another permutation P1c′ and permute the n values of ci accordingly to get n values ci′, the two results of P1c·ci=ci and P1c′·ci′=ci′ will be indistinguishable except for permutation. Thus, both results will be either unique or ambiguous. When we consider all possible permutations, they will each produce different resulting values, but the same number of unique results and the same number of ambiguous results. Therefore, we choose P1c=Im, the identity matrix, when calculating the number of ambiguous results, confident that that number will be the same for all P1c ∈ 𝒫. Probability of Ambiguity.
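The complement symmetry can be spot-checked numerically. This is our sketch in Python/NumPy (the word c and the permutations are arbitrary illustrations): a permutation fixes a word exactly when it fixes the word's complement, since their sum is the all-ones word, which any permutation leaves unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0, 1, 1, 0, 0])  # an arbitrary 5-bit word
cbar = 1 - c                   # its complement

for _ in range(100):
    # random 5x5 permutation matrix: identity rows reordered
    P = np.eye(5, dtype=int)[rng.permutation(5)]
    # P fixes c if and only if P fixes the complement of c
    assert np.array_equal(P @ c, c) == np.array_equal(P @ cbar, cbar)
print("symmetry holds for all sampled permutations")
```

No sampled permutation fixes one of the pair (c, c̄) without fixing the other, in line with the symmetry argument above.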
We wish to calculate p(m, n), the probability of obtaining an ambiguous result using the PRA to solve P1c·ci=ci. We begin by calculating an approximation to the probability of a unique result, 1−p(m, n). There are many ways to obtain a unique result. To demonstrate this we examine the case m×n=4×4, where P1c=I4 and c={c1, c2, c3, c4}=d. Using the PRA algorithm as described in Appendix 1 we get the recovery table:

c = d = {{1,0,1,1}, {0,1,1,1}, {1,0,0,0}, {0,0,1,1}}

{1,0,1,1}, {0,1,0,0}, {1,0,1,1}, {1,0,1,1}
{1,0,0,0}, {0,1,1,1}, {0,1,1,1}, {0,1,1,1}
{1,0,0,0}, {0,1,1,1}, {0,1,1,1}, {0,1,1,1}
{1,1,0,0}, {1,1,0,0}, {0,0,1,1}, {0,0,1,1}
________________________________________
{4,1,1,1}, {1,4,2,2}, {1,2,4,4}, {1,2,4,4}

As before, the result can be read off the bottom line by identifying the positions where the total is 4. As we should expect from the universality of the identity permutation, this result is also ambiguous. It could be either {1,2,3,4} or {1,2,4,3}. If there were no other constraints and the extra columns of 1s and 0s were completely random, the probability would be (1/2)^4=1/16. How many ways can this second column of 1s appear? Two columns of 1s can appear in C(4,2)=6 ways. Looking at it in a more granular way we have:

1100, 1010, 1001 => C(3,1)
0110, 0101 => C(2,1)
0011 => C(1,1)

Adding the binomial coefficients together yields C(3,1)+C(2,1)+C(1,1)=3+2+1=6. This is the result given originally above, but the granular approach is more easily generalized. Thus, when m=n=4 and considering only the possibility of one extra column (4 bits) of 1s, the probability of a unique result is (1−(1/2)^4)^(C(3,1)+C(2,1)+C(1,1))=(1−(1/2)^4)^(3+2+1). However, there are two other possibilities: two extra columns (2·4=8 bits) of 1s can occur in C(2,1)+C(1,1)=3 different ways, requiring 8 bits to be 1; and three extra columns (3·4=12 bits) being 1s can occur in only one way, C(1,1)=1.
Multiplying the probabilities of each of these events failing to occur gives the probability of a unique event:

1−pF(4,4) = (1−(1/2)^4)^(3+2+1) · (1−(1/2)^8)^(2+1) · (1−(1/2)^12) = 0.668225

From this result we calculate pF(4,4)=0.331775, which can be checked by random simulation, pR(4,4)=0.33218, and by exhaustive iteration, pI(4,4)=0.333496. These results are all slightly different. The random simulation pR(4,4) is expected to be different on each run because of the million random selections of four c words. However, the iterative solution is repeatable and emphasizes the statement that the formula for 1−pF(4,4) was an approximation. The general formula only approximates the individual components of the ambiguity because we assume the binary digits in a recovery table column are random and independent of each other. They are not completely so, since a bit in one column has a dependency on the corresponding bit in an adjacent column. Also, the actual number of ambiguous results must be an integer, but the formula gives pF(4,4)·2^(4·4)=21,743.2064, which is not an integer, while pI(4,4)·2^(4·4)=21,856 is an integer. However, pF(m,n) is always close to the value of ambiguity given by the exhaustive iteration of results. To verify this observation, we generalize the formula for pF:

pF(4,4) = 1 − ∏ (i=1 to 4−1) ∏ (k=1 to 4−i) (1 − 1/2^(4i))^C(4−k, i)

For other values of m and n this becomes:

pF(m,n) = 1 − ∏ (i=1 to m−1) ∏ (k=1 to m−i) (1 − 1/2^(n·i))^C(m−k, i)

This generalization can be checked by calculating pI(m,n), pR(m,n) and pF(m,n) for various values of m and n.
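The generalized formula can be evaluated directly. The sketch below is ours (Python; the function name is an illustration). It uses the binomial-exponent form of the double product and reproduces the quoted values pF(4,4) ≈ 0.331775 and the table entry pF(4,8) ≈ 0.023269:

```python
from math import comb, prod

def p_F(m, n):
    """Approximate probability of an ambiguous PRA result:
    pF(m,n) = 1 - prod_{i=1..m-1} prod_{k=1..m-i} (1 - 2^(-n*i))^C(m-k, i)."""
    return 1 - prod(
        (1 - 2.0 ** (-n * i)) ** comb(m - k, i)
        for i in range(1, m)
        for k in range(1, m - i + 1)
    )

print(round(p_F(4, 4), 6))  # 0.331775
print(round(p_F(4, 8), 6))  # 0.023269
```

As the text notes, the formula is an approximation: multiplying p_F(4, 4) by 2^16 does not yield an integer count of ambiguous cases, whereas the exhaustive iteration does.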
The results are shown in the following table, where for each m the three rows give the exhaustive iteration (I), random simulation (R), and formula (F) values:

m\n        4          8          16         32           64
  4   I  0.333496   0.023270
      R  0.333218   0.023007   0.000091
      F  0.331775   0.023269   0.000091   1.39×10^−9   0
  8   I  0.879180
      R  0.879135   0.104619   0.000431
      F  0.870520   0.104567   0.000427   6.52×10^−9   0
 16   I
      R  0.999998   0.380574   0.001784
      F  0.999971   0.380178   0.001829   2.78×10^−8   0
 32   I
      R             0.867457   0.007450
      F  1          0.867234   0.007541   1.15×10^−7   0
 64   I
      R                        0.030164
      F             0.999810   0.030303   4.69×10^−7   1.11×10^−16
128   I
      R                                   0.000002
      F             1          0.116712   1.89×10^−6   4.44×10^−16
256   I
      R                                   0.000000
      F                        0.392677   7.59×10^−6   1.78×10^−15

The exhaustive iteration simulation (I) is an O(2^(m·n)) calculation. It becomes exponentially impractical for m·n>32. The random simulation (R) grows arithmetically but matches closely the I simulation for m·n≤32. Thus, we can use it as a proxy for I for m·n>32. Because of the correlations between columns and the non-integer approximations in the formula, results are only approximate. The root-mean-square error of the formula (F) with respect to random simulation (R) is 0.53% for the dozen results with n=4, 8 and 16. The average error for these same values of n is −0.04%. Contributions to this error are the between-column correlations, the non-integer nature of the formula and the random variation of the simulated results of the PRA when using random c words.
DETAILED DESCRIPTION The following detailed description of the invention is intended to provide various examples, but it is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description. As noted above, various embodiments are able to authenticate devices, applications and/or users of devices through the sharing of secrets initially established with a set top box, television receiver, placeshifting device, video game player, personal computer or other trusted home device. Because the home device typically has a high level of trust relative to the client device, the home device can be made to "vouch" for the less trusted client device through secrets that are shared between the client device and the trusted home device. Trust in the client device can be further elevated by the trusted home device verifying that the client device is operating on the same local area network (LAN) as the home device. That is, by generating trust between the client device and the home device, the trust previously established with the home device can be extended to the client device. In particular, applications executing on a client device can be designed to authenticate with services operating on a network, but the network service cannot typically identify the client device (or applications operating on the client device) until the identity of the device is initially established. A different device that is previously known to the service, however, can act as an intermediary in initially delivering the client's identifying data to the network service. After the network service has received reliable identifying information from the client, the service is able to directly authenticate the client device in subsequent transactions by requesting and verifying receipt of the same secret identifier.
Various embodiments may expand upon these basic concepts in any number of ways, several of which are set forth below. With reference now to FIG. 1, most homes, offices and other customer environments now include one or more digital video recorders (DVRs), set top boxes (STBs) or other digital television receivers, placeshifting devices, video game players and other hardware-type home devices 120. Generally these devices are installed to communicate via a local area network (LAN) 107 at the user's home or other premises, and are primarily operated by users who live or work at the premises, and who are typically subscribers to a broadcast television, media streaming, video gaming or other service. Often, home devices 120 have pre-established arrangements for secure communications with a remote backend security service 130 via network 105. Typically, communications between the home device 120 and the backend security service 130 are reliably secure so that secrets can be exchanged without fear of interception. This security can be based upon credentials in the home device's hardware or firmware that are presented to the security service 130 via transport layer security (TLS) and/or other encrypted data communications. In some cases, certain types of home devices 120 (e.g., STBs) are professionally installed at the user's premises by trusted personnel, thereby adding to the level of trust in the device 120. That is, the home device 120 can often be reliably associated with authorized users of the hardware due to physical delivery and/or installation of the hardware to a known physical location by a trusted technician; this trust is maintained and enhanced through secure communications to and from the home device 120 after installation. Data connections with phones, tablets, portable computers and other client devices 110, however, are often considered to be much less trustworthy.
Consumers typically operate any number of different client devices that are received from different retailers and service providers, so it is generally impractical to provide security through trusted delivery or installation channels. Moreover, although many client devices are designed with internal digital credentials that identify the device with a high level of confidence, these credentials are often not available to third party developers of applications 117A-C that run on the device 110. Devices manufactured by the Apple Corporation of Cupertino, Calif., for example, typically have secure internal codes that are used by the device manufacturer, but that are not made available to other developers. As a result, it can be beneficial for third party applications 117 to leverage another trusted device (e.g., the home device 120) to vouch for the more unknown client device 110. This can be accomplished by using the trusted device 120 as an intermediary to deliver secret identifying information from the client device 110 to a backend authentication server 130 or the like. As illustrated in FIG. 1, the trusted home device 120 is a television receiver, set top box (STB), digital video recorder (DVR), video game player, media player and/or the like. In a particular example, home device 120 is an STB that receives broadcast television signals from a direct broadcast satellite (DBS), cable, IPTV or other television content provider. Frequently, a technician retained by the content provider visits the customer's premises to physically install the device 120, thereby providing a high level of confidence that the device 120 is operated by a particular customer at a particular geographic location. Home device 120 typically communicates with the Internet 105 or the like via the customer's home or office network 107 (e.g., a wired or wireless local area network (LAN)) to provide additional features such as time and/or placeshifting, home monitoring and control and/or other functions as desired.
Home device 120 is typically a consumer-operated hardware device that includes computing hardware, including one or more microprocessors or digital signal processors, memory, mass storage and input/output interfaces as desired. Home device 120 typically executes an operating system and appropriate software and/or firmware to carry out the various functions. In the example illustrated in FIG. 1, home device 120 includes a security module 126 implemented with any combination of hardware, software and/or firmware. Typically, security module 126 is a firmware or software module that resides in memory (or other digital storage) and that includes appropriate instructions to be executed by a processor of home device 120 to carry out the various functions described below relating to handling of secrets with one or more client devices 110. Secure communications 132 can occur between the home device 120 and a remote backend security service 130. Backend service 130 is typically a computer server having a processor, memory and input/output interfaces. In various embodiments, service 130 may make use of "cloud-type" storage, processing and/or other hardware abstraction services such as Amazon Web Services (AWS), Microsoft Azure and/or any other "infrastructure-as-a-service" (IaaS) or "platform-as-a-service" (PaaS) provider, as desired. Other embodiments may be implemented entirely with hardware that is physically located at the customer's home or other premises. Generally speaking, security service 130 executes software, firmware or other logic to authenticate users and/or devices operating within system 100. Authentication usually requires the requesting person or device to provide a digital credential that is unique to the requester. A device may submit a secret identifier or other digital code that is known only to that device, for example.
A user may be able to authenticate by providing a userid/password combination, biometric data, a code transmitted to a known device, or some other secret information known only to the authenticating party. Secret information is transferred to the security service 130 via secure communications 132 to prevent unauthorized interception of the secret by other users of network 105. In various embodiments, communications 132 are provided over TCP or UDP protocols that are secured by TLS or similar mechanisms. Secret data may be further encrypted for transit using public/private keys, symmetric keys shared between the communicating devices, and/or other cryptographic techniques as desired. As noted above, home device 120 typically has a digital identifier that is known to the security service 130 and that uniquely identifies the home device 120 to the service 130. This identifier can be securely transmitted from the home device 120 to service 130 via network 105 using TLS or similarly-secure communications, as described above. Upon receipt of the secret, service 130 compares the received identifier to a previously-stored copy; if the received secret matches the previously-stored identifier, the requesting party can be confirmed to be authentic. Various embodiments may further provide authorization services to the authenticated party by granting or denying access to one or more services. Security service 130 could grant access to placeshifting or other media streaming services, for example. Other embodiments could authorize access to video gaming, electronic commerce, messaging or social networking systems, and/or any other services as desired. As noted above, client device 110 may not have its own unique identifier, or the identifier may not be available to one or more programs 117 that are operating on the client device. It is therefore beneficial to generate a new secret identifier 118 that is associated with the device 110 and that can be subsequently used to reliably identify device 110.
Challenges arise, however, in reliably delivering the identifying secret 118 to the backend service 130 without allowing unauthorized interception or duplication of the secret. These challenges can be overcome (or at least reduced) by using home device 120 as a trusted intermediary to deliver the secret 118 to the backend service 130. Client device 110 is any mobile phone, tablet computer, computer system, video game player, media player or other computing device. In various embodiments, client device 110 is a phone or tablet capable of communicating with home device 120. In the example illustrated in FIG. 1, client device 110 suitably includes a processor 111, memory 112 and input/output interfaces 113. Interfaces 113 may include, for example, network interface circuitry for interfacing with a wired or wireless local area network (LAN) 107, as desired. Some embodiments may additionally or alternately include interfaces to personal area networks, mobile telephone networks, point-to-point data links and/or the like. Client device 110 typically executes an operating system 115 that provides an interface between one or more application programs 117 and the system hardware 114. Various embodiments also provide a security module 116 to generate and share a secret 118 that is used to authenticate the device 110 with security service 130 and/or application services 140. To that end, security module 116 is typically implemented as software or firmware instructions that are stored within memory 112 or other storage available to device 110 for execution by processor 111. In the example shown in FIG. 1, security module 116 is illustrated as a middleware layer that provides secure services to multiple application programs 117. Equivalent embodiments, however, could incorporate the functions and features of security module 116 into one or more programs 117 themselves. That is, one or more programs 117 may incorporate the security features attributed to security module 116 herein.
Further, multiple programs117may each provide their own separate security features116, if desired. That is, different applications117executing on the same client device110may each generate their own identifying secrets118that can be used to authenticate with different services130, as desired. In operation, the client device110and home device120are able to discover each other and communicate via home network107. Network107may be a wired or wireless LAN, for example, or a collection of bridged or gated LANs operating within a home, office or similar environment. Home device120can readily confirm that the client device110is operating on the same LAN through, for example, verification of IP or MAC addresses, analysis of LAN traffic, and/or other factors. Because the devices110and120are operating on the same LAN, they can readily share a secret using secure Wi-Fi or the like, thereby allowing client device110to securely share a secret with trusted device120. Home device120, in turn, is trusted by security service130and maintains secure communications132with the security service130over network105. This trust allows device120to relay the secret118established with client device110to the security service130via a secure connection132. Security service130can then store the received secret118for later use in authenticating client device110. Moreover, because client device110and security service130now share a secret118that is unique to the device110(or at least a user of the device110), subsequent authentication of the device and/or user can occur through direct contact with security service130via a separate connection134(e.g., a mobile telephone connection, or a different path through network105), without relying upon trusted device120as an intermediary. In various embodiments, this concept can be expanded to allow storage of the secret118in a database142that is associated with a user account or the like.
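The same-LAN check described above can be as simple as verifying that the client's address falls within the home network's subnet before any secret is shared. A sketch using Python's standard ipaddress module (the addresses and subnet are illustrative, not from the patent):

```python
import ipaddress

def on_same_lan(client_ip: str, lan_cidr: str) -> bool:
    # Co-location heuristic: accept only clients whose IP address lies
    # inside the home LAN's subnet; MAC or traffic analysis could be
    # layered on top, as the text suggests.
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(lan_cidr)

assert on_same_lan("192.168.1.42", "192.168.1.0/24")
assert not on_same_lan("10.0.0.5", "192.168.1.0/24")
```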
This allows a user to access the secret from other devices110by providing a userid/password pair, biometric identifier, or the like, thereby allowing the secret identifier118to authenticate the user without regard to the specific hardware that the user is operating. Additional detail about these embodiments is provided below. FIG.2illustrates example processes200that can be used to establish and exploit secrets between trusted home devices120and less trusted applications117for efficient yet effective authentication.FIG.2separately illustrates security module116and application117to illustrate additional detail of process200. In practice, however, a single application117could incorporate the features of security module116, as desired. That is, the logic that implements security module116may be physically and/or logically integrated within one or more applications117. Alternatively, security module116may be a separate application, middleware component, plug-in or the like that could interoperate with multiple applications117, as desired. As shown inFIG.2, a secret118is initially created and shared between the home device120and security module116of a client device110(functions202,203). AlthoughFIG.2shows the secret118being generated by the security module116and transferred203from the client device110to the home device120, equivalent embodiments could generate the secret118by the home device120and/or by both devices110,120acting in tandem, with information sharing203as appropriate. In some implementations, the secret118is generated when the client device110is operating on the same LAN107(or home network) as the home device120, thereby ensuring that the two devices are in relatively close physical proximity (e.g., physically located within the same home or similar premises, and having access to the same home networks). This can be verified by the home device120through evaluation of IP and/or MAC addresses, ETHERNET or similar traffic on LAN107, and/or the like.
Moreover, security modules116and126may be designed to increase trust between client device110and home device120, respectively. Other embodiments could verify or enhance the level of trust between the client device110and the home device120prior to secret generation202in any other manner. After the secret118is generated and shared between the home device120and the security module116of client device110, the secret118may be stored and/or shared as desired for subsequent authentication. In various embodiments, home device120is able to store the secret locally (e.g., in solid state or magnetic storage) for subsequent retrieval and verification (function207). Storage of the secret118on home device120may not be needed, however, after the secret118is stored with security service130and/or database142. To that end, home device120is able to securely provide the secret118to backend security service130using TLS or other secure connections132(function211) for storage at the backend server130(function218). Security service130may store the secret118along with an identifier of home device120that can be used in subsequent authentication, if desired. In some embodiments, client device110stores secret118in local memory112or other storage, as desired (function206). As noted above, security module116may be incorporated into an application program117itself. Alternatively, various embodiments could permit further exploitation of digital secret118by allowing security module116to share the secret118(and any associated identifiers) with separate applications117residing on client device110(function210). Applications receiving the secret118may be restricted, if desired. Applications117may be allowed to store the secret118locally (e.g., in memory112or the like), as shown in function214.
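Secret generation (function202above) can be sketched with Python's secrets module. The SHA-256 digest used as a non-sensitive lookup identifier is an illustrative choice, not a mechanism specified by the text:

```python
import hashlib
import secrets

def generate_secret():
    # A 256-bit random value plays the role of secret118; its hash can
    # be stored or transmitted as a non-sensitive identifier for lookups.
    secret = secrets.token_bytes(32)
    identifier = hashlib.sha256(secret).hexdigest()
    return secret, identifier

secret, ident = generate_secret()
assert len(secret) == 32
assert ident == hashlib.sha256(secret).hexdigest()
```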
Moreover, applications117may be allowed to store the secret (and any associated identifier for home device120) in database142associated with a cloud or other remote service140, particularly if the remote storage is associated with a user account or the like (function216). That is, if a user of the device110has established an account with service140, then the secret118may be stored with that account information for subsequent use on the same device110and/or different client devices, if desired. One drawback of local-only storage is that each device110operated by the same user would require its own unique secret118. That is, restricting storage of the secret118to the device itself would typically require each additional device operated by the same user to go through functions203-214on its own, often creating additional work for the user, as well as additional overhead to manage the multiple secrets118associated with the different devices. Remote storage allows a common shared secret118to be subsequently retrieved from other devices (assuming that the user has access to the account where the secret118is stored), thereby negating the need for each device110to maintain its own secret. FIG.2illustrates two different processes220,230that are examples of ways to use the remote storage feature. In process220, the user of an application117(which may be executing on the same or a different device110from the device110that originally created the secret118) is able to request (function221) and obtain (function222) the shared secret and any associated identifier data from the remote storage140. Although not shown inFIG.2, the process220will typically include verification of the user to server140through presentation of a userid/password combination, a biometric ID or the like before the secret118is retrieved from storage and returned to the application117.
In the example220, application117is attempting to obtain services through security backend130, such as reconnecting to home device120and/or a different network service140for file sharing, placeshifting, video game playing or any other purpose. In this example, application117may direct a security module116present on the same device110to present the retrieved secret118to the security backend130, as desired. To that end, the application117provides the secret118to the security module (function224), which then forwards the submitted secret on to the security backend130for authentication (function226). Security backend130is able to compare the submitted secret118against the stored copy of secret118that was previously submitted by home device120(function227), thereby approving or rejecting the authentication request. The approval or denial may be provided back to security module116(function229), which then forwards the result to the appropriate application117(function225). Application117may then process the approval or denial as desired. Again, security module116may be equivalently implemented as a part of application117, if desired. In various embodiments, an approved authentication request will prompt security backend130to generate a token or similar credential that can be delivered to the application117for subsequent presentation at another service as proof that the authentication was successful. Application117may provide the token to the home device120and/or a network service140to establish a video streaming session, for example, or for any other purpose. In a further embodiment, service130may also provide the approval credential to the server that is requesting authentication (function228) for subsequent comparison to credentials provided by application117, if desired. Note that equivalent embodiments of process220may be used for authentication to services or hosts140other than home device120.
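The compare-and-approve steps above (functions226-229) can be modeled as follows. The class name, the in-memory registry, and the HMAC-based token are assumptions made for illustration; the patent does not specify the credential format:

```python
import hashlib
import hmac
import secrets

class SecurityBackend:
    """Toy model of security backend130's secret comparison and token issuance."""

    def __init__(self):
        self._secrets = {}                       # identifier -> stored secret118
        self._token_key = secrets.token_bytes(32)

    def register(self, identifier, secret):
        # Secret previously relayed by the trusted home device (function211).
        self._secrets[identifier] = secret

    def authenticate(self, identifier, submitted):
        stored = self._secrets.get(identifier)
        if stored is None or not hmac.compare_digest(stored, submitted):
            return None                          # denial (function229)
        # Approval: a credential the application can present to other services.
        return hmac.new(self._token_key, identifier.encode(),
                        hashlib.sha256).hexdigest()

backend = SecurityBackend()
backend.register("client-110", b"s" * 32)
assert backend.authenticate("client-110", b"s" * 32) is not None
assert backend.authenticate("client-110", b"x" * 32) is None
```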
That is, the backend service130could equivalently notify a different service140or the like on network105in function228, as desired. For example, backend service130could provide an authentication credential to application117(functions229,225) that is also shared with any other service140on network105(function228). When the other service receives the credential from the application117, the two credentials could be compared to verify successful authentication. Note that the other service would not need access to the secret itself: if the other service trusts security backend130to vouch for the client device110, then that level of trust can be used for very effective authentication. Process230shows a similar process in which the application obtains the secret118from local or remote storage (function231), and then presents the obtained secret118(and any associated identifiers) to another host140on network105(function232). The host receiving the secret118then queries the security backend130(function233) to determine if the secret is valid. The backend130sends a reply234confirming or denying authentication. This process230may be helpful in certain situations, but it potentially exposes the secret118to additional parties, thereby possibly weakening the security of the system. Nevertheless, it may be acceptable in some circumstances, depending upon the application and the level of trust that is needed. Generally speaking, then, trust in a STB, television receiver, video game player or other home device can be used to authenticate other client devices. By using the trusted home device to securely relay a secret identifier that can be stored and/or presented for subsequent authentication, a reliable and secure form of device, application and/or user authentication can be provided. Various embodiments could modify these general concepts in any number of ways.
Any types or numbers of home or client devices could be used, and the concepts described herein could be readily applied in any number of different applications and settings beyond placeshifting or video streaming. Moreover, although frequent reference is made to “home” devices for familiarity and convenience, equivalent devices designed for deployment in offices, factories, schools or other premises could be equivalently used. The term “exemplary” is used herein to represent one example, instance or illustration that may have any number of alternates. Any implementation described herein as “exemplary” should not necessarily be construed as preferred or advantageous over other implementations. While several exemplary embodiments have been presented in the foregoing detailed description, it should be appreciated that a vast number of alternate but equivalent variations exist, and the examples presented herein are not intended to limit the scope, applicability, or configuration of the invention in any way. To the contrary, various changes may be made in the function and arrangement of the various features described herein without departing from the scope of the claims and their legal equivalents.
11943350

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of embodiments is not intended to limit the claims to these embodiments, but rather to enable any person skilled in the art to make and use embodiments described herein. 1. Overview. Signing of transactions for operations related to management of a cryptocurrency protocol often requires access to a private key. However, accessing and using such a private key for transaction signing can increase risk of the private key being accessed by an unauthorized entity. Variations of the invention disclosed herein relate to signing of transactions such that a private key never exists in the clear outside of key generation. In some variations, a private key is generated offline (e.g., during a key generation ceremony at an offline secure processing facility). During this key ceremony, the private key is encrypted by using a ceremony key (e.g., symmetric key or public key; using AES encryption; “α-key”; etc.) to generate ciphertext (α-ciphertext), and the ciphertext is sharded into n shards (e.g., using Shamir's Secret Sharing) such that a threshold number of shards t of the total number of shards n can reconstruct the ciphertext (e.g., by performing a Shamir's Secret Sharing process). Each shard (α-shard) is then encrypted to an account manager (e.g., a sage, a participant) by using a key (β-key) of the respective account manager (e.g., a public key of an asymmetric key pair). Shards encrypted to an account manager by using a respective β-key are referred to as β-shards in this disclosure. During this key ceremony, the ceremony key (e.g., the symmetric key, the private key paired with the public key, α-key, etc.) used to generate the ciphertext is securely stored offline (e.g., at the offline secure processing facility101, in a hardware security module (HSM)112, etc.).
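The t-of-n sharding step can be illustrated with a compact Shamir's Secret Sharing implementation over a prime field. Treating the α-ciphertext as a single big integer is a shortcut for this sketch; production schemes typically shard byte-wise over GF(2^8):

```python
import secrets

PRIME = 2**521 - 1  # Mersenne prime, comfortably larger than a 256-bit ciphertext

def split_secret(secret_int, n, t):
    # Shamir's Secret Sharing: a random degree-(t-1) polynomial whose
    # constant term is the secret; the shares are the points (x, f(x)).
    coeffs = [secret_int] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Any t = 3 of the n = 5 α-shards reconstruct the α-ciphertext.
alpha_ciphertext = int.from_bytes(secrets.token_bytes(32), "big")
shares = split_secret(alpha_ciphertext, n=5, t=3)
assert reconstruct(shares[:3]) == alpha_ciphertext
assert reconstruct(shares[1:4]) == alpha_ciphertext
```

In the scheme described above, each resulting α-shard would additionally be encrypted to an account manager's β-key before distribution.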
In some variations, the ceremony key offline storage (e.g., CKOS) can be password-protected, wherein the CKOS password can be sharded into m shards (e.g., using Shamir's Secret Sharing) such that s of m shards can reconstruct the password to unlock the CKOS (e.g., RuCK HSM112shown inFIG.1) for use in private key restoration. In some variations, the ceremony key can be used to restore the private key (generated during the key ceremony) offline, and the private key can be used to sign a message (e.g., a cryptocurrency transaction) offline. The signed message can be securely provided to an on-line system for further processing and/or transmission to another system (e.g., a blockchain node). In some variations, a system (e.g.,100) includes a cold storage system (e.g.,120) and a secure signing system (e.g.,110). The system (e.g.,100) can also include one or more of an upload client system (e.g.,130) and a cold storage client system (e.g.,140). In some variations, the method200includes at least one of: receiving a signing request (optionally referencing a message and/or a public key associated with a message signing private key to be used for message signing); receiving message signing key shards (e.g., α-shards819shown inFIG.8, β-shards902shown inFIG.9) for a message signing private key S220; restoring the message signing private key by using the message signing key shards S230; and signing a message using the restored message signing private key S240. In variants, the method includes generating the message signing key shards (e.g., α-shards, β-shards) for the message signing private key S210. In variants, the method includes providing the signed message S250. In an example, all or parts of the method are performed by an offline secure signing module, and the method includes receiving a signing request that identifies a message to be signed, and at least a portion of a plurality of message signing key shards for a message signing private key.
The message signing key shards and/or message can be received from a data diode or one or more hardware security modules (HSMs) coupled to the secure signing module. The offline secure signing module accesses a symmetric key from a Reusable Cold Key (RuCK) HSM (Hardware Security Module) coupled to the offline secure signing module. The signing module combines the received message signing key shards into an encrypted message signing private key, and decrypts the encrypted message signing private key by using the symmetric key. The signing module signs the message by using the decrypted message signing private key. The signed message can be provided via a second data diode. The message signing key shards can be generated by: generating a message signing private key (S310), encrypting the message signing private key with the symmetric key, thereby generating ciphertext (S320), and splitting the ciphertext into message signing key shards (S330). The message signing key shards can be secured (e.g., encrypted, protected by a password, or otherwise secured) (S340). In an example implementation, the method includes at least one of: generating a message signing private key (S310shown inFIG.3); encrypting the message signing private key to generate ciphertext (e.g., S320); splitting the ciphertext into shards (e.g., S330); optionally securing the shards (e.g., S340) (e.g., by encrypting each shard to generate encrypted shards). A signing request (e.g., that identifies a message, and a public key for the message signing private key) can be received (e.g., S410shown inFIG.4) from a client. In response to identifying the private key for the signing request as part of a reusable cold key (RuCK) pair (e.g., at S420), α-shards (message signing key shards) are accessed (e.g., S430). 
The α-shards can be accessed by decrypting encrypted versions of the α-shards (e.g., by sending the encrypted α-shards to users, and receiving decrypted α-shards back, wherein the encrypted α-shards are decrypted using keys held by the users, such as within an HSM), and providing the α-shards and the message to a secure signing system. The secure signing system can be offline, air-gapped, security-hardened, have limited entry and exit points, and/or otherwise secured. The secure signing system can reassemble the message signing private key by using the α-shards, and use the reassembled message signing private key to sign the message (e.g., generate a signed message). In variants, the α-shards are reassembled into an α-ciphertext, wherein the secure signing system can access an α-key (e.g., ceremony key) and decrypt the α-ciphertext to obtain the message signing private key. The secure signing system can access the α-key by unlocking password-protected storage (e.g., an HSM) storing the α-key (e.g., wherein the password can be reconstructed from shards). The secure signing system can provide the signed message to another system (e.g., the cold storage system120shown inFIG.1). In variants, each data transfer between a sending and receiving system can be verified (e.g., by verifying the sender's private key signature; by verifying metadata sent with the data, such as a CKOS history log or a secure signing system history log; by verifying that the sender or recipient's IP address is on a whitelist; using out-of-band authentication, such as 2-factor authentication; etc.) before decryption with the receiver's private key (wherein the data can be encrypted to the receiver's public key by the sender).
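The decrypt-and-sign sequence inside the air-gapped signing system can be sketched as follows. Because Python's standard library has no AES, a SHA-256 counter-mode keystream stands in for the symmetric ceremony cipher, and an HMAC tag stands in for the actual transaction signature (e.g., ECDSA); both substitutions are assumptions for illustration only:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Stand-in stream cipher: SHA-256 in counter mode (AES in practice).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

ceremony_key = secrets.token_bytes(32)   # α-key, as loaded from the RuCK HSM
signing_key = secrets.token_bytes(32)    # message signing private key
alpha_ciphertext = xor(signing_key, keystream(ceremony_key, 32))

# Inside the offline signing module: decrypt, sign, then discard the key.
recovered = xor(alpha_ciphertext, keystream(ceremony_key, 32))
assert recovered == signing_key
message = b"example blockchain transaction"
signature = hmac.new(recovered, message, hashlib.sha256).hexdigest()
del recovered                            # plaintext key never persists
assert len(signature) == 64
```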
Data transferred between sending and receiving systems can optionally be encrypted (e.g., with a public key of the receiving system), signed (e.g., with a key of the sending system), be stored in an immutable or read-only format or medium (e.g., flashed data, DVDs, CDs), and/or have other properties. Digital versions of the β-shards, α-ciphertext, message signing key, α-key, and/or other sensitive information can be deleted or erased immediately upon use (e.g., transmission, signing) or otherwise managed. All or portions of the method can be performed in volatile memory (e.g., RAM) or other memory. In some implementations, at least one component of the system (e.g.,100) performs at least a portion of the method (e.g.,200). In one example, the key generation and storage system and method that is used can be that described in U.S. application Ser. No. 16/386,786 filed 17 Apr. 2019, which is incorporated herein in its entirety by this reference. However, any other suitable key generation method and cold storage method can be used. This system and method can be used in: custodying cryptocurrency (on behalf of another entity), proof of stake use cases (e.g., by the delegate or delegator), and/or any other suitable application. 2. Benefits. The system and method disclosed herein can confer several benefits over conventional systems and methods. First, signing can be securely performed off-line without exposing the private signing key to an on-line system. This can be further enabled by reconstructing the signing key (e.g. reconstructing the α-ciphertext) off-line. In some use cases, this can enable cold-stored private keys to be securely reused, without requiring new key assignment to the set of account managers and/or cryptocurrency transfer to a new address associated with the new key.
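The transfer-verification step described above (check the sender's identity and authentication data before any decryption) can be sketched as below; the whitelist contents and the HMAC tagging scheme are illustrative assumptions, not the patent's specified mechanism:

```python
import hashlib
import hmac

ALLOWED_SENDERS = {"10.0.5.17"}  # hypothetical IP whitelist

def verify_transfer(payload: bytes, tag: str,
                    sender_ip: str, sender_key: bytes) -> bool:
    # Reject the transfer before any decryption unless the sender is
    # whitelisted AND the payload's authentication tag verifies under
    # the sender's key.
    if sender_ip not in ALLOWED_SENDERS:
        return False
    expected = hmac.new(sender_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"sender-signing-key"
tag = hmac.new(key, b"bundle", hashlib.sha256).hexdigest()
assert verify_transfer(b"bundle", tag, "10.0.5.17", key)
assert not verify_transfer(b"bundle", tag, "203.0.113.9", key)
```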
Second, access to the private signing key can be restricted by: limiting access to encrypted shares of the private key, and distributing keys, passphrases (needed to recover the private key), and/or shards thereof among a plurality of discrete hardware devices such that no single hardware device has access to all information needed to reconstruct the private signing key. Third, logging of transaction signing operations can be performed to enable auditing and oversight of transaction signing operations. Additional benefits can be provided by the systems and methods disclosed herein. 3. System. FIG.1shows a system. The system100includes at least one of a cold storage system120and a secure signing system110. In some variations, the system includes one or more of an upload client system130and a cold storage client system140. In some implementations, the secure signing system110and the upload client system130are included in an offline secure processing facility101. In some variations, the cold storage client system140functions to generate signing requests and provide the signing requests to the cold storage system120. The signing requests can identify or include a message to be signed (unsigned message), and a public key to be used to authenticate the signed message. The message can be any suitable type of message, such as a blockchain transaction, a secure communication, and the like. In some variations, the cold storage client system140is implemented as one or more hardware devices (e.g., servers including at least one processor and at least one storage device that includes machine-executable instructions to be executed by the at least one processor). However, the cold storage client system140can be otherwise configured and/or perform any other suitable functionality.
In some variations, the cold storage system120functions to process a signing request received from a cold storage client system (e.g.,140) and to use the secure signing system110to sign the message identified in the signing request. In variants, the cold storage system requests (and optionally stores) RuCK public keys (and/or hashes thereof) (e.g., received from a secure computing system), and uses the RuCK public keys (and/or hashes thereof) for cold storage of cryptocurrency assets (or any other suitable data). In some variations, the cold storage system120is implemented as one or more hardware devices (e.g., servers including at least one processor and at least one storage device that includes machine-executable instructions to be executed by the at least one processor). However, the cold storage system120can be otherwise configured and/or perform any other suitable functionality. In a first variant, the cold storage system120operates within an on-line computing environment (e.g., the cold storage system120is communicatively coupled to other devices via one or more private or public networks, which may or may not be secured). In an example, the cold storage system120is communicatively coupled to at least one other computing device via a network (e.g., the Internet, a private network, etc.). In a second variant, the cold storage system120operates within an off-line computing environment (e.g., the cold storage system120has no access or limited access to other devices via networks). In an example, the cold storage system120receives information from account managers via a user input device (e.g., keyboard, scanner, camera, microphone, etc.) or hardware bus of the cold storage system120. In an example, the cold storage system120receives information generated by the cold storage client system140via a user input device (e.g., keyboard, scanner, camera, microphone, etc.) or hardware bus of the cold storage system120.
However, the cold storage system120can be configured to receive information from other components of the system100in any suitable manner. In some variations, the upload client system130functions to securely transport to the cold storage system120private key information for a private key (e.g., the private key to be used to sign the message identified in the message request, message signing key). The upload client system130can receive the private key information from one or more of physical storage, cold storage, air gapped storage, an input device, an account manager computing system (sage), or any other suitable computing system or device. In some implementations, the private key information includes encrypted private key shards (e.g., β-shards) for a private key to be used for signing the message. In some implementations, the private key corresponds to a public key identified in the signing request received by the cold storage system120. In some implementations, the private key information uploaded by the client system130(e.g., from physical storage, cold storage, airgapped storage, etc.) to the cold storage system120includes β-ciphertext for the private key (which is encrypted with a first set of one or more keys), and the cold storage system120orchestrates the decryption of the β-ciphertext into α-shards. In some variations, the upload client system130is implemented as one or more hardware devices (e.g., servers including at least one processor and at least one storage device that includes machine-executable instructions to be executed by the at least one processor). In some variations, the upload client system130is implemented as a secured laptop. In some variations, the upload client system includes a secured operating system that has restricted or limited functionality and optionally limited persistent storage functionality. However, the upload client system130can be otherwise configured and/or perform any other suitable functionality. 
In some variations, the secure signing system110functions to sign a message received from the cold storage system120(e.g., via a data diode198shown inFIG.1) by using private key information (e.g., message signing key shards, α-shards) provided by the cold storage system120and a ceremony key (encryption key, α-key). The ceremony key is preferably received from offline storage (e.g., the CKOS; etc.), but can additionally or alternatively be securely stored at the secure signing system110, or otherwise stored or obtained. In some implementations, the private key information provided from the cold storage system120to the secure signing system110includes the α-shards. When the secure signing system110receives α-shards, the secure signing system can optionally reconstruct the α-ciphertext from the α-shards. Alternatively, the private key information provided from the cold storage system120to the secure signing system110includes the α-ciphertext (e.g., reconstructed from the α-shards by the cold storage system120or another system). The α-ciphertext is then decrypted using the ceremony key (α-key, encryption key) securely loaded at the secure signing system110to obtain the message signing private key. The decrypted message signing private key is then used to sign the message received from the cold storage system120. After message signing, the decrypted message signing private key and/or the encryption key (α-key) can be discarded (e.g., deleted; lost when the secure signing system is depowered, wherein the signing is performed in volatile memory; etc.), or otherwise managed.
In some variations, the secure signing system110functions to securely transport the signed message to the cold storage system120(e.g., via a data diode199shown inFIG.1). Communication between the secure signing system110and other systems, such as the cold storage system120, is preferably via one or more data diodes (e.g., hardware that functions as a unidirectional security gateway), but can additionally or alternatively be facilitated by: encrypted communications, secure physical connections (e.g., wired connections), physical courier, and/or any other suitable communications means. Examples of data diodes that can be used include: compact disks (e.g., read-only, read and write CDs, etc.), DVDs, unidirectional optical fibers, RS-232 cables with the transmit or receive pin removed (e.g., depending on whether the cable is a receive or transfer cable, respectively), ASICs, and/or any other suitable hardware. In some implementations, the secure signing system110is implemented as a secured laptop. In some variations, the secure signing system110includes a secured operating system that has restricted or limited functionality and optionally limited persistent storage functionality. In some variations, the secure signing system110is implemented as a hardware device800, as shown inFIG.8. In some implementations, the hardware device includes one or more of a processor803(e.g., a CPU (central processing unit), GPU (graphics processing unit), NPU (neural processing unit), etc.), a display device891, a memory890, a storage medium805, an audible output device, an input device881, an output device, and a network device811. In some variations, one or more components included in the hardware device are communicatively coupled via a bus801or other connection. In some variations, one or more components included in the hardware device are communicatively coupled to an external system via the network device811.
The network device811functions to communicate data between the hardware device800and another device via a network (e.g., a private network, a public network, the Internet, and the like). In some variations, the storage medium805includes the machine-executable instructions for performing at least a portion of the method200described herein. In some variations, the storage medium805includes the machine-executable instructions for one or more of an operating system830, applications813, device drivers814, and a secure signing module111. In some variants, the hardware device800is communicatively coupled (e.g., via the bus801, etc.) to one or more of a hardware security module (HSM) (e.g.,115,116,112, etc.), a removable computer-readable storage medium815(e.g., DVD, CD-ROM, flash drive, etc.) that includes a data bundle816, and/or other computing or storage system. In some variants, during operation, the hardware device loads into the memory890machine-executable instructions for one or more of: an operating system (e.g., a secure operating system)830, applications813, device drivers814, and the secure signing module111. During operation, the secure signing module111can load into the memory890(e.g., at a secure memory location of the memory890) one or more of: a data bundle816, alpha shards819, a message820(to be signed), an alpha key817, a message signing key822, a signed message818, a limited set thereof, and/or other information. In some implementations, the secure signing system110includes a secure signing module111. In some implementations, the secure signing module111is communicatively coupled to at least one HSM (Hardware Security Module) (e.g.,112,113,114,115,116).
In some implementations, the secure signing module111is communicatively coupled to at least one of a Reusable Cold Key (RuCK) HSM (Hardware Security Module)112(e.g., the CKOS), a librarian HSM113(e.g., held by an entity managing the secure signing system or other entity; can be used to validate the entity's identity and/or store the passphrase for the CKOS or portion thereof), and a floater HSM114(e.g., held by a user that is the same or different from the user managing the RuCK HSM and/or librarian HSM; can be used to validate the user's identity and store the passphrase for the CKOS and/or portion thereof). In some variations, the secure signing system110operates within an off-line computing environment (e.g., the secure signing system110has no access to other devices via networks). In an example, the secure signing system110receives information from other devices via a user input device via a hardware bus (e.g.,801) of the secure signing system110. Bus801can be: a peripheral bus (e.g., USB, lightning, etc.), internal bus (e.g., PCI, ATA, etc.), external bus (e.g., Lightning, Fieldbus), and/or other data bus. However, in some implementations, the secure signing system110can be configured to receive information from other components of the system100in any suitable off-line manner. The offline storage (e.g., the HSMs, CKOS) can be: password protected; biometrically-protected; encrypted (e.g., to a public key corresponding to a secure signing system's private key; to a user's private key; etc.); unprotected; or otherwise protected. The offline storage's protection (e.g., password, private key) can itself be: sharded and distributed to different users (e.g., the same account managers holding the β-keys; to different users from the account managers; to the librarians; etc.), encrypted, password-protected (e.g., using a mnemonic, a password, etc.), or otherwise protected.
The offline storage can be: separate, distinct, and transiently connectable to the secure signing module or secure signing system (e.g., be physically separate, such as an HSM); be a partitioned portion of the secure signing system or secure signing module; be a separate processor or chipset of the secure signing system or secure signing module; or be otherwise physically configured. The offline storage can include one or more secure cryptoprocessor chips, or be otherwise constructed. The offline storage is preferably securely connectable to the secure signing module or secure signing system, wherein the connection can be: digitally protected (e.g., via encryption), physically protected (e.g., via a security mesh), unprotected, or otherwise protected. The offline storage can: store cryptographic keys (e.g., the α-key), passwords (e.g., passwords for other HSMs or portions thereof, etc.), and/or store other data. The offline storage can optionally: decrypt ciphertext (e.g., decrypt the α-ciphertext to the message signing key), sign messages (e.g., sign the requested messages using the decrypted message signing key), and/or perform other functionalities. In some variations, at least one HSM (e.g.,112) that is communicatively coupled to the secure signing module111includes at least one ceremony key (e.g., the α-key) that is used for reassembling a message signing private key used by the secure signing module111to sign a message received from the cold storage system120. The CKOS preferably stores a single α-key, but can alternatively store multiple α-keys (e.g., for multiple ceremonies, multiple entities). In some variations, the secure signing module111functions to sign messages for one message signing private key. In some variations, the secure signing module111functions to sign messages for a plurality of message signing private keys.
In some variations, the offline secure processing facility101includes a plurality of secure signing modules111(executing on respective computing systems) that each function to sign messages for one or more message signing private keys. Each secure signing module111can be: dedicated to a single message signing private key or used for multiple message signing private keys (e.g., in series, in parallel). In a first example, the offline secure processing facility101includes a secure signing system that includes a signing module that functions to sign for a plurality of private keys. In a second example, the offline secure processing facility101includes a secure signing system that includes a plurality of signing modules that each function to sign for at least one private key. In a third example, the offline secure processing facility101includes a plurality of secure signing systems that each function to sign for at least one private key. In some variations, the upload client system130is a hardware device that is separate from the secure signing system110. FIG.8shows a schematic representation of the architecture of an exemplary hardware device800. The system can optionally be used with: a message signing key, an α-key (e.g., ceremony key) and a β-key, but can be used with any other suitable cryptographic key. The message signing key is used to sign blockchain messages, wherein a blockchain network validates the signature to verify a transaction. The message signing key is preferably the private key of an asymmetric keypair (RuCK key pair), but can alternatively be any other suitable key. The message signing key can be: a master key, an extended key (e.g., consistent with a BIP protocol), a key derived from a seed phrase (e.g., wherein the seed phrase can be ciphered, split, stored, reconstructed, and deciphered in lieu of the message signing key), and/or any type of cryptographic key, consistent with any suitable cryptographic protocol (e.g., blockchain protocol).
The message signing key is preferably associated with an entity and/or a set of users (“sages”), but can be otherwise owned. The α-key is used to generate the α-ciphertext during the key generation ceremony, and can optionally be used to decrypt the reconstructed α-ciphertext during message signing. The α-key is preferably a symmetric key, but can alternatively be an asymmetric keypair (e.g., wherein the public key is used to encrypt the message signing key into α-ciphertext and the private key is used to decrypt the reconstructed α-ciphertext). The α-key is preferably specific to a given key generation ceremony, but can be shared across multiple key generation ceremonies. An identifier for the α-key (e.g., a hash) can optionally be determined and used to identify the β-shards generated from the respective α-ciphertexts. This can be particularly useful in quickly identifying and decommissioning the β-shards associated with a compromised α-key. The α-key is preferably only stored within a single CKOS, but can additionally or alternatively have copies stored in auxiliary CKOS's, in a digital format, or otherwise stored. The β-key is used to encrypt the α-shards into β-shards and/or to decrypt the β-shards into α-shards. The β-key is preferably part of an asymmetric keypair, but can alternatively be a symmetric key. In a specific example, a public β-key is used to encrypt the α-shards into β-shards, and a paired private β-key is used to decrypt the β-shards into α-shards. The β-key (e.g., private β-key) is preferably retained and stored by a user (e.g., in an HSM, on paper, in a vault, etc.), while the public β-key can be retained and stored by the system or other storage. The user is preferably associated with the master signing key's entity, and is preferably geographically separated from the cold storage system and/or secure signing system, but can be otherwise associated and located. The system can optionally be used with one or more blockchain messages.
The blockchain messages are preferably associated with the public key paired with the message signing key (e.g., include the public key, include a hash of the public key, etc.). The blockchain messages can optionally include: transaction information (e.g., a destination address, amount of cryptocurrency to transfer, network fee limits, etc.), function calls, delegation information, and/or other information. The blockchain messages can include transactions to transfer cryptocurrency assets, transfer data, execute a smart contract method, participate in governance of a blockchain network, participate in operations of a blockchain network (e.g., staking, validating, etc.), or any suitable type of blockchain transaction. The blockchain messages can be unsigned (e.g., before signing with the message signing key) or signed (e.g., with the message signing key). The blockchain messages are preferably generated by a client (e.g., service, user, wallet, etc.), but can be generated by any other suitable system. The blockchain messages are preferably received from the client by the cold storage system, but can be received by another system. The blockchain messages are preferably signed by the secure signing system, but can be signed by another system.

4. Method.

FIG.2is a flowchart representation of a method200. In some variations, the method200includes at least one of: receiving a signing request and receiving message signing key shards for a message signing private key S220; restoring the message signing private key by using the message signing key shards S230; and signing a message using the restored message signing private key S240. In variants, the method includes providing the signed message S250. In variants, the method includes generating the message signing key shards for the message signing private key S210. In some implementations, at least one component of the system100performs at least a portion of the method200.
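The S220-S250 flow can be sketched as a small orchestration routine. This is a hedged sketch only: the three cryptographic primitives are passed in as callables because the concrete algorithms (t-of-n shard reconstruction, decryption under the ceremony key, blockchain signing) are described separately in this document, and the function and parameter names are illustrative, not part of any described implementation.

```python
# Hedged sketch of method 200 (S220-S250). The crypto primitives are injected
# as callables because the concrete algorithms (t-of-n reconstruction, ceremony
# key decryption, blockchain signing) are implementation-specific.
def sign_with_restored_key(shards, message, reconstruct, decrypt, sign):
    """S230: restore the message signing private key; S240: sign the message."""
    alpha_ciphertext = reconstruct(shards)   # reassemble t-of-n shards
    signing_key = decrypt(alpha_ciphertext)  # decrypt with the ceremony (alpha) key
    try:
        return sign(signing_key, message)    # S240; the result is provided at S250
    finally:
        del signing_key                      # limit the cleartext key's lifetime

# Toy primitives for illustration only (XOR "cipher", concatenated "shards"):
key = b"demo-signing-key"
ciphertext = bytes(b ^ 0x5A for b in key)
signed = sign_with_restored_key(
    [ciphertext[:8], ciphertext[8:]], b"msg",
    reconstruct=lambda s: b"".join(s),
    decrypt=lambda c: bytes(b ^ 0x5A for b in c),
    sign=lambda k, m: (k, m),
)
assert signed == (key, b"msg")
```

The `del` in the `finally` clause mirrors the document's emphasis on keeping the cleartext key confined to the secure signing module's memory; a real implementation would also zeroize the secure memory region.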
In some implementations, at least one computing system located in the offline secure processing facility101performs at least a portion of the method200. In some implementations, restoring the message signing private key (S230) and signing the message (S240) are performed entirely by one or more computing systems (e.g.,800shown inFIG.8) located at the offline secure processing facility101. Generating key shards S210functions to generate key shards that can be distributed to several locations (e.g., HSMs, account managers, etc.) and accessed at a later time to sign a message. Generating key shards S210can include one or more of generating a message signing private key S310; encrypting the message signing private key with a symmetric key to generate ciphertext S320; and splitting the ciphertext into message signing key shards S330(used to restore the message signing private key at S230), as shown inFIG.3. The message signing key shards can be secured (e.g., at S340). In some implementations, processes S310, S320, S330, S340are each performed entirely by one or more computing systems (e.g., a secure key generation computing system). In one example, S210can be performed as described in U.S. application Ser. No. 16/386,786 filed 17 Apr. 2019, which is incorporated herein in its entirety by this reference, or performed in any other suitable manner. Generating the message signing private key (S310) functions to generate a private key that can be used to sign messages. In some variations, the private key is generated offline (e.g., during a key generation ceremony) (e.g., at the offline secure processing facility101). In some variations, the message signing private key is generated as part of a public private key pair (RuCK key pair) that includes the message signing private key and a corresponding public key (e.g., that can be used to verify signatures generated by using the message signing private key).
A new RuCK key pair can be generated at any suitable time, and in response to any suitable trigger. In a first example, a new RuCK key pair is generated in response to a request received from the cold storage system120for a new RuCK key pair. In a second example, a new RuCK pair is generated to create a pool of available RuCK key pairs that can be provided to the cold storage system as needed. For example, if the cold storage system120needs a new cold storage address (or account) for cold storing cryptocurrency assets, the cold storage system120can request a new RuCK key, and in response to such a request, the new RuCK key pair is generated and provided to the cold storage system120. The cold storage system120can then use the public key of the RuCK key pair to identify a cold storage blockchain destination for a blockchain transaction that transfers cryptocurrency from a source to cold storage. In variants, the generated message signing private key is encrypted at S320using a ceremony key (e.g., α-key817shown inFIG.8). In some implementations, the signing private key is encrypted using AES encryption. The ceremony key (α-key) is preferably a symmetric key, but can alternatively be the public key of an asymmetric key pair, wherein the private key of the pair is stored in the offline storage (CKOS, e.g., RuCK HSM112), or be any other suitable key type. In some variations, the ceremony key is securely stored in an HSM (e.g.,112). In some implementations, securely storing the ceremony key in the HSM112includes encrypting the ceremony key with a key of the HSM112(e.g., a public key of the HSM, a private key of the HSM, a ceremony key of the HSM). In some implementations, encrypting the ceremony key with the key of the RuCK HSM112includes unlocking the RuCK HSM for use by using a passphrase for unlocking the RuCK HSM, and once unlocked, using a key of the RuCK HSM to encrypt the ceremony key. 
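The S320 step can be illustrated as follows. The document specifies AES encryption under the ceremony (α) key; because the Python standard library does not ship AES, this sketch substitutes an HMAC-SHA256 counter-mode keystream as a clearly labeled placeholder cipher so the example stays dependency-free. It is not a production cipher.

```python
import hashlib
import hmac
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Placeholder stream cipher (HMAC-SHA256 in counter mode). The document
    # specifies AES for S320; this stand-in only keeps the sketch
    # dependency-free and must not be used as a real cipher.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

alpha_key = secrets.token_bytes(32)    # symmetric ceremony key (817)
signing_key = secrets.token_bytes(32)  # message signing private key from S310
nonce = secrets.token_bytes(16)

# S320: produce the alpha-ciphertext.
alpha_ciphertext = keystream_xor(alpha_key, nonce, signing_key)

# The same operation inverts it, as done later when the key is restored (S230):
assert keystream_xor(alpha_key, nonce, alpha_ciphertext) == signing_key
```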
In some implementations, the passphrase required to unlock the RuCK HSM112is sharded, and at least t of n shards are required to recover the passphrase for unlocking the RuCK HSM112. In some variations, the shards are provided by at least two HSMs (e.g.,113and114). In some variations, the ceremony key (e.g., key817shown inFIG.8) is used to encrypt the private key and is generated in the offline secure processing facility101, and the ceremony key is securely transported to the RuCK HSM112during the key generation ceremony. The ceremony key can be securely transported to the RuCK HSM112in any suitable manner. In a first example, the ceremony key is stored on a storage device, the storage device is coupled to the RuCK HSM112(either directly, or via a secure, off-line computing device), and the ceremony key is copied from the storage device to the RuCK HSM. For example, the storage device that stores the generated ceremony key is coupled to the secure signing system110, the secure signing system110is coupled to the RuCK HSM112, and the ceremony key is copied from the storage device to the RuCK HSM112via the secure signing system110. In a second example, the ceremony key is stored at the RuCK HSM112via a user interface. For example, the secure signing system110can be coupled to (or include) a user input device (e.g., a keyboard, a touchscreen, etc.) that receives user input that represents the ceremony key (or information that can be used to generate the ceremony key), and the secure signing system110receives the user input via the user input device. In a case where the user input represents the ceremony key, the secure signing system110stores the ceremony key in the RuCK HSM112. In a case where the user input represents information that can be used to generate the ceremony key, the secure signing system110generates the ceremony key by using the user input, and stores the ceremony key in the RuCK HSM112. 
In a third example, the ceremony key is stored at the RuCK HSM112via a secure network interface. For example, a first system (e.g., a secure, off-line computing system) generates the ceremony key, establishes a secure connection with the secure signing system110, and transmits the ceremony key to the secure signing system110via the secure connection. The secure signing system110receives the ceremony key and stores the ceremony key in the RuCK HSM112(which is communicatively coupled to the secure signing system110). The secure connection can be a dedicated, un-shared communication link between the first system and the secure signing system110, and both the first system and the secure signing system110can be located in the offline secure processing facility101. The secure communication link can be implemented using a direct wired connection between a communication device of the first system and the secure signing system110. In a fourth example, the ceremony key can be directly stored on the RuCK HSM112during the key generation ceremony, wherein the RuCK HSM112is physically transported to the secure signing system. In a fifth example, the RuCK HSM112generates the RuCK keypair, and sends the public key to a public repository (e.g., for subsequent client access). However, the ceremony key can be otherwise stored on the RuCK HSM112. Splitting the encrypted ciphertext (α-ciphertext) (S330) functions to split the ciphertext of the encrypted message signing private key (generated at S320) into n shards (n α-shards) such that t of n shards can reconstruct the ciphertext. In some implementations, the α-ciphertext is split into shards by performing a Shamir's Secret Sharing process. However, the α-ciphertext (alpha ciphertext, e.g.,903shown inFIG.9) can be otherwise split. Message signing key shards (α-shards) can be secured at S340in any suitable manner.
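The t-of-n split at S330 can be illustrated with a minimal Shamir's Secret Sharing implementation over a prime field. Production implementations typically operate bytewise over GF(256); this integer version (with an illustrative 521-bit Mersenne prime and a single 32-byte chunk standing in for the α-ciphertext) shows only the underlying polynomial interpolation.

```python
import secrets

P = 2**521 - 1  # Mersenne prime; the field must be larger than the secret chunk

def split(secret: int, n: int, t: int):
    # Shamir's Secret Sharing: choose a random degree-(t-1) polynomial f with
    # f(0) = secret; share i is the point (i, f(i) mod P).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret from any t points.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# S330: split one 32-byte chunk of alpha-ciphertext 3-of-5 (the chunk size is
# an illustrative choice; a real ciphertext would be split chunk by chunk).
chunk = int.from_bytes(secrets.token_bytes(32), "big")
shares = split(chunk, n=5, t=3)
assert reconstruct(shares[:3]) == chunk
assert reconstruct([shares[0], shares[2], shares[4]]) == chunk
```

Any t = 3 of the n = 5 shares reconstruct the chunk; fewer than t reveal nothing about it, which is what lets the α-shards be distributed to separate account managers.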
In a first example, α-shards are deleted, overwritten, lost when the key generation system is depowered, or otherwise discarded. In a second example, at least one α-shard can be printed (or otherwise formed) on a substrate (e.g., paper, plastic, metal), and optionally physically stored in a secure manner (e.g., in a safe, etc.). The α-shards can be encoded in a non-textual medium, such as a barcode or QR code; encrypted with a key; stored as cleartext; or stored in any other suitable format. In a third example, at least one α-shard (alpha shard) can be securely transmitted (e.g., via a secure communication link) to a computing device (e.g., a mobile computing device, computing workstation, server, data center, etc.). The α-shard can be encrypted before being transmitted to the computing device. Additionally, or alternatively, the α-shard can be encrypted at the computing device. In a fourth example, at least one α-shard can be securely stored in an HSM that is coupled to the computing device that generates the α-shard. However, α-shards can otherwise be secured such that each α-shard is accessible by only an authorized set (e.g., of one or more) account managers. In some implementations, each α-shard is accessible by only one authorized account manager. In some implementations, at least one account manager is a human. Additionally, or alternatively, at least one account manager can be a computing system that functions to provide at least one α-shard in response to a request for α-shards for a RuCK asymmetric key pair (e.g., via an API of the computing system, etc.). In variants, securing message signing key shards (α-shards) S340functions to encrypt each shard (α-shard) to an account manager (e.g., a sage) by using a key (β-key, e.g.,901shown inFIG.9) of the respective account manager (e.g., a public key of an asymmetric key pair), thereby generating doubly-encrypted shards (β-shards).
In some implementations, each β-key (beta key) is a public key of an HSM owned by a respective account manager, and the private key used to decrypt the associated β-shard is securely managed by an HSM or other secure computing system. In some variants, at least one private key used to decrypt the associated β-shard is stored outside of the offline secure processing facility. Even if all of the β-shards (beta shards) have been accessed and decrypted (thereby providing the α-shards), the ceremony key is still needed to recover the message signing private key. And the ceremony key is secured at the offline secure processing facility101. By virtue of requiring a ceremony key (secured at the offline secure processing facility) to recover the message signing private key, the β-shards (and the respective β-keys) can be managed outside of the offline facility, and the account managers do not have to be located within the offline secure processing facility to recover the message signing private key. Even if the cold storage system120accesses the α-shards by communicating with other computing systems via a network, the message signing private key cannot be maliciously recovered without compromising security of the secure offline processing facility101. In some variants, at least one private key used to decrypt the associated β-shard is stored within the offline secure processing facility101. In some variations, the encrypted shards (β-shards) are securely stored. In some variations, the β-shards are stored outside of the offline secure processing facility101. In some variations, the β-shards are stored at the offline secure processing facility101. In some variations, QR codes representing each β-shard are generated, and each QR code is printed, such that each β-shard can be identified and converted to a digital format by scanning the QR codes from the paper printouts.
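The double-encryption step (α-shards into β-shards) can be sketched as follows. The document describes encrypting each α-shard to an account manager's asymmetric public β-key; since the standard library offers no asymmetric cipher, this sketch models each β-key as a symmetric key and reuses an HMAC-based placeholder cipher. The manager names and key sizes are illustrative.

```python
import hashlib
import hmac
import secrets

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Placeholder cipher (HMAC-SHA256 keystream). The document encrypts each
    # alpha-shard to an account manager's asymmetric public beta-key; the
    # standard library has no asymmetric cipher, so a symmetric stand-in is
    # used here purely for illustration.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out.extend(hmac.new(key, nonce + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

alpha_shards = [secrets.token_bytes(32) for _ in range(3)]
beta_keys = {f"sage-{i}": secrets.token_bytes(32) for i in range(3)}  # one per manager

# S340: each alpha-shard becomes a beta-shard that only its designated
# account manager can decrypt.
beta_shards = {}
for (manager, beta_key), shard in zip(beta_keys.items(), alpha_shards):
    nonce = secrets.token_bytes(16)
    beta_shards[manager] = (nonce, xor_cipher(beta_key, nonce, shard))

# Each manager later recovers their alpha-shard with their own beta-key:
recovered = [xor_cipher(beta_keys[m], n, c) for m, (n, c) in beta_shards.items()]
assert recovered == alpha_shards
```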
In some variations, securely storing the β-shards includes removing the β-shards from memory and storage devices, such that the only representations of the β-shards are the printouts with the QR codes. In some variations, the β-shards are stored in association with the corresponding public key. In some implementations, the public key is printed on the printouts (e.g., in cleartext) that include the β-shard QR codes. In some variations, each encrypted shard (β-shard) is stored in association with information that identifies an account manager (or HSM) that has access to the key required to decrypt the encrypted shard. In this manner, each encrypted shard can be sent to a computing device (or HSM) of the associated account manager. In some variations, the computing system (e.g., a key generation computing system) that generates the message signing private key (at S310) provides the cold storage system120with RuCK information identifying the public key associated with the message signing private key. In some implementations, the RuCK information identifies the public key as a public key of a Re-usable Cold Key (RuCK) asymmetric key pair. In some variants, the RuCK information also identifies the storage locations or entities (e.g., account managers) that manage the β-keys that are used to restore the message signing private key that corresponds to the public key. In variants, the RuCK information can identify one or more of: contact information (e.g., e-mail addresses, phone numbers, etc.) for account managers that manage the α-shards; API (Application Programming Interface) endpoints for computing systems that manage the α-shards; and authentication information for remote storage locations that store the α-shards. However, RuCK information can include any suitable type of information that can be used to access the α-shards.
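The printed β-shard record described above can be sketched as a serialization format. The field names and the JSON/base64 encoding are assumptions for illustration; the document only requires that each β-shard be stored with the corresponding public key and with information identifying the account manager who can decrypt it.

```python
import base64
import json
import secrets

# Hedged sketch of a printed beta-shard record: the document stores each
# beta-shard with the associated public key (printed in cleartext) and with
# information identifying which account manager can decrypt it. The JSON and
# base64 encodings, and all field names, are illustrative assumptions; the QR
# code on the printout would encode this payload.
beta_shard = secrets.token_bytes(48)
record = {
    "ruck_public_key": secrets.token_bytes(33).hex(),  # cleartext on printout
    "manager": "sage-1",                               # holder of the beta-key
    "beta_shard": base64.b64encode(beta_shard).decode(),
}
qr_payload = json.dumps(record)

# Scanning the printout recovers the digital beta-shard:
scanned = json.loads(qr_payload)
assert base64.b64decode(scanned["beta_shard"]) == beta_shard
```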
The cold storage system120can then use this RuCK public key for cold storage purposes, and recognize that data secured by this public key is re-usable, meaning that it can be used in more than one encryption operation. In a first variant, the cold storage system uses the RuCK public key as a cold storage blockchain endpoint. In a second variant, the cold storage system120uses the RuCK public key to generate a cold storage blockchain endpoint (e.g., by hashing the RuCK public key, etc.). However, the RuCK public key can otherwise be used for cold storage purposes. For example, the cold storage system120can use the RuCK public key as a blockchain endpoint (e.g., address, account) and assign this RuCK public key to a user. If an entity wishes to transfer ownership of data (e.g., an amount of cryptocurrency, data, etc.) to the user, the entity can generate a transaction by using the RuCK public key to designate the user as the new owner of the data. The user can be designated as the new owner of the data by encrypting at least a portion of the data or the transaction with the RuCK public key. In variants, the entity transferring ownership can be related to the user, and such a transfer of ownership can include the user transferring custody of the data from a current digital wallet to a cold storage digital wallet that is secured by the RuCK asymmetric key pair. Because the RuCK key pair is reusable, a portion of the data (e.g., a portion of cryptocurrency) secured by the RuCK key pair can be transferred to another user (or entity), while the remaining data is still secured by the RuCK key pair. By virtue of the processes for signing transactions using RuCK private keys (described herein), the cleartext version of the private key (message signing private key) is not accessible outside a secure signing module (e.g.,111) running in an offline secure processing facility (e.g.,101).
Therefore, the risk of malicious entities accessing the private key during message signing is reduced, as compared to conventional signing of transactions using cold storage keys. Because the security of the private key is less likely to be compromised after message signing (according to the processes described herein), the RuCK asymmetric key pair can continue to be used for storing the remainder of the data that is not transferred during the initial transfer. When the cold storage system120receives the RuCK public key, it recognizes (from the received RuCK information) that the RuCK public key corresponds to a reusable cold storage key. In some implementations, the cold storage system120stores RuCK information for each received RuCK public key, and uses the stored RuCK information to identify each RuCK public key that is used (or can be used) by the cold storage system120, such that the cold storage system120can identify message signing requests that involve use of a RuCK asymmetric key pair. In variants, message signing using the RuCK private key is performed in response to receiving a signing request at S220. In some variations, the cold storage system120receives a message signing request from the cold storage client system140(S410shown inFIG.4). In some variations, the signing request identifies a message to be signed (e.g., a blockchain transaction). In some variations, the signing request identifies a public key used to verify a signature for the signed message. In some implementations, the signing request is a request to sign a blockchain transaction. In some implementations, the blockchain transaction identifies a blockchain address or account, and the signing request is a request to sign the blockchain transaction with a signature for the blockchain address or account.
In some variations, the cold storage system120determines whether the public key identified in the signing request corresponds to a re-usable cold key (e.g., by using the stored RuCK information) (S420shown inFIG.4). In an example, the message is a blockchain transaction, the cold storage system120identifies a source blockchain address (or account) of the blockchain transaction, and the cold storage system120determines whether the source address (or account) is associated with a re-usable cold key (e.g., by accessing data that identifies blockchain addresses and/or accounts that are associated with re-usable cold keys). In response to a determination that the public key of the message (e.g., the public key associated with the source of the blockchain transaction) corresponds to a re-usable cold key (RuCK), the cold storage system120sends a request to the secure signing system110to have the message signed with the associated RuCK private key. In an example, the blockchain transaction identifies the address (or account) by using a public key of a RuCK key pair (or data generated by using the public key of the RuCK key pair). For the blockchain transaction to be validated and recorded on a respective blockchain, the blockchain transaction (identifying the address or account using the RuCK public key) needs to be signed by using the RuCK private key that corresponds to the RuCK public key used to generate the blockchain transaction. In this example, since the cold storage system120does not store the RuCK private key (or otherwise have direct access to the RuCK private key), the cold storage system120sends a signing request to the secure signing system110(which can access the RuCK private key) so that the secure signing system110can sign the blockchain transaction with the RuCK private key and return a signature for the blockchain transaction.
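The S420 determination can be sketched as a registry lookup. The endpoint derivation, field names, and registry shape are illustrative assumptions; real blockchains use protocol-specific address-derivation schemes.

```python
import hashlib
import secrets

def endpoint(pubkey: bytes) -> str:
    # Illustrative endpoint derivation only; real blockchains use
    # protocol-specific address schemes (hash functions, checksums, encodings).
    return hashlib.sha256(pubkey).hexdigest()[:40]

# The cold storage system keeps RuCK information keyed by endpoint; the
# registry shape and field names are assumptions for this sketch.
ruck_pubkey = secrets.token_bytes(33)
ruck_registry = {
    endpoint(ruck_pubkey): {
        "public_key": ruck_pubkey.hex(),
        "shard_managers": ["sage-0", "sage-1", "sage-2"],
    }
}

def is_ruck_source(transaction: dict) -> bool:
    # S420: does the transaction's source correspond to a re-usable cold key?
    return transaction["source"] in ruck_registry

assert is_ruck_source({"source": endpoint(ruck_pubkey), "amount": 5})
assert not is_ruck_source({"source": "some-other-address", "amount": 5})
```

When the lookup succeeds, the request is routed to the secure signing system; otherwise the message is handled by whatever non-RuCK signing path the cold storage system provides.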
In variants, the cold storage system120controls the secure signing system110to sign the message by providing the secure signing system110with the message and additional information used to access the private key used to sign the message. In some implementations, the additional information used to access the message signing private key (used to sign the message) includes the α-shards that correspond to the public key identified in the signing request. At S430(shown inFIG.4) the cold storage system120accesses the α-shards. In variants, the cold storage system120accesses the α-shards from a plurality of account managers asynchronously. The cold storage system120can access the α-shards from the account managers via one or more of a user input device, a private network, and a secure network. In some examples, the cold storage system120accesses the α-shards from the account managers within an on-line computing environment. In some implementations, the cold storage system accesses the α-shards by using the RuCK information, which identifies the account managers that manage the α-shards for the RuCK private key associated with the public key identified in the signing request. Using the RuCK information, the cold storage system120identifies the account managers that manage the α-shards, and sends a shard request to each account manager. In variants, the cold storage system120uses the upload client system130to access the α-shards. In a first variant, the cold storage system120accesses the α-shards directly from the upload client system. In some implementations, the cold storage system120sends a shard request to the upload client system130. In a first example, the shard request identifies at least a portion of the RuCK information for the public key of the message signing request, and the upload client system uses the RuCK information to access the α-shards to return to the cold storage system.
For example, the shard request can include a list of account managers that manage the α-shards needed to sign the message, the upload client system can notify the account managers, and receive the α-shards from each account manager (e.g., via a user interface, via an API, via a network connection, etc.). Notifying the account managers can include digitizing the β-shards of the corresponding message signing key and sending the account managers all or a subset of digitized β-shards, wherein each account manager can decrypt a subset of the digitized β-shards into α-shards using the β-key managed by the respective account manager. In a second example, the shard request identifies the public key for the message signing request, and the upload client system uses the public key to access the α-shards to return to the cold storage system. For example, the upload client system130can use the public key to identify the account managers that manage the α-shards needed to sign the message, the upload client system can notify the account managers, and receive the α-shards from each account manager (e.g., via a user interface, via an API, via a network connection, etc.). In a second variant, the cold storage system120accesses (from the upload client system130) data (e.g., β-shards) that can be used to obtain the α-shards. In some implementations, the cold storage system120sends a shard request to the upload client system130. The shard request sent to the client system130requests the encrypted shards (β-shards) for the private key corresponding to the public key of the message received at S220. In some implementations, the upload client system130accesses the encrypted shards (β-shards) (e.g., from a storage device, from an HSM, from a user input device, from an image captured by an image sensor, by scanning a printout that includes a QR code for the shard, etc.).
In some implementations, the client system 130 securely transports the encrypted shards (β-shards) to the cold storage system 120 (e.g., via a secure communication link, etc.). In variants, accessing the α-shards (S430) includes decrypting the encrypted shards (e.g., β-shards) and providing the decrypted shards (the α-shards) to the secure signing system 110. In a first variation, the cold storage system 120 orchestrates decryption of the encrypted shards (e.g., β-shards) (which are encrypted with keys of account managers) into α-shards (which are encrypted with the ceremony key, which is stored at the secure signing system 110) by the account managers (e.g., sages) whose keys were used to encrypt the β-shards. In some implementations, the cold storage system 120 receives user input from t of n of the account managers via a user input device (e.g., keyboard, scanner, camera, etc.) of the cold storage system 120, and uses the user input to decrypt the β-shards. In some implementations, the cold storage system 120 receives the α-shards (accessed by decrypting the β-shards) from t of n computing systems (e.g., 601 and 602 shown in FIG. 6, 1010 shown in FIG. 10A), each associated with a different account manager. In some implementations, the cold storage system 120 receives the α-shards via a user input device (e.g., keyboard, scanner, camera, etc.) of the cold storage system 120. In some variations, the cold storage system provides the α-shards and the message to be signed to the secure signing system (S440 shown in FIG. 4). Alternatively, the α-ciphertext can be provided instead of (or in addition to) the α-shards. In some variations, the cold storage system 120 provides the α-shards and the message to be signed to the secure signing system in response to receiving t of the n α-shards for the public key (e.g., from the account managers, from the upload client system 130, etc.).
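The two-layer relationship between β-shards and α-shards described above can be sketched in a few lines. This is a toy illustration: the SHA-256 counter-mode keystream stands in for whatever symmetric algorithms the real system uses, and all key and shard values are hypothetical, not taken from the patent.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: SHA-256 in counter mode; encrypt == decrypt."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Hypothetical key material: the ceremony key is held by the secure signing
# system (e.g., inside the RuCK HSM); each account manager holds a β-key.
ceremony_key = b"ceremony-key"
manager_key = b"account-manager-beta-key"

shard = b"private-key-shard-1"
alpha_shard = keystream_xor(ceremony_key, shard)      # α-shard
beta_shard = keystream_xor(manager_key, alpha_shard)  # β-shard (at-rest form)

# An account manager decrypts a β-shard back into an α-shard; only the
# ceremony key (at the secure signing system) recovers the cleartext shard.
assert keystream_xor(manager_key, beta_shard) == alpha_shard
assert keystream_xor(ceremony_key, alpha_shard) == shard
```

The point of the layering is that the cold storage system and account managers can collectively produce α-shards without ever seeing a cleartext shard, since the final decryption key never leaves the secure signing system.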
In some implementations, the cold storage system 120 provides the α-shards and the message to be signed to the secure signing system by generating a data bundle (e.g., 816 shown in FIG. 8) that includes the α-shards (e.g., 819 shown in FIG. 8) and the message to be signed (e.g., 820 shown in FIG. 8), and providing the data bundle to the secure signing system. In some implementations, the cold storage system 120 encrypts the data bundle with a public key of the secure signing module (e.g., the public key of the RuCK HSM 112), and signs the data bundle with a private key of the cold storage system. In some variations, the cold storage system 120 securely transports the data bundle to the secure signing system 110. In a first implementation, the data bundle is securely transported by using a data diode (e.g., via a data diode 198). In a first example, a computer-readable storage medium (e.g., a DVD-ROM, an immutable storage device, etc.) functions as a data diode, and the cold storage system 120 stores the data bundle on the computer-readable storage medium. In one example, the method can include downloading the data bundle to a local computing system to store the data bundle on the computer-readable storage medium. In this example, the identity of the computing system and/or user downloading the data bundle can be verified prior to downloading (e.g., by verifying that the requesting machine has a whitelisted IP address or other identifier, by using out-of-band authentication, such as two-factor authentication, etc.). In variants, the computer-readable storage medium is a removable storage medium that can be coupled to a first computing system to receive and store data provided by the first computing system, decoupled from the first computing system, and then coupled to a second computing system to provide the stored data to the second computing system.
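The bundle-and-sign flow above can be illustrated with a minimal sketch. In the real system the bundle is encrypted to the RuCK HSM's public key and signed with the cold storage system's private key; to stay standard-library-only, this toy uses a shared HMAC key in place of the public-key signature, and the field names are assumptions.

```python
import hashlib
import hmac
import json

def make_bundle(alpha_shards, message: bytes, auth_key: bytes) -> bytes:
    """Package α-shards and the message, and append an authentication tag."""
    payload = json.dumps({
        "alpha_shards": [s.hex() for s in alpha_shards],
        "message": message.hex(),
    }).encode()
    tag = hmac.new(auth_key, payload, hashlib.sha256).hexdigest()
    return json.dumps({"payload": payload.decode(), "sig": tag}).encode()

def open_bundle(bundle: bytes, auth_key: bytes):
    """Verify the tag before use; an invalid bundle aborts the signing run."""
    outer = json.loads(bundle)
    payload = outer["payload"].encode()
    expect = hmac.new(auth_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(outer["sig"], expect):
        raise ValueError("bundle signature verification failed; abort signing")
    inner = json.loads(payload)
    return ([bytes.fromhex(s) for s in inner["alpha_shards"]],
            bytes.fromhex(inner["message"]))

key = b"cold-storage-auth-key"  # hypothetical; stands in for key pairs
bundle = make_bundle([b"\x01\x02", b"\x03\x04"], b"tx-to-sign", key)
shards, msg = open_bundle(bundle, key)
assert shards == [b"\x01\x02", b"\x03\x04"] and msg == b"tx-to-sign"
```

The verify-or-abort behavior in `open_bundle` mirrors the requirement, stated below, that the message signing process aborts when signature verification fails.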
In some variations, providing the data bundle to the secure signing system includes decoupling the computer-readable storage medium from the cold storage system 120, physically transporting the computer-readable storage medium (e.g., 815 shown in FIG. 8) to the offline secure storage processing facility 101, and communicatively coupling the storage medium with the secure signing system 110 (e.g., by inserting the computer-readable storage medium into a drive bay of a computer implementing the system 110, by coupling the storage medium to a bus, etc.). In variants, restoring the message signing private key S230 functions to reassemble the private key (e.g., 822 shown in FIG. 8) by using the α-shards (e.g., 819 shown in FIG. 8). In some variations, the secure signing system 110 restores the message signing private key at S230. Variants of restoring the message signing private key are shown in FIGS. 5 and 7. As shown in FIG. 5, restoring the message signing private key can include: accessing the ceremony key S510; combining the message signing key shards (α-shards) to generate ciphertext S520; and decrypting the ciphertext using the ceremony key, to obtain the message signing private key (S530). As shown in FIG. 7, restoring the message signing private key can optionally include one or more of: loading a secure operating system S710; loading a data bundle S720; verifying a signature of the data bundle S721; decrypting the data bundle S740; and activating a RuCK HSM S730. Activating the RuCK HSM S730 can include one or more of: requesting at least one passphrase shard S731; reassembling a RuCK HSM passphrase from received passphrase shards S732; and unlocking the RuCK HSM using the assembled passphrase S733. However, the message signing private key can be otherwise restored. The secure signing system 110 can be loaded with a secure operating system (e.g., 810 shown in FIG. 8) (S710 shown in FIG. 7).
In variants, the secure operating system 810 is an operating system that performs operations in RAM 890 and does not store information or artifacts on a non-transitory storage medium. The secure signing module 111 is preferably loaded at the secure signing system 110 (e.g., in RAM 890). The data bundle (e.g., 816 shown in FIG. 8) is preferably loaded at the secure signing system 110 from the computer-readable storage medium (e.g., 815 shown in FIG. 8) (e.g., into RAM 890 shown in FIG. 8) (S720 shown in FIG. 7). In some implementations, the computer-readable storage medium is a DVD-ROM that stores the data bundle, and loading the data bundle includes loading the DVD-ROM in a DVD reader that is communicatively coupled with (or included in) the secure signing system 110. The signature of the data bundle 816 is preferably verified as a valid signature for the cold storage system 120 (S721 shown in FIG. 7). In some variations, the secure signing system 110 verifies the signature of the data bundle 816 by using a public key of the cold storage system 120 that is used for verifying signatures of the cold storage system 120. In some implementations, if signature verification fails, the message signing process aborts. In some variations, activating the RuCK HSM 112 (S730) includes reassembling a passphrase required to unlock the RuCK HSM 112 for use (S732 shown in FIG. 7). In some implementations, the secure signing module 111 reassembles the passphrase. However, in other implementations, the passphrase can be reassembled in a secure manner by other components included in the offline secure processing facility 101. In some variations, at least two passphrase shards (e.g., 833, 834 shown in FIG. 8) are accessed (e.g., at S731 shown in FIG. 7) and used to reassemble the RuCK HSM passphrase. For example, the RuCK HSM passphrase can be sharded into n shards (e.g., by using a Shamir's Secret Sharing process), such that t of n shards (or another proportion of another number of shards) are required to reassemble the passphrase.
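The t-of-n sharding named above (Shamir's Secret Sharing) can be shown concretely with a short standard-library sketch. The field prime, the 2-of-3 split, and the sample passphrase are illustrative assumptions; a production system would use a vetted library and HSM-resident secrets rather than this toy.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for a short passphrase

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reassemble(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

passphrase = int.from_bytes(b"hsm-pass", "big")  # hypothetical passphrase
shares = split(passphrase, t=2, n=3)
assert reassemble(shares[:2]) == passphrase          # any two shares suffice
assert reassemble([shares[0], shares[2]]) == passphrase
```

A single share reveals nothing about the passphrase, which is why distributing the shares across separate passphrase HSMs, as described here, forces a quorum of operators to cooperate before the RuCK HSM can be unlocked.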
In some implementations, each RuCK HSM passphrase shard is securely stored by a respective HSM (e.g., 115, 116 shown in FIG. 1). In some implementations, the secure signing module 111 accesses the passphrase shards (e.g., at S731 shown in FIG. 7). However, in some implementations, any of the shards can be accessed in a secure manner by other components included in the offline secure processing facility 101. In some implementations, accessing the passphrase shards for the RuCK HSM passphrase includes unlocking at least t passphrase HSMs (e.g., 115, 116) storing RuCK HSM passphrase shards. In some variations, the passphrase HSMs storing the passphrase shards are unlocked by using user input received via a user interface (e.g., input keys included in the passphrase HSM, a user interface provided by a computing system communicatively coupled to the passphrase HSM via a bus, etc.). In some implementations, unlocking the passphrase HSMs storing the passphrase shards includes communicatively coupling the passphrase HSMs (e.g., 115, 116) storing RuCK HSM passphrase shards to the secure signing system 110 (e.g., via a bus, a network, etc.), and unlocking each of the t HSMs by using user input received via a user interface of the secure signing system 110. In some implementations, unlocking the passphrase HSMs storing the passphrase shards includes communicatively coupling the passphrase HSMs (e.g., 115, 116) storing RuCK HSM passphrase shards to a computing device (different from the secure signing system 110) (e.g., via a bus, a network device, etc.), and unlocking each of the t HSMs by using user input received via a user interface of the computing device. In some implementations, the secure signing module 111 accesses the passphrase shards (e.g., 833, 834).
In some implementations, the passphrase HSMs (e.g., 115, 116) are communicatively coupled to the secure signing system 110 (e.g., via a bus 801, a network device 811, etc.), and the secure signing module 111 accesses the passphrase shards from the unlocked HSMs. In some implementations, the secure signing module 111 accesses the passphrase shards from at least one of a user interface, a network device (e.g., 811), and a bus (e.g., 801) of the secure signing system 110. In some implementations, the secure signing module 111 reassembles the passphrase using the accessed shards (e.g., 833, 834) (S732 shown in FIG. 7). In some implementations, the secure signing module 111 uses a Shamir's Secret Sharing process to reassemble the RuCK HSM passphrase from the shards. In variants, reassembling the passphrase shards produces a ciphertext version of the RuCK HSM passphrase, and the ciphertext version of the passphrase is decrypted to produce the cleartext passphrase that can be used to unlock the RuCK HSM 112. In variants, the librarian HSM (e.g., 113 shown in FIG. 1) (or other transiently or permanently connected storage) stores the key needed to decrypt the ciphertext version of the RuCK HSM passphrase. However, the RuCK HSM can be secured in any suitable manner, and later accessed (e.g., by performing a decryption process, etc.) in any suitable manner. In some implementations, the secure signing module 111 uses the RuCK HSM passphrase to unlock the RuCK HSM 112 (S733 shown in FIG. 7). Accessing the ceremony key S510 functions to access the ceremony key (e.g., 817 shown in FIG. 8), which is used to decrypt the α-ciphertext generated by reconstructing the α-shards. The ceremony key can be accessed by the secure signing module 111, by a user connected to the secure signing module 111, by the cold storage system 120, and/or by another system. In a first variation, the ceremony key (α-key) is retrieved from the unlocked RuCK HSM 112.
In a second variation, the ceremony key is itself encrypted, and is decrypted using a decryption key, wherein the decryption key can be provided by a user, reconstructed from shards, or otherwise obtained. In a third variation, the ceremony key is accessed from the RuCK HSM via one of a bus (e.g., 801) and a network device (e.g., 811). However, the ceremony key can be otherwise obtained. Decrypting the data bundle received from the cold storage system 120 (S740 shown in FIG. 7) can be performed by the secure signing module 111. In some variations, the secure signing module 111 accesses (from the unlocked RuCK HSM 112) a private encryption key that is used for decrypting messages encrypted by using the public key of the RuCK HSM 112. In some variations, the secure signing module 111 uses the private encryption key of the RuCK HSM (e.g., the ceremony key, another RuCK key) to decrypt the data bundle 816. However, any other suitable decryption key can be used. In variants, after the α-shards are accessed by the secure signing module 111, the secure signing module 111 (of the secure signing system 110) combines the α-shards (e.g., 819 shown in FIG. 8) included in the decrypted data bundle (e.g., 816) (S520 shown in FIG. 5). In some variations, the secure signing module 111 combines the α-shards by performing a Shamir's Secret Sharing process. In variants, the result of combining the α-shards is an encrypted version of the message signing private key (e.g., the α-ciphertext generated at S320 shown in FIG. 3), and the encrypted version of the message signing private key is decrypted (at S530). In variants, the α-ciphertext is decrypted by using the ceremony key accessed at S510, to recover the message signing private key generated at S310. The α-ciphertext can be decrypted by the RuCK HSM, by the secure signing module, or by any other suitable system.
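The restore-and-sign sequence described above (combine α-shards into α-ciphertext, decrypt with the ceremony key, then sign) can be condensed into a short sketch. To stay standard-library-only, a 2-of-2 XOR split stands in for the Shamir recombination and HMAC-SHA256 stands in for the real signature algorithm; every key value here is hypothetical.

```python
import hashlib
import hmac
import secrets

ceremony_key = hashlib.sha256(b"ceremony").digest()        # 32 bytes, toy value
signing_key = b"message-signing-private-key-0001"          # 32 bytes, toy value

# Ceremony-time setup: encrypt the private key, then split the ciphertext.
alpha_ciphertext = bytes(a ^ b for a, b in zip(signing_key, ceremony_key))
share1 = secrets.token_bytes(len(alpha_ciphertext))
share2 = bytes(a ^ b for a, b in zip(alpha_ciphertext, share1))

# Restore: combining the shares reproduces the α-ciphertext ...
combined = bytes(a ^ b for a, b in zip(share1, share2))
assert combined == alpha_ciphertext
# ... and decrypting with the ceremony key recovers the private key.
restored = bytes(a ^ b for a, b in zip(combined, ceremony_key))
assert restored == signing_key

# Sign the message with the restored key, then drop the key from memory.
message = b"transfer 1 BTC"
signature = hmac.new(restored, message, hashlib.sha256).hexdigest()
del restored
```

The final `del` reflects the practice, noted below, of discarding the restored private key and intermediate material once the message has been signed.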
Once the message signing private key has been restored (at S230), the message received at S220 can be signed (at S240 shown in FIG. 2), to generate a signed message. The message can be signed using the restored message signing private key. The message can be signed by the secure signing module 111, the RuCK HSM, and/or by any other suitable system. Digital versions of the message signing private key, the reconstructed ceremony key, the α-shards, and/or any other suitable data generated prior to signing (at S240) can be discarded or otherwise managed after message signing or after respective data use. A transaction log or signing log associated with the RuCK HSM, the secure signing module 111, and/or the secure signing system 110 can optionally be updated based on message signing (e.g., with signing parameters, such as the public key, a message identifier, the requesting client, the signing timestamp, etc.). After message signing, the signed message (818 shown in FIG. 8) is provided to the cold storage system 120 (S250). However, in variants, the signed message can be provided to any suitable system. In some variations, the secure signing module 111 generates a signed message data bundle. In some variations, the secure signing module 111 generates the signed message data bundle by encrypting the signed message with a public key of the cold storage system 120, signing the encrypted and signed message by using a private key of the RuCK HSM 112, and including the encrypted signed message and the RuCK HSM's signature in the signed message data bundle. Alternatively, the signed message can be unencrypted, unprotected, or otherwise protected. In some variations, the secure signing module 111 includes an HSM transaction log of the RuCK HSM in the signed message data bundle.
In some variations, the secure signing module 111 includes a signing transaction log in the signed message data bundle, wherein the signing transaction log identifies message signing transactions performed by the secure signing module 111. In some variations, the signed message data bundle is provided to the cold storage system 120. In some variations, the signed message data bundle is provided to the cold storage system 120 via a data diode (e.g., 199 shown in FIG. 1). In some variations, the signed message data bundle is stored on a removable storage medium (e.g., a DVD-ROM, an HSM, etc.) and the storage medium is transported to the cold storage system 120. In variants, the computer-readable storage medium storing the signed message data bundle is a removable storage medium. In some variations, providing the signed message data bundle to the cold storage system 120 includes decoupling the computer-readable storage medium from the secure signing system 110, physically transporting the computer-readable storage medium to the cold storage system 120, and communicatively coupling the storage medium with the cold storage system 120 (e.g., by inserting the computer-readable storage medium into a drive bay of a computer implementing the system 120, by coupling the storage medium to a bus, etc.). In some variations, the storage medium includes a signing transaction log, wherein the signing transaction log identifies message signing transactions performed by the secure signing module 111. In some variations, the cold storage system 120 accesses the signed message data bundle, and verifies that the signature is a valid signature of the RuCK HSM and/or secure signing module. In some implementations, if the signature is not validated, the signed message data bundle is discarded. In some variations, the cold storage system 120 decrypts the ciphertext version of the signed message that is included in the signed message data bundle (e.g., by using a private key of the cold storage system 120).
In some variations, the cold storage system 120 provides the decrypted (cleartext) version of the signed message to the cold storage client system 140, as a response to the signing request. The signed message can then be provided to the requesting system, sent to the respective blockchain, or otherwise used. In some variations, several signing requests can be received and processed in a batch operation by the secure signing system 110. In some implementations, the cold storage system 120 can generate a data bundle that includes several messages to be signed (either for a same private key, or for different private keys), and provide the data bundle to the secure signing system 110. The secure signing system can decrypt the data bundle, which includes several messages to be signed, α-shards for each message, and optionally an identifier that identifies a public key associated with each message. The α-shards can be combined, the combined α-ciphertext can be decrypted, and the resulting private key can be used to sign each message. In some implementations, the ceremony key used to decrypt α-ciphertext for different private keys is different, and the secure signing module 111 uses a public key identified for the message to select the appropriate ceremony key. In some implementations, a single RuCK HSM can store each ceremony key in association with a public key. In other implementations, one or more RuCK HSMs are used to store the ceremony keys, and RuCK HSMs are associated with the public key that matches the ceremony key stored by the RuCK HSM.
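The batch variant, in which key material is selected per message by its associated public key, reduces to a lookup loop. The sketch below uses hypothetical data structures and, as before, HMAC stands in for the real signature algorithm; in the actual system the per-key lookup would select a ceremony key inside a RuCK HSM rather than a dictionary entry.

```python
import hashlib
import hmac

# Hypothetical batch bundle: each entry carries a message and the public key
# identifying which restored private key should sign it.
bundle = [
    {"public_key": "pk-A", "message": b"tx-1"},
    {"public_key": "pk-B", "message": b"tx-2"},
]

# Hypothetical per-public-key restored key material (post S520/S530).
restored_private_keys = {"pk-A": b"key-A", "pk-B": b"key-B"}

signed = []
for entry in bundle:
    key = restored_private_keys[entry["public_key"]]  # select key by public key
    # Toy signature; the real system signs with the restored private key.
    signed.append(hmac.new(key, entry["message"], hashlib.sha256).hexdigest())

assert len(signed) == len(bundle)
```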
Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
DETAILED DESCRIPTION Applicant's issued U.S. Pat. No. 8,027,335, entitled MULTIMEDIA ACCESS DEVICE AND SYSTEM EMPLOYING THE SAME and incorporated by reference herein, describes a media access device that facilitates communication between users employing disparate communication devices and messaging protocols associated with different service providers, that is located within the customer premises, and that allows for onsite or remote configuration. This solution is a highly specific implementation that provides advanced telephony, particularly specific services associated with Voice over IP communications in an Instant Messaging infrastructure. While comprehensive, this system does not address the servicing and management of other digital endpoint devices in the home. Furthermore, this prior art solution requires manual intervention to initiate, configure, and maintain many of the call service, IM, and other service related features for users, which can be burdensome for the technically challenged. A significant demand exists for simplifying the management and back-up services of the digital home or even the small enterprise in a way that takes away the complexity of the maintenance, upgrading, and operation of even the more basic needs addressed by these emerging digital endpoint devices and networks, e.g., access management (e.g., parental controls), etc. Home “gateway” and like router/gateway appliances are currently available for the home and small business that allow several computers to communicate with one another and to share a broadband Internet connection. These devices function as routers by matching local network addresses and the hostnames of the local computers with the actual networking hardware detected. As gateways, these devices translate local network addresses to those used by the Internet for outgoing communications and do the opposite translation for incoming packets. For example, U.S. Pat. No.
6,930,598 is representative of a home gateway server appliance enabling networked electronic devices to communicate with each other without the direct interaction with external networks, and that provides a mechanism whereby a member of the household may be informed of certain network related events without having to use their home computer or other client devices. It would be highly desirable to provide a multi-services application gateway device that provides not only IP-based communication and voice services, but also services management capability associated with use of digital home devices, and that obviates the need for users to attend to the provisioning, management, configuration, and maintenance of the emerging home/business digital networks, including the myriad of interconnected digital endpoint devices connected thereto. The present invention is directed to a novel gateway appliance that is programmed to simplify various aspects of managing the emerging home/business digital networks, including the myriad of interconnected digital endpoint devices. The novel gateway appliance is further programmed to simplify support services in the digital home, including: media delivery, content management, access control and use tracking, file sharing, and protection and back-up services of both Internet/Web-generated digital media content and user-generated digital media content. The appliance of the present invention further operates in conjunction with a service delivery platform that provides IP-based connectivity to digital devices in the home, e.g., VoIP phones, the personal computer, personal music players, and the like, and emphasizes ease of use and management of these digital devices for the technically challenged. The novel gateway appliance is further programmed to simplify home automation operations, e.g., lights, garage doors, and particularly, facilitating remote access to and management of home automation devices.
More particularly, the home or business appliance of the present invention operates in conjunction with a novel network operations framework and network service center that supports the managed services and all of the manageable capabilities of the home/business. For instance, the appliance and supporting network service center architecture provides for distributing configuration and data information to residential home gateways; provides updates to residential home gateways; enables inbound services to in-home gateways; provides remote web access to residential home gateways (including login via a control channel); provides off-premise voice extensions for a residential home gateway; provides remote diagnostics and home network management; collects billing records, alarms, and statistical information from residential home gateways; updates and manages endpoints in the digital home; and enables remote control of “smart devices” in the home. For the in-home services, the multi-services gateway appliance connects the various service delivery elements together for enabling the user to experience a connected digital home, where information from one source (for example, voicemail) can be viewed and acted on at another point (for example, the TV). The multi-services gateway appliance hosts the various in-home device interfaces and facilitates the moving of information from one point to another.
Some of the in-home endpoint device processing duties performed by the appliance 10 include, but are not limited to: 1) detecting new devices and providing IP addresses dynamically or statically; 2) functioning as a Network Address Translator (NAT), router, and firewall; 3) providing centralized disk storage in the home; 4) obtaining configuration files from the network and configuring all in-home devices; 5) acting as a registrar for SIP-based devices; 6) receiving calls from and delivering calls to voice devices, and providing voicemail services; 7) decrypting and securely streaming DRM'd media; 8) distributing media to an appropriate in-home device; 9) compressing and encrypting files for network back-up; 10) backing-up files to the network directly from the appliance; 11) handling home automation schedules and changes in status; 12) providing in-home personal web-based portals for each user; 13) providing parental control services (e.g., URL filtering, etc.); 14) creating and transmitting billing records of in-home devices, including recording and uploading multi-service billing event records; 15) distributing a PC client to PCs in the home used in support of the various services, such as monitoring events or diagnostic agents; 16) storing and presenting games that users and buddies can play; 17) delivering context-sensitive advertising to the end point device; 18) delivering notifications to the endpoint device; and 19) enabling remote access through the web, IM client, etc.
Other duties the gateway appliance 10 may perform include: service maintenance features such as setting and reporting of alarms and statistics for aggregation; performing accessibility testing; notifying a registration server (and location server) of the ports it is “listening” on; using IM or like peer and presence communications protocol information for call processing and file sharing services; receiving provisioning information via the registration server; using a SIP directory server to make/receive calls via the SBC network element to/from the PSTN and other gateway appliance devices; and downloading DRM and non-DRM based content and facilitating the DRM key exchanges with media endpoints. With reference to FIG. 1A, the present invention is a next-generation multi-services residential gateway appliance 10, also referred to herein as “the appliance”, that can be used at the home or business (“premises”) and that is programmed to simplify various aspects of managing the emerging home/business digital networks, including the myriad of interconnected digital endpoint devices. With processing, storage, and network connectivity to a novel network operations support infrastructure 50, the gateway appliance is capable of extending the service provider's network demarcation point into the subscriber's home, offering powerful capabilities at the customer's residence. By leveraging powerful processing and intelligence residing on the gateway appliance at the customer's premises, the solution for the premises provided by the gateway appliance addresses requirements such as shared Internet connection, remote diagnostics and installation help, integrated VoIP support, connected entertainment offerings, and bundled services via a single platform.
Besides providing a secure platform for building and providing multiple services for digital clients at the premises, the appliance, in combination with a novel network operations support infrastructure 50, additionally provides a communications and instant messaging-type framework including presence and networking capability for enabling file-sharing capabilities amongst a community of peers, friends, or family. As part of the presence and networking capability offered, connectivity is required between the appliance 10 and a network operations support infrastructure or service center (SC) 50, described in further detail herein below, that is particularly enabled to support the next-generation multi-service applications gateway appliance provided at the premises, which assumes many of the functions that are typically network-based. As shown in FIG. 1A, secure connectivity to the service center 50 is provided, in one embodiment, via a wide area network (WAN) interface such as an Ethernet WAN 53 over a broadband connection via the public Internet 99, or, for example, via a wireless EvDO (Evolution Data Optimized) Internet data interface embodied as a PCMCIA (Personal Computer Memory Card International Association) wireless card 56. As will be described in greater detail hereinbelow, the service center 50 generally provides a secure IP-based communications and processing infrastructure for supporting the variety of services and applications and communications residing at multiple gateway devices 101, . . . , 10n. This support architecture is designed for high availability, redundancy, and cost-effective scaling. The secure platform for building and providing multiple services for digital clients at the premises assumes connectivity between the appliance 10 and each of a user's digital devices (referred to interchangeably herein as “digital endpoints” or “digital endpoint devices”).
This connectivity may be provided by implementation of one or more USB ports (interfaces) 13, a wired local area network connection such as provided by an Ethernet local area network (LAN) interface 16, or a wireless network interface via a WiFi LAN access point 62 provided, for example, in accordance with the IEEE 802.11b/g/n wireless network communications standard. These physical interfaces provide IP network interconnectivity to the endpoint devices connected to a local IP network 60 at the premises. That is, the gateway appliance interfaces with digital endpoint devices including, but not limited to: a home automation networking device 20 (e.g., X10, Z-Wave or ZigBee) for wired or wireless home network automation and control of networked home devices such as a switch controller 22, sensor devices 23, automatically controlled window blinds 24, a controlled lighting or lamp unit 25, etc.; individual or a wired or wireless network of personal computing (PC) and laptop/mobile devices 30a, . . . , 30c that serve as file sources, control points, and hosts for various other client endpoints; one or more television display devices 32 including associated set top boxes (STB) 35a or digital media adapters (DMA) 35b; and one or more VoIP phone devices (e.g., SIP phones) 40, or other devices (not shown) that convert IP interfaces to PSTN FXO and FXS interfaces. Although not shown in FIG. 1A, other digital endpoint devices for which connectivity may be established with the appliance 10 include, but are not limited to: personal music or media players, hi-fi audio equipment with media streaming capability, game stations, Internet radio devices, Wi-Fi phones, Wi-Fi or other wirelessly enabled digital cameras, facsimile machines, electronic picture frames, health monitors (sensor and monitoring devices), etc.
As will be described in greater detail herein, the gateway appliance includes both a hardware and software infrastructure that enables a bridging of the WAN and LAN networks, e.g., a proxy function, such that control of any digital endpoint device at the premises from the same or remote location is possible via the gateway appliance10using a secure peer and presence type messaging infrastructure or other communications protocols, e.g., HTTPS. For example, via any IM-capable device or client80a,80brespectively connected with an Instant Messaging (IM) or XMPP (Extensible Messaging and Presence Protocol) network messaging infrastructure, e.g., IM networks99a,99bsuch as provided by Yahoo, Microsoft (MSN), Skype, America Online, ICQ, and the like, shown for purposes of illustration inFIG.1A, a user may access any type of functionality at a subordinate digital end point device at the premises via the appliance10and service center50by simple use of peer and presence messaging protocols. In one exemplary embodiment, a peer and presence communications protocol may be used such as Jabber and/or XMPP. Particularly, Jabber is a set of streaming XML protocols and technologies that enable any two entities on the Internet to exchange messages, presence, and other structured information in close to real time. The Internet Engineering Task Force (IETF) has formalized the core XML streaming protocols as an approved instant messaging and presence technology under the name of XMPP (Extensible Messaging and Presence Protocol), the XMPP specifications of which are incorporated by reference herein as IETF RFC 3920 and RFC 3921. Thus, the gateway appliance of the present invention is provided with functionality for enabling a user to remotely tap into and initiate functionality of a digital endpoint device or application at the premises via the IM-based messaging framework. 
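The remote-control path described above rides on standard XMPP message stanzas (RFC 3920/3921). A minimal sketch of what such a stanza might look like, built with the Python standard library, is shown below; the JIDs and the command payload are hypothetical, and a real deployment would carry this over an authenticated, encrypted XML stream rather than constructing it in isolation.

```python
import xml.etree.ElementTree as ET

# Hypothetical JIDs: a user's IM client addressing their home gateway appliance.
sender = "user@imservice.example/mobile"
gateway = "gateway42@servicecenter.example/home"

# Build an XMPP <message> stanza carrying a device-control command,
# e.g., switching on the controlled lamp unit 25.
msg = ET.Element("message", {"from": sender, "to": gateway, "type": "chat"})
body = ET.SubElement(msg, "body")
body.text = "lamp25 on"

stanza = ET.tostring(msg, encoding="unicode")
assert "<body>lamp25 on</body>" in stanza
```

On receipt, the gateway's messaging client would parse the body (or a structured payload) and dispatch the command to the home automation networking device 20, which is what lets an ordinary IM client act as a remote control for premises devices.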
In addition, the appliance10and network connectivity to the novel service center50, in accordance with the invention, provide a secure peer and presence messaging framework, enabling real-time communications among peers via other gateway appliances101, . . . ,10n. For instance, the appliance provides the ability to construct communication paths between peers with formal communications exchanges available between, for example, one appliance at a first premises and a second appliance located at a remote premises. Thus, such an infrastructure provides for content addressing, enabling peers through remote gateway appliances101, . . . ,10nto supply and request content such as files, media content or other resources of interest to a community of interest. Besides handling all aspects of the digital home communications, e.g., IP, voice, VoIP, phone connectivity, the gateway appliance10, when operable with the service center50, provides a service-oriented architecture that manages services for the digital home and facilitates the easy addition of new services or modification of existing services. Such services may include, for example, facility management (home automation), media content downloading and Digital Rights Management (DRM), device updates, data backups, file sharing, media downloading and transmission, etc., without the intermediary of a plurality of external service providers who may typically provide these individual services for every digital endpoint device in the home or premises. That is, the appliance is integrated with hardware and software modules and respective interfaces that handle all aspects of home automation and digital endpoint service and management for the home in a manner without having to rely on external service providers and in a manner that is essentially seamless to the user.
This, advantageously, is provided by the service center50which is enabled to access regions of the gateway device10that are not accessible to the user, e.g., for controlling the transport and storing of digital content and downloading and enabling service applications and upgrades and providing largely invisible support for many tasks performed by users. Thus, central to the invention, as will be described in greater detail herein below, is the provision of service logic located and stored at the appliance10providing soft-switch functionality for providing call-processing features at the premises (rather than the network) for voice communications and enabling management of other service features to be described. With the provision of central office type call services and other service features provided at the appliances101, . . . ,10n, a distributed soft-switch architecture is built. While transactions occur with cooperation of the service center50to provide, for example, service subscription/registration, authentication/verification, key management, and billing aspects of service provision, etc., and with all of the service logic and intelligence residing at the appliance, a service provider can offer customers a broad spectrum of services including, but not limited to: media services, voice services, e.g., VoIP, automated file backup services, file sharing, digital photo management and sharing, gaming, parental controls, home networking, and other features and functions within the home or premises (e.g., home monitoring and control). Users can access their content and many of the solution's features remotely. Moreover, software updates for the in-home devices that require updating are handled in an automated fashion by the system infrastructure. The service center infrastructure additionally provides a web interface for third (3rd) party service providers to round out the service solutions provided at the appliance for the premises.
Gateway Appliance Software and Hardware Architecture
The composition of the premises gateway appliance10according to the present invention is now described in greater detail with reference toFIGS.2A-2C. As shown inFIG.2A, the gateway appliance10comprises a layered architecture100enabling the encapsulation of similar functionality, the minimization of dependencies between functions in different layers, and the reuse or sharing of logic across the layers to provide a managed service framework120. The service management functionality provided by the framework enables deployment of new services as pluggable modules comprising computer readable instructions, data structures, program modules, objects, and other configuration data in a plug and play fashion. The layered service architecture100additionally provides the appliance with intra process communication and inter process communication amongst the many services and modules in the service framework layer that enables the provisioning, management, and execution of many applications and services130depicted, e.g., Services A, B, . . . , N at the gateway. Additionally provided are the application service interfaces140that enable communication from user endpoint devices with service environments.FIG.2Athus depicts a high level service framework upon which are built services, e.g., downloaded via the support network as packages that are developed and offered by a service entity for customers. More particularly, as shown inFIG.2B, a base support layer102comprises essential hardware components including a processor device152, e.g., a system on chip central processing unit (“CPU”) that includes processing elements, digital signal processor resources, and memory. The CPU152is also coupled to a random access memory (“RAM”) and additionally, non-volatile hard drive/disk magnetic and/or optical disk memory storage154.
Generally, the above-identified computer readable media provide non-volatile storage of computer readable instructions, data structures, program modules, objects, and other data for use by the gateway device. As mentioned, the non-volatile hard drive/disk magnetic and/or optical disk memory storage154is preferably partitioned into a network side which is the repository for storing all of the service logic and data associated with executing services subscribed to by the user and is invisible to the user, and a user partition for storing user generated content and applications in which the user has visibility. Although not shown, the CPU152may be coupled to a microcontroller for controlling a display device. Additional hardware components include one or more Ethernet LAN/WAN interface cards156(e.g., 802.11, T1, T3, 56 kb, X.25, DSL or xDSL) which may include broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet over SONET, etc.), wireless connections, or some combination of any or all of the above, one or more USB interfaces158, and the PCMCIA EvDO interface card160. A data encryption/decryption unit162is additionally provided as part of the architecture for providing data security features. A watchdog timer element or like timer reset element164is provided as is one or more LED devices166for indicating status and other usable information to users of the appliance. As further shown inFIG.2B, the device drivers layer104comprises all of the device drivers for the various interfaces including a device driver for the USB interface, PCMCIA and Ethernet interface cards, a LED controller, and an integrated device electronics (“IDE”) controller for the hard disk drive device provided. Additionally, as shown as part of the hardware and device driver components is the WiFi LAN access point62and corresponding 802.11b/g/n wireless device driver. 
As mentioned above, the gateway appliance provides an in-premises footprint enabling the service connectivity and local management to client(s). The implementation of functions and the related control such as a router (with quality of service (QoS)), firewall, VoIP gateway, voice services, and voice mail may be embodied and performed within the CPU152. Continuing, as shown inFIG.2B, the device driver layer104comprises a multitude of driver interfaces including but not limited to: a PCMCIA driver104afor enabling low level communication between the gateway and the PCMCIA network interface card wireless interface, an IDE driver104bfor enabling low level communication between the gateway and the local mass memory storage element, an Ethernet driver104cfor enabling low level communication between the gateway and the Ethernet network interface card, a LED driver/controller104d, a USB driver104e, and a wireless network driver104f. The drivers provide the connectivity between the low level hardware devices and the operating system106which controls the execution of computer programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services for the appliance. With respect to the operating system, the appliance may comprise a computing device supporting any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or even any operating systems for mobile computing devices as long as the operational needs of the client discussed hereinbelow can be met. Exemplary operating systems that may be employed include Windows®, Macintosh, Linux or UNIX or even an embedded Linux operating system. 
For instance, the gateway appliance may be advantageously provided with an embedded operating system106that provides operating system functions such as multiple threads, first-in first-out or round robin scheduling, semaphores, mutexes, condition variables, message queues, etc. Built upon the system operating system106is a system services support layer providing both client-like and server-like functions108that enable a wide range of functionality for the types of services capable of being managed by the gateway appliance. For instance, there are provided Dynamic Host Configuration Protocol (DHCP) client and server software modules. The DHCP client particularly requests, via a UDP/IP (User Datagram Protocol/Internet Protocol (e.g., IPv4, IPv6, etc.)) configured connection, information such as the IP address that the gateway appliance has been dynamically assigned by a DHCP service (not shown), and/or any subnet mask information the gateway appliance should be using. The DHCP server dynamically assigns or allocates network IP addresses to subordinate client endpoints on a leased, i.e., timed, basis. Further provided are: a Virtual Private Network (VPN) client which may communicate via a proxy server in the service control network according to a VPN protocol or some other tunneling or encapsulation protocol; an SMTP client for handling incoming/outgoing email over TCP in accordance with the Simple Mail Transfer Protocol; a Network Time Protocol (NTP) (RFC 1305) module for generating and correlating timestamps for network events and generally providing time synchronization and distribution for the Internet; a Domain Name Server (DNS) client and server combination which are used by the IP stack to resolve fully-qualified host or symbolic names, i.e., mapping host names to IP addresses; and an HTTP(S) server for handling secure Hypertext Transfer Protocol (HTTP) (Secure Sockets Layer) communications for providing a set of rules for exchanges between a browser client and a server over TCP.
It provides for the transfer of information such as hypertext and hypermedia and for the recognition of file types. HTTP provides stateless transactions between the client and server. Also provided are: a Secure File Transfer Protocol (SFTP) client and server combination, which protocols govern file transfer over TCP; a SAMBA server which is an open source program providing Common Internet File Services (CIFS) including, but not limited to, file and print services, authentication and authorization, name resolution, and service announcement (browsing); an EvDO/PPP driver including a Point-to-Point Protocol (PPP) daemon configuration; and a PPPoE (Point-to-Point Protocol over Ethernet) client which combines the Point-to-Point Protocol (PPP), commonly used in dialup connections, with the Ethernet protocol, and which supports and provides authentication and management of multiple broadband subscribers in a local area network without any special support required from either the telephone company or an Internet service provider (ISP). This device is thus adapted for connecting multiple computer users on an Ethernet local area network to a remote site through the gateway and can be used to enable all users of an office or home to share a common Digital Subscriber Line (DSL), cable modem, or wireless connection to the Internet. Further provided is a Secure Shell or SSH server implemented with the HTTP protocol that provides network protocol functionality adapted for establishing a secure channel between a local and a remote computer, encrypting traffic between secure devices by using public-key cryptography to authenticate the remote computer and (optionally) to allow the remote computer to authenticate the user.
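Among the system services above, the DHCP server's timed-lease address allocation can be sketched as follows. This is a minimal toy model: the pool range, reserved low addresses, and one-hour lease period are illustrative assumptions, not values taken from the specification.

```python
import ipaddress
import time

class DhcpLeasePool:
    """Toy model of the gateway's DHCP server: hands out addresses from a
    pool on a timed lease, as described above. The pool and lease length
    are illustrative assumptions."""
    def __init__(self, network="192.168.1.0/24", lease_seconds=3600):
        hosts = list(ipaddress.ip_network(network).hosts())
        self._free = hosts[10:]           # reserve low addresses (gateway, static devices)
        self._leases = {}                 # MAC address -> (ip, lease expiry time)
        self._lease_seconds = lease_seconds

    def request(self, mac, now=None):
        """Assign an address to a client, renewing an existing lease if one is held."""
        now = time.time() if now is None else now
        ip, expiry = self._leases.get(mac, (None, 0))
        if ip is None:                    # new client: allocate from the free pool
            ip = self._free.pop(0)
        self._leases[mac] = (ip, now + self._lease_seconds)
        return str(ip)

pool = DhcpLeasePool()
ip1 = pool.request("00:11:22:33:44:55")
ip2 = pool.request("00:11:22:33:44:55")   # renewal keeps the same address
print(ip1, ip1 == ip2)
```

A production DHCP server would additionally reclaim expired leases and speak the actual DHCP wire protocol (RFC 2131); the sketch shows only the leased, i.e., timed, allocation policy.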
Additionally, provided as part of the system services layer108is intelligent routing capability provided by an intelligent router device185that provides Quality of Service (QoS, guaranteed bandwidth) intelligent routing services, for example, by enforcing routing protocol rules and supporting unlimited multiple input sources and unlimited multiple destinations, particularly, for routing communications to networked digital endpoint devices subordinate to the gateway; and a central database server183for handling all of the database aspects of the system, particularly, for maintaining and updating registries and status of connected digital endpoint devices, maintaining and updating service configuration data, services specific data (e.g., indexes of backed-up files, other service specific indexes, metadata related to media services, etc.) and firmware configurations for the devices, and for storing billing and transaction detail records, performance diagnostics, and all other database storage needs as will be described in greater detail herein. Referring back toFIGS.2A and2B, built on top of the system services layer108is the platform management layer110providing a software framework for operating system and communications level platform functionality such as CPU management; Timer management; memory management functions; a firewall; a web wall for providing seamless WWW access over visual displays via access technologies enumerated herein, e.g., HTTP, SMS (Short Messaging Service), and WAP (Wireless Access Protocol); QoS management features, Bandwidth management features, and hard disk drive management features. 
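The registry role of the central database server183described above can be illustrated with a small sketch. The table schema, device identifiers, and status values are hypothetical examples chosen for illustration; the specification does not prescribe a particular database layout.

```python
import sqlite3

# Minimal sketch of the device registry maintained by the central database
# server: identifiers, schema, and status values are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE device_registry (
    device_id TEXT PRIMARY KEY,
    kind      TEXT,
    status    TEXT,
    firmware  TEXT)""")

def register_device(device_id, kind, firmware):
    # A newly detected endpoint is recorded (or its record refreshed) as online.
    db.execute("INSERT OR REPLACE INTO device_registry VALUES (?, ?, 'online', ?)",
               (device_id, kind, firmware))

def set_status(device_id, status):
    # Status updates keep the registry current as endpoints come and go.
    db.execute("UPDATE device_registry SET status = ? WHERE device_id = ?",
               (status, device_id))

register_device("stb-35a", "set-top-box", "1.0.2")
register_device("lamp-25", "lighting", "0.9")
set_status("lamp-25", "offline")
rows = db.execute(
    "SELECT device_id, status FROM device_registry ORDER BY device_id").fetchall()
print(rows)
```

The same store would also hold service configuration data, backup indexes, and billing detail records, as enumerated above.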
Further provided are platform management features110as shown inFIG.2Cthat include a platform manager module which will implement unique rules based notification services on operational failure, i.e., when one of the components or services fails, the platform manager would detect this failure and take appropriate action such as implementing a sequence of rules; a scheduler module for managing scheduled device maintenance, managing scheduled services, e.g., back-up services, etc.; a diagnostics module; a firmware upgrades management module for managing firmware upgrades; a resource management module for managing system resources and digital contention amongst the various resources, e.g., CPU/Bandwidth utilization, etc.; and a display management module and a logger management module for storing and tracking gateway login activity of users and applications, e.g., voice call logs, at the premises. As will be explained in greater detail, the platform management layer in concert with resource and service management components enforces the separation of network side managed service control and user side delegations depending upon service subscriptions and configurations. For example, the platform and resource management encompass rules and guidelines provided according to subscribed services that act to enforce, manage and control input/output operations, and use of hard drive space, etc. A demarcation point is thus defined that provides a hard line between what is owned by the customer and what is owned by the service provider.
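The platform manager's rules-based response to an operational failure can be sketched as a per-component rule sequence. The component names and the rule actions below are hypothetical examples; the specification states only that a detected failure triggers an appropriate sequence of rules.

```python
# Sketch of rules-based failure handling: on a detected failure, the
# platform manager walks a configured sequence of rules. The component
# names and actions here are illustrative assumptions.
FAILURE_RULES = {
    "voip-service": ["log", "restart-service", "notify-service-center"],
    "hard-drive":   ["log", "remount-read-only", "raise-alarm"],
}
DEFAULT_RULES = ["log", "raise-alarm"]

def on_component_failure(component, actions_taken):
    """Apply the configured rule sequence for a failed component."""
    for rule in FAILURE_RULES.get(component, DEFAULT_RULES):
        # A real platform manager would execute each action; here we record it.
        actions_taken.append((component, rule))

taken = []
on_component_failure("voip-service", taken)
print(taken)
```

Binding rule sequences to components in a table keeps the failure policy configurable without changing the manager itself, which matches the pluggable-module theme of the architecture.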
Referring back toFIGS.2A and2C, built on top of the platform management layer110is the Services Framework120providing a library of application support service processes that facilitate data collection and data distribution to and from the multimedia access devices including, but not limited to: authentication management for use in authenticating devices connected to the gateway; billing management for collecting and formatting service records and service usage by endpoint devices, e.g., calls, back-up services etc.; fault management for detecting and managing determined system and/or service faults that are monitored and used for performance monitoring and diagnostics; database management; a control channel interface via which the gateway initiates secure communications with the operations support infrastructure; configuration management for tracking and maintaining device configuration; user management; service management for managing service configuration and firmware versions for subscribed services provided at the gateway appliance; and statistics management for collecting and formatting features associated with the gateway appliance. Statistics may relate to use of one or more services and associated time-stamped events that are tracked. Referring back toFIGS.2A and2C, built on top of the Services Framework layer120is the Application Services Framework130providing a library of user applications and services and application support threads including, but not limited to: file sharing functionality; backup services functionality; home storage functionality; network device management functionality; photo editing functionality; home automation functionality; media services functionality; call processing functionality; voice mail and interactive voice response functionality; presence and networking functionality; parental control functionality; and intelligent ads management functionality.
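The billing and statistics management functions above collect time-stamped service-usage events for upload to the service center. A minimal sketch of such a record follows; the field names and the JSON formatting are illustrative assumptions, since the specification does not define a record layout.

```python
import json
import time

def billing_record(device_id, service, event, ts=None):
    """Format one time-stamped service-usage event.

    The field names are illustrative; the specification does not
    prescribe a billing record layout.
    """
    return {
        "ts": ts if ts is not None else time.time(),
        "device": device_id,
        "service": service,
        "event": event,
    }

# Two events bracketing a voice call on a SIP phone; records like these
# would be collected locally and uploaded to the service center for rating.
records = [
    billing_record("sip-phone-40", "voice", "call-start", ts=1_700_000_000),
    billing_record("sip-phone-40", "voice", "call-end",   ts=1_700_000_180),
]
print(json.dumps(records, indent=2))
```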
The multi-services applications gateway10further provides application service interfaces140that are used to enable a variety of user applications and communications modalities. For instance, the SIP Interface141is an interface to the generic transactional model defined by the Session Initiation Protocol (SIP) that provides a standard for initiating, modifying or terminating interactive user sessions that involve one or more multimedia elements that can include voice, video, instant messaging, online games, etc., by providing access to dialog functionality from the transaction interface. For instance, a SIP signaling interface enables connection to a SIP network that is served by a SIP directory server via a Session Border Controller element in the service center50(FIG.1A). The Web Interface142enables HTTP interactions (requests and responses) between two applications. The IM Interface144is a client that enables the multi-services gateway appliance connection to one or more specific IM network(s). The XMPP interface145is provided to implement the protocol for streaming (XML) elements via the appliance in order to exchange messages and presence information in close to real time, e.g., between two gateway devices.
The core features of XMPP provide the building blocks for many types of near-real-time applications, which may be layered as an application service on top of the base TCP/IP transport protocol layers by sending application-specific data qualified by particular XML namespaces and, more particularly, provide the basic functionality expected of an instant messaging (IM) and presence application that enables users to perform the following functions including, but not limited to: 1) Exchange messages with other users; 2) Exchange presence information with other devices; 3) Manage subscriptions to and from other users; 4) Manage items in a contact list (in XMPP this is called a “roster”); and 5) Block communications to or from specific other users by assigning and enforcing privileges to communicate and send or share content amongst users (buddies) and other devices. As further shown inFIG.2C, the UPnP (Universal Plug and Play)147interface enables connectivity to other stand-alone devices and PCs from many different vendors. The Web services interface149provides the access interface and manages authentication as multi-services gateway appliances access the service center50(FIG.1A) via web services.
Gateway Device Boot Sequence and Initialization
FIG.2Ddepicts a boot sequence and initialization process170for the gateway appliance10ofFIG.1A. In response to turning on or resetting the appliance at step172, the circuit board supporting the CPU chip element is initialized at step174, and the board comprising the device drivers, serial console, Ethernet connection, WiFi connection, LED connection, USB connection, IDE hard drive connection, and the watchdog timer are all initialized at step176. Then, the operating system (OS) layer, including the base kernel and base OS modules, is initialized to provide basic services as indicated at step178. Then, the basic OS services are initialized at step179including the Sshd, Ftpd, DNS, DHCP, and UPnP services.
Then, at step180, customized kernel initialization is performed including the kernel modules such as Intelligent Routing, the QoS, the firewall, the web wall, encryption and watch dog timer control at step182, and the kernel threads such as CPU management, memory management, hard disk drive management, and time management threads at step184. Then, at step186, the platform manager component of the appliance is initialized, which starts system services and service managers for all services; the system services are then initialized at step190, system management (e.g., Resource manager, Firmware Upgrade manager, Diagnostics manager and the Scheduler manager components, etc.) as shown at step192, and service management (e.g., Service Manager, the network operations support center control channel client, the Billing manager, Stats manager, and Alarms manager components, etc.) as shown at step194that provide the services framework upon which the plurality of applications and services are built as shown inFIG.2A. Finally, at step196, the various subscribed-to application services are initialized including, for example, but not limited to: the PBX server, Media server, file manager server, home automation server, presence and networking server, parental control services, and advertisement services, etc. The architecture such as shown inFIGS.2A-2Dprovides a controlled managed services model for an IM messaging infrastructure.
Demarcation
As shown inFIG.3, in one aspect of the invention, the gateway appliance includes functionality for combining the storage available from its own internal and attached hard drive(s)154with any Network Attached Storage (NAS) device158on the network to create a single virtual file system that consumers can use like a single drive. The gateway will automatically detect, mount, and manage the connections to the NAS devices and add them to its own file system.
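The single virtual file system described above can be sketched as a mount table that maps user-visible paths onto whichever backing device holds them. The device names and mount points below are illustrative; real NAS discovery and mounting (e.g., over CIFS) is omitted.

```python
# Minimal sketch of the single virtual file system: each detected storage
# device is mounted under one namespace. Device names, mount points, and
# the resolution scheme are illustrative assumptions.
class VirtualFileSystem:
    def __init__(self):
        self._mounts = {}                      # path prefix -> backing device

    def mount(self, prefix, device):
        self._mounts[prefix] = device

    def resolve(self, path):
        """Map a user-visible path to (device, device-local path)."""
        # Check the longest (most specific) mount prefix first.
        for prefix, device in sorted(self._mounts.items(),
                                     key=lambda kv: len(kv[0]), reverse=True):
            if path.startswith(prefix):
                return device, path[len(prefix.rstrip("/")):] or "/"
        raise FileNotFoundError(path)

vfs = VirtualFileSystem()
vfs.mount("/", "internal-hdd-154")
vfs.mount("/nas", "nas-device-158")
print(vfs.resolve("/nas/photos/cat.jpg"))   # -> ('nas-device-158', '/photos/cat.jpg')
print(vfs.resolve("/music/a.mp3"))          # -> ('internal-hdd-154', '/music/a.mp3')
```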
Users of the gateway are thus presented with a single consolidated storage device that they can access just like another drive on their PC. Users will not be exposed to the underlying protocols and management features required to provide such a feature. Users will no longer have to use each of the storage devices separately. However, as further shown inFIG.3, a virtual demarcation155is enforced at the centralized disc storage device154of the gateway appliance, e.g., which may comprise one or more physical hard drives, to physically and logically isolate the storage partition156where service logic and associated data for implementing services from service provider and/or downloaded media content are stored, and another partition157where user generated data, e.g., user files, is stored. The partition156belongs to the service center50that is limited to receiving logic and intelligence for the appliance and backed-up user files, all of which is managed by the service control network and enforced locally at the gateway; the other partition157is storage that is user accessible and includes a user accessible graphic user interface which may be accessed by a digital endpoint device, e.g., a PC, programmed to enable visibility if granted to the user. Thus, the user is enabled to skew the demarcation point depending upon the amount of control granted or authorized to the user according to subscribed features and service configurations. This separation within the gateway appliance10is an enabler for delivery of the service logic that resides on the appliance on the network side of the virtual demarcation. That is, the service provider offers all of its services upstream of this demarcation point and the customer can choose which service is selected that is within the control of the service provider's network. 
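The access rule enforced at the virtual demarcation can be sketched as a simple path-visibility check: user-partition paths are always visible, while network-side paths are visible only where the service configuration grants it. The partition paths and grant mechanism below are hypothetical examples of the local enforcement described here.

```python
# Sketch of the virtual demarcation: paths on the network-side partition
# (service logic, DRM content) are invisible to the user unless visibility
# is granted. Partition paths and the grant scheme are illustrative.
NETWORK_PARTITION = "/storage/network/"   # service provider side (partition 156)
USER_PARTITION = "/storage/user/"         # user-generated content (partition 157)

def user_may_access(path, granted_prefixes=()):
    if path.startswith(USER_PARTITION):
        return True
    if path.startswith(NETWORK_PARTITION):
        # The demarcation can be "skewed": the service configuration may
        # grant the user visibility into specific network-side directories,
        # e.g., after a movie purchase is authenticated.
        return any(path.startswith(p) for p in granted_prefixes)
    return False

print(user_may_access("/storage/user/photos/cat.jpg"))                 # True
print(user_may_access("/storage/network/media/movie1.mp4"))            # False
print(user_may_access("/storage/network/media/movie1.mp4",
                      granted_prefixes=("/storage/network/media/",)))  # True
```

In the gateway itself this check would be backed by the encryption and directory obfuscation techniques mentioned below, not by path comparison alone.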
While the service center50is responsible for placement of service modules and data beyond the demarcation point, the appliance10is equipped with certain functional elements such as encryption techniques, local directory obfuscation techniques, and local enforcement to prevent user visibility beyond the demarcation point that belongs to the service provider unless the user is enabled with such visibility. The intelligence and service logic that is on the appliance according to the invention is managed by the service center and provides the logic to limit user access. FIG.3illustrates the virtual demarcation point within the gateway appliance located on the customer premises that occurs somewhere within the device, allowing the customer and service provider to skew the physical location of the demarcation. The demarcation within this device can occur on a physical storage medium, e.g., hard disk drive154as shown inFIG.2B, that has been sectored for different users, or in a virtual memory location, e.g., locations155a,155bor155c, based on the service levels being offered, e.g., service A, service B or service C, respectively. This approach allows the customer more flexibility in manipulating the services rendered and services offered by the provider. By allowing the demarcation closer to the customer, this allows more control of features from the customer and allows the service provider closer control of the customer infrastructure without owning it all. Thus, with this device in place, the new demarcation moves based on the service. For an example of demarcation control, if some data is required to be stored, e.g., a downloaded movie, the customer can store it locally, securely locally, or securely remotely. While it is the customer's responsibility to do storage locally and securely locally, with the new virtual demarcation, the service of providing locally secure data is now part of an offering of the service provider.
While the data is still on site, the data is under control of the service provider and follows service agreements for that storage of data. As another example of demarcation control, two movies may be downloaded and stored at the service center's partitioned side beyond the demarcation point as requested by a user via a user interface through a device connected to the appliance. This user interface, enabled via the user partition of the gateway appliance, is accessed through a PC, a TV, or a cell phone. After authentication, the user could select and prioritize movies to purchase, for example, in compliance with the media content service provider. The choice of interfaces and amount of visibility by endpoint devices accessing this user interface has been optimally designed from a contention standpoint from the perspective of controls, security, network service control manageability, and cost. In response, the selected movie(s) are downloaded to the service center's side156of the partition as shown inFIG.3. Unless and until the user has purchased the movies for playback via an authentication process, that user will be prevented from accessing the content. Otherwise, the user may initiate streaming of the content directly to a digital endpoint device, e.g., a television, or will be granted permissions to download and play the movie according to the subscription with the media content provider as managed by the gateway device. If the user has purchased the movie, the movies may be transferred physically to the user storage portion of the partition. Otherwise, the content may be temporarily copied for local storage by the user at the user accessible portion of the demarcation point for playback at the user endpoint device. Another example of demarcation control is the manipulation of features for a given service.
Currently, a subscription order is processed, and these features are manipulated within the service provider's network and sent down to the customer for provisional changes to equipment at the service center's side of the demarcation point. Via a GUI established for the endpoint device when connected with the gateway, when authenticated, files may be unlocked so the customer may locally manipulate services before and after the demarcation point, thereby virtually shifting the demarcation point. Thus, a virtual demarcation point allows service providers flexibility in offering different services and features. Example services include, but are not limited to, services such as: parental control, advertisement monitoring and replacement, home user habit monitoring, home channel monitoring, and back-up services.
Gateway Processing
For the in-home services, the multi-services gateway appliance connects the various service delivery elements together for enabling the user to experience a connected digital home, where information from one source (for example, voicemail) can be viewed and acted on at another point (for example, the TV). The multi-services gateway appliance10thus hosts the various in-home device interfaces and facilitates the moving of information from one point to another.
Some of the in-home endpoint device processing duties performed by the appliance10include, but are not limited to: 1) detecting new devices and providing IP addresses dynamically or statically; 2) functioning as a Network Address Translator (NAT), router, and firewall; 3) providing centralized disk storage in the home; 4) obtaining configuration files from the network and configuring all in-home devices; 5) acting as a Registrar for SIP-based devices; 6) receiving calls from and delivering calls to voice devices and providing voicemail services; 7) decrypting and securely streaming DRM'd media; 8) distributing media to an appropriate in-home device; 9) compressing and encrypting files for network back-up; 10) backing up files to the network directly from the appliance; 11) handling home automation schedules and changes in status; 12) providing in-home personal web-based portals for each user; 13) providing Parental Control Services (e.g., URL filtering, etc.); 14) creating and transmitting billing records of in-home devices including recording and uploading multi-service billing event records; 15) distributing a PC client to PCs in the home used in support of the various services such as monitoring events or diagnostic agents; 16) storing and presenting games that users and buddies can play; 17) delivering context-sensitive advertising to the endpoint device; 18) delivering notifications to the endpoint device; and 19) enabling remote access through the web, IM client, etc.
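The URL-filtering parental control named in duty 13 above can be sketched as a per-user policy check applied to each outbound request. The user names and the blocked-domain table are hypothetical examples; the specification names the feature without prescribing a policy format.

```python
from urllib.parse import urlparse

# Sketch of per-user URL filtering (Parental Control Services, duty 13).
# The policy table and user names are illustrative assumptions.
POLICY = {
    "child": {"blocked_domains": {"example-games.com", "example-chat.net"}},
    "adult": {"blocked_domains": set()},
}

def allow_request(user, url):
    """Return True if this user's policy permits fetching the URL."""
    host = urlparse(url).hostname or ""
    # Unknown users fall back to the most restrictive policy.
    blocked = POLICY.get(user, POLICY["child"])["blocked_domains"]
    # Block the listed domain and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in blocked)

print(allow_request("child", "http://example-games.com/play"))   # False
print(allow_request("child", "http://news.example.org/"))        # True
print(allow_request("adult", "http://example-games.com/play"))   # True
```

In the appliance this check would sit in the routing/firewall path, so the policy applies to every device the filtered user operates.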
Other duties the gateway appliance10may perform include: service maintenance features such as setting and reporting of alarms and statistics for aggregation; performing accessibility testing; notifying a registration server (and location server) of the ports it is “listening” on; using IM or like peer and presence communications protocol information for call processing and file sharing services; receiving provisioning information via the registration server; using a SIP directory server to make/receive calls via the SBC network element to/from the PSTN and other gateway appliance devices; and downloading DRM and non-DRM based content and facilitating the DRM key exchanges with media endpoints.
Gateway Appliance Interfaces
As mentioned, in one embodiment, the gateway appliance behaves as a DHCP (Dynamic Host Configuration Protocol) server managing and automating the assignment of Internet Protocol (IP) addresses in a premise (home) network and may be installed in the premise (home) network behind the access modem such as DSL (digital subscriber line)/cable/DOCSIS (Data Over Cable Service Interface Specification).FIGS.1B(1) and1B(2) illustrate the gateway appliance connection to the in-premise devices in different embodiments. InFIG.1B(1), a gateway appliance124connects to a broadband modem122for access to the WAN and acts as a replacement to a router in a network, connecting to various endpoint devices132. In another embodiment,FIG.1B(2) shows a gateway appliance129acting as a LAN connection on an existing router128. The existing router128functions as a bridge and the gateway appliance129behaves as the router connecting to various endpoint devices134. In this embodiment, the WAN connection from and to the appliance129is via the existing router128acting as a bridge.
In support of the gateway primary processing for handling all aspects of the digital home as described herein with respect to FIGS. 1A-1B(2), the gateway appliance provides the interfaces to the following in-home devices: 1) an interface to the Digital Media Adapter (DMA) 35b for television (TV) enabling bidirectional wireline or wireless communication. This interface supports several functions for multiple services including, but not limited to: media (video and music), by enabling the transfer of media (video and music) to the TV via a peer and presence messaging protocol; voice services, by providing for Called Line ID and for voice mail control; and Home Automation Services, including obtaining status and control of networked home automation devices; 2) a bidirectional wireline or wireless interface to a PC device for supporting the transfer of media (video and music) to the computer for storage and viewing; for supporting voice services, e.g., by providing for calls from SIP soft clients; for file sharing via a peer and presence messaging protocol notification, file back-up, and home storage functions, this interface will provide for the bidirectional moving of files; and for Home Automation Services, it will provide status and control of networked home automation devices; 3) a unidirectional wireline or wireless Media Streamer interface for enabling the sending of audio content to a Media Streamer, which in turn will provide the audio to a receiver/amplifier of a Home Sound System (stereo or digital multi-channel); 4) a unidirectional wireline or wireless Internet Radio interface that provides for sending of audio content to an Internet Radio; 5) a unidirectional wireline or wireless interface to a Portable Media Player (PMP) that provides for sending audio content to a PMP; 6) a bidirectional Phone Adapter/PSTN Gateway (PAPG) interface that provides for configuring and registering of the PAPG with the gateway appliance via exemplary Session Initiation
Protocol (SIP), FTP, and HTTP over Ethernet protocols, and provides for sending and receiving of calls to/from the PAPG; 7) a SIP Phone interface that is similar to the PAPG interface; and 8) a bidirectional wireless or wireline Home Automation Controller interface that provides for updating the controller of existing devices, changing device states (for example, "light on"), and relaying device status from the endpoint device to the gateway appliance via the controller. The PAPG is a SIP-to-PSTN adapter having an Ethernet port on one side and an FXS (foreign exchange station) port and an FXO (foreign exchange office) port on the other. A user can thus plug a phone into the FXS port and can plug a telephone line from the central office into the other. With respect to the media adapter element 35b shown in FIG. 1A, this device converts audio and video (optionally) to a format suitable for a TV (HDMI/DVI/HDCP/Component/RCA). In addition, the media adapter 35b is capable of receiving context-sensitive commands from a remote control device (not shown). This enables the use of menus on the TV for controlling services for various functions. The media adapter element 35b may be physically combined with the gateway 10 and/or the media adapter element 35b may be physically combined with set top box functions 35a. Thus, the media adapter/TV combination is enabled to provide the following features including, but not limited to: display of media; media control functions, when enabled (FF, REW, STOP, PAUSE, etc.); display of CLID (Caller ID); control of voicemail; picture viewing; control of home automation; and some user functions for the gateway appliance. With respect to the Set Top Box 35a as shown in FIG. 1A, this device handles rendering of media content suitable for television, digital decryption and other DRM functions, Video on Demand purchases, etc.
The Set Top Box/TV combination thus enables: media format conversion (for example, NTSC to ATSC); decryption; other DRM functions (such as expiry of leases, prohibition of copying to digital outputs, function restriction, etc.); Video on Demand purchases; and media control functions (e.g., FF, REW, STOP, PAUSE, etc.). With respect to PCs interfacing with the gateway appliance, these devices serve as file sources, control points, and hosts for various clients. That is, computers will provide: a source/target for files to be shared, backed up, or transferred to home storage; access to a personal web page with notifications, RSS, shared photos, voicemail, etc.; browser control for the home administrator and user; and a host for IM and SIP softphone clients and other client devices. Further, with respect to PAPGs (incl. Integrated Access Devices) and SIP phones, the PAPGs and SIP phones serve as SIP endpoints. Additionally, the IADs connect to FXS phones or faxes, and the PAPGs contain IAD functions and also connect to the local PSTN via an FXO port. Thus, the following functions are provided: SIP user agent signaling; voice and video capabilities (SIP phones); FXS signaling (IAD and PAPG); FXO signaling to the PSTN (PAPG); and telephony feature support.
Logical Architecture and Support Network Infrastructure
While the gateway appliances as described above are each equipped with various logic and intelligence for service features that enable the gateway appliances to provide various integrated digital services to the premise, as described herein with respect to FIG. 1A, the network elements 50, referred to also as the support center (SC), support and manage the multi-services gateway appliances, for instance, so as to control the accessibility to functionalities and service features provisioned in the gateway appliances and the ability to communicate with other gateway appliances and the various digital endpoint devices connected thereto.
Examples of various functionalities performed by the network support infrastructure include, but are not limited to: service initialization of the gateway appliances, providing security for the gateway appliances and the network support infrastructure, enabling real-time secure access and control to and from the gateway appliances, distributing updates and new service options to the gateway appliances, and providing service access to and from the gateway appliances and remote access to the gateway devices. In support of these services, the service center provides the following additional services and features: authentication; multi-service registration; subscription control; service authorization; alarm management; remote diagnostic support; billing collection and management; web services access; remote access to gateway appliances (e.g., via SIP or Internet/web based communications); reachability to access-challenged gateway appliances; software updates; service data distribution; location service for all services; SIP VoIP service; media services; backup services; sharing services; provisioning; gateway interfaces to other service providers (Northbound and peering); load balancing; privacy; security; and network protection. The logical network architecture for the support network infrastructure delivering these capabilities is illustrated in FIG. 4. It should be understood that the functional components described in view of FIG. 4 may be combined and need not be running on discrete platforms or servers. Rather, one server or component may provide all the functionalities for providing a managed network of gateway appliances. In addition, any one of the components shown in FIG. 4 may perform any one of the functionalities described herein.
Thus, the description in the present disclosure associating certain functions with certain components is provided for ease of explanation only, and the description is not meant to limit the functionalities as being performed by those components only. Thus, the network elements or components 50 shown in FIG. 4 illustrate a logical architecture only, and the present invention does not require the specific components shown to perform the specific functionalities described. Moreover, the functional components may use distributed processing to achieve high availability and redundancy capacity. The one or more network elements 50 illustrated in FIG. 4 support the gateway appliances 10, which are service points of presence in premises such as the home, and the endpoint devices connected thereto. Examples of functionalities provided in the support network 50 may include, but are not limited to, the following. Upgrades to gateway appliance firmware and various endpoint devices may be managed in the support network 50, for example, by a firmware update manager server 51. VOD (video on demand) functionalities, for example, serviced by VOD servers 52, ingest wholesale multi-media content and provide DRM-based premium content to the multi-services gateway appliance and endpoint devices. The support network 50 also may enforce DRM policies, for example, by a conditional access server 54, by providing key-based access and initiating billing processes. The support network 50 may also provide functionalities such as collecting billing information and processing billing events, which for instance may be handled by a billing aggregator sub-system 58. The support network 50, for example, using one or more connection manager servers 60, may establish and maintain a signaling control channel with each active multi-services gateway appliance.
Message routing functionality of the support network 50, for example, one or more message router devices 62, may provide intelligent message routing service for the network and maintain gateway device presence and registration status in an internal SC session manager sub-system. Publish and subscribe functionality of the support network 50, for example, a publish/subscribe (pub/sub) server sub-system 65, may provide publish and subscribe messaging services for the multi-services gateway appliances and the network elements. The support network 50 may provide SIP-based directory services for voice services, for example, by its SIP directory server 66. In addition, location service functionality, for example, provided by the location server 68, may include IP and port level services for all inbound services. DNS services functionality may be provided by a DNS server 69 for all inbound services. The support network 50 may also provide virtual private network functionalities, for example, handled by its VPN server/subsystem 70, and provide VPN connection services for certain inbound services on multi-services gateway appliances. VPN connection services may be provided on those multi-services gateway appliances that have accessibility challenges, for example, those that are behind external firewalls and NATs. The support network 50 may also include functionality for determining the nature of the accessibility configuration for the multi-services gateway appliances. For example, the accessibility test determines whether the appliances are behind a firewall, whether NAT traversal is required, etc. In one embodiment, accessibility service may be performed by an accessibility server 72 that functions in cooperation with the multi-services gateway appliance to determine the nature of the accessibility. The support network 50 also functions to provide provisioning services to all SC network elements 50 and multi-services gateway appliances 10.
Such functionality of the support network 50, for example, may be implemented by the provisioning server 74 in one embodiment. Authentication functionality of the support network 50, for example, provided by an authentication server 71, provides authentication services to all SC network elements and multi-services gateway appliances. Subscription functionality of the support network 50, for example, provided by a subscription manager 73, provides subscription services to all multi-services gateway appliances. The support network 50 may include functionality for providing managing services for each of the services provided in the gateway appliance. For example, service managers 75 store and serve to distribute service-specific configuration data to the multi-services gateway appliances. Service access test functionality of the support network 50 performs tests to multi-services gateway appliances to verify the accessibility for each subscribed service. Such functionality may be provided by service access test managers 77. The support network 50, for example, in an alarm aggregator subsystem 82, may aggregate alarms received from the multi-services gateway appliances. The support network 50 also includes functionalities to support network management and network management services, for instance, by an alarms, diagnostics, and network management server 85. The support network 50 enables a web interface communication mechanism, for example, via a web services interface server 90, to, for example, provide an access interface and manage authentication as multi-services gateway appliances access the SC for various services. Additional SC network functionalities shown in FIG. 4 may include providing HTTP redirection services for public web access to the multi-services gateway appliances, which functions, for example, may be provided via a public web redirect server 91.
Public SIP redirect/proxy functionality provides, for instance, via a public SIP redirect/proxy server 92, SIP redirection and proxy services to public remote SIP phones and devices. The support network 50 also may include functionalities to provide a SIP-based network border interface and billing services for off-net voice calls. Such functionality in one embodiment may be provided in a Session Border Controller device 93a. Another functionality of the support network 50 may be providing SBC services to roaming SIP callers in certain situations, which functionality, for example, may be provided by a Roaming SBC device 93b. The support network 50 also functions to provide dynamic NAT services during certain SIP roaming scenarios. Such functionality may be implemented in the Roamer Dynamic NAT Server 94. The support network 50 further may provide off-site backup services for the SC network to a Wholesale Back-up Provider 96. The support network 50 further interoperates with a Wholesale VoIP Provider 97, which may provide VoIP call origination/termination services for off-net voice calls. For instance, as will be described in greater detail herein, the support network 50 may provide a VoIP/PSTN gateway that enables a translation between protocols inherent to the Internet (e.g., voice over Internet protocol) and protocols inherent to the PSTN. Other entities that may be partnered with the support network as shown in FIG. 4 include the content providers 98 that provide media-based content (including, but not limited to, music, video, and gaming) to the SC network 50, gateway interfaces 101 for billing, alarms/NWM, provisioning interfaces for partnered wholesale providers (e.g., peering interfaces), and service provider customers (e.g., Northbound interfaces).
Gateway and Service Network Initialization
FIGS. 5A-5C describe high-level aspects of an initialization technique 200 for establishing a gateway appliance's connection to and enabling communication with the support network service center, and further the provisioning, management, and maintenance of services. After power is applied to the appliance, a boot sequence is executed that loads the software modules of the gateway appliance at step 203. As shown in FIG. 5A, a gateway appliance device is fully enabled if a subscriber activation code and, optionally, the WAN configuration information are provisioned. Thus, optionally, at step 206, a determination is made as to whether the appliance's WAN configuration information is provided. If not, the process proceeds to step 207, where the system obtains from the user the gateway appliance's WAN configuration. At step 210, a determination is made as to whether the gateway appliance is fully enabled. If the gateway device is not fully enabled, the process is performed at step 213 to obtain an activation identifier (ID) from the user. It should be understood, however, that before full activation, minimal functionality could be provided. Once the appliance is fully enabled, at step 216, the process of initializing the router/firewall and establishing the WAN connection is initiated. In one embodiment, a Transport Layer Security (TLS) connection is established with the connection manager server functionality at the support network, and communication with the support network begins at step 218. This TLS connection in one embodiment is a signal channel that is always on for transacting various communications with the support network, for example, for the duration that the gateway appliance is powered on and providing its services and functionalities as the in-premise or in-home platform for endpoint devices associated with the premise.
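By way of a non-limiting illustration, the always-on TLS signal channel described above can be sketched with the standard library: open a TCP connection to the connection manager and wrap it in TLS with certificate verification. The hostname and port below are assumptions for the example only.

```python
import socket
import ssl

# The connection-manager address is an assumption for illustration.
CONNECTION_MANAGER = ("cm.support-network.example", 5222)

def open_signal_channel(addr=CONNECTION_MANAGER, timeout=10):
    """Establish the TCP/TLS signal channel; the caller keeps the
    returned socket open for as long as the appliance is powered on."""
    ctx = ssl.create_default_context()  # verifies the server certificate
    raw = socket.create_connection(addr, timeout=timeout)
    return ctx.wrap_socket(raw, server_hostname=addr[0])

# The security posture of the context can be inspected without
# reaching a live server:
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

In practice the appliance would loop over this function with back-off, re-opening the channel whenever the WAN connection drops.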
Continuing to step 220, the appliance then sends an authentication request including an authentication digest using a hardware identifier, an activation code, and a subscriber ID, and waits for an authentication response. At step 222, the process waits until the authentication notice or like response is received. If the authentication response is not received, the process terminates as shown at step 225. If the gateway appliance becomes authenticated, at step 228, the appliance requests the authentication keys from the SC and stores them at the appliance. These keys are used whenever an appliance has to be authenticated, e.g., when conducting a transaction or accessing the support center network, for example, through a web interface or a control signal channel. Continuing to step 230 in FIG. 5A, the gateway appliance sends a request to the subscription manager functionality or the like of the support network, and the appliance waits until it receives a response. The request from the gateway appliance includes, for example, the appliance identifier information. In response, the subscription manager functionality of the support network replies with the latest firmware version and configuration information for that gateway appliance, for example, information associated with one or more services currently subscribed to in that gateway appliance, the latest firmware information for the gateway, and the configuration for all subscribed services. There is also provided an indicator that identifies a change in user-specific service data for all of the subscribed services, if any. Continuing to FIG. 5B, at step 233, the gateway appliance determines whether its firmware versions are up to date by checking the received version numbers against the version numbers that currently exist in the appliance. If necessary, the gateway appliance receives the actual firmware/configuration data from the SC, for instance, through a web services interface over a secure HTTPS connection in one embodiment.
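The authentication request of step 220 combines the hardware identifier, activation code, and subscriber ID into a digest rather than sending the secrets in the clear. A minimal sketch follows; the HMAC-SHA-256 construction, the JSON framing, and the field names are assumptions for illustration, not details of the disclosure.

```python
import hashlib
import hmac
import json

def build_auth_request(hardware_id, activation_code, subscriber_id):
    """Sketch of the step-220 authentication request: the activation
    code keys an HMAC over the other identifiers (assumed scheme)."""
    digest = hmac.new(
        key=activation_code.encode(),
        msg=(hardware_id + subscriber_id).encode(),
        digestmod=hashlib.sha256,
    ).hexdigest()
    return json.dumps({"subscriber_id": subscriber_id, "digest": digest})

req = build_auth_request("00:11:22:33:44:55", "ACT-1234", "sub-42")
```

The authentication server, holding the same activation code for that subscriber, can recompute the digest and compare.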
At step 235, a determination is made as to whether the firmware configuration data 235a and user data 235b for each service, of up to N services to which the user may be subscribed, are up to date. For each service, if it is determined that the firmware configuration data 235a and user data 235b are not updated, the gateway appliance may receive such data from the support network, for example, over the HTTPS connection. Continuing to step 237, the appliance may apply the configuration/firmware updates immediately or schedule them for another time. A user may utilize a GUI to schedule the updates. If certain firmware needs to be updated right away, a prompt may be presented to the user to acknowledge and approve the updates. At step 240, a gateway appliance accessibility test is performed to determine if a VPN connection to the support network is needed. This may happen if the gateway appliance is behind a firewall or the like that protects the appliance from public access. The test, for example, may be optional. In one embodiment, this test is done on start-up and, for example, for cases when the appliance is disconnected from the WAN or a new IP address from the WAN is assigned. An accessibility testing functionality of the support network, for example, may send a connection request (such as STUN or ping) in order to try to reach the gateway appliance. Different port numbers on a given IP address may be tested for reachability to the gateway appliance. Continuing to step 245 in FIG. 5C, a determination is made as to whether accessibility has been challenged, e.g., whether the device lies behind a firewall at a private IP address. If accessibility has been challenged, then at step 248, a connection with a VPN is established. Step 250 represents the step of storing the WAN and VPN IP addresses to be used for inbound services. Examples of inbound services may include, but are not limited to, voice service, remote web access, etc.
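The port-probing part of the accessibility test described above can be sketched as follows: attempt a TCP connection to each candidate port on the appliance's public address and report which ones answer. An appliance behind a firewall or NAT answers on none, indicating that a VPN is needed. The TCP probe (rather than STUN) and the timeout value are simplifying assumptions.

```python
import socket

def reachable_ports(host, ports, timeout=0.5):
    """Sketch of the accessibility test: return the candidate ports on
    which the appliance accepts a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered, or timed out
    return open_ports

# Demonstration against a local listener standing in for the appliance:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
assert reachable_ports("127.0.0.1", [port]) == [port]
listener.close()
```

The support network would run such probes from outside the premise, which is what makes firewall and NAT effects visible.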
At step 253, the gateway appliance sends a message to the support network, which message, for example, is routed to the service manager and subscription manager functionalities of the support network. The message informs the service manager and subscription manager functionalities of the gateway appliance's current version and configuration information. Registering with those server functionalities may initiate a notify handler service that enables asynchronous configuration, firmware, and/or user data updates. At step 255, a general multi-purpose registration is performed, whereby a service register request message is sent from the service manager to a location server functionality of the support network. This request message tells the location server functionality that the gateway appliance is ready to accept inbound services on a given IP address and port number. Thus, the information may include the IP address (WAN/VPN) and/or other specific data for informing the location server functionality how to find the gateway appliance. In one embodiment, a clock on a gateway appliance may be set when the appliance re-registers with the support network.
Architectural Overview for Establishing Connections and Authentication Process
FIG. 6A is an architectural diagram illustrating a manner in which the multi-services gateway appliance makes an initial connection to the support network 50 in one embodiment of the invention. It is noted that the individual components shown in the support network 50 illustrate logical components or functionalities provided in the support network. As mentioned above, a signal channel in an exemplary embodiment is established between the gateway appliance and the support network during the appliance's initialization process, and in one embodiment this connection is maintained for the duration that the appliance is powered on and is providing its functionalities.
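The service register request of step 255 above, which tells the location server where inbound services can reach the appliance, might be framed as in the sketch below. The JSON framing and all field names are assumptions; the VPN address is included only when the accessibility test forced a VPN connection.

```python
import json

def service_register_request(appliance_id, wan_ip, vpn_ip, port):
    """Sketch of the step-255 registration message to the location
    server; field names are illustrative assumptions."""
    msg = {
        "type": "service-register",
        "appliance": appliance_id,
        "wan_ip": wan_ip,
        "port": port,
    }
    if vpn_ip:  # only present for accessibility-challenged appliances
        msg["vpn_ip"] = vpn_ip
    return json.dumps(msg)

req = service_register_request("gw-1", "203.0.113.5", None, 5060)
```

The location server would store this tuple and hand it out to any network element that needs to deliver an inbound service, such as a voice call, to the appliance.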
Thus, a connection is established between the gateway appliance and the connection manager server functionality 60 or the like in the support network, for example, to provide connection services prior to establishing a session state and authenticating the gateway appliance. As shown in FIG. 6A, a TCP/TLS connection 150 is made between the gateway appliance, using the appliance's broadband connection and the IP network, and the connection manager server functionality 60 of the services support network. The connection manager 60 or the like of the services support network 50 receives the session state of the network channel request, where control is implemented to initiate authentication. A message router functionality routes the requesting message to an authentication server 71 or the like, as shown in FIG. 6A. Prior to establishing any TCP/IP connection, an authentication is performed, as indicated at 145. In one embodiment, the connection manager 60 may aggregate a plurality of connection channels 150 and multiplex these signaling channels to a message router device 62. The connection manager 60 works with the message router 62 and the authentication server 71 to authenticate the multi-services gateway appliance and allow its access to the network by enabling the establishment of a control channel 150, providing an "always on" control channel between the multi-services gateway appliance and the services support center 50 once the gateway appliance is authenticated. The connection managers 60 or the like also provide network security and protection services, e.g., for preventing flooding, DOS attacks, etc. In one embodiment, there may be interfaces, such as APIs, for interfacing the connection managers 60 or the like to the message routers 62 and the multi-services gateway appliances 10. As the network of multi-services gateway appliances grows, the number of connection managers may grow to meet the demand for concurrent signaling control channel connections.
In one embodiment, message router device(s) 62 or the like provide control signal message routing services and session management services to the multi-services gateway appliance 10 and the other network elements of the support center 50. In one embodiment, the message router device 62 has control channel interfaces to the firmware upgrade manager server, VOD(s), billing system, content managers, pub/subs, service access test manager, authentication server, service manager, subscription manager, alarm aggregator, network manager, and public web proxy redirect, as well as to the multi-services gateway appliances. The message router 62 or the like may also include a session manager subsystem that maintains control channel state information about every multi-services gateway appliance client in the network. The message router 62 or the like and the session manager or the like enable sessions to be established to each multi-services gateway appliance 10 and each network element and provide robust routing services between all the components. The message routers 62 or the like may additionally connect to other message routers for geographic-based scaling, creating a single domain-based control channel routing infrastructure. The message routers 62 or the like may additionally connect to IM gateways and other message routers that provide user-based IM services, which may enable users to interact directly with their multi-services gateway appliance via IM user clients. Thus, besides providing routing and session management for all the multi-services gateway appliances and the network elements, the message router element 62 or the like enables control signaling between all the network elements and the multi-services gateway appliances and connects to IM gateways to provide connectivity to other IM federations. With respect to authentication functionality, an authentication component 71 provides authentication services for all the network elements of the SC.
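The session manager subsystem described above boils down to a mapping from each authenticated appliance to its live control channel. The following sketch shows that bookkeeping and routing step; the class and method names are assumptions for illustration.

```python
class SessionManager:
    """Sketch of the per-appliance control-channel state kept inside
    the message router; names are illustrative assumptions."""

    def __init__(self):
        self.sessions = {}  # appliance id -> channel object

    def register(self, appliance_id, channel):
        """Record the control channel for an authenticated appliance."""
        self.sessions[appliance_id] = channel

    def route(self, appliance_id, message):
        """Deliver a control message over the appliance's channel."""
        channel = self.sessions.get(appliance_id)
        if channel is None:
            raise KeyError(f"no control channel for {appliance_id}")
        channel.append(message)

mgr = SessionManager()
inbox = []                      # a list stands in for a real channel
mgr.register("gw-1", inbox)
mgr.route("gw-1", {"cmd": "firmware-check"})
assert inbox == [{"cmd": "firmware-check"}]
```

A network element such as the firmware upgrade manager would hand the router an appliance identifier and a message; the router resolves the channel from this table.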
The SC network elements query the authentication server to verify the identity of elements, including the multi-services gateway appliance, during inter-element communications. The multi-services gateway appliances may indirectly utilize the authentication server to verify the identity of the network elements. The interacting network elements may return data that the multi-services gateway appliance uses to confirm the network element's identity. The authentication server functionality 71 may interface to the multi-services gateway appliances and other network elements, such as the message router or the like and session manager servers or the like, the accessibility server or the like, the service accessibility test managers or the like, the web services interface or the like, the provisioning server or the like, the NWM or the like, pub/sub or the like, VODs, CAs, and the billing system or the like.
Signaling Control Channel
As mentioned herein with respect to FIG. 6A, the connection manager servers 60 or like functionality in the support network provide connection services and enable the establishment of a control channel, providing an "always on" control channel between the gateway appliance and the service support center functions of the support network. Thus, in one embodiment, a gateway appliance establishes a TCP/TLS connection to the connection manager functionality in the support network, as shown at 150.
Authentication
Once the gateway appliance is physically connected to the network, it registers and authenticates itself on the support network. In one embodiment, this registration and authentication is done through the above-established secure connection. In one embodiment, data link layer security may be established by implementing, for example, Simple Authentication and Security Layer (SASL). The SASL framework provides authentication and data security services in connection-oriented protocols via replaceable mechanisms (IETF RFC 2222).
This framework particularly provides a structured interface between protocols and mechanisms, allows new protocols to reuse existing mechanisms, and allows old protocols to make use of new mechanisms. The framework also provides a protocol for securing subsequent protocol exchanges within a data security layer. After establishing the TCP/TLS connection between the home appliance and the support network (e.g., the connection manager server or the like), the SASL authentication process is initiated, whereupon the gateway communicates authentication details to the connection manager server or the like. The connection manager server or the like of the support network routes the authentication request to the authentication server or the like via the intermediary of the control message router device or the like and session manager servers or the like. Once at the authentication server or the like, the gateway appliance is authenticated, e.g., by processing the authentication details at the authentication server or the like. Additionally, control access information is communicated to a location server or the like, which may provide location information updates to, for example, other network functionalities or elements such as a file sharing server, remote web access server, and other elements. Once a secure connection (e.g., an XMPP connection) is established at step 326, an authenticated session state between the home appliance and the support network is ensured, and messages can safely flow to and from the gateway appliance. In one embodiment, authentication credentials may include: a user ID, a subscriber ID, and a unique identifier (ID) that is hardware dependent. In an alternate embodiment, a unique hardware-based ID need not be sent; however, it may be used to hash a string or digest. At this point, any requests originating from the gateway appliance may be serviced.
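As a concrete example of a SASL mechanism from the RFC 2222 family referenced above, the PLAIN mechanism encodes the NUL-separated identity fields and base64-encodes them for transport, e.g., inside an XMPP auth element. This is a sketch only; a real deployment running over the TLS channel could use PLAIN, but a challenge-response mechanism avoids sending the password at all.

```python
import base64

def sasl_plain_initial_response(authzid, authcid, password):
    """Build the SASL PLAIN initial response: NUL-separated
    authorization id, authentication id, and password, base64-encoded
    as it would appear on the wire."""
    raw = f"{authzid}\0{authcid}\0{password}".encode()
    return base64.b64encode(raw).decode()

resp = sasl_plain_initial_response("", "gw-1234", "secret")
assert base64.b64decode(resp) == b"\0gw-1234\0secret"
```

The "gw-1234" identity and "secret" credential are placeholders; the appliance would substitute its subscriber-derived credentials.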
A chat-based protocol or presence and peering based messaging protocol is used for the gateway device to establish a connection with the support network. This may comprise SASL or non-SASL based XMPP (Extensible Messaging and Presence Protocol), described in IETF RFC 3920 and RFC 3921. For instance, using XMPP, messages are sent and received between the gateway appliance and the support network (e.g., via the connection manager and message router functionalities). During the authentication, if the support network does not contain the gateway appliance registration and subscription information, the support network may prompt the user via the gateway appliance for the information. Such information may include, but is not limited to, a gateway identifier such as the MAC address, a name for a fully qualified domain name (FQDN), which is a complete DNS name such as johndoe.xxx.com, and subscriber information such as name, address, email, and phone number. Additionally, service plan information such as file sharing, voice, file backup, media services, personal page, home automation, and billing, to which the user is subscribing or desires to subscribe, the user name and password for the subscriber, and billing options and information may be obtained. In one embodiment, before completing the authentication process, the support network optionally may display to the user via the gateway appliance a list of the enabled services, allowing the user to confirm the services enabled and/or to add to or delete from the services enabled. Once the authentication process is completed, the support network registers the gateway appliance with other functionalities in the network for enabling different services. For example, for phone service there may be a registration process on the SIP redirect server functionality.
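As an illustration of the XMPP traffic described above, a presence stanza per RFC 3921 is a small XML element announcing availability; the appliance might emit one such stanza once its authenticated session is up. The JID format below is an assumption for the example.

```python
import xml.etree.ElementTree as ET

def presence_stanza(gateway_jid):
    """Build a minimal XMPP presence stanza (RFC 3921 style) announcing
    that the appliance is online; the JID is an illustrative assumption."""
    stanza = ET.Element("presence", attrib={"from": gateway_jid})
    ET.SubElement(stanza, "status").text = "online"
    return ET.tostring(stanza, encoding="unicode")

xml_text = presence_stanza("gw-1234@support-network.example")
```

The message router's presence tracking described earlier would consume stanzas like this one to keep the appliance's registration status current.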
Authentication Keys, Service Keys, Dynamic Key Renewal The gateway appliance and the support network utilize keys or tokens for authenticating the gateway appliance, web service interface requests, and other services subscriptions, for instance, to verify that the gateway appliances are valid users of the system and services. In one embodiment, the authentication keys (also referred to herein as tokens) are renewable and may change dynamically for each gateway appliance. For example, the authentication server or the like in the SC may generate updated keys or tokens for all or a selected number of gateway appliances, and notify those appliances periodically or at predetermined times to retrieve the new authentication keys. In another embodiment, the gateway appliances themselves may request that the authentication server or the like provide a new or updated key. In yet another embodiment, the updated keys may be pushed to the gateway appliances. In this way the keys or tokens are periodically refreshed. Such dynamically changing keys enhance security, for instance, by making it difficult for hackers to track the changing keys. Each appliance may have more than one authentication key, for instance, for different purposes. For example, there may be different keys or tokens for allowing access to different services or features provided by the appliance. Thus, authentication keys are also referred to interchangeably as service keys or tokens. These service keys may also change dynamically and are renewable. In one embodiment, the gateway appliance receives the service keys or tokens when individual services are provisioned on the gateway appliance. Thereafter, the service keys may be updated periodically, at predetermined or regular intervals. The keys or tokens themselves, in one embodiment, may be hardware-based keys. In another embodiment, they may be implemented independently of the hardware on which they are used. 
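The renewable per-appliance, per-service key behavior described above can be sketched as a small token store. This is a minimal illustration under stated assumptions (the class and method names are hypothetical, and time is passed in explicitly for clarity), not an implementation of the actual system.

```python
import secrets

class ServiceKeyStore:
    """Per-appliance, per-service tokens with an expiry, refreshed on demand."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._keys = {}  # (appliance_id, service) -> (token, expiry_time)

    def issue(self, appliance_id, service, now=0.0):
        """Generate and store a fresh token; any previous token is replaced,
        which models the dynamic renewal of keys."""
        token = secrets.token_hex(16)
        self._keys[(appliance_id, service)] = (token, now + self.ttl)
        return token

    def validate(self, appliance_id, service, token, now=0.0):
        """A token is valid only if it matches the stored one and is unexpired."""
        entry = self._keys.get((appliance_id, service))
        return entry is not None and entry[0] == token and now < entry[1]
```

Because each (appliance, service) pair holds its own token, access to different services can be granted or revoked independently, matching the multiple-keys-per-appliance arrangement described above.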
Web Services Interface The support network may also provide web services interface functionality (for example, shown in FIG. 4, 90) that forms an application programming interface (API) between the gateway appliances and the support network as a method of communication between them. That is, in addition to the established signaling control channel, the gateway appliances and the support network may utilize the web services interface to communicate. For instance, the gateway appliances and the support network may exchange information via secure HTTP or HTTPS using SOAP, XML, WSDL, etc., or the like. An authentication key may be used or embedded in the message in order to validate the communication between one or more gateway appliances and the web services interface functionality in the support network. In one embodiment, the gateway appliance 10 may request from the support network, for instance, from its authentication server functionality, a temporary key, which is to be used when the gateway appliance 10 requests services via the web services interface 90. Preferably, this key is not a service-specific key, but rather identifies a particular gateway appliance 10 to enter the network center through the web services interface 90. Every time the gateway appliance 10 requests a key, the authentication server functionality may store the key and the expiry time of the key. A response message provided from the authentication server has the key and expiry time. In one embodiment, gateway appliances are responsible for determining the status of the key against its expiry time and for requesting a new key before the expiry time. In another embodiment, the web services interface authentication key may be assigned during initial registration and may be renewable as described above with reference to dynamic renewable authentication and service keys. 
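The appliance-side responsibility described above, checking the key's status against its expiry time and requesting a new key before expiry, can be sketched as follows. The class name and the injected `fetch_key` callable are assumptions for illustration; the actual request to the authentication server would go over HTTPS.

```python
class TemporaryKeyClient:
    """Caches the temporary web-services key and its expiry time; requests a
    new key shortly before the old one would expire."""

    def __init__(self, fetch_key, renew_margin=60.0):
        self.fetch_key = fetch_key   # callable returning (key, expiry_time)
        self.renew_margin = renew_margin
        self.key = None
        self.expiry = 0.0

    def key_for_request(self, now):
        """Return a usable key, renewing it if we are within the margin
        of the stored expiry time."""
        if self.key is None or now >= self.expiry - self.renew_margin:
            self.key, self.expiry = self.fetch_key()
        return self.key
```

Keeping renewal on the client side matches the embodiment in which the appliance, not the server, watches the expiry and asks for a replacement in time.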
The web services interface subsequently directs message requests to the appropriate functionality in the support network. The incoming requests may be load balanced in one embodiment by the DNS server, and loading and performance information may be fed back to the DNS in support of this function. The web services interface may have interfaces (e.g., APIs) to the gateway appliance, the authentication server functionality of the support network, the DNS, the service managers functionality of the support network, and the NWM. A gateway appliance may utilize the web services interface to pull data or information from the support network, while the support network may utilize the signaling control channel to push data such as various notification data to the gateway appliances. Appliance Registration and Service Subscription In one embodiment, the support network may further include provisioning manager functionality, which may handle gateway appliance registration and subscription activation. FIG. 6C conceptually depicts the process of subscriber provisioning. In one embodiment, provisioning manager functionality 74 may interface to a 3rd party order entry or provisioning system 160 that is enabled to accept purchase orders for gateway appliances and the services provided therein. In another aspect, the provisioning manager may interface with a user interface provided in the support network for entering and accepting such orders. Thus, for example, gateway appliance registration or subscriber provisioning may occur through an internal customer service representative user interface application, a customer/subscriber self-provisioning web application, or a partner service provider application interface. Other registration methods are possible; registration is not limited to the listed methods. For instance, first-time registration may occur during the power-up and initialization stage as explained above, or in any other way. 
In each instance, the subscriber information may be input via the provisioning server 74 or the like functionality. As will be described in more detail, provisioning input may include attributes such as gateway appliance identification information, user information, and service plan information. In one embodiment, the provisioning input data, including subscriber provisioning action/data, may be classified as accounting/business and operational data and may persist in the provisioning manager 74 as shown at 162. This may be an optional step, for example, where partner service providers have their own existing systems. Examples of subscriber information include, but are not limited to, the following; not all of this information is required. Examples are: subscriber name, address, billing information, email, phone, social security number (SSN), etc.; gateway appliance ID, e.g., MAC address; FQDN such as, e.g., [email protected] (this data may be generated and may have a different domain base depending on the provider; this ID may be called the JID (jabber ID), BID (Box ID), or Family ID); a subscriber unique ID (internally generated number); an assigned gateway appliance serial number (the serial number may be an external identifier of the gateway appliance); a gateway appliance model number (e.g., to link the software and configuration to the model); a user access password (this may be different from the gateway appliance access key, which is operational-system generated); a user service/gateway appliance binding identifier (this may be generated by the system and mailed to the user); a locale/region identifier; a list of the subscribed services, e.g., voice, video, remote access, backup; a list of service-specific features, e.g., voice—call forwarding allowed, voice feature 2, etc.; a list of service-specific user details, e.g., voice—DN, etc.; and backup—max GB, max bandwidth, etc. 
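A subset of the provisioning attributes listed above can be modeled as a simple record, which clarifies which fields identify the appliance and which describe the service plan. The field names and example values here are illustrative only and are not taken from the actual system.

```python
from dataclasses import dataclass, field

@dataclass
class ProvisioningInput:
    """A sketch of provisioning input: appliance identity plus plan data."""
    jid: str                 # JID/BID identifying the appliance
    serial_number: str       # external identifier of the appliance
    model_number: str        # links software/configuration to the model
    locale: str              # locale/region identifier
    subscribed_services: list = field(default_factory=list)
    service_features: dict = field(default_factory=dict)  # service -> feature list

rec = ProvisioningInput(
    jid="gw1@example",
    serial_number="SN-0001",
    model_number="M100",
    locale="en-US",
    subscribed_services=["voice", "backup"],
    service_features={"voice": ["call forwarding allowed"]},
)
```

Using `default_factory` keeps each record's service lists independent, so adding a feature for one subscriber cannot leak into another.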
In a further step, as shown at 163 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to the authentication server functionality 71. Thus, for example, the authentication server functionality may maintain the following subscriber information/data for authenticating users and their gateway appliance devices 10: the JID/BID; the gateway appliance's serial number; a user access password; a user service/gateway appliance binding identifier; the subscriber active/disable status; the gateway appliance hardware ID; a subscriber/hardware binding: BOOL; a Web interface access key; and the associated Web interface access key validity time. In a further step, as shown at step 164 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to the subscription manager (server or functionality or the like) 73. Thus, the subscription manager, for example, may maintain the following subscriber information/data for providing subscription information to gateway appliance devices 10: the model number and the JID/BID or the like, to be able to create and distribute the right package of meta information and to identify the firmware ID, configuration, and configuration data for the gateway appliance. Additional exemplary data made available at the subscription manager 73 may include, but is not limited to: user ID; gateway appliance serial number; the gateway appliance model; the subscriber locale; current gateway appliance firmware version; and a list of services and enabled features, for example:

Service 1 Enable/Disable
  Feature 1 Enable/Disable
  . . .
  Feature N Enable/Disable
  Current Configuration Version
Service 2 Enable/Disable
  Feature 1 Enable/Disable
  . . .
  Feature N Enable/Disable
  Current Configuration Version
. . .
Service N
. . .

In a further step, as shown at step 165 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to one or more service managers (servers or devices or functionality or the like) 75. 
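The per-service list of enable/disable flags, per-feature flags, and configuration versions maintained at the subscription manager can be represented as a nested mapping. This is an illustrative data shape only; the key names are assumptions.

```python
subscriptions = {
    "Service 1": {"enabled": True,
                  "features": {"Feature 1": True, "Feature 2": False},
                  "config_version": "1.0"},
    "Service 2": {"enabled": False,
                  "features": {"Feature 1": True},
                  "config_version": "2.3"},
}

def enabled_features(subscriptions):
    """Return {service: [features]} listing only enabled features of
    enabled services, i.e., what the appliance should actually activate."""
    return {svc: [f for f, on in info["features"].items() if on]
            for svc, info in subscriptions.items() if info["enabled"]}
```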
Service data maintained at the service manager 75 may include, but is not limited to: configuration files, e.g., voice: dial plans; parental control: black lists, etc. This data may be in a database or in versioned files stored on disk. Optionally, the following subscriber data may be maintained at the service manager 75: the appliance's JID/BID; the provisioned subscriber data for each service (e.g., a list comprising Data1, Data2, etc.); and the generated subscriber data for each service (e.g., a list comprising Data1, Data2, etc.). It is understood that some services are basic services and some services may not have subscriber data at all. Thus, as an example, if implementing provisioning of backup services, the support network 50 may generate the following account on behalf of the subscriber: Backup Acct ID, KEY. The provisioned subscriber data and generated data are communicated to the gateway appliance. In a further step, as shown at step 166a in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to a SIP directory server or like functionality 66 and, additionally, to the Session Border Controller device 93a or like functionality, as shown at step 166b. For example, the SIP directory server 66 may be provisioned with data such as the SIP user identifier (e.g., www.gw10.ros.com); associated gateway DN numbers; and any other data as may be required by the SBC, e.g., realm data or location data for the endpoint device. Additional service data that may be provisioned may include OFFNET/ONNET DN numbers and other SIP service-specific data. In a further step, as shown at step 167 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to the publication/subscription (Pub/Sub) server or like functionality 65. The new subscriber information maintained at the pub/sub may include the subscriber for gateway appliance firmware update events and for service configuration/locale events, e.g., U.S. dial plans, parental controls, etc. 
The pub/sub may maintain various event channels, the content for the event channels (i.e., events per channel), and the subscribed users for the event channels (i.e., users per channel). In a further step, as shown at step 168 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to the billing sub-system server 58 or like functionality. The new subscriber information maintained at the billing sub-system server may include, but is not limited to: the subscriber name; address; billing information; email; phone; SSN; user ID, e.g., [email protected]; a subscriber unique ID (internally generated number); an assigned gateway appliance serial number (the serial number may be an external identifier of the gateway appliance); a locale/region identifier; and a list of the subscribed services. In a further step, as shown at step 169 in FIG. 6C, the added gateway appliance and/or user, e.g., the new subscriber, is added to the Alarms, Diagnostics, and Network Management server or like functionality and the alarm aggregator sub-system. The new subscriber information maintained at the alarms, diagnostics, and network management server may include: alarms; the user identifier and other data required for the alarms management system; and diagnostics. Thus, the provisioning functionality or the like 74 generally provides provisioning services to all SC network elements. The provisioning servers 74 may send and receive provisioning information via a gateway interface (e.g., APIs) to and from 3rd party providers such as wholesale VoIP and backup service providers. The provisioning servers may also send and receive provisioning information to and from the branding customer service provider (a.k.a. the “North Bound” interfaces). The provisioning server may provide a graphical user interface for service provider users and customer users to order, initialize, and provision services. 
The provisioning server further may distribute the order or provisioning information to the following functional elements: the subscription manager; authentication servers; service manager(s); SIP directory server; pub/sub servers; VOD(s); CAs; billing system; firmware update manager; location server; the NWM; SBCs; content provider(s); and wholesale providers via the gateway interfaces (APIs). While the provisioning service or functionality was described with respect to registering new gateway appliances or subscribers, functionality for provisioning new services for existing users or gateway appliances may be provided in a similar manner, for example, by the provisioning server 74 or like functionality. Automatic Discovery and Configuration of Endpoint Devices A customer or user self-provisions endpoint devices on a particular multi-services gateway appliance. The provisioning system or like functionality 74 may provision how many endpoints, and the types of devices, can be self-provisioned by the user. In one embodiment, the gateway appliances are capable of automatically discovering and configuring the appliance-compatible devices belonging to enabled services on the premises, such as a home or business, for instance, with minimal human intervention (e.g., for security purposes some devices may need administrator-level prompting to proceed with configuration actions). The appliance-compatible devices are devices that the appliance can communicate with, such that the appliance becomes the center of management for the services offered by these devices. One or more of these devices may have automatic configuration capabilities, such as universal plug and play (e.g., uPNP) devices. These devices, also referred to as endpoint devices, may include but are not limited to: media adaptors, SIP phones, home automation controllers, adaptors that convert IP interfaces to PSTN FXO and FXS interfaces, etc. 
In one embodiment, the method of configuration, e.g., automatic discovery and configuration, may be based on the specific device's current firmware, software, or like version. The appliance in one embodiment also may keep a record or table of configuration information, for example, for those devices configured automatically. Such information may include, for example, for a media adaptor, supported formats and bit rates; for a home automation controller, information pertaining to the type of controller, such as Insteon, Awave, etc. As another example, if the phone service is enabled and the appliance detects a new SIP/Saporo device, the appliance may prompt a user to determine whether the detected device needs to be configured on the appliance. If it does, then the appliance may configure the detected device on its network (home network or other premise network). As yet another example, when new drives are added to the appliance for storage expansion, the appliance may automate initialization of the new devices. Subscription Management The gateway appliance may request information from the support network for services that the gateway appliance is subscribing to, for example, during the initialization stage as mentioned above or at any other time. The support network in one embodiment contains subscriber and gateway appliance identification details. Thus, the support network may respond to the request with the subscription information and version numbers for various configuration data needed for the services that are subscribed. FIG. 6B illustrates how a gateway appliance 10 establishes a service subscription request (service/request check), for instance, via the TCP/TLS/XMPP control channel 150 to the network support center 50. This service/request check may be available to ensure that the multi-services gateway appliance is in sync with the network provisioning system as to what type of services are allowed for the user. 
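The record or table of configuration information kept for automatically configured devices can be sketched as a per-device-type mapping, following the media adaptor and home automation controller examples above. The function name and attribute keys are illustrative assumptions.

```python
def record_device(table, device_id, device_type, attributes):
    """Record an automatically configured endpoint device together with the
    type-specific configuration details the appliance keeps for it."""
    table[device_id] = {"type": device_type, "config": dict(attributes)}
    return table[device_id]

devices = {}
# Media adaptor: keep supported formats and bit rates.
record_device(devices, "ma-01", "media adaptor",
              {"supported_formats": ["mp4", "mpeg2"], "max_bitrate_kbps": 8000})
# Home automation controller: keep the controller type.
record_device(devices, "hac-01", "home automation controller",
              {"controller_type": "Insteon"})
```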
This allows finite and real-time control of the services allowed by the gateway appliance for a user. The service check may also be useful to keep track of the firmware/software of the gateway appliance, and to keep the same software base irrespective of the country/region while retaining the ability to load configuration/customization information per user based on locale or other criteria. As an example, during the multi-services gateway appliance initialization process, the multi-services gateway appliance queries the subscription manager, for example, via the control channel, to determine what services and features are enabled for the multi-services gateway appliance. The support network, for example, using its subscription manager functionality 73, responds with the subscription information associated with this particular gateway appliance. Examples of data that the subscription manager functionality 73 may store in one embodiment include, but are not limited to, the JID/BID, gateway appliance model number, services subscribed to, features subscribed to per service, and a revision exception list for each gateway appliance. The multi-services gateway appliance 10 checks the received subscription information, such as version information, against the current versions on the multi-services gateway appliance 10. If the multi-services gateway appliance determines that the versions are different, it may initiate download of the configuration data through the web services interface 90. Preferably, the multi-services gateway appliance's firmware and service configuration are implicit subscriptions, and hence the multi-services gateway appliance will receive notifications when new changes are available. The changes indicate the version to download, and the same logic of version checking is performed in the multi-services gateway appliance. The multi-services gateway appliance 10 subsequently enables the subscribed services and features. 
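The version-checking logic described above, comparing the versions reported by the subscription manager against those already on the appliance, reduces to a simple diff. This sketch assumes string version identifiers; the function name is hypothetical.

```python
def configs_to_download(current_versions, advertised_versions):
    """Compare versions reported by the subscription manager against the
    versions already on the appliance; anything that differs, or is missing
    locally, should be fetched via the web services interface."""
    return sorted(name for name, ver in advertised_versions.items()
                  if current_versions.get(name) != ver)
```

The same comparison applies to notification-driven updates: a notice carries the new version, and the appliance downloads only when it differs from the local one.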
The subscription manager functionality 73 also informs all requesting SC network elements what services and features are enabled on a particular network element. The subscription manager functionality 73 also determines what service-specific configuration data needs to be downloaded to the requesting multi-services gateway appliance. In an exemplary embodiment, the subscription manager functionality 73 determines the data needed by interacting with the service manager functionality 75, which stores and distributes specific configuration data for services. The subscription manager functionality 73 may interface to the multi-services gateway appliances (e.g., indirectly) and to the following functionalities of the support network: the message routers and session manager(s), the accessibility server, the service access test managers, the provisioning server, the NWM, VODs, CAs, pub/sub, the service manager server, and the billing sub-system. The subscription manager functionality 73 may additionally support some internetworking to other service providers via the gateway interfaces. In one embodiment, the support network includes service manager functionality for each specific service. The service manager functionality 75 may store and distribute service-specific configuration data for each supported service on a per-multi-services-gateway-appliance basis. Thus, service manager functionality 75 may include service-specific configuration managers for voice, back-up, or any other service that is provided. Examples of this configuration data include, but are not limited to, VoIP configuration data such as location-related dial plan information and content/media configuration data such as URL links, etc. The service manager functionality or servers 75 work with the subscription manager functionality 73 to resolve multi-services gateway appliance version requests and ensure that the multi-services gateway appliances 10 have the correct version of configuration data. 
In one embodiment, there is a service manager for each supported service. In addition, there may be a service manager or like functionality for storing and distributing baseline multi-services gateway appliance configuration data. Subscriber data per service may exist inside the service manager and may also be stored directly in the service component, e.g., the SIP Redirect/SBC device. The service managers 75 or the like functionality or servers or devices may interact with the subscription manager 73, provisioning, the NWM, the web services interface 90, pub/sub, the message routers, and the multi-services gateway appliance. Additionally, 3rd party wholesale providers, such as a backup service, may interface to the service managers via a gateway interface or an API. In an exemplary application of gateway appliance services, data is brought down to the gateway appliance to enable it to provide various services. Configuration data is provided to the gateway appliance from the support network. For instance, the subscription manager functionality of the support network, for example, as part of the initialization process, queries the service managers functionality to obtain the configuration data that can be sent to the gateway appliance and which versions, from a configuration perspective, to report back to the appliance. Such configuration data may include a web service interface URL of the service manager indicating where the gateway should communicate. The subscription manager functionality then sends the metadata of the configuration data, that is, information associated with the configuration data, back to the gateway appliance. The gateway appliance then may update its configuration if needed by accessing the service manager functionality, for example, via the web services interface, and retrieving the needed data. In another embodiment, the support network (e.g., service manager functionality) may push the needed data to the gateway appliance via the signaling control channel. 
For each service, the support network provides configuration data to the appliance (e.g., via service manager functionality) and posts a notification if new configuration data is required. When the user invokes the service, the gateway appliance will thus know all that it needs to invoke the service. For instance, data that the gateway appliance needs may be obtained from the service manager functionality. Login information and keys may be obtained from the authentication server for a particular service, e.g., for service keys. FIG. 8A describes details regarding service provisioning in one embodiment. As explained above, in all the descriptions, while the components of the support network are described and illustrated in terms of discrete servers or devices (e.g., message router, subscription manager, service manager, pub/sub server), they are not meant to limit the present invention in any way. Rather, the components are to be understood as logical elements for explaining various functionalities performed in the support network. Such functionalities may be implemented as one or a plurality of servers, devices, or the like, and may depend on the design preference. Referring to FIG. 8A, a gateway appliance in one embodiment at 350 initiates a sequence to obtain its subscription information and determine whether any provisioning updates are available. In one embodiment, for each service to be provisioned, a subscription information query is communicated from the gateway appliance, for example, via the control channel to the message router, which forwards it to the subscription manager server. The subscription manager server provides the subscription details (such as the service list and latest version list) back to the router, and these are in turn forwarded to the appliance. 
The gateway appliance makes a determination whether any provisioning updates are available and, if so, a service-specific manager is employed to download the provisioning and configuration information to implement that subscribed service at the gateway. An example sequence for downloading the configuration and provisioning information for the subscribed-to services, initializing those services, and handshaking the provisioning-complete signals is performed for each service, as shown at 354 in FIG. 8A. At the end of the sequence, a notification is sent to a pub/sub server or like functionality to register that the appliance has subscribed to receive any new provisioning updates. For instance, a registration for updates may include the gateway ID, service ID, and matching criteria, which generally tell the pub/sub that if there are changes matching the criteria in the service identified by the service ID, it should notify the gateway appliance identified by the gateway ID. The gateway appliance may optionally send a message to the pub/sub server that the gateway appliance is ready to receive updates, as shown at 357. Pub/Sub and Updates As previously mentioned in view of FIG. 4, the publish/subscribe (pub/sub) server or like functionality 65 accepts and maintains subscription requests for appliance upgrades and device upgrades from networked services support elements, and particularly, from every gateway appliance 10 in the system. Networked elements communicate with the pub/sub system or like functionality and publish information that other elements may have subscribed to. The pub/sub matching engine matches the published information with users (typically gateway appliances) that have subscribed for notices of new specific information. 
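The (gateway ID, service ID, matching criteria) registration and the matching engine described above can be sketched as follows. This is an illustrative model only; the class name is hypothetical, and criteria are simplified to keyword sets.

```python
class MatchingEngine:
    """Registers (gateway ID, service ID, matching criteria) subscriptions
    and matches published updates against them."""

    def __init__(self):
        self._subs = []

    def subscribe(self, gateway_id, service_id, criteria):
        self._subs.append((gateway_id, service_id, set(criteria)))

    def publish(self, service_id, keywords):
        """Return the gateway IDs to notify: same service, and either no
        criteria (match everything) or at least one keyword in common."""
        kws = set(keywords)
        return [gw for gw, svc, crit in self._subs
                if svc == service_id and (not crit or crit & kws)]
```

In the described system the resulting notifications would then travel to each matched appliance over the XMPP signaling control channel.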
If the pub/sub matches a “pub” with a “sub”, a notification message is sent, for example, via the XMPP protocol or a like peer and presence messaging protocol on the signaling control channel, to the subscribing user, notifying them of the new information. FIG. 6D highlights how the gateway appliance and the service center network elements utilize the signaling control channel and the publish/subscribe (pub/sub) function to subscribe for notification of certain events and to publish notification of these events in one embodiment. In this high-level example, the gateway appliance subscribes for firmware or software updates for the gateway appliance or the endpoint devices that it connects to, and is subsequently notified when such an event occurs. It is understood that the pub/sub system provides subscription and publication matching and notification services for both the gateway appliances and the networked service center elements or functionalities. Thus, the logical pub/sub device may have interfaces to all elements that use this mechanism to communicate with each other including, for example, the firmware update manager, RMR, provisioning server, authentication server, service manager, and subscription manager functionalities, and the gateway appliances. In an example scenario depicted in FIG. 6D, the updater component or functionality, with knowledge of updates to gateway firmware or software or the like, endpoint device firmware or software or the like, or service configuration files or the like, may publish the update information to the pub/sub server or like functionality 65, for example, as shown by the route 173. The updater component 51 may receive a message or notification at 171 that updates are available from other sources. Additionally, various service managers (or like functionality) 75 that handle specific services and associated configuration information and data may publish information in the pub/sub that updates are available for those services. 
Thus, in one embodiment, update manager functionality 51 may publish information on the pub/sub 65 as to the availability of updates for the gateway appliance and endpoint devices. Similarly, specific service managers or like functionality 75 may publish information on the pub/sub 65 as to the availability of updates for the respective specific services. In one embodiment, the update notice published by the updater, service managers, and/or firmware manager may include, but is not limited to, new configuration version information for the latest firmware or software for the specific service or devices. A matching engine functionality of the pub/sub server 65 determines which gateway appliances are subscribed to receive these updates, and generates a notification message 175 that updates are available for receipt at the gateway appliance 10, for example, via IM-like messaging (or any other presence and peering protocol) over the public Internet. FIG. 6E shows at step 260 the gateway appliance's receipt of a notify message indicating the published firmware or configuration update, either for itself or for a digital endpoint device. At step 262, the gateway appliance makes a comparison against the current firmware version and determines whether the update is needed. If the update is needed, the appliance initiates a pull operation to retrieve the firmware update, for example, over a secure HTTPS connection at step 265, and may start or schedule application of the firmware updates at step 267. In one embodiment, a descriptor package helps the gateway appliance interpret the command to obtain the software update, e.g., at a certain location in the networked service center. In an orderly and secure manner, e.g., via the HTTPS protocol, each of the subscribing devices may seek out where in the network the published software update resides and, once authenticated via the authentication server or like functionality, will retrieve the software. 
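The notify-compare-pull sequence of steps 260 through 267 can be condensed into a small handler. This is a sketch under stated assumptions: the notice is modeled as a dict with `version` and `url` keys, and the secure HTTPS retrieval is injected as a callable rather than implemented.

```python
def handle_update_notice(current_version, notice, pull):
    """Mirror steps 260-267: on receipt of a notify message, compare the
    advertised version against the current one and pull the update only
    when they differ. `pull` stands in for the secure HTTPS retrieval."""
    if notice["version"] == current_version:
        return None  # already up to date; nothing to retrieve or schedule
    return pull(notice["url"])
```

Injecting `pull` keeps the decision logic (step 262) separate from the transport (step 265), which also makes the comparison trivially testable.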
Referring to FIG. 6D, from the support network perspective, a request is received from each of the gateway appliances, for example, via a web services interface 90, to pull the new firmware version. In one embodiment, this may take place according to a schedule or on a priority basis. Then, an authentication process is performed, for example, via the authentication server or like functionality 71, and once the appliance is verified, the available firmware update may be pulled from the updater functionality 51 (or from individual service managers or the firmware update manager or the like) and forwarded to the appliance, as shown at 177. As mentioned above, in one embodiment consumers may subscribe for updates to the digital endpoint devices connected to the gateway appliance as well. For example, a user has a certain type of phone and, if there is an update, the pub/sub notification feature or functionality will notify the gateway appliance of the updates available for that phone type. Thus, all of the gateway appliances that have that phone will be informed of service upgrades. In one embodiment, the matching engine functionality communicates all update information concerning operation of the phone device to the subscribers, e.g., like RSS feeds, and/or notifies the matching gateway appliance (that is, the gateway appliance determined as having this phone as one of its endpoint devices) of updates, for example, via the signaling control channel (e.g., using XMPP), for example, when news or updates are received for this particular phone. The matching engine determines all of the subscribers that are subscribed for that service and will send update notifications to the appliances. 
Thus, service managers and/or the firmware update manager publish update availability information to the pub/sub functionality; the gateway appliances subscribe to desired updates, for example, by registering the current versions of their firmware and software with the pub/sub functionality; and the matching engine functionality of the pub/sub matches the published data with subscribing appliances and sends a notification to each subscribing appliance. FIG. 8B illustrates a service provisioning updates push model in one embodiment. As shown in FIG. 8B, it is assumed that at step 360 the following steps have been performed: gateway registration, firmware updates, and service initialization have been completed. At steps 363, the service specific managers or like functionality publish a service provisioning update to the pub/sub server. The published information, for example, may include, but is not limited to: the body of the notification, service type, server ID of the service manager publishing the information, matching criteria which may include keywords that indicate service components for which the update is available, and update rate information (e.g., the rate or schedule at which the update notification should be performed, for example, to mitigate the effect of too many appliances retrieving the updates all at once). The pub/sub server optionally may check for the gateway appliances that have subscribed for this service provisioning update and may calculate an update notification rate to ensure a sustainable rate. At steps 365, the pub/sub server or like functionality sends a message destined to all of the gateway appliances about the service provisioning update, for example, via an XMPP control channel. Once the update information download from the specific service managers or firmware upgrade manager is complete, the support center is notified and the gateway appliance is then responsible for the reconfiguring and provisioning of the appliance for the particular service.
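One simple way the pub/sub server could realize the update-rate calculation mentioned above is to stagger notifications into batches so that only a bounded number of appliances pull the update in any one interval. This is a hypothetical sketch under assumed names and units; the actual rate calculation is not specified in the text.

```python
def schedule_notifications(gateway_ids, max_per_interval, interval_seconds=60):
    """Spread update notifications over time to avoid all appliances
    retrieving the update at once. Returns (delay_seconds, gateway_id) pairs.
    """
    schedule = []
    for i, gw in enumerate(gateway_ids):
        # Integer division assigns each gateway to an interval batch.
        batch = i // max_per_interval
        schedule.append((batch * interval_seconds, gw))
    return schedule
```

With five subscribers and at most two notifications per minute, the first two gateways are notified immediately, the next two a minute later, and so on.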
As shown in FIG. 8B, the process may be repeated 367 for each gateway appliance subscribed to that service update. In one embodiment, the support network may include firmware update manager functionality that keeps the gateway appliances updated with compatible software and configuration information for the gateways and the endpoints connected to each specific gateway appliance. This functionality is similar to the service manager functionality that handles configuration data and updates for specific services provided in the gateway appliance. The firmware update manager (FUM) component or like functionality may utilize the underlying accessibility framework of the support network to reach the gateway appliance and interoperate with the in-home (in-premise) digital devices. In one embodiment, as mentioned above, each gateway appliance subscribes for updates on behalf of its endpoint devices. In one embodiment, the firmware update manager or like functionality and the appliances authenticate with each other prior to any transactions. The updates are generally performed automatically. The FUM sends a control signal to the target appliances, and the appliance schedules and pulls the data download from the FUM to the gateway appliance. In one embodiment, the FUM may maintain a database of all appliances and the endpoints behind each appliance, with firmware version information. This database is indexed based upon the unique identifier and account information for each appliance. To provide this functionality, the firmware update manager may have interfaces to the gateway's RMR, pub/sub, the provisioning system, and network management servers that may further request a "forced update" of endpoint or gateway software to the gateway appliance. The firmware update manager may have network gateway interfaces to other third party partners to gather updates for the partner endpoint devices connected to each gateway.
In one embodiment of the invention, referring back to FIG. 7B, as part of the appliance registration process, the gateway appliance 10 may query for its version status as indicated at 327. As shown at sequence 330 in FIG. 7C, the firmware details of the appliance and connected devices are forwarded by the appliance to the connection manager server and sent to the firmware upgrade manager to determine whether the appliance is performing with the latest firmware versions and proper upgrades. Any upgrades deemed necessary or available for the gateway appliance are forwarded back to the control message router and sent back to the appliance, where the updates are downloaded. Optionally, a package download status sequence 333 may be initiated where the upgrade patch is installed at the appliance. The gateway appliance may be reregistered or restarted and the patch installation is verified at step 336. As part of this sequence, the gateway appliance generates a notification 337 that it is ready to receive firmware updates (e.g., future updates), which communications are forwarded to the publication/subscription (pub/sub) server of the services support network. FIG. 7D illustrates firmware upgrade processing to connected appliances in one embodiment. As mentioned, the gateway appliance subscribes for certain endpoint firmware updates and is subsequently notified when such an event occurs. Thus, the processing illustrated in FIG. 7D may apply for endpoint device upgrades as well as the gateway appliances. At steps 340, the FUM or like functionality notifies the pub/sub server or like functionality of the available updates. The pub/sub server checks whether one or more connected gateway appliances are subscribed to that particular service upgrade.
A pub/sub server may calculate the notification rate for providing the firmware update and send the information back to the control message router, which forwards the firmware upgrade information to the appliance in the form of a data structure, for example, including but not limited to IQSet (a type of XMPP message), upgradeDetails (details for the upgrade), downloadTime (time it takes to download the upgrade), and timeToUpgrade (time it takes to install the upgrade at the appliance). The firmware updates are then downloaded from the firmware download server via, e.g., an HTTPS connection, to the appliance. Optionally, a package download status message may be sent to the component or functionality (e.g., FUM) from which the upgrades were downloaded, shown at 344. Further, after installing the upgrade at the appliance or the endpoint device, a package install status message may be sent to the FUM or like functionality to report the status of the latest upgrade installation. The gateway appliance may be reregistered or restarted and the patch installation verified. The appliance may also generate a notification 347 that the firmware upgrade patch has been completed, which notice is forwarded to the FUM or like functionality of the support network. It should be understood that a firmware upgrade throttling mechanism may be provided such that, dependent upon the load status (resource utilization) as determined by the provisioning firmware download server, the firmware update rate may be modified on the fly. That is, as shown in FIG. 7E, when multiple appliances 10′ are connected and each is subscribed to receive the firmware upgrades, the load status may be determined based on a resource utilization parameter from the firmware upgrades manager server. The update notification rate is then recalculated to a sustainable rate depending upon the update server load.
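The on-the-fly throttling just described, where the notification rate is recalculated from a resource utilization parameter, could take a form like the following. The linear scaling and the 0.8 utilization ceiling are illustrative assumptions; the text does not specify the actual formula.

```python
def sustainable_rate(base_rate, utilization, ceiling=0.8):
    """Recalculate the update notification rate from download-server load.

    base_rate   -- notifications per interval when the server is idle
    utilization -- current resource utilization, 0.0 .. 1.0
    ceiling     -- utilization at or above which notifications pause entirely

    The rate scales down linearly as utilization approaches the ceiling.
    """
    if utilization >= ceiling:
        return 0
    return base_rate * (1 - utilization / ceiling)
```

As each batch of appliances completes its download, the server re-samples utilization and recomputes the rate, so a busy update server automatically slows the fan-out.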
As described above, one or more gateway appliances communicate with the FUM or like functionality to download compatible software for themselves and the endpoint devices. In one embodiment, the appliance is responsible for updating the endpoint devices with the downloaded software. A user of the appliance may have an option that is configurable to have updates automatically downloaded when available or to be prompted to initiate the download. For instance, when a new version of appliance firmware is available, the FUM or like functionality notifies the appliance either directly or via pub/sub. If the user is configured for automation, then the appliance initiates download of the firmware. If the user is configured to be prompted, then the appliance notifies the user and waits for an ok from the user. If the user agrees to the update, then the ROS initiates download of the firmware. In one embodiment, once the firmware is downloaded, the appliance performs the automated firmware upgrade when indications are clear that the upgrade will not interrupt other functions or services provided in the appliance. For determining compatibility with other existing functions or services, the appliance performs a basic set of "acceptance" tests to make sure that the subscribed services are still functional after the firmware upgrade. This may be done, for example, by referring to a matrix or table of information regarding compatibility or interoperability among the software, firmware, hardware, or the like of various services, gateway appliance components, and endpoint devices. In one embodiment, this matrix or table of information is received as part of configuration data from the support network, for example, during the initialization procedure and during other communication sessions, and may be maintained in the gateway appliance.
In another embodiment, the compatibility test may be performed before the upgrades are downloaded, so that only the compatible versions of the upgrades need be downloaded. The appliance in one embodiment has the capability to fall back to a previous release in the event of a software upgrade failure. In one embodiment, as described above, the FUM or like functionality keeps track of the various appliances that it communicates with and the firmware version on each appliance. In another embodiment, the FUM does not have knowledge of which appliances need which upgrade. Rather, the FUM simply publishes information regarding any updates to the pub/sub server or like functionality, and it is up to the pub/sub server to notify the appropriate gateway appliances. Similarly, for the endpoint device, a user may have the option to automate the download or be prompted to initiate the download when an update is available in the FUM, for example. For each appliance, the FUM or like functionality may be responsible for tracking the software version status and upgrade availability for the devices that each appliance communicates with. Thus, in one embodiment, the FUM or like functionality may maintain a matrix that may include, but is not limited to, the following information: the appliance version; the services enabled on each appliance; the currently connected devices on each appliance; the software version currently on each device; and the software versions of the end devices that are compatible with the existing appliance version.
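The tracking matrix enumerated above can be sketched as a small data structure. This is a hypothetical illustration under assumed names; the actual FUM database schema is not given in the text.

```python
class FirmwareUpdateManager:
    def __init__(self, compatibility):
        # compatibility: appliance firmware version ->
        #   {device type: set of compatible device software versions}
        self.compatibility = compatibility
        self.appliances = {}   # gw_id -> per-appliance record

    def register(self, gw_id, appliance_version, services, devices):
        """Record an appliance's version, enabled services, and the
        connected devices with their current software versions."""
        self.appliances[gw_id] = {
            "version": appliance_version,
            "services": set(services),
            "devices": dict(devices),   # device type -> current version
        }

    def is_compatible(self, gw_id, device, candidate_version):
        """Pre-download compatibility test: is the candidate device software
        version supported on this appliance's current firmware version?"""
        rec = self.appliances[gw_id]
        allowed = self.compatibility.get(rec["version"], {}).get(device, set())
        return candidate_version in allowed
```

Checking compatibility against this matrix before the download is what allows the embodiment above to transfer only the versions an appliance can actually use.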
When a new version of software or firmware for an end device that is supported on an appliance is available on the FUM or like functionality, the FUM may do the following for each ROS: check to see if the new version is supported on the current version of the appliance firmware; if the new software load and appliance version are compatible, then the FUM notifies the appliance if that end device is supported on the appliance; if the user is configured for automation, then the appliance may initiate download of the firmware; if the user is configured to be prompted, then the appliance notifies the user and waits for an ok from the user; if the user agrees to the update, then the appliance may initiate download of the firmware; and if the appliance chooses to download the update, then the FUM or like functionality allows the appliance to download the new version. Once the software or firmware or the like is downloaded, the appliance may perform the automated firmware upgrade of the end device when indications are clear that it will not be interrupting the rest of the functions and services. The appliance may perform a basic set of "acceptance" tests to make sure that the end device is still functional after the firmware upgrade, in a similar manner as described above with reference to the appliance firmware upgrade. The appliance also may have the capability to fall back to a previous release in the event of an upgrade failure. In one embodiment, as described above, the FUM or like functionality keeps track of the various appliances that it communicates with and the firmware version on each appliance and/or its endpoint devices. In another embodiment, the FUM does not have knowledge of which appliances need which upgrade. Rather, the FUM simply publishes information regarding any updates to the pub/sub server or like functionality, and it is up to the pub/sub server to notify the appropriate gateway appliances.
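The decision sequence enumerated above (supported? compatible? automated or prompted?) can be condensed into a single function. The parameter shapes and return strings are illustrative assumptions, not the actual interface.

```python
def handle_new_device_version(device, new_version, appliance,
                              compatibility, user_approves=False):
    """Decide what to do when the FUM announces a new end-device version.

    appliance     -- dict with "version", "devices" (device -> version),
                     and "auto_download" keys (assumed shape)
    compatibility -- appliance version -> {device: set of compatible versions}
    Returns the action taken as a short string.
    """
    if device not in appliance["devices"]:
        return "not-supported"          # end device not on this appliance
    allowed = compatibility.get(appliance["version"], {}).get(device, set())
    if new_version not in allowed:
        return "incompatible"           # skip loads the firmware can't host
    if appliance["auto_download"] or user_approves:
        return "download"               # appliance initiates the download
    return "await-user"                 # prompt the user and wait for an ok
```

The fall-back and acceptance-test steps would follow the download; they are omitted here for brevity.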
With respect to the FUM and specific service managers providing update and configuration information to various gateway appliances and/or network elements, there may be a plurality of ways in which such notification may occur. In one embodiment, different methods may depend on different categories of configuration and upgrade data organized, for example, in the individual FUM or service managers or like functionality. For example, data may be classified into different categories such that for one class of data there should be notification available to all appliances and/or network elements. For this class of data, the FUM or service managers or like functionality may publish the available information via the pub/sub functionality and allow pub/sub to determine which appliances or network elements should be notified and handle the sending of notifications. Another class of data may be data that is directed to a subset of elements, for example, regional data directed to appliances located in certain regions or locales. For this type of data, the pub/sub feature may also be utilized. Yet another class of data may be data that is solely for a specific appliance or network element. For this type of data, the service managers or FUM or like functionality need not utilize the pub/sub feature; rather, the data may be communicated directly to the individual appliance, for instance, using an XMPP control channel, or to the individual network element via interfaces.
Accessibility Testing
In one embodiment, the accessibility testing feature determines whether the gateway appliances are accessible, from a signaling point of view, from the support network.
As shown in FIG. 8C, it is assumed that at step 361 the following steps have been performed: service provisioning, accessibility testing, and determining whether public access is available to the appliance at a public IP address either with or without network address translation services provided, or whether its determined address is a local IP address with VPN; and additionally, that authenticated service is initiated between the appliance and the network operations service entity. At step 364, the gateway appliance sends a message including service access details, including IQ set, gwId, service ID, and access details that may include IP address and port number, to the network operations service message router, which routes the information to the service specific manager, which then forwards the gwId and service access information, including a "notActive" indication, to the location server. One or more service managers may then request a service accessibility tester or like functionality to determine accessibility for the gateway appliance as shown at 364. The accessibility tester performs an accessibility test, for instance, from a public interface utilizing UDP/TCP/HTTPS, and sends the results of the accessibility test information to the requesting one or more service managers. The accessibility tester in one embodiment may test different port numbers on the IP address for accessibility. Once the accessibility test is successfully performed, the service specific manager updates the location server component with the gwId, served, and an "active" indication. The service accessibility response is then forwarded back to the message router for routing ultimately back to the gateway appliance.
Peer to Peer Accessibility Testing
As described above, in certain environments, the gateway appliances are behind a firewall, making it difficult to communicate with them from a signaling point of view.
From a signaling viewpoint, messages should communicate back and forth between the two gateway appliance devices and ultimately to the digital endpoint device in the home, e.g., sharing or posting a digital photo to grandma's TV, or requesting transfer of or sharing a list of music lists, favorites, or songs over the Internet between two gateway devices. This negotiation may be initiated via a presence and peering based communication protocol such as IM-based messaging over the signaling control channel, as the network state characteristics of the appliance are known at the support network. For example, the support network may determine whether one appliance is behind a firewall having a private IP address, making it hard for the other device to signal back via HTTPS signaling. An appliance likewise has the awareness that it is behind a firewall, for example. Thus, according to one embodiment, a method of negotiating directly over the control channel to establish peer-to-peer connectivity, i.e., a peer-to-peer accessibility testing feature functionality, is provided to ensure service accessibility. Thus, in one embodiment, the peer-to-peer accessibility testing feature negotiates and creates, using a control channel, a media path to share data between the peers. FIG. 8D is a process diagram that illustrates peer-to-peer accessibility testing in one embodiment utilizing two gateway devices (appliance 1 and appliance 2) that are enabled to communicate once both devices are authenticated in the manner described herein. The peer-to-peer accessibility test in one embodiment provides a utility that can be used to check the accessibility of a gateway appliance.
At step 371, both gateway appliance devices send a message including service access details, including IQ set, gwId, service ID, and access details, to the support network message router, which routes the information to the service specific manager, which then forwards the gwId and service access information, including a "notActive" indication, to the location server. Further, accessibility test information including access details is sent by message to a service accessibility tester or like functionality. Service accessibility tests are performed by both appliances 1 and 2 to ensure the service specific connectivity between the appliances via, e.g., the public interface utilizing UDP/TCP/HTTPS, and an accessibility test response is received at the service specific manager at step 374. Once the accessibility test is successfully performed, the service specific manager updates the location server component with the gwId, served, and an "active" indication. The service accessibility response is then forwarded back to the message router for routing ultimately back to the gateway appliance at step 377. In another aspect of peer-to-peer accessibility testing, an accessibility tester or like functionality may request one gateway device to determine whether it can talk to another gateway device, for example, for determining whether that other gateway device can receive inbound services. In operation, the accessibility tester functionality, via, for example, message routing functionality, sends a message to one gateway device to ping or try to access in other ways a second gateway device. The message may, for example, include each gateway device's identification information and access details such as IP address and port number. The requested gateway device then pings the second gateway device to determine whether it can reach the second gateway device and sends the results back to the accessibility tester functionality.
Gateway Appliance to Gateway Appliance Communications
Another feature made available in the system and method of the present disclosure is communication capability between the appliances. This feature, for example, may be utilized for enabling secure peer-to-peer sharing of data between or among the gateway appliances. FIG. 6F is an architectural diagram that illustrates an overview for communicating or sharing between the appliances. In one embodiment, signaling information is communicated via the signaling control channel, for instance, using XMPP, and then the gateway appliances 10, 10n negotiate the subsequent transfer of the media or data path. In one embodiment, this media or data need not travel via the signaling control channel. Thus, for example, an HTTPS path may be negotiated between the appliances 10, 10n.
Services
As mentioned, FIG. 2A describes the high level service framework upon which services 130 are built, e.g., downloaded via the support network as packages that are developed and offered by a service entity for customers. These services may be offered as a part of a default service package provisioned and configured at the gateway, or provisioned and configured subject to user subscription, and may be added at any time as plug-in service modules in cooperation with the service center. It is understood, however, that while the gateway appliance includes much of the intelligence or service logic for providing various services, it is also possible that for some services, some or all of the service logic may reside in the support network and/or with a third party provider.
Backup and Storage Services
The gateway appliance interoperating with the network support may further provide data backup and restore services. For instance, the gateway appliance may include a user interface and application or like functionality for allowing users to select files, for example, stored on the user's PC, on the gateway appliance, or on other endpoint devices for the backup and restore services.
The term "file" as used herein comprehensively refers to files, folders, directories, and any data in any format, for example, media, ascii, etc. The gateway appliance may encrypt, compress, and transfer the files to a backup storage. In one embodiment, the backup storage is a storage provided by a remote third party backup service data center. In this embodiment, data is backed up to and restored from the backup service data center, for instance, via interoperating with the support network, which, for example, interfaces to the remote third party backup service data center. In another embodiment, this backup storage may be at the gateway appliance itself, for instance, on the non-user accessible region of the gateway appliance storage that is only accessible by the services support network. Yet in another embodiment, files may be backed up in a distributed manner on the non-user accessible regions of other gateway appliances, for example, which may reside at other premises (it should be understood that one premise may have more than one gateway appliance). For instance, a file may be divided into multiple parts and each part may be backed up on different gateway appliances. Further, one or more parts may be backed up redundantly, that is, on multiple gateway appliances. Combinations of any of the above-described embodiments may be utilized for backup and restore services. In one embodiment, a user may provision and subscribe to the type of backup services desired with the provisioning and/or subscription service as described above.
Two-Stage File Back-Up
In one embodiment, the gateway appliance and support system architecture provide a file management feature generally including functionality that enables a user to back up files or content to a virtual memory storage area provided in the gateway appliance, and then subsequently forward the backed-up files to an external wholesale service provider providing the backup service.
Thus, the gateway storage device provides the protected storage for user files and user content at its side of the demarcation point in a two-stage storage process: 1) storing the content across the virtual demarcation point (partition) and then encrypting the content; and 2) dispersing the stored content to other gateway appliances, or to another storage location provided by the service center or by a partnered 3rd party back-up storage service provider. This could be performed automatically or on a scheduled basis. The gateway appliance knows where the pieces will be stored based on the service configuration and subscription. The locations of appliances that may back up content pieces are known at the network level, e.g., the hardware IDs of each of the gateways are known based on the unique identity of the appliance, and the mappings of the dynamically changing IP addresses of the appliances are known at the location servers, so the location of backed-up content for a user is always known. FIG. 14A depicts a process 800 for back-up file services using a third party storage provider according to one aspect of the invention. As shown in FIG. 14A, in a first step 801, the appliance has been programmed to initiate an automatic back-up process at the PC endpoint. Then, at step 803, the files to be stored from a user device, e.g., a PC, are first compressed, encrypted, and transferred to the gateway appliance 10. Referring back to FIG. 3, this service may be configured to automatically transfer 158 user data or files from an attached user controlled hard drive storage device to be backed up, and, optionally, compress and encrypt the data for storage at the network side of the demarcation point (the encrypted partition) where the service control network has visibility. Then, the appliance file manager functionality starts the backup manager module, which performs the file backup to the service center data center.
The backup manager checks to see if the user is subscribed and, if so, proceeds to create and index the backup data and gets the access key from the service center, as indicated at step 806. Once authorized, the back-up service key is provided to the appliance at step 807. Then, in stage 2 of the back-up process, as indicated at step 810, the backed-up files are transferred with the service key to the third party storage provider 96. Then, once the files are successfully stored at the 3rd party back-up storage service provider, a positive acknowledgement is communicated from the service provider to the appliance as indicated at step 812. It is understood that, in connection with the implementation of back-up services provided by partnered third party providers, for example, the gateway appliance is configured to communicate with the back-up file service provider via the web interface and thus requires the URL of the service provider with which the gateway should communicate. Configuration data is provided to the gateway appliance from the subscription manager as part of an initialization process that queries the service providers to obtain configuration data that can be sent back to the gateway appliance and that tells which versions, from a configuration perspective, to report back to the appliance. For back-up services, this may be a version 1 at URL 1, so the gateway appliance should go to this location, or, based on the location of the gateway appliance, it may be sent to URL 2. For each service, configuration data is provided to the appliance. This is all based on handshake communications. When the user invokes the service, the gateway appliance knows all that it needs to invoke the service. As controlled by the service center, in an alternate embodiment, the encrypted content to be stored is transmitted to another gateway appliance's storage locations beyond the respective demarcation points for storage thereat in a distributed, safe, and redundant manner.
That is, each file may be partitioned into a plurality of pieces for further transfer or storage in a redundant and secure manner, e.g., and transferred to the service control partitions behind the demarcation point. These pieces may then be encrypted and sent out externally for further storage, automatically, e.g., at time of log in, on a scheduled basis, or upon user initiation. FIG. 14B illustrates an example process demonstrating this "peer-to-peer" file backup in which files are backed up on different gateway appliances. At 1802, gateway appliance 1 determines the backup files and may save the files on gateway appliance 1. At 1804, gateway appliance 1 creates a backup file label. The backup file label, for example, may be a label associated with a backup file. At 1806, gateway appliance 1 hashes the backup file label to generate a backup file ID. At 1808, gateway appliance 1 routes a backup request addressed with the backup file ID to the peer-to-peer node, gateway appliance 2, whose zone covers the backup file ID. This routing to gateway appliance 2, in one embodiment, uses the gateway-to-gateway peer-to-peer communication mechanism described above. At 1810, gateway appliance 2 determines the available backup space over its neighbor zones. This determination may also be performed by gateway appliance 2 querying the gateway appliances in its neighboring zones using the gateway-to-gateway peer-to-peer communication mechanism described above. At 1812, gateway appliance 2 receives reports of disk availability from other gateway appliances in its neighbor zones. At 1814, gateway appliance 1 receives the IP addresses of gateway appliances with available storage space in gateway appliance 2's neighbor zones. At 1816, if space reservation is not successful, steps 1804 to 1814 may be repeated to reserve storage for backup in other gateway appliances. At 1818, if space reservation is successful, gateway appliance 1 encrypts the backup file. In one embodiment, appliance 1 breaks up the backup file into n blocks at 1820.
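Steps 1806 and 1808, hashing the backup file label to an ID and routing to the gateway whose zone covers that ID, can be sketched as follows. The 16-bit ID space, the use of SHA-256, and the zone layout are illustrative assumptions; the text does not specify the hash or the zone scheme.

```python
import hashlib

ID_SPACE = 2 ** 16   # assumed size of the backup-file ID space

def backup_file_id(label):
    """Step 1806: hash the backup file label to generate a backup file ID."""
    digest = hashlib.sha256(label.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % ID_SPACE

def zone_owner(file_id, zones):
    """Step 1808: find the gateway whose zone covers the backup file ID.
    zones is a list of (start, end, gateway) half-open [start, end) ranges
    that together cover the ID space."""
    for start, end, gateway in zones:
        if start <= file_id < end:
            return gateway
    raise LookupError(f"no zone covers ID {file_id}")
```

Because the hash is deterministic, the same label always routes to the same zone owner, which is what lets the restore flow of FIG. 14C find the request's target without central coordination.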
At 1822, appliance 1 generates n+m blocks of erasure codes. In general, an erasure code transforms a message of n blocks into a message with greater than n blocks such that the original message can be recovered from a subset of those blocks. At 1824, appliance 1 transfers, for example, using the secure gateway-to-gateway peer-to-peer communication mechanism described above, the blocks to n+m gateway appliances, that is, those determined to have storage space available, for example, those gateways in the neighboring zones of appliance 2. In one embodiment, different blocks may be transferred to different gateway devices, for instance, each block may be stored on a different gateway device. Further, each block may be stored redundantly, for example, on more than one gateway device. At 1826, information associated with this backup, for example, gateway appliance 1's ID, the backup file label, the area boundary coordinates of gateway appliance 2, and the IP addresses of the gateway appliances that have storage space available for backup, may be reported to the support network. FIG. 14C illustrates an example processing for restoring files backed up using the method described with reference to FIG. 14B in one embodiment. At 1852, appliance 1 determines the backup file label associated with a file being restored and the area boundary associated with the gateway appliances storing the file. At 1854, appliance 1 hashes the backup file label and generates the backup file ID. At 1856, appliance 1 routes, for example, via peer-to-peer communication as described above with reference to gateway-to-gateway communication, a retrieval request with the address backup file ID and area coordinates to another gateway node, appliance 3, whose zone covers the backup file ID. In one embodiment, appliance 3 need not be the same as appliance 2 described in FIG. 14B, although it can be. At 1858, appliance 3 transmits a retrieval inquiry over the neighbor gateway appliances within the area coordinates.
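The erasure-code transform at 1822 can be illustrated with its simplest case, m = 1: a single XOR parity block added to n equal-sized data blocks, from which any one missing block can be rebuilt. A deployed system would more likely use a stronger code (e.g., a Reed-Solomon-style code tolerating m losses); this sketch only demonstrates the n-blocks-in, n+m-blocks-out property the text describes.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_with_parity(blocks):
    """n equal-sized data blocks -> n+1 blocks, the last being XOR parity.
    This is an (n, n+m) erasure code with m = 1."""
    parity = blocks[0]
    for b in blocks[1:]:
        parity = xor_bytes(parity, b)
    return blocks + [parity]

def recover(blocks_with_one_missing):
    """Rebuild the one missing block (marked None) by XOR-ing the rest,
    since XOR-ing all n+1 blocks of a valid codeword yields zero."""
    missing = blocks_with_one_missing.index(None)
    acc = None
    for i, b in enumerate(blocks_with_one_missing):
        if i == missing:
            continue
        acc = b if acc is None else xor_bytes(acc, b)
    restored = list(blocks_with_one_missing)
    restored[missing] = acc
    return restored
```

Spreading the n+1 blocks across n+1 gateways means the backup survives the loss (or unreachability) of any single storing gateway, which is the redundancy goal of step 1824.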
At1860, gateway appliances of appliance3's neighbor or area zone report whether they have one or more file blocks associated with backup file ID. At1862, appliance2reports IP addresses of the gateway appliances holding file blocks associated with backup file ID. At1864, appliance1fetches the blocks from those gateway appliances storing the file blocks, decodes erasure codes into a file at1866, and decrypts the file1868. At1870, appliance1may inform a user that file restore has completed successfully. At1872, old backup file blocks may be cleared. Web Butler The gateway appliance is provided with a service that functions as a proxy for taking action on a user's behalf and includes the computer readable instructions, data structures, program modules, software agents, and objects that may be integrated with the actual service packages as a user feature. This proxy function may be configured to automatically upload pictures, for example, to a service provider via service module located at the device, or taking actions for other services on a user's behalf. Via the web butler proxy, implementing a search or in accordance with a user subscription, content from different internet-based media feeds (e.g., free content) may be aggregated and automatically downloaded to the gateway appliance. Automated Failure Recording and Recovery and Rules Based Notification Services: The gateway appliance is provided with a service that provides maintenance aspects of the gateway architecture managed at network service level. Such a service comprises automated failure recording and recovery platform management whereby a rules-based engine will be automatically notified and queried to implement a fix upon the detection of a system or service failure. The rules-based engine comprises a fix in the form of a process that may be performed at the service framework and/or platform management levels for each type of service failure. 
More particularly, the rules-based engine is provided as part of the service management feature through the platform management heartbeat connections with processing threads. Upon detection of a failure or alarm by the platform manager component, the rules-based engine will be queried to determine what actions to perform, e.g., a sequence of rules that directs functionality to go to the network and get a new firmware upgrade, for example, or to go back to a previous firmware version or configuration based on the rule specified to render the service operational. This will enable a service to be always available, with service failures automatically addressed without having to restart the platform. Additionally, notifications are sent to the service provider when failures occur at the appliance. File Sharing Services The file sharing service of the present disclosure in an exemplary embodiment allows users to share files, for example, pictures, music, videos, documents, presentations, grocery lists, bookmarks, etc., with friends and family members or other users. The files can be shared with the user's “buddy list” or other contacts maintained at the gateway appliance at a premise such as the home. Once users are authenticated, the gateway appliances may communicate with each other, for instance, using the mediated or negotiated media or data path between each. The gateway appliance may also track functionalities that the user is enabled to do, e.g., send photos at a digital picture frame to a buddy or other gateway appliance of a member of a community of interest, e.g., a family or friend, or share a picture for display on a television of a buddy. In one embodiment, presence and peering messaging protocols such as IM-based protocols may be used for sharing, and may interact from a protocol perspective, to push to a subordinate device at another gateway appliance, e.g., a digital endpoint such as a television or digital picture frame.
To accomplish this, a negotiation is made to determine who transfers what to which device based on the stored rosters, and to determine a signal path to accomplish the transfer, a process for accepting files for users in the home, and a process for acceptance of files for a particular user at the home, e.g., specific files may be accepted for some user devices to the exclusion of other devices, e.g., those belonging to a teenager or minor. In this manner, for example, file sharing (e.g., pictures, documents), movie list sharing, music playlist sharing, application sharing, and video cam sharing can all be managed by the community or network of gateway appliances that are designated as buddies. The gateway appliance, in one embodiment, maintains directories of access and sharing and which services are involved to access and transfer content. In addition to sharing data and files with different gateway appliances and endpoint devices connected thereto, data and files can be shared among the endpoint devices connected to the common or same gateway appliance. Thus, for example, a photo stored on a PC can be transmitted to a digital picture frame on the same gateway appliance network, emails received via the PC can be displayed on the television connected to the network, etc. Additional examples of functionalities associated with file sharing on a gateway appliance may include, but are not limited to: allowing a user to tag or add comments and descriptions to the files for sharing; allowing friends and family or the like viewing the shared file to leave their comments; providing a scratchpad function for sharing; allowing users to share widgets, RSS feeds, and tabs on their personal page with family and friends or the like; and allowing users to create a slideshow with media and share it with family and friends or the like. File sharing functionalities may be performed manually, semi-automatically or automatically.
For example, in addition to allowing users to select files and one or more users or user groups for sharing, there may be provided a personal page access or the like, which, for example, may present the user with predefined parameters for sharing such as files or folders designated previously for sharing, and a list of contacts preset for sharing. The list of contacts may have been imported from other applications such as email or IM buddy lists and/or entered manually. Thus, with a set of predefined parameters in place, file sharing may be invoked with minimum user interaction, for instance, as a one-click function. In another aspect, file sharing functionality may be set up such that, for example, when a change or new file is detected, the file sharing is invoked automatically. For example, a file or folder may be designated as an automatic share file or folder, and if any change in the data of that file or folder is detected, file sharing is initiated automatically. In a further aspect of the invention related to file-sharing, the gateway appliance and support system architecture provides a hosted service-virtual space on the centralized disk storage at the gateway for users. Dedicated areas of user storage may be designated as shareable, including application level sharing for any application that has data. As shown inFIG.3A, this virtual storage area159may be combined from its internal hard disc storage with any network attached storage device located at the network to create a single virtual file system that consumers can use like a single drive. Through the roster or buddy list enabled by the peer and presence messaging protocols implemented over TCP, the users may dynamically share in a virtual space via their PC or other endpoint devices connected with the gateway.
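The automatic sharing described above, in which a designated file or folder triggers a share whenever its data changes, amounts to change detection over designated content. A minimal sketch, assuming a polling model and hypothetical names (`detect_changes`, in-memory `files`/`seen` maps rather than a real filesystem watcher):

```python
import hashlib

# Illustrative change detector for an "automatic share" folder: any file whose
# content hash differs from the last observed hash is flagged for sharing.
def detect_changes(files: dict, seen: dict):
    """files maps name -> bytes content; seen maps name -> last known digest.
    Returns names whose content is new or changed, updating `seen` in place."""
    changed = []
    for name, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        if seen.get(name) != digest:
            seen[name] = digest
            changed.append(name)
    return changed
```

Each polling pass would then initiate sharing only for the returned names, so unchanged files produce no traffic between gateway appliances.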
Any type of data may be shared including user generated data such as, but not limited to: files, photos, slide-shows, video and audio files, .mp3 playlists, web-links or bookmarks, or any information (e.g., web-blogs, comments, discussion forums, personal information, and to-do lists) via secure gateway to gateway communications. Thus, for example, via the gateway, buddies could configure RSS feeds to their personal page on this share space. The shared file resides in a virtual, programmatic area in application-level space at the gateway. This data or file or information to be shared may be designated by the user and tagged, via an endpoint device user interface, to indicate the data or file to be automatically stored at the virtual file location for that user or information as shared content. The shared content communicated may have associated privileges depending upon the recipient, e.g., content is delivered with permissions given, e.g., read-only permission, or an update permission, e.g., to invite comments for sharing or discussion among buddies, at the virtual level. Thus, the invention provides for community sharing with a built-in management structure that enforces service subscriptions for such service. Coupled with permissions functionality whereby connected users may have permission sets associated with them, a dynamic virtual space sharing environment is provided where select users can be notified of certain events on a scheduled basis, for example. Permissions are enforced locally on a buddy-by-buddy basis, e.g., privilege granted to add comments via the messaging infrastructure. The gateway provides granular privilege support, e.g., read, write only, update privileges, etc., and the notification may be via telephone, IM, e-mail program, etc. Via the file-sharing interface provided by the gateway, buddies could “subscribe” for changes to such shared spaces.
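The locally enforced, buddy-by-buddy permission sets described above can be modeled as a small access-control structure. The sketch below is an assumption-laden illustration (the `Share` class and privilege names are not from this disclosure): each buddy carries a set of granted privileges, and an operation such as adding a comment is checked locally before it is applied.

```python
# Hypothetical model of per-buddy share privileges enforced locally at the gateway.
READ, COMMENT, UPDATE = "read", "comment", "update"

class Share:
    def __init__(self):
        self.grants = {}      # buddy name -> set of granted privileges
        self.comments = []    # (buddy, text) entries added under COMMENT privilege

    def grant(self, buddy, *privs):
        """Attach a permission set to a connected buddy."""
        self.grants.setdefault(buddy, set()).update(privs)

    def add_comment(self, buddy, text):
        """Apply an operation only if the buddy holds the matching privilege."""
        if COMMENT not in self.grants.get(buddy, set()):
            raise PermissionError(f"{buddy} lacks comment privilege")
        self.comments.append((buddy, text))
```

A read-only recipient would simply hold `{READ}`, while a buddy invited to discussion would hold `{READ, COMMENT}` or `{READ, UPDATE}`, matching the granular read/write-only/update privileges mentioned above.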
When there are changes or additions to the subscribed share space, the buddies will get notifications through email or IM or through their personal web page. Thus, if granted the privilege, via the peer and presence communications protocol implemented, a notification may be generated that is packet-transferred to the buddy's gateway device via TCP for indicating to a user that shared data is available, for example, that a shared space session is being initiated by a buddy, e.g., for the purpose of sharing an application, or adding comments. Other functionalities include, but are not limited to: viewing a to-do list on TV, or providing scratch pad capabilities; sending a signal from the gateway appliance to generate for display at the TV device the to-do list or any user generated data, via the messaging infrastructure; and provision of a single-click share service. This is especially applicable for VPN closed user group environments via a VPN providing a virtual closed network environment within which users (buddies, friends, family) may interact, e.g., share a common interface to enable real-time video gaming. As mentioned, file sharing may occur between and among different digital endpoint devices, among different gateway appliances, and among different digital endpoint devices associated with one gateway appliance and various endpoint devices associated with another gateway appliance, etc. For instance, a user may send a photograph (or any other file or media) from a mobile phone (or other digital endpoint devices) to a gateway appliance. The gateway appliance may forward that photograph to another digital endpoint device connected to the same gateway appliance. The gateway appliance may forward that photograph to another gateway appliance, which in turn may forward the photograph to an endpoint digital device associated with that other gateway appliance, for instance, another mobile phone, a digital picture frame, a PC, etc.
As digital endpoint devices may include WIFI or other wirelessly enabled digital cameras, sharing of files from those wirelessly enabled digital cameras may occur in a similar manner. Remote Web Access The remote web access feature in an exemplary embodiment provides a secure mechanism to connect to and access the gateway appliances from anywhere through the web. Public web proxy/redirect servers or like functionality of the present disclosure in one embodiment provide HTTP redirection and proxy services for public web access to the gateway appliances. In one embodiment, for the gateway appliances that reside behind external firewalls, VPN accessibility is provided. In one embodiment, a user may access a web page provided by the web proxy/redirect servers. The user is prompted to enter information such as a user identifier (ID) and password. Steps are initiated to enable establishment of a path or channel via which the information may be safely exchanged so that a secure communications session may be established between the remote web browser and the gateway appliance. For example, the web proxy/redirect server encrypts the user information (e.g., ID and password) and transmits the encrypted information over the always-on control channel described above to the target gateway appliance, that is, the gateway appliance the user is attempting to access via the remote web. The gateway appliance then authenticates the user ID and password, that is, determines whether the user ID and password are valid for accessing the gateway appliance. If the user ID and password are valid, the gateway appliance communicates to the web proxy/redirect server that the user can access the gateway appliance. The web proxy/redirect server in turn provides the IP address for the gateway appliance to the user for directly connecting to the gateway appliance, for instance, via a secure HTTP or HTTPS connection.
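The proxy-mediated login just described has a simple control flow: credentials travel to the appliance for validation, and only on success does the proxy hand back the appliance address for a direct HTTPS connection. The sketch below is a simplified illustration under assumptions (class names are hypothetical, and transport encryption over the control channel is omitted rather than modeled):

```python
# Simplified sketch of the remote-access handshake between the web
# proxy/redirect server and a gateway appliance. Illustrative names only;
# real deployments would encrypt credentials over the control channel.

class Appliance:
    def __init__(self, ip, users):
        self.ip = ip          # address returned to the browser on success
        self.users = users    # user id -> password (stand-in for a credential store)

    def authenticate(self, user, password):
        """Appliance-side decision: are these credentials valid here?"""
        return self.users.get(user) == password

class WebProxy:
    def __init__(self, appliances):
        self.appliances = appliances   # gateway id -> Appliance

    def remote_login(self, gateway_id, user, password):
        """Forward credentials to the target appliance; on success return its
        IP so the browser can connect directly (e.g., via HTTPS)."""
        appliance = self.appliances[gateway_id]
        if appliance.authenticate(user, password):
            return appliance.ip
        return None
```

Note that, matching the description above, the authentication decision lives in the appliance, not the proxy; the proxy only relays and redirects.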
In one embodiment, all service decisions with respect to further communications are decided at the home appliance. Thus, a user may remotely order a movie and have it downloaded to the user at his/her gateway appliance or remotely control home automation devices for controlling various devices at a premise. FIGS.9A-9Cillustrate the messaging flow scenarios for enabling remote access to functionality of the gateway appliance and any endpoint digital devices in several embodiments.FIG.9Aillustrates an example scenario400for enabling web port access to the premises gateway appliance when a web port is available (open);FIG.9Billustrates an example scenario of web port access when a web port is unavailable (closed); andFIG.9Cillustrates an example scenario of web port access to a VPN appliance at a private IP address. As shown inFIG.9A, in the steps shown at403, a web browser device enabling remote access to the gateway appliance and/or subordinate endpoint devices accesses the gateway appliance at its URL (public IP address). In this example scenario, the web port is available (open). The browser device is actually directed to a web proxy/redirect server that sends the location request to the location service component of the support network and waits for a response. The response may comprise the HTTPS location and other data for connection, e.g., the publicip webPort. A web request message is then communicated from the web proxy/redirect server to the gateway appliance and the gateway appliance returns its login page back to the web proxy/redirect server, and subsequently, the remote browser device. Then, the browser sends the user's login details and session details back to the gateway appliance via the web proxy/redirect server as indicated at step406.
The gateway appliance responds by sending its home page over a secure HTTPS session to the web proxy/redirect server which then redirects it back to the requesting browser along with connection information such as the gwIp, port, and session details. Subsequently, at step409the browser may initiate services at the gateway appliance via a web request, passing session details back to the appliance. In response, the appliance's home page may then be presented by the appliance to the browser over a secure IP based communications channel. As now described with respect toFIG.9B, at step404, a web browser device enabling remote access to the gateway appliance and/or subordinate endpoint devices may initiate access to the gateway appliance at its URL (public IP address). In this example scenario, the web port is closed; the web browser requests access to the gateway appliance at its URL. The browser device is actually directed to a web proxy/redirect server that sends an open web port request message to the message router or like functionality, which routes the message to the gateway appliance. The appliance supplies the web port details and sends them via messaging to the web proxy/redirect server. It should be understood that this method obviates the need to go to a location server in order to enable remote access to the gateway appliance. The next step407depicts the handshake messaging to establish and present a respective login page and home page redirect at the requesting web browser device, and step411depicts the handshake messaging to establish and present a home page for receipt at the requesting web browser device, as described with respect toFIG.9A, steps406and409, respectively. Alternatively, in one embodiment, the step shown at407can be omitted.
For instance, once the login user information or authentication information is input at the public web address and communicated to the gateway appliance as shown at steps404, the gateway appliance may validate the user and allow the remote web access request to come in through an HTTPS connection as shown at steps411. As shown inFIG.9C, at step413, a web browser device enabling remote access to the gateway appliance and/or subordinate endpoint devices initiates access to the gateway appliance at its URL (public IP address). In this example scenario, the gateway appliance is configured as a node in a virtual private network (VPN) at a private IP address. After the web browser requests access to the gateway appliance at its URL, the browser device is directed to a web proxy/redirect server that sends the open web port request message to the support network's message router or like functionality, which routes the message to the gateway appliance. At step415, the gateway appliance responds by sending a VPN routing request including its web IP address and port information to the web services manager component or like functionality, which forwards the routing request including the web IP and port information to the Network Address Translator (NAT) service. The NAT server sends a routing response including the extIP and port back to the web services manager component and back to the gateway appliance. The gateway appliance responds by providing a message with the web port details to the web proxy/redirect server, which formulates a web request message back to the gateway device through the intervention of the NAT service. The next step417illustrates the handshake messaging to establish and present a respective login page and home page redirect at the requesting web browser device. The next step421illustrates the handshake messaging to establish and present a home page for receipt at the requesting web browser device.
IM Server As mentioned, the gateway appliance is the central communication platform that interoperates with multiple devices in the home to form a home networking environment. In the context of home automation, the gateway appliance10is additionally provisioned with a home automation controller device that communicates with the IM server function to facilitate home network management, including: a home automation controller that interfaces with a TV/Web interface that interfaces with the digital media adaptor component and a device driver (e.g., USB) that interfaces with the home automation network (e.g., Zigbee network) via a home automation control node that is responsible for communicating with the “smart” devices designed for home automation. The digital media adaptor component further communicates with the TV device at the premises, and the TV/Web interface further interfaces with the computing device, e.g., PC630at the premises. Further, the IM server functionality interfaces with an IM client that is either local (at the premises) or remote and may include a SIP phone or a PC. In the context of home automation services, the appliance supports multiple types of home automation controllers and multiple protocol standards including a variety of non-IP protocol standards and vendor specific proprietary protocols such as Insteon, Zwave etc. This enables the user to integrate multiple vendor devices in the home. It is further understood that the controller device itself may support more than one automation protocol such as Insteon or (legacy) x10 devices and these protocols will be transported via RF or electrical path. The gateway appliance only communicates with controllers via vendor specific protocols. Via the IM server functionality610, the local or remote IM client may be provided with IM-based state notification messages, e.g., messages of any alarm generated. 
The IM client device may receive device state notification messages166via the appliance's e-mail application, a phone call, or at a PC directly, without implementing functionality at a central server. Thus, when events are detected, for example, a change in the device's status or parameter(s) the appliance10generates alert notifications166, via the notification manager which is part of the presence and networking module shown inFIG.2C, for receipt at the IM client device. Moreover, as shown inFIG.15via the IM server functionality, a user is able to control home networking devices665or home automation devices locally or remotely. For example, this functionality specifically provides means to configure and control networking devices and home automation devices, e.g., networked light switch166controlling light fixture167to show up as controllable entities, via a list (not shown), on another device, e.g., the PC630or television632via a STB or DMA such as shown inFIG.15. Users thus receive immediate notifications of changes or check on connectivity or status of the home devices via communications from the gateway. Thus, the gateway may be programmed through a service offering or as a default to enable the IM notification directly on the TV via overlay onto a video signal at the home. Additionally, the gateway, through its device registry which is part of the presence and networking module, provides a list of the device state/parameters (status) of many devices that are connected to the gateway for additional control services, e.g., via a local PC client notification message161. One example of such a notification is shown inFIG.16which depicts an example user interface675showing a generated list680of devices connected to the gateway and their current status. For instance, as shown inFIG.16, the presented columns include the device, device identifier, the device status (e.g., ON/OFF), a type of device it is, and its scheduled operation/activity. 
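The device registry and state-change notifications just described can be sketched as a small in-memory structure. This is an illustrative assumption (the `DeviceRegistry` class and its methods are not from this disclosure): the gateway tracks each connected device's type and status, mirroring the columns of the FIG.16list, and emits an alert notification only when a state actually changes.

```python
# Illustrative device registry: tracks connected home-automation devices and
# emits notifications on state changes. Names are assumptions for this sketch.
class DeviceRegistry:
    def __init__(self, notify):
        self.devices = {}     # device id -> {"type": ..., "status": ...}
        self.notify = notify  # callable delivering alerts to an endpoint device

    def register(self, dev_id, dev_type, status="OFF"):
        """Add a device to the registry with an initial status."""
        self.devices[dev_id] = {"type": dev_type, "status": status}

    def set_status(self, dev_id, status):
        """Update a device's status; notify only on an actual change."""
        if self.devices[dev_id]["status"] != status:
            self.devices[dev_id]["status"] = status
            self.notify(f"{dev_id} -> {status}")
```

The `notify` callable stands in for the notification manager's delivery paths (IM, e-mail, TV overlay); listing `self.devices` yields the status table presented in the user interface.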
Via the interface, a user may be able to control or change the status of a device, e.g., lights, by selecting on/off functionality embodied as user selectable buttons677. The home automation controller functionality of the appliance responds by generating appropriate signals that are forwarded to the home automation control node625to effect the status change of a particular device. In one embodiment, an additional control interface685is provided to effect a change in analog type devices, e.g., dimmer switch. Thus, via this example interface, a user may check on the status of each of these devices and send commands to change the status information. Any change in status of these devices will come as notifications or alerts. For example, an assisted living device665, e.g., a sensor, monitors user behavior or biological function and checks behavioral patterns against stored patterns. If there is determined a break in the pattern, when detected by the system, an automatic notification may be generated and provided to a user endpoint device, e.g., the PC or TV, etc. Gateway appliances are able to communicate with each other to share information through this IM server functionality provided at the appliance. All the messages and commands are communicated through a secure network connection. Appliance GUI For ease of operation, the appliance provides a GUI interface that supports functional test, diagnostics, and control capabilities for itself and for the other home network devices that it communicates with. The test and diagnostics include logs, statistics, and alarms (alerts) for use by service support centers and users. The control capabilities include automated configuration and management. To this end, users of gateway appliances10, . . . 
,10n, access the web/internet via a personal computer/computing device, mobile or laptop computer, personal digital assistant, or like device implementing web-browser functionality, e.g., Firefox 1.5 and Internet Explorer® 6.0 or later, or other browsing technology that may be compatible. In an exemplary embodiment, the browser interface employs the latest user interaction techniques, e.g., Web 2.0, and implements web development technologies such as AJAX (Asynchronous JavaScript and XML). With respect to accessing the gateway appliance and services via a web interface, users will log in to a home page screen (not shown) via a web-based communication by entering a username and a password. Upon submitting this login information, both the username and password will be validated. If either the username or password is invalid, then an appropriate error message is displayed explaining the nature of the error. If the login is successful and the gateway appliance has already been initialized, a user's personal page will be loaded by default, which page is user-configurable. A tooltip functionality is provided for more details about the status. If the status is red, the user can select the status indicator to get a diagnostic screen with a network map (not shown). This screen will display the current status of all devices managed by the gateway appliance and includes a button to allow the user to test the current status. A top bar is also used to indicate the progress status of any backup jobs currently running. A tooltip is provided to indicate the schedule name and the progress percentage. The top bar is also used to indicate the space usage of the user. A tooltip is provided to indicate the percentage of the space used by the user of the allocated space configured by the administrator user. There is a label provided that displays the current user information (e.g., administrator), and next to the label is a link to log out of the home center.
When the user clicks on the logout link, the user's web session will be invalidated and the login page will be displayed. A further link is provided to change the user preferences. For example, when the user clicks on the “Preferences” link, a dialog box will be displayed that will allow the user to change the user preferences settings such as color, font, and themes. If the feature represented by the icon is not available, then the icon will be grayed and a tooltip will be provided to display an explanation. Although not shown, notifications for each feature are displayed as an animated icon below that feature in the second bar. A tooltip is provided with more details for each notification. When the user clicks on the notification icon, that feature page will be loaded to display the detailed notification information. As shown inFIG.17A, the list of user-selectable tabs or icons is provided that enable user interactivity with the services provided by the gateway appliance. These icons include: a personal page icon712that displays the personal page allowing a user to organize and configure a set of useful “widgets” provided by the gateway device; a photos icon714that displays a photos page allowing a user to browse stored images; a music icon716that displays a music page allowing a user to browse stored music; a file sharing icon718that displays a file sharing page allowing a user to view and manage the shared files with buddies; a calendar icon724that displays a calendar page allowing a user to manage their own calendar; a phones icon720that displays a “phones” page allowing a user to view and manage the list of voicemail and call logs stored at the gateway appliance; a backup icon726that displays a backup page allowing a user to view and manage the backups managed by the gateway appliance; and a home automation icon724that displays a home automation page allowing a user to view and manage the home automation devices.
Backup Services GUI As shown inFIG.17A, upon selection of the Backup icon726, there is displayed a backup page730such as shown inFIG.18A. The backup page allows a user to view and manage the backups managed by gateway appliance. A title bar of the content area displays the total number of backups performed by the gateway appliance. There is a search box731provided that allows the user to find any files that have been backed up. Each matched file should be displayed in a list with all metadata. The submenu on the right provides options for the user to access the backup history and schedules. As shown inFIG.18A, via backup page730, in response to selecting a history option732, a list or pop-up display is generated to display the backup history data in a table with the following columns: 1) schedule—the name of the backup schedule, when the user clicks on the name, a backup schedule screen will be displayed; 2) status indicator—the status of the backup; 3) files—the number of files that were backed up; 4) date—the date and time the backup was done, when the user clicks on the date, the backup details will be displayed so that the user views the list of files that were backed up; 5) size—the total size of the files that were backed up; 6) type—the type of backup, e.g., recurring—full backup of all files every time the backup runs, or once—immediate backups; and 7) actions—the actions that can be done on each backup. When the user clicks on the restore icon, all of the files in the backup will be restored to their original location. As shown inFIG.18B, via backup page730, in response to selecting a schedules option734, a list or pop-up display735is generated to display the total number and types of scheduled backups. 
The content area lists the schedule data in a table with the following columns: the name of the backup schedule, when the user clicks on the name, the schedule details will be displayed so that the user can modify the schedule; the status indicator—the status of the backup, for a backup in progress, a dynamic progress bar and percentage indicator will be displayed; the last backup—the date and time of the last backup for this schedule; the next backup—the date and time of the next backup for this schedule; the type—the type of backup, e.g., a recurring or full backup of all files every time the backup runs, or immediate backups, a single back-up or an incremental backup; and the actions that can be done on each schedule as implemented by selecting an icon from a group736of icons, e.g., a stop icon, which when selected, stops the current backup in progress. When the user clicks on the stop icon, the user will be prompted to keep the files that have already been backed up; pause/start—pause the current backup in progress or start a backup that is not in progress; a report icon which when selected, causes for display a report of the backup (report screen design); and an icon for deleting the schedule. When the user clicks on the delete icon, the user will be prompted to confirm the delete operation; and an immediate backup (backup now) option. FIG.18Cdepicts the resulting backup schedule screen737resulting from selection of the “documents” backup name from the screen depicted inFIG.17A. Via this screen, functionality is enabled that allows the user to edit an existing schedule for each of the files and folders738managed by the schedule as displayed. The content area displays the current schedule information; however, the user can change the schedule settings and press the update button to modify the schedule. Particularly, the user may change the list of files for the schedule, or clone and modify an existing schedule.
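The schedule record behind this screen, and the clone-and-modify action in particular, can be sketched as a simple data shape. All field and function names below are hypothetical illustrations of the columns listed above, not terms from this disclosure; the clone is deep-copied so that editing the new schedule's file list leaves the original untouched.

```python
import copy

# Hypothetical data shape behind the backup schedule screen, with the
# clone-and-modify operation described for editing schedules.
def make_schedule(name, files, backup_type="recurring"):
    return {"name": name, "files": list(files), "type": backup_type,
            "last_backup": None, "next_backup": None, "status": "idle"}

def clone_schedule(schedule, new_name, **changes):
    """Clone an existing schedule, then apply the user's modifications."""
    cloned = copy.deepcopy(schedule)   # deep copy so file lists are independent
    cloned["name"] = new_name
    cloned.update(changes)
    return cloned
```

For example, a user could clone a recurring "documents" schedule into a one-time schedule with a different file list without disturbing the original.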
It is understood that the file backup feature may be additionally integrated with use of a calendar application. The user may additionally press the cancel button to return to the list of scheduled backups. FIG.18Ddepicts the resulting backup report screen739resulting from selection of the report action icon from the screen depicted inFIG.18B. This allows the user to view a status report of a backup schedule with the content area displaying the status information about the backup job. Filesharing Services GUI Returning toFIG.17A, upon selection of the fileSharing icon718, there is displayed a filesharing page740an example page of which is shown inFIG.19A. The filesharing page allows a user to view and manage the shared files (shares) with buddies. The shares are grouped by the following type: files, photos, slideshows, playlists, and tabs, etc. The submenu742allows the user to see the list of shares of each type. As shown inFIG.19A, the shares are displayed as a list743including the name (e.g., documents), date created, date modified, expiration date, and number of views for each share is displayed. The shares can be sorted by each column by clicking on the header label for that column. The total number of shares of each type is displayed in the title bar of the content area (in displayed list743).FIG.19Adepicts an example screen display showing file type shares displayed in list743. The user is additionally enabled to delete a share, e.g., by clicking on a “delete” icon (not shown). For example, by moving the mouse over a share name, an icon is displayed that allows the user to delete the share. The user can view the files that make up the share by clicking on the name link for each share. Continuing toFIG.19B, upon selection of the shares name, e.g., documents, via example page740shown inFIG.19A, there is displayed a list745of files of that shared file type, e.g., documents. 
The files and folders in the share are displayed as a list that includes a thumbnail, file name, title, description, tags, date, size, buddy rating (which buddies downloaded the file and when), and the total number of comments added by buddies. The title, description, and tags can be modified by inline editing. For folders, users can drill into the folder item and then see the list of files shared in that folder. The buddies that make up the distribution list for the share may be displayed individually as shown in content area 744. Each buddy that has not viewed the share is highlighted. When the mouse is moved over a buddy name, an icon is displayed next to the name providing functionality to remove that buddy from the distribution list. There is a link next to the list of names to add a new buddy. When the user clicks on the add buddy link, a list of other buddies is displayed, and the user can select which buddies to add from the list. The expiration date of the share is additionally displayed; the user can change the expiration date using inline editing. Upon moving the mouse cursor over an item, an icon is displayed to allow the user to remove the item from the share. An icon is displayed next to the number of comments; when the user clicks on the icon, the list of comments is displayed inline below the metadata of the shared file. The user can collapse the comments by clicking on the icon again.
Scratchpad GUI
Additional functionality is implemented such as adding items to the share by using the scratchpad, which functions as a visual clipboard to collect items for use at a later time. To display the scratchpad, the user clicks on the Show Scratchpad link 747 in the header shown in the example display of FIG.19B. Any items in the scratchpad can then be dragged and dropped onto the list of items to add them to the share.
With respect to use of the scratchpad, as shown in FIG.17B, a show/hide scratchpad link is provided on the right side of the header to display the scratchpad. When the user clicks on the show scratchpad link, a scratchpad area is displayed to the right of the content area. The scratchpad has window controls to maximize and minimize its content area. Users can then drag-and-drop items from the content area 705 onto the scratchpad area. Items can also be added on other devices such as a TV; the TV is only used to flag such an item, and any operations on the items in the scratchpad are done from the web GUI. Items are displayed as thumbnails with metadata. When the user moves the mouse over the thumbnail, a tooltip is provided with all of the details for that item. Each item in the scratchpad has a link to remove it from the scratchpad. Items in the scratchpad can be grouped into collections, and the total file size of the items in each collection is displayed. By default, there is a collection called "My Collection". The user can change the name of the collection by using inline editing. When the user clicks on the new collection link 711, a new collection boundary is added to the bottom of the scratchpad. Users can move items between collections by using drag-and-drop functionality. Each collection has a link to remove it from the scratchpad. When the user right-clicks on a collection, a context menu may be displayed providing an option for sharing the files in the collection. A dialog (not shown) is presented that displays the list of buddies to share with. A right-click context menu (not shown) additionally allows the user to save the collection as a slideshow, photo album, or music playlist depending on the type of items in the collection.
Home Automation Services
In the home networking environment, the gateway appliance operates as the management center for managing the various services and devices that form the home network.
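The scratchpad's collection model described above (a default "My Collection", per-collection file-size totals, and drag-and-drop moves between collections) can be sketched as follows. The class and method names are illustrative, not from the patent:

```python
# Minimal sketch of the scratchpad as a visual clipboard with collections.
class Scratchpad:
    def __init__(self):
        # By default, there is a single collection called "My Collection".
        self.collections = {"My Collection": []}

    def add_item(self, item, collection="My Collection"):
        """Models dropping an item (thumbnail plus metadata) onto the pad."""
        self.collections.setdefault(collection, []).append(item)

    def move_item(self, item, src, dst):
        """Models drag-and-drop of an item between two collections."""
        self.collections[src].remove(item)
        self.collections.setdefault(dst, []).append(item)

    def total_size(self, collection):
        """Total file size of the items in a collection, as displayed."""
        return sum(item["size"] for item in self.collections[collection])

pad = Scratchpad()
pad.add_item({"name": "photo.jpg", "size": 2048})
pad.add_item({"name": "song.mp3", "size": 4096})
pad.move_item({"name": "song.mp3", "size": 4096}, "My Collection", "Beach Trip")
```

Saving a collection as a slideshow, photo album, or playlist would then just serialize one of these item lists according to its item types.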
One of the services offered by the gateway appliance is the home automation service. Via a home automation page, the user is enabled to view and manage the home automation devices. The home automation service is enabled/disabled by the service center. When enabled, the gateway will be able to communicate simultaneously with multiple home automation vendor controllers installed in the home. If the installed controllers are supported by the gateway, they may be automatically discovered by the gateway. When being provisioned, the following elements are processed: 1. system configuration; 2. map builder; 3. event builder; 4. scene builder; and 5. group builder.
System Configuration
When the gateway appliance is first introduced in the home network and the home automation service is enabled by the service provider, the gateway appliance detects and automatically discovers the following components: all the controllers that are part of the home automation network and whose protocol is supported on the gateway appliance; all the end devices supported on each of those controllers; and the firmware versions on each controller and end device. Once the controllers are detected, the gateway appliance allows the administration user to configure the detected controllers. As mentioned herein, once the controllers have been detected and configured, the gateway appliance updates the Firmware Update Manager (FUM) with the controllers and end devices supported on the home network along with their current firmware versions. For each gateway appliance on the managed network, the FUM maintains the knowledge of the controllers and end devices. It is the responsibility of the FUM to keep track of firmware updates for controllers and end devices and inform the gateway appliance when an upgrade is available. The gateway appliance additionally maintains a table of the controllers and supported end devices on each controller.
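The controller/end-device table that the gateway maintains and reports to the FUM can be sketched as a small registry. The patent does not specify a data model, so the names and report format below are assumptions for illustration:

```python
# Illustrative sketch of the controller/end-device table the gateway
# maintains and reports to the Firmware Update Manager (FUM).
class DeviceRegistry:
    def __init__(self):
        # controller id -> {"firmware": version, "devices": {device id: version}}
        self.controllers = {}

    def discover(self, controller_id, firmware, end_devices):
        """Record an auto-discovered controller and its supported end devices."""
        self.controllers[controller_id] = {
            "firmware": firmware,
            "devices": dict(end_devices),
        }

    def report_to_fum(self):
        """Flat inventory of (id, firmware version) pairs sent to the FUM,
        which tracks available upgrades for each entry."""
        report = []
        for cid, info in self.controllers.items():
            report.append((cid, info["firmware"]))
            report.extend(info["devices"].items())
        return report

reg = DeviceRegistry()
reg.discover("insteon-ctrl-1", "2.1", {"lamp-a": "1.0", "thermostat-1": "1.3"})
print(reg.report_to_fum())
```

The FUM would compare this inventory against its catalog of firmware releases and notify the gateway when any entry has a newer version available.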
This table is later associated with user-defined labels used for the GUI display, as will be described in greater detail herein.
Map Builder
The map builder component provides the computer readable instructions, data structures, program modules, objects, and other configuration data for enabling a user to configure the home automation service. In this process, two types of maps are generated: a general map and a detailed map. The general map allows the user, during configuration, to label end devices, e.g., "stairway lamp", "joe's dimmer desk lamp", "downstairs HVAC unit", etc. The user selects or designates a specific device and can turn it on, turn it off, or change it to a specific setting (for example, set the "joe's dimmer desk lamp" to 50% power, or set the thermostat of the "downstairs HVAC unit" to 75 degrees). The detailed map extends the capabilities of the general map by including a floor plan to associate with the labeled end devices and enables the following: 1) constructing a floor plan of the house; 2) labeling end devices, e.g., "stairway lamp", "joe's dimmer desk lamp", "downstairs HVAC unit", etc.; and 3) associating devices with specific rooms by dragging and dropping icons in specified locations in the room. The user may also generate an automation network map of the home and select a specific device and turn it on/off or change it to a specific setting (for example, set the "joe's dimmer desk lamp" to 50% power, or set the thermostat of the "downstairs HVAC unit" to 75 degrees). An administrator/user has the ability to create two types of maps that are used by authorized personnel (service provider/home user) for diagnostics: the home automation network map (termed the "network map"), which includes the gateway appliance, all the controllers, and all the controlled devices in their specific locations; and the controller-specific map (termed the "controller map"), which includes the map of each controller and the devices controlled by that controller.
Thus, the gateway appliance supports a map builder process to enable the admin/user to build the maps. In this process, a user is enabled to: 1) create a floor plan of the house; 2) group each room as part of a section such as "upstairs", "downstairs", "east", "west", "basement", etc. (if the user does not want to use a section, the default value can be "downstairs"); and 3) label each room with an appropriate name such as "Joe's room", "living room", "kitchen", etc. Hence, the gateway appliance may provide a list of standard labels as given below: living room, formal dining, family room, kitchen, breakfast room, second living room, third living room, foyer, front porch, patio, <username> bedroom (this label could be used multiple times with a different username), master bedroom, master bath, hall bath, <username> bath (this label could be used multiple times with a different username), media room, and user specified. Each of the icons representing the controlled devices can be labeled with a unique user-defined label (such as Joe's desk lamp, kitchen lamp, etc.), or comprise standard labels. Each of the controlled devices is additionally assigned status indicators. The gateway appliance shall provide pre-defined status indicator templates for each type of end device (e.g., if the lamp has a dimmer switch, then that lamp will have a dimmer switch template). Hence, the status indicators are assigned either automatically (the gateway appliance communicates with the controller and gets the status indicators for each end device) or manually (the user assigns the status indicator). Examples of status indicators include, but are not limited to: on/off for a lamp, the dimmer setting on the lamp, the temperature for an A/C/heater unit, etc. The gateway appliance may provide a set of standard status indicators as shown in the table below.
No.  Device type                Status indicator

1    Gateway appliance          Packet throughput statistics (WAN/LAN, AP);
     (performance statistics)   temperature conditions of the system board(s);
                                disk usage; disk fragmentation;
                                process management (to be displayed only to the
                                service provider): 1. all processes running on
                                the gateway appliance, 2. all configuration,
                                3. start/stop control of all processes running
                                on the ROS;
                                IM agent: 1. each IM user client, 2. status and
                                statistics for IM-based notification services

2    Automation controller      Status of each end point device connected to
     (map display)              each controller

3    PC                         Connected/not connected; last time backup was
                                done; last time file was shared

4    Phone                      Last reboot date; last registration date;
                                operational status

5    Media adaptors             File formats supported; max throughput allowed;
                                DRM supported; operational status

6    MOCA adaptor/HomePNA       Node configuration; MAC control parameters; PHY
                                control parameters; Vendor ID parameters;
                                traffic statistics; operational status (what
                                channel, etc.)

Event Builder
The gateway appliance supports an event builder process to further automate the home by enabling detection of an "event" that enables/disables the activity of a device. Example events may include, but are not limited to: a rain threshold being exceeded, an alarm going off, or motion detected by the motion sensor. For example, a trigger may be set to turn off the sprinkler system if a "rain level exceeded" event occurs. Another example is to record a video snapshot if a motion detector event is received. When an event enabled through the event builder gets activated, the user is notified. An event trigger is built as part of the set up procedure.
This builder includes events that, when detected, trigger an action, e.g., to automatically enable/disable the activity of a device, such as shown in the table below:

Trigger event                           Triggered action

Rain threshold exceeded                 The sprinkler system is deactivated until
                                        the rain threshold level is no longer
                                        exceeded
Alarm goes off                          The alarm is again set for activation to
                                        the preset configuration until deactivated
Motion detected by the motion sensor    Captured on the camera/light comes on

Scene Builder
The gateway appliance supports a scene builder process to enable the setting of "scenes" or scenarios enabling users to control multiple devices simultaneously. For example, the user may have a "sleep time" scene, either scheduled to occur automatically or invoked by the user at a certain time. When the "sleep time" scene is invoked, lights are automatically turned off, blinds are drawn, thermostats are adjusted, night lights are turned on, etc. Instead of the user manually moving about the house and making these adjustments, the user schedules this automatically in the gateway appliance or simply invokes the scene via a web-based graphical user interface. The scene builder enables a user to: construct, modify, or delete scenarios; schedule scenes to be automatically or manually invoked; obtain a status check or user control of constructed scenes from all local and remote interfaces; and create user-defined scenes. The gateway appliance includes a default scene builder for the user to use and modify if needed. One exemplary default scene builder is configured as shown in the table below:

                Lighting                     Thermostat             Blinds
       Device labels     Dim        Zone     Temp       Device labels     Raise
Scene  for the rooms     setting    number   (deg F.)   for the rooms     %

Night  Living room       50%        1        75         Kitchen blind 1   0
scene  Lamp 1
       Kitchen lamp 1    100%       2        85         MB blind 1        100
                                    3        off

Preferably, the device labels for this default scene builder are constructed based on the labels that the user has created while building the network map.
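The trigger-to-action mapping shown in the event table above amounts to a small dispatch table: a detected event invokes its configured action, and the user is notified. Event names and handler functions below are illustrative stand-ins, not identifiers from the patent:

```python
# Minimal sketch of the event builder's trigger-to-action mapping.
actions_log = []

def deactivate_sprinkler():
    actions_log.append("sprinkler deactivated")

def rearm_alarm():
    actions_log.append("alarm re-armed to preset configuration")

def capture_snapshot_and_light():
    actions_log.append("video snapshot captured; light on")

# Each trigger event is bound to the action it enables/disables.
TRIGGERS = {
    "rain_threshold_exceeded": deactivate_sprinkler,
    "alarm_went_off": rearm_alarm,
    "motion_detected": capture_snapshot_and_light,
}

def on_event(event, notify):
    """Run the configured action and notify the user, as described above."""
    if event in TRIGGERS:
        TRIGGERS[event]()
        notify(f"event: {event}")

on_event("rain_threshold_exceeded", notify=actions_log.append)
```

A scene could equally be registered as a triggered action, which is how an event can invoke a scene as described in the events GUI later in this section.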
Group Builder
The gateway appliance supports a group builder process to enable the grouping of items together under a labeled name. All the devices in a group will go to the same state. This is in direct contrast to the "scene builder", where the devices included in a scene may be set to different levels. If light 1, light 2, and light 3 are in a group, then a single command (e.g., "ON") executed on the group will cause all the lights in the group to be in the "ON" state. Alternately, in another configuration, if light 1, light 2, and light 3 belong to a "night scene", then when the night scene is invoked, light 1 could be lit at 50%, light 2 at 20%, and light 3 may be "OFF". The built group may become part of a scene which can be invoked automatically or manually, or be invoked as a group whereby all the individual components are set to the same final state.
Controller Status Indicators
In order for the user to control the various automation devices, it is important that these devices are monitored on a regular basis and the result of the monitoring is displayed to the user. The status indicators of the controlled devices provide a means to monitor the automation devices. The home automation controllers are capable of tracking the status of the devices that they control through the methodology implemented for communication between the controller and the devices (e.g., the Z-Wave/Insteon protocol). These controllers may communicate with the end point devices using multiple protocols. Some of these protocols have a "closed loop" design, i.e., the devices provide an acknowledgement back to the controller so that if the acknowledgement is not received, the controller retries the command (e.g., the Insteon protocol). These types of controllers can send control and simple data between devices (e.g., a light switch turning on multiple lights/devices) within the home.
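The group/scene contrast described above can be sketched directly: a group drives every member to the same state, while a scene assigns each member its own level. Device names are illustrative:

```python
# Sketch contrasting group builder and scene builder semantics.
devices = {"light1": "off", "light2": "off", "light3": "off"}

def invoke_group(group, state):
    """All devices in a group go to the same state (e.g. a single 'ON')."""
    for name in group:
        devices[name] = state

def invoke_scene(scene):
    """A scene may set each member device to a different level."""
    for name, state in scene.items():
        devices[name] = state

invoke_group(["light1", "light2", "light3"], "on")
assert all(s == "on" for s in devices.values())

night_scene = {"light1": "50%", "light2": "20%", "light3": "off"}
invoke_scene(night_scene)
print(devices)  # {'light1': '50%', 'light2': '20%', 'light3': 'off'}
```

A built group can then appear inside a scene simply as one scene entry applied to every member, matching the "group as part of a scene" behavior above.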
To give an example of the above-mentioned scenario, assume that the controller is controlling an Insteon-based lamp A and that the gateway appliance knows (through status indication) that lamp A is on. The gateway appliance tells the controller to turn off lamp A; in response, the controller (e.g., Insteon-based) transmits this signal (RF and/or electrical) to the physical entity lamp A. When the lamp goes off, the controller receives an ACK/NACK back from the device acknowledging that lamp A was or was not turned off (if the end device is an X10 protocol-type device, then no acknowledgement is received); the gateway appliance then updates the status indicator of the device depending on whether an ACK or NACK was received. Thus, if the ACK/NACK is not received within a configurable period of time, the gateway appliance reissues the command to the controller and restarts the timer. If there is no ACK/NACK received by the time the timer expires, the gateway appliance alerts the user. The gateway appliance polls all the controllers at a configurable time interval, e.g., 5 minutes. Alternatively, the gateway appliance may receive events from the controllers informing the gateway appliance of the status of the end devices. Either way, the gateway appliance maintains the status of the devices based on the polling/event result. The status of each device is reflected on the network/controller map. For all X10 devices, the status indicates a value of "unknown". If a controller was configured by the user as a managed device but the gateway appliance does not receive any communication message from the controller, then the status indicator reflects the lack of communication. If an error code is received from the controller, then the gateway appliance either translates the error code and displays it in common language or directs the user to a help page where the error code is explained.
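The acknowledgement handling just described (wait a configurable period for an ACK/NACK, reissue the command once, then alert the user) follows a simple retry pattern. This is a sketch only; the send/wait/alert callables are illustrative stand-ins for the gateway's controller messaging:

```python
# Sketch of the gateway's ACK/NACK timer behavior: initial send plus one
# retry, then a user alert if the controller never answers.
def execute_command(send, wait_for_ack, alert, timeout=5.0):
    for attempt in range(2):            # initial send plus one reissue
        send()
        reply = wait_for_ack(timeout)   # returns "ACK", "NACK", or None
        if reply is not None:
            return reply                # status indicator updated from reply
    alert("acknowledgement not received")
    return None                         # status indicator left unchanged

# Controller that only answers the second time (e.g. a lost RF frame):
replies = iter([None, "ACK"])
result = execute_command(
    send=lambda: None,
    wait_for_ack=lambda t: next(replies),
    alert=print,
)
assert result == "ACK"
```

A NACK return would map to the "device not responding" display described below, while a late reply arriving after the second timeout is simply ignored.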
If a controller is able to detect (possibly through an error code) that a managed end device is not responding to it, then the gateway appliance interprets that detection and conveys it to the user by either displaying the error in common language or directing the user to a help page where the error code is explained. When the gateway appliance receives an indication from the user, through any of the access methods, to execute a particular command on a device or a group of devices, the gateway appliance responds by performing the following: it maps the user command to a corresponding message to be sent to the controller and then sends the message to the controller; and it waits for the acknowledgement message from the controller for a configurable preset period of time. If the message is received within the preset time period and the message indicates that the activation/deactivation was a success, then the gateway appliance sets the status indicator of the corresponding device based on the message. If the message is received within the preset time period and the message indicates that the activation/deactivation was a failure and a reason code accompanied the failure indication, then the gateway appliance maps the reason code to a user-friendly message and displays the message. If no reason code was indicated in the failure message, then the gateway appliance displays the message "unknown reason". In the event that the controller did not get an ACK back from the controlled device, the controller may send a NACK message to the gateway appliance indicating that the device did not respond. When the gateway appliance receives this message, the gateway appliance displays the message "device not responding" to the user and does not change the status indicator of the corresponding device. If the acknowledgement message is not received within the preset time period, then the gateway appliance retransmits the message one time and restarts the acknowledgement timer.
If the controller does not send an acknowledgement message the second time before the timer expires, then the gateway appliance displays the message "acknowledgement not received" to the user and does not change the status indicator of the corresponding device. If the acknowledgement message is received after the acknowledgement timer expires, the gateway appliance ignores the message.
User Access and Control
The home automation device is accessible remotely or in-home. Each of these accesses can be enabled through multiple interfaces as defined in the following table:

No.  Type of interface                      Remote access   In-home access

1    Web                                    X               X
2    IM                                     X               X
3    Phone (touchtone, IVR)                 X               X
4    TV                                                     X
5    Manual                                                 X
6    Remote controller (vendor supplied)                    X
7    A mobile device                        X               X

The administrator has the ability to enable or disable remote access for any of the automation entities given in this table. As defined herein, the home automation feature is password protected. The system supports two levels of user access, User and Administrator. The administrator is able to perform all operations, including setting privileges for each user. The system implements default settings for each new user. The methods of access and control are individually enabled or disabled by the administrator. For example, a user may have access to see the status of the automation devices, but not to reset the devices. The gateway appliance supports configuration and provisioning activities via remote access (through the web) as defined herein. Thus, when a gateway appliance is powered up in a home, an administrator/user who has remotely, through the web, logged into the home automation service on the gateway appliance is capable of configuring the gateway appliance.
Web Access and Control
Via a web-based interface providing access to gateway appliance functionality, the gateway appliance generates a map-like view of the automation devices and their status.
Wherever possible, the graphical user interface provides a graphical representation of the current device status, e.g., "light on", "light off", "door open", etc. When accessed via the web, the system provides a menu-driven method of control, or a map-driven (network map and controller map) method of control. An administrator/user may be responsible for setting the remote access privileges for all the users. When the control is menu driven, the display consists of: a device name (such as lamp or blinds); a room label on the device (Joe's desk lamp, kitchen blind); a status indicator (lamp dimmed 50%, blinds raised 50%); and an action to be taken. The following table presents example room labels such as described herein and examples of status indicators for each device:

Device name
(including room label)   Status indicator                    Action to be taken

Blinds                   Raised: 0%, 25%, 50%, 75%, 100%     Raise what %
Lamp                     On/off;                             Switch on/off or
                         dim: 25%, 50%, 75%, off             dim what %
Door                     Open/close                          Open/close
Garage door              Open/close                          Open/close
Window                   Open/close                          Open/close
Alarm                    Motion sensor: on/off               On/off;
                         Front door: open/close              open/close
                         Back door: open/close
                         Window 1: open/close
                         Window 2: open/close
                         Window 3: open/close
                         Garage door: open/close

Each action entered by the user is recorded temporarily, and once the user has input all the actions and confirmation is received to apply the actions, the actions are executed. After the actions are executed, the web page is refreshed with the updated status indicators. Access to the network map of the home automation system may be governed according to privileges, where users have the ability (privilege to be set by the admin) to view the network map of the home automation system through the web. The status indicator of each device is displayed on the network map. Once the network map is displayed, the user is able to change the setting on each device. The user does this by clicking on the device that he/she wants to set.
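The menu-driven control table above, together with the record-then-confirm behavior, can be modeled as a per-device-type action map and a pending queue. All names and the validation logic are illustrative assumptions:

```python
# Illustrative mapping from device type to its valid status values,
# following the menu-driven control table above.
DEVICE_ACTIONS = {
    "blinds":      {"status": ["0%", "25%", "50%", "75%", "100%"],
                    "action": "raise to %"},
    "lamp":        {"status": ["on", "off", "25%", "50%", "75%"],
                    "action": "switch on/off or dim to %"},
    "door":        {"status": ["open", "close"], "action": "open/close"},
    "garage door": {"status": ["open", "close"], "action": "open/close"},
    "window":      {"status": ["open", "close"], "action": "open/close"},
}

pending = []  # actions are recorded temporarily until the user confirms

def queue_action(device, value):
    """Record one requested action, rejecting settings not in the menu."""
    if value not in DEVICE_ACTIONS[device]["status"]:
        raise ValueError(f"invalid setting {value!r} for {device}")
    pending.append((device, value))

def confirm():
    """Apply all queued actions at once; the page then refreshes."""
    applied, pending[:] = list(pending), []
    return applied

queue_action("blinds", "50%")
queue_action("lamp", "off")
print(confirm())  # [('blinds', '50%'), ('lamp', 'off')]
```

Applying the batch only on confirmation mirrors the described flow in which inputs are recorded temporarily and executed together once the user confirms.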
At this point, a configuration window is displayed to the user that includes the status indicator parameters that can be changed by the user. Once the user completes configuration of all the chosen devices to the new settings, an updated view of the network map is displayed to the user without the user having to refresh the view. The administrator or user is additionally enabled to zoom in on a particular controller on the network map and view, in another window, the controller map, which displays each device that the controller controls. The status indicator of each device is displayed on the controller map.
IM Access and Control
A user has the ability to connect to the home automation service in the gateway appliance device through IM from a PC or any other IM interface that is supported. Particularly, the gateway appliance is configured to appear as a buddy in the user's buddy list. The name for the gateway appliance IM buddy client user agent as it appears on the buddy list is configurable. Once the user clicks on the gateway appliance buddy, the following events are initiated: the user is entered into an IM "chat" mode; a menu option with "home automation" as one of the options is displayed to the user; and when the user chooses "home automation", the user is prompted for a password. Once the password is authenticated, the user is capable of querying status, changing status, and optionally receiving notifications via IM. An example IM interface dialog is presented to the user that will display one or more of the following: whether any unexpected events have occurred (in which case the user may be prompted to enter an instruction); a request for a status check; a change of a device status; and a review of an event log. In one example, upon selection of the change device status request, the user will be prompted with choices for selecting a device type, e.g., light switch, garage door, outlet, sprinkler system, or a main menu option.
Furthermore, in one example, upon selection of a device, e.g., a garage door, the user will be prompted to select the actual garage door, e.g., door 1, door 2, or the main menu. Thus, the user interaction is text based and menu driven.
TV Interface Access and Control
The TV interface supports both the menu option and the network map options as described herein. The user is able to designate particular events and their updates (such as time and temperature) to be continuously displayed on the TV when media is playing on the TV. The TV interface displays notifications of events as designated, e.g., A/C breakdown, water leak, and motion detected.
Telephone Interface and Control
The user is additionally able to connect to the home automation service in the gateway appliance device by dialing their home number, e.g., a 10 digit home number. A sequence of events may be executed in response to a received call that has been answered, e.g., the call is considered answered if the voice mail gets connected. The user may be given the option to escape out of the voice mail if so configured. The menu option for IM with text to speech conversion is available and shall be offered when the home automation choice is selected; the user is prompted for a password. If the right password is entered, then the user receives a confirmation, e.g., an audible tone, that he/she is in the home automation command interface. The same commands offered in IM with text to speech conversion are offered for the phone interface. Once the user connects to the gateway appliance, the user is connected to menu-driven IVR-type functionality. The menu presented is exactly the same as the IM interface as described herein. The home automation interface on the gateway appliance is deactivated when the phone goes off hook. The home automation interface on the gateway appliance is activated even if the voice mail picks up the call.
Mobile Device Interface
The system optionally supports WiFi-IP for interacting with mobile devices. Thus, a user may access the home automation service through an HTML-supported mobile device in a manner similar to the web interface as defined herein. Additionally, the user screen is modified to fit the mobile device. For example, users may optionally connect wireless IP cameras to the gateway appliance system and stream or store IP video and review this data from a web or TV interface.
Home Automation Services GUI
A home automation page can be generated to display the following for all of the controlled devices: 1) the user-defined name for the device (after the devices are automatically detected, the gateway appliance will default the name to the type, and the user can modify the name by using inline editing); 2) a status indicator icon or label indicating the current status and possible value of the device; 3) a user-defined room in which the device is located (when the devices are automatically detected, the default room will be empty; however, the user can change the default room by using inline editing); 4) the manufacturer and type of device; and 5) any actions that can be performed on each device. As shown in FIG.20B, via home automation screen 750, in response to selecting a room option 754, a list or pop-up display 760 is generated to allow the user to view and design all of the rooms in the house. A total number of rooms can be displayed, and an icon is provided that enables the user to add a new room to the list. When the user clicks on the add icon, a new room screen will be displayed. The content area lists the rooms in a table with the following columns: 1) a user-defined name for the room (the user can modify the name by using inline editing); 2) the floor on which the room is located (the user can change the default floor by using inline editing); and 3) the actions that can be performed on each room, e.g., delete the room.
When the user moves the mouse over each room in the table, the room plan is displayed inline. As shown in FIG.20C, via home automation screen 750, a user may edit an existing room. The content area thus displays the current room name and floor, which the user can edit to change. As shown in FIG.20C, the content area displays the layout editor 763 for the room, which includes a list of icons for devices that can be dragged and dropped onto a work area for that room. The devices are grouped into categories that can be selected from a drop-down menu 764. When a category is selected, the icons for that category are displayed in the list. Once an icon is dropped onto the work area 765, it can be moved around using direct manipulation and be labeled by using inline editing. An icon can be removed from the work area by using the delete option from the right-click menu or by dropping it outside of the work area. Each icon has a status indicator to show the current status of the device if it is managed by the gateway appliance. The user can change the room settings and press an update button to modify the room. As shown in FIG.20D, via home automation screen 750, a user may view and control all of the groups of devices created by the user upon selection of the group menu option 756. When the user clicks on the name for a group, the settings for that group will be displayed. An icon is displayed to allow the user to add a new group to the list. When the user clicks on the add icon, the new group screen will be displayed, such as shown in FIG.20E. Particularly, in FIG.20D, the content area provides a list of the groups in a table 770 having the following columns: 1) the user-defined name for the group, which can be modified; 2) the list of devices that make up the group; and 3) the actions that can be performed on each group. In FIG.20E, the home automation group screen allows the user to edit an existing group. The content area displays the current group name and the list of devices 772.
The user can use inline editing to change the group name. An add device drop-down menu 773 allows the user to select a new device to add to the list. When the user clicks on the add icon, the device will be added to the list of devices that displays the names of the devices. The actions column allows the user to delete a device from the list. When the user clicks on the delete icon, the user will be prompted to confirm the delete operation. The user can change the group settings and press an update button to modify the group, or press the cancel button to return to the list of groups screen. As shown in FIG.20F, via home automation screen 750, a user may view and control all of the scenes created by the user upon selection of the scenes menu option 758. Generally, upon selection of the scenes menu option 758, the total number of scenes is displayed via the list 775 shown in FIG.20F. An icon is displayed to allow the user to add a new scene to the list via functionality implemented via the example GUI shown in FIG.20G. When the user clicks on the add icon, the new scene screen will be displayed. The content area 775 lists the scenes in a table with the following columns: 1) the user-defined name for the scene (the user can modify the name by using inline editing); 2) the schedule to activate the scene, if defined; 3) the current status of the scene, e.g., either on or off (the user can click the icon to toggle the status of the scene); and 4) the actions that can be performed on each scene. In FIG.20G, the home automation scene screen allows the user to edit an existing scene. The content area 777 displays the current scene name and schedule. The user can use inline editing to change the scene name, e.g., "morning", change the scene settings, and/or press the update button to modify the scene. The user can press the cancel button to return to the list of scenes screen.
As shown in FIG. 20H, via home automation screen 750, a user may view and control all of the events generated by the gateway appliance. In the exemplary screen display shown in FIG. 20H, the total number of events is displayed. An icon is displayed to allow the user to add a new event to the list. When the user clicks on the add icon, a new event screen is displayed for user configuration. The content area lists the events in a table 778 with the following columns: the user-defined name for the event; the automation event; the scene to be invoked for the event, with a hyperlink provided to the scene screen; the current status of the event, e.g., either on or off, where the user tests the event mapping by clicking the icon to toggle the status of the scene; and the actions that can be done on each event. As shown in FIG. 20I, the home automation event screen allows the user to edit an existing event. The content area displays a list 779 providing the event name, automation event, and scene. The user can use inline editing to change the event name, or change the event settings and press the update button to modify the event. The user can press the cancel button to return to the list of events screen.

Integration with the Calendar

When the user sets a schedule on any of the home automation devices, the schedule will be integrated into a calendar application. For example, if the user has scheduled housekeeping tasks at specified times, then the calendar automatically reflects those tasks. In each user's calendar, only the tasks assigned by that user are reflected. Similarly, if the user has utilized the scene builder to generate a "night scene" that is initiated at 8 p.m. every day, then the calendar shows the scheduling of the "night scene" every day at 8 p.m. The user is then able to click on the scheduled tasks and modify them. When the tasks are displayed on the calendar, the user may click on a task and make any changes to it. The calendar is updated to reflect the changes.
Thus, if the time of the scheduled task was moved from 8 p.m. to 9 p.m., then the calendar automatically refreshes to show the new scheduled time.

Alarms, Logs, Statistics and Diagnostics

Referring to FIG. 4, the support services network 50 also may process alarms, logs, and statistics. For example, alarm aggregator 82 of the alarm subsystem may collect alarm statistics from the gateway appliances, for example, using the signaling control channel, and from other network elements, preprocess and screen the collected alarms, and pass them to the alarm subsystem for appropriate processing. Alarm aggregators, for example, may request alarm and diagnostic information from selected gateway appliances at predetermined intervals. Alarms may also be collected on demand, for instance, when a user requests diagnostic information for a selected gateway appliance. Further yet, the gateway appliances may communicate alarms as they occur. The alarm subsystem 82, 85 may log and provide diagnostic access to all the network elements and utilize the underlying accessibility framework for hard-to-reach gateway appliances. The alarm subsystem 82, 85 further may run diagnostics, configuration control, and provisioning control. FIG. 6G illustrates an architectural overview of alarms and statistics aggregator functionality in the support network in one embodiment. In an exemplary embodiment, the alarms and statistics aggregator functionality 82 provides a method for monitoring and troubleshooting the gateway appliances. The aggregator functionality 82 in general may provide a collection point for all alerts and statistical data from the gateway appliances. These alerts and statistics may be massaged, e.g., filtered or reformatted, and forwarded to a third-party network management system 85 for monitoring and managing the gateway appliances. In another aspect, the aggregator functionality 82 may serve as a conduit for querying information from each gateway appliance 10.
Information requests may be performed through the Simple Network Management Protocol (SNMP), initiated by the network management system 85. Alarms from the gateway appliance 10 may travel through the connection manager or like functionality 60 and the message router or like functionality 62 to the aggregator(s) 82 or like functionality to reach the NMS 85. In one embodiment, the aggregator(s) 82 receive alarms, logs, and statistics in the format of XMPP messages. The following depicts an example alarm message structure generated by the gateway appliance for communication to the service center. The alarm message shows an SNMP Trap encapsulated in XMPP.

    <stream:stream xmlns='jabber:client' id='c2s_345'
        from='gateway [email protected]'
        to='[email protected]' version='1.0'>
      <SNMP Trap>
        Version number    = SNMPv2
        Community name    = gateway appliance-Network
        PDU type          = Trap
        Request ID        = 1
        Error status      = 0
        Error index       = 0
        Variable bindings = {
          Alarm     = rhcCallRoutingFailure
          Subsystem = gateway appliance-Voice
          Region    = 75080
        }
      </SNMP Trap>

In one example embodiment, the SNMP Trap is formatted in XML or a like mark-up language. In operation, the aggregator 82 translates messages from the gateway appliance to an SNMPv2c format and then forwards the messages to the NMS 85. Aggregators 82 may also translate SNMP get queries from the NMS 85 to XMPP messages for sending to the gateway appliances. Regardless of the direction of communication, the aggregators 82 translate the message to the appropriate protocol. Other network elements may utilize the aggregator 82 for alerts and statistical requests. In one embodiment, alarms from any network elements of the support network may travel directly to the NMS 85 without passing through the aggregator 82. In one embodiment, a mechanism for load balancing and redundancy may be provided for the alarms and statistics aggregators 82. One mechanism may include performing load balancing across the aggregators 82 through a separate application or functionality referred to as an alarm component.
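The aggregator's translation step (an XMPP-carried trap mapped onto SNMPv2c trap fields) can be sketched as follows. The dictionary keys stand in for the parsed payload of the example message above; they are illustrative, not a wire format:

```python
def translate_xmpp_trap(xmpp_body):
    """Sketch of the aggregator's XMPP-to-SNMPv2c translation step.

    `xmpp_body` represents the already-parsed <SNMP Trap> payload; field
    names mirror the example alarm message and are assumptions.
    """
    required = ("version", "community", "pdu_type", "request_id", "bindings")
    if not all(k in xmpp_body for k in required):
        # Failure to translate a message would itself raise an alarm.
        raise ValueError("untranslatable trap")
    return {
        "Version number": xmpp_body["version"],
        "Community name": xmpp_body["community"],
        "PDU type": xmpp_body["pdu_type"],
        "Request ID": xmpp_body["request_id"],
        "Variable bindings": dict(xmpp_body["bindings"]),
    }


trap = translate_xmpp_trap({
    "version": "SNMPv2",
    "community": "gateway appliance-Network",
    "pdu_type": "Trap",
    "request_id": 1,
    "bindings": {"Alarm": "rhcCallRoutingFailure",
                 "Subsystem": "gateway appliance-Voice",
                 "Region": 75080},
})
```

The resulting structure is what the aggregator would hand to the NMS; the reverse direction (SNMP get to XMPP) would apply the mapping in the opposite order.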
Alarm components may manage connectivity between the message routers 62 and the aggregators 82 as well as evenly distribute incoming messages across all aggregators 82. The aggregators and components may run in an N+1 configuration, which may permit an aggregator or component to be unavailable without affecting the collection of alarms and statistics. In addition, there may be aggregators dedicated to translating and passing SNMP gets for querying information from the gateway appliances. These aggregators may communicate directly with the alarm components to forward the XMPP translation of an SNMP get to a message router 62, which forwards the XMPP message to the gateway appliances. In one embodiment, the majority of aggregators may be dedicated to routing messages to the NMS 85, while a few, such as one or two aggregators, may handle routing messages to the gateway appliances. In another embodiment, connectivity to the message router functionality 62 may be integrated directly into the aggregator 82. In this embodiment, the routing of messages from the gateway appliance 10 to an aggregator 82 may be based on the following: each gateway appliance may establish a static connection to a message router 62. Each connection manager 60 may have a static connection to a message router 62. Each aggregator 82 may establish static connections to multiple message routers 62, for instance, with no router having more than one aggregator connection. Messages from an appliance 10 may then flow through a common path to the same aggregator. If an aggregator is unavailable, then messages for that aggregator may route to the closest available aggregator through the message routers 62. Aggregators or like functionality 82 may themselves generate an alarm, for instance, upon failure of the aggregator to translate a message to SNMP. Similarly, an alarm may be generated upon failure of the aggregator to translate a message to XMPP and forward the message to a gateway appliance.
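The routing scheme above (a stable "common path" from each appliance to one aggregator, with fallback to the nearest available instance) can be sketched with a deterministic hash; the function and data shapes are assumptions for illustration:

```python
def route_to_aggregator(appliance_id, aggregators):
    """Pick an aggregator for an appliance, failing over if it is down.

    `aggregators` is a list of (name, available) pairs. Hashing the
    appliance ID gives each appliance a stable primary aggregator, so
    messages from one appliance normally flow through a common path;
    when that instance is unavailable, the next available one is used.
    """
    if not any(up for _, up in aggregators):
        raise RuntimeError("no aggregator available")
    # Deterministic stand-in for a real hash of the appliance identifier.
    start = sum(map(ord, appliance_id)) % len(aggregators)
    for offset in range(len(aggregators)):
        name, available = aggregators[(start + offset) % len(aggregators)]
        if available:
            return name


primary = route_to_aggregator("gw-124", [("agg-0", True), ("agg-1", True)])
fallback = route_to_aggregator("gw-124", [("agg-0", False), ("agg-1", True)])
```

With all instances up the appliance keeps its primary aggregator; taking that instance down reroutes the same appliance to the next available one, which is the N+1 behavior described above.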
The alarms and statistics aggregator 82 or like functionality may also generate alarms, for instance, upon establishing a successful connection to a message router, a failed connection, when an active connection is lost, or upon failure to forward a message to a gateway device. Such alarms may include the IP address or the FQDN of a message router and a reason or description for the event. In one embodiment, each NMS may be associated with a service provider. For enabling aggregators or like functionality 82 to route messages received from the gateway appliances to a specific service provider NMS 85, the alarm message may include an identifier for the service provider, or the aggregator 82 may query the service provider, for instance, from an external source, based on the gateway appliance's ID, such as its Jabber ID. In addition, the aggregator 82 may track a list of IP addresses and ports for each NMS 85. Further, the aggregator 82 may support the option to route messages to one or more NMS 85 based on the service provider associated with the gateway appliance sending the message. In one embodiment, the alarms and statistics aggregator or like functionality 82 may support different states when active. One state may be unlocked. During the unlocked state, the aggregator receives incoming messages and translates messages. Another state is locked. In the locked state, the aggregator is no longer accepting incoming messages; however, the application, i.e., the aggregator, may still be translating messages. This state may be useful for gracefully halting or shutting down an aggregator. Generally, an administrator may be given privileges to be able to move an aggregator instance into a locked state or unlocked state. In addition, an administrator may be enabled to shut down an aggregator instance. Further, the aggregator cluster 82 may be designed such that a single aggregator instance may be upgraded or shut down without affecting or having to shut down or stop all aggregator instances.
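The unlocked/locked behavior can be sketched as a small state machine: a locked instance rejects new messages but keeps translating what it already holds, which is what makes a graceful shutdown possible. Names are illustrative:

```python
class Aggregator:
    """Sketch of the unlocked/locked administrative states (names assumed)."""

    def __init__(self):
        self.state = "unlocked"
        self.queue = []  # messages accepted but not yet translated

    def lock(self):
        # Locked: stop accepting incoming messages; keep translating.
        self.state = "locked"

    def unlock(self):
        self.state = "unlocked"

    def accept(self, message):
        if self.state != "unlocked":
            return False  # rejected while locked
        self.queue.append(message)
        return True

    def drain(self):
        # Translation continues in either state, so a locked instance can
        # finish its backlog before the administrator shuts it down.
        processed = list(self.queue)
        self.queue.clear()
        return processed


agg = Aggregator()
agg.accept("alarm-1")
agg.lock()
rejected = agg.accept("alarm-2")   # refused while locked
backlog = agg.drain()              # still translates queued messages
```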
In another embodiment, the alarms and statistics aggregator or like functionality 82 may be monitored. A monitoring agent may oversee the various aggregator processes and watch over the state of its servers or like functionality, for example, to ensure the aggregator service remains available. The monitoring agent may perform appropriate notifications to appropriate components or functionality if any problems are detected during the monitoring process. In another aspect, there may be monitoring agents for other elements or functionalities in the support network. In one embodiment, the alarms and statistics aggregator or like functionality 82 may maintain various counters and statistics relating to the number of messages and events occurring within each aggregator instance. For example, each aggregator instance may track a list of gateway appliances from which it has received incoming messages and/or the number of incoming messages it receives from a gateway appliance; track the number of messages discarded due to its inability to translate a message from XMPP to SNMP, or from SNMP to XMPP; and track the total number of messages discarded due to the unavailability of a gateway appliance, etc. This information may be queried from each aggregator instance through the use of an SNMP get and stored in the NMS 85 for near real-time and historical reporting. The report may be available to the network administrator for monitoring traffic levels across the aggregator instances. There may be an option, for example, for an administrator to reset or clear one or more or all counters or information. Logging is a useful function for troubleshooting events that occur within an application. The alarms and statistics aggregator may include a generic process responsible for logging messages. Logs of processing performed by the aggregators may be recorded and stored.
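The per-instance counters listed above can be sketched as a small statistics object, including the administrator's reset option. Class and field names are assumptions:

```python
from collections import Counter


class AggregatorStats:
    """Sketch of per-instance counters an aggregator might expose via SNMP get."""

    def __init__(self):
        self.incoming = Counter()      # incoming messages per gateway appliance
        self.discarded_translation = 0  # could not translate XMPP<->SNMP
        self.discarded_unavailable = 0  # destination appliance unavailable

    def record_incoming(self, appliance_id):
        self.incoming[appliance_id] += 1

    def record_discard(self, reason):
        if reason == "translation":
            self.discarded_translation += 1
        elif reason == "unavailable":
            self.discarded_unavailable += 1

    def reset(self):
        # The administrator's option to clear one or more or all counters.
        self.__init__()


stats = AggregatorStats()
stats.record_incoming("gw-1")
stats.record_incoming("gw-1")
stats.record_discard("translation")
```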
Such processing may include, but is not limited to, attempts to connect to message routers, failed connections including the IP addresses and port numbers or FQDN of the message router, and lost connections, etc. In addition, incoming or outgoing messages in the aggregator may be logged, including, for example, messages it failed to forward. In addition, other network elements, servers, or service functionalities may be capable of logging events and statistics and generating alarms based on various processing performed specific to each server or functionality in the support network. The alarms and statistics aggregator may also interface with those network elements to collect various alarm and statistical data related to processing. From the gateway appliance perspective, the gateway appliances may have the ability to generate alarms when a pre-configured threshold value is exceeded on the device. A user may have an option to set the method by which the user may be notified when an alarm is generated. In one embodiment, multiple notification events may be defined on the appliance. These notification events may be capable of being associated with different roles so that the assigned user can be notified when the event occurs. Examples of different methods of notification may include, but are not limited to, e-mail, a text or SMS message, instant messaging, personal page, TV, and telephone. Every role (type of user) may have the ability to receive notification for any notification event. In one embodiment, the same notification or alarm event may be notified in multiple ways to the same user. Analogously, the same notification or alarm event may be notified in multiple ways to different users. The generated alarms may be logged and their statistics generated. Similarly, other information may be logged and its statistics generated. Alarms, logs, and statistics kept in the support network may be accessed by a user at the gateway appliance using a web services interface in one embodiment.
Further, an HTML GUI may be provided for the user to access the alarm, log, and statistical information associated therewith. Different levels of logging may be enabled or disabled depending on the access privileges set through configuration. The gateway appliance, in addition, may be enabled to filter logs, alarms, and statistics based on search criteria. Example functionalities based on which an appliance may generate logs and statistics are defined in Table 1.

TABLE 1

Sample of system                Condition for logs and statistics to be
functionality                   generated

monitor the CPU                 if the CPU utilization exceeds preconfigured
utilization on a                threshold values the system may create a log
continuous basis

Memory                          the amount of memory usage

monitor the disk                if the disk utilization exceeds preconfigured
utilization on a                threshold values the system may create a log
regular basis                   to capture:
                                - preconfigured threshold value
                                - current utilization level

Keep track of                   if unsuccessful login attempts are noticed the
login attempts                  log shall capture:
                                - date and time of the login attempt
                                - the userid of the login attempt
                                - number of attempts made

Track firewall                  Provide firewall process logs.
probe attempts                  If the number of attempts exceeded the
                                maximum allowed value within a preconfigured
                                duration, the events may be logged.

Track bandwidth                 1. Log the status
manager status                  2. Log any throttling actions
                                3. Log any pre-configurable thresholds

With respect to alarm generation, the gateway appliance is capable of: 1) displaying alarms on the network map of the user; 2) sending an alarm to the service provider; and 3) sending an alarm to the user. Any/all of these methods can be configured against a particular alarm.
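The threshold rules in Table 1 share one shape: compare a current value against a preconfigured threshold and, on an exceedance, create a log entry capturing both. A minimal sketch, with illustrative field names:

```python
def check_threshold(metric, value, threshold):
    """Sketch of the Table 1 pattern: a log entry is created only when the
    current value exceeds the preconfigured threshold, and the entry
    captures both the threshold and the current level."""
    if value <= threshold:
        return None  # within limits: nothing to log
    return {"metric": metric, "threshold": threshold, "current": value}


cpu_log = check_threshold("cpu_utilization", 92, 85)   # exceeded -> log entry
disk_log = check_threshold("disk_utilization", 40, 85)  # within limits -> None
```

The same comparison would drive the alarm conditions in Table 2, with the alarm's configured actions (notify the user, alert the service provider) taken on a non-None result.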
Example conditions under which alarms are generated are given in Table 2:

TABLE 2

Name of alarm       Condition for alarms to be          Actions to be taken
(examples)          generated

TEMP_ALARM          If the temperature thresholds on    shutdown the GW device and,
                    the devices that are temperature    optionally: send an alarm to
                    controlled are exceeded             the OAMP Server; notify the
                                                        configured user

DISKUTIL_ALARM      the disk utilization exceeds        notify the configured user
                    critical levels

SYSDOWN_ALARM       when applications or subsystems     send alarms to the support
                    are down or service faults are      network; notify the
                    detected                            configured user

FIRMUPFAIL_ALARM    failed firmware upgrade             notify the configured user

LOGIN_ALARM         If the number of login attempts     notify the configured user
                    exceeds the configured maximum
                    value within a preconfigured
                    duration

Billing

The gateway appliance in an exemplary embodiment is an interactive device for a premise such as the home that enables users to purchase and activate services. The support network 50 thus further may provide bill collecting capabilities for services rendered at the gateway appliance. Examples of services may include voice, media such as movies and music, backup services, home automation, file sharing, and parental control, etc. Referring to FIG. 4, the billing aggregator or like functionality 58 may communicate with the gateway appliances and the third party service providers (e.g., VOD(s), CA(s), wholesale voice providers, backup services, etc.) to collect and correlate billing records. In one embodiment, the gateway appliances and other network elements may generate records of billable events, which may be used for billing, verification, troubleshooting, and other purposes. The gateway appliances, for example, may record all billable events and send the data to the billing aggregator or like functionality 58 using, for example, the signaling control channel, for instance, via the message router.
This transmission of billing data may occur at a regular or predefined interval, or at any other desired time or period. Thus, from a gateway appliance perspective, an appliance may keep records of usage information and events (referred to as event records) associated with services such as those associated with voice calls, media services, etc. In one embodiment, it may be possible to derive billing data from a single event record without having to correlate it with any other event record. The gateway appliance 10 interfaces, e.g., transparently through the routing manager functionality, with the billing collector 58 and sends the event records to the billing collector or like functionality 58, which collects event records from all gateway appliance platforms. In one embodiment, the collection may be executed at predefined intervals, for example, as configured on the individual gateway appliance 10. In one embodiment, the gateway appliance may be capable of initiating the transfer of the generated event records to the billing collector, e.g., at configurable intervals. An example protocol used for communicating the event records between the gateway appliances and the billing collector is XMPP, although not limited to such. XMPP is defined in IETF RFCs 3920 and 3921. For example, the process of transferring the records generated by the appliances may be through the XMPP protocol and the application layer protocol attributes. Example attributes of the XMPP protocol may include: the appliance initiating a "message" stanza; the "to" attribute containing the full JID of the billing collector or like functionality; a stream-unique "id" assigned to the message; and the body of the message containing the appliance-generated event record in a string format.
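The stanza attributes listed above, together with the application-layer fields described next (message ID, sequence number, byte count), can be sketched as a message builder. The dict shape and the collector JID are placeholders, not the XMPP wire encoding:

```python
def build_event_record_message(record, seq, collector_jid, msg_id):
    """Assemble one event-record transfer as a plain dict (sketch).

    Mirrors the attributes described in the text: the collector's full JID
    in "to", a stream-unique "id", an application-layer sequence number,
    the byte count of the record, and the record itself as a string body.
    """
    body = str(record)
    return {
        "to": collector_jid,
        "id": msg_id,
        "seq": seq,
        "bytes": len(body.encode("utf-8")),  # total bytes in the event record
        "body": body,
    }


msg = build_event_record_message(
    {"call": "local", "secs": 42},
    seq=1,
    collector_jid="billing-collector@support.example",  # hypothetical JID
    msg_id="m-001",
)
```

The byte count travels with the message so the collector can verify the body on receipt, as described below.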
An example application layer protocol may contain data such as a unique message ID, which may be different from the message ID of the XMPP layer, a message sequence number (e.g., 1, 2, 3, etc.), and the total number of bytes in the event record contained in the body of the XMPP "message." As mentioned, one or more gateway appliances 10 may communicate billable events via XMPP messages that include billable events to the billing collector or like functionality 58 via a message router or like functionality 62. In one embodiment, the message for the event record transfer from a gateway appliance 10 to the billing collector 58 may use a two-way handshake. Thus, in one embodiment, the billing collector 58 sends an acknowledgment to the appliance for every message received. The appliance may resend the message if it does not receive an acknowledgment, e.g., within a predetermined time. In one embodiment, when the billing collector 58 receives the message from the appliance, it checks the message for errors (e.g., whether the total number of bytes in the enclosed event records is equal to the total number of bytes mentioned in the application layer attributes parameter). If there are no errors, the billing collector 58 writes the data to a file and stores it, e.g., on a storage device. In one embodiment, the billing collector 58 then sends the acknowledgment to the appliance. The acknowledgment message, for example, may contain the same message ID as the received message, so that, for instance, the appliance can identify that it is a receipt of the message sent. As mentioned above, in one embodiment, if the gateway appliance does not receive the acknowledgment message within a predetermined wait time, it may resend the message. Thus, in one embodiment, the billing collector 58 may receive a re-send of the previous message. However, in one embodiment, the billing collector 58 need not know that it is a re-send.
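The collector side of the two-way handshake (verify the byte count, then acknowledge with the same message ID, or return an error reason) can be sketched as follows; field names are illustrative:

```python
def collector_receive(message):
    """Sketch of the billing collector's receive step.

    Checks that the body's byte count matches the application-layer
    attribute; on success acknowledges with the same message ID, on
    mismatch returns an error message with a reason.
    """
    actual = len(message["body"].encode("utf-8"))
    if actual != message["bytes"]:
        return {"id": message["id"], "error": "error in bytes received"}
    # On success the record would be written to a file on storage
    # before the acknowledgment is sent.
    return {"id": message["id"], "ack": True}


ok = collector_receive({"id": "m-001", "bytes": 4, "body": "test"})
bad = collector_receive({"id": "m-002", "bytes": 99, "body": "test"})
```

Because the acknowledgment echoes the message ID, the appliance can match receipts to sent messages; an unacknowledged message is simply resent, and the collector treats a re-send like a first message.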
Instead, in one embodiment, the billing collector 58 may treat the message as if it were the first message of its type. If an error occurred in the message, for example, the number of bytes in the received event records does not match the number of bytes in the message attribute, then the billing collector 58 may formulate an error message and send it to the appliance. The message may contain an error reason, for example, "error in bytes received."

Parental Control

Yet another functionality that may be provided in the gateway appliance in conjunction with the support services network is parental control. The parental control functionality in one embodiment may allow parents to track what their children are doing on their PCs or what content children are watching, for example, on a media device such as the TV or PCs, and provide an easy way for parents to grant permissions for children to watch a show on a remote TV or watch pay-per-view content. Furthermore, the parental control functionality in one embodiment may allow parents to monitor and control access to media devices such as a telephone providing voice services associated with the gateway appliance. In one embodiment, software running on the PC as a background service may record all desktop activity as a video and distribute it to the gateway appliance to be stored on a hard drive. The video may be published on the local network using a protocol such as UPnP. Parents can then view the video by connecting to the video stream hosted on the gateway appliance from a TV by using a set top box, which acts as a UPnP renderer. This unique method provides a near real-time view of all PC desktop activity from a remote and convenient location. Parents need not go to the child's room to check up on any PC activity. Parents can track their child's PC habits while they are watching TV. If the set top box is capable, the PC view could be shown with picture-in-picture overlaid on the live TV signal.
In another embodiment, video content managed by the gateway appliance as a service may be accessed in the home on TVs using a set top box. The service provides a mechanism for parents to manage the parental controls of the service content. When a child tries to watch a movie but cannot because it is blocked by parental controls, he may press a button on the remote to send a notification to a parent's TV to get permission to watch. On the parent's TV, a notification appears. The parent opens the notification on the TV, sees data about the movie, and then indicates with the remote whether the movie is allowed. If it is allowed, then the movie may be unblocked for the child. If it is not allowed, the movie remains blocked and the child may get a notification on his TV that the parent has not given permission. This way the child does not need to run to the parent and provide an explanation of the movie, nor does the parent need to independently search for information about the movie in order to decide whether to provide permission for viewing. In yet another embodiment, the gateway appliance may provide a pay-per-view service. The gateway appliance may provide parents with configurable mechanisms to allow children to ask permission to watch pay-per-view content. When a child wants to watch a pay-per-view movie, he presses a button on the remote, which then sends a notification to the parents, for example, using TV, SMS, or email. The parent receives the notification with all the information about the movie and the cost. The parent can then indicate whether the movie can be watched. The gateway appliance may have a web server to allow the parent to remotely specify permission by hosting a web page, which can be accessed by a browser on a mobile phone or a link embedded in an email. If the parent grants permission, then a notification may be sent to the gateway appliance to allow the payment. A notification may also be sent to the child's TV indicating permission was given.
If permission is not given, then a notification may be sent to the child's TV indicating that decision. This mechanism allows parents to give their children permission to watch pay-per-view content remotely, without being at home. Parents can see the information about the show and can directly control the payment transaction. In another exemplary embodiment, the gateway appliance may provide parental control functionality with respect to voice services associated with the gateway appliance. For example, the gateway appliance may be associated with a telephone device (e.g., a traditional PSTN telephone through an adapter, a session initiation protocol telephone, or an IM client) to provide managed voice services. An adult, by utilizing the exemplary service, may monitor and manage all voice conversations the service provides. As an example, a parent can press a button on a TV remote, which may display a list of all voice calls that have occurred in the home with a particular telephone device. The gateway appliance may be configured to associate users with particular telephones such that the parent can monitor and manage access to specific voice services available to their children. By way of example, an adult can monitor a particular telephone device's usage by observing the time of day the call occurred, the day of the week the call occurred, the type of call (i.e., local, long distance, international), the length of the call, the date on which the call occurred, the number called, and the number of the calling party. Furthermore, through the gateway appliance, an adult can manage a child's usage of a telephone device by limiting accessibility of the device according to various parameters that may include, but are not limited to, the identity of the user, time of day, day of week, and type of call. Thus, the gateway appliance provides parents a real-time view of the voice services being used as well as a method for managing the use of telephone devices within the home.
The gateway appliance may keep track of all content that is being watched on the services it provides. A parent can press a button on a TV remote, which may display a list of all content that is currently being watched in the home. The gateway appliance may be configured to associate users with particular media devices such as TVs such that the parent can see what content their children are watching on their TVs. This provides the parents a real-time view of the kinds of content their children are watching. This mechanism also allows the parents to keep track of how much TV their children are watching. In operation in one embodiment, functionality in a set top box (STB), for example, may overlay a GUI on the live TV signal. The STB may have a universal remote control, which allows the user to control TV/cable/satellite functions in addition to one or more features provided by the system and method of the present invention in one embodiment. By pressing a special button on the remote control, a gateway appliance menu system may be overlaid on the TV signal. A user may then use the navigation buttons on the remote to select one or more gateway appliance features. The parental control functionality may be established by a parent setting preferences on the gateway appliance through a media device. Specifically, the parent may utilize a media device such as a TV or a PC that is connected to the gateway appliance to establish certain parental control parameters. For example, a parent could limit their child's access to certain media devices based on the various parameters the parent entered into the gateway appliance.
Parental control parameters that may be established by a parent include, but are not limited to: (i) limiting access to certain media devices for a certain user because of their identity; (ii) establishing a password for a specific media device, thereby not allowing an end user to access that device without the password; (iii) limiting access to a specific media device based on the time of day; (iv) limiting access to a specific media device based on time-of-week restrictions; and (v) utilizing media content ratings, thereby restricting an end user from accessing certain media on a media device that is deemed inappropriate for that end user based on the media content ratings. Thus, a parent through a media device associated with the gateway appliance may establish accessibility parameters that enable the parental control functionality within the exemplary system. In one embodiment, when a child tries to watch a movie that is stored on a gateway appliance, the STB may query the appliance as to whether there are any restrictions on playing the movie. The gateway appliance in one embodiment may keep a database of all of the parental control settings. The gateway appliance may use this data to decide if the movie can be played, for example, based on the ratings metadata of the movie. If the movie is locked by parental controls, the gateway appliance may inform the STB to display a list of options on the TV. The list of options may be overlaid on the TV signal by the STB. One of the options may be to ask the child's parent(s) or the like for permission to watch the movie. When the child selects that option, a message is sent to the gateway appliance. The gateway appliance may have a notification mechanism, which allows notifications to be sent to any device it manages. When the gateway appliance receives the request-for-parental-controls-permission message, it may use its knowledge of how to contact the parent(s) or the like and which device to send the notification to.
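The playback decision described above (consult the settings database and the movie's ratings metadata, with a parent's grant unblocking a specific title) can be sketched as a single lookup; all names and field shapes are assumptions:

```python
def movie_allowed(user, movie, settings, overrides=()):
    """Sketch of the appliance's parental-control check.

    `settings` stands in for the settings database; `movie` carries the
    ratings metadata; `overrides` represents titles a parent has
    unblocked in response to a permission request.
    """
    rules = settings.get(user, {})
    blocked = movie["rating"] in rules.get("blocked_ratings", ())
    if blocked and movie["title"] in overrides:
        return True  # parent's grant removes the lock for this title
    return not blocked


settings = {"child": {"blocked_ratings": ("R",)}}
before = movie_allowed("child", {"title": "X", "rating": "R"}, settings)
after = movie_allowed("child", {"title": "X", "rating": "R"}, settings,
                      overrides=("X",))
```

A False result would trigger the on-screen options list (including "ask for permission"); a subsequent parental grant flips the same check to True.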
All users of the gateway appliance can use an interface such as a Web GUI to configure how they should be notified and which devices should be used for which notifications. This notification configuration data may be stored on the gateway appliance, for example, in its database. If the parent(s) or the like are watching TV in their own room, then the gateway appliance may send the notification and the metadata about the movie to the STB for that TV. The STB may overlay the message and movie data on their TV signal. The parent(s) or the like may be able to read the message and then select from a list of options how to respond to the child's request. When the parent has made a selection using their remote control, that STB may send that response back to the gateway appliance. The gateway appliance may send that response as a notification to the child's TV using its STB. If the response was to allow the movie to be watched, the RHC may remove the lock on the movie, and it may allow the movie to be streamed to the child's STB. If the response was not to allow the movie to be watched, the child may not be able to play the movie and may be given the option to pick another movie. FIG. 13A illustrates a call flow among the gateway appliances and a plurality of network elements in the support infrastructure for provisioning the parental control (PC) service in the gateway appliance in one embodiment. As shown at step 1002, the gateway appliance in conjunction with the network elements performs the steps to register the gateway appliance and update it with the firmware. As shown at step 1004, the gateway appliance requests subscription information from the subscription manager or like functionality via the message router or like functionality, providing it with the gateway appliance identifier.
The subscription manager responds with detailed information such as the latest version numbers of the parental control application and identifiers for configuration data and files that the gateway appliance needs in order to provide the parental control service. As shown at step 1006, the gateway appliance downloads from the parental control service manager or like functionality the configuration data identified by the subscription manager, over a secure communication channel established, for example, using the web interface component or like functionality of the support network. Configuration data may include blacklisted URLs and other provisioning information for parental control. As shown at step 1008, the gateway appliance configures the blacklisted URLs and other provisioning information for performing parental control at the gateway appliance, sends a message to the parental control service manager or like functionality that the provisioning is completed, and subscribes for parental control provisioning updates with the pub/sub network element or like functionality in the support network. The configured parental control service is then started, providing various controls, according to the configuration, as to what may be accessed or displayed by various users of the various endpoint devices that connect to the gateway appliance. FIG. 13B illustrates a call flow for updating parental control service provisioning in one embodiment of the present invention. According to the steps shown at 1010, the gateway is registered, updated with firmware, and has completed service provisioning for the parental control feature. At step 1012, the parental control service manager or like functionality sends a message to the pub/sub server or like functionality that updates for the parental control service are available. At step 1014, the pub/sub server checks for connected gateway appliances that subscribe to this service and may also optionally calculate the update notification rate to ensure a sustainable rate.
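The subscriber check and sustainable-rate calculation at step 1014 might look like the following sketch; the subscription table and the simple spacing model are assumptions for illustration, not taken from the disclosure:

```python
# Sketch of the pub/sub update fan-out: select the gateway appliances that
# subscribe to a service and space the notifications so the aggregate send
# rate stays at a sustainable level. All names are invented.
subscriptions = {
    "gw-1": {"parental_control"},
    "gw-2": {"voice"},
    "gw-3": {"parental_control", "voice"},
}

def plan_notifications(service: str, max_rate_per_sec: float):
    """Return (gateway id, send offset in seconds) pairs for subscribers."""
    targets = sorted(gw for gw, svcs in subscriptions.items() if service in svcs)
    interval = 1.0 / max_rate_per_sec  # spacing that caps the notification rate
    return [(gw, i * interval) for i, gw in enumerate(targets)]
```

Capping the rate this way keeps a large population of appliances from being notified, and then downloading updates, all at once.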
The pub/sub server then notifies the identified gateway appliances via the message router or like functionality of the available updates, as shown at step 1016. At 1018, the gateway appliance downloads the updates from the parental control service manager, for instance, using the web services interface element or functionality of the support network to access the parental control service manager. The gateway appliance also sends a message to the pub/sub server when it completes the download. The pub/sub server, depending on design implementation, may send the notification to other gateway appliances of the available updates at 1020. At the steps shown at 1022, the gateway appliance reconfigures its parental control service configurations using the updated data, notifies the service manager that updated provisioning is complete, and continues with providing the parental control service with the new configuration.

Voice Services

Voice services support is another capability provided by the gateway appliance and networked services support infrastructure of the present invention. Subscribers have at their disposal a rich set of voice services including, but not limited to: anonymous call rejection; call forwarding (unconditional, call forwarding on busy, call forwarding on not available); call hold; call logs; call pickup; call transfer; call waiting; call waiting with caller ID; caller ID delivery; caller ID/caller name blocking; caller name delivery; contacts/address book management; do not disturb; emergency call handling; fax support; international dialing support; message waiting indication for voicemail; national dialing support; selective inbound call restrictions; speed dial; three-way conference calling; and voicemail. Where applicable, subscribers may configure features and services via a web interface or using Vertical Service Codes (VSC). Call history for calls received, calls originated, and calls missed may be provided in the subscriber's personalized call portal.
A complete voice package may be offered to customers through an extensive voice network architecture. FIG. 10A illustrates an example process for call services provisioning in one embodiment. As described in the typical service provisioning of FIG. 10A, after registering and receiving its firmware updates, at the steps shown at 503, a subscription manager receives information associated with the gateway appliance and provides specific service provisioning information as it relates to voice services, as indicated at step 506. In one embodiment, the subscription manager or like functionality provides the subscription information as well as configuration data information for voice provisioning details to the gateway appliance. For instance, the subscription manager or like functionality may receive metadata regarding provisioning, e.g., information associated with configuration data needed to provide the service, from the voice service manager or like functionality. In another embodiment, the gateway appliance may request the configuration data information from the voice service manager or like functionality. The gateway appliance may then download the needed configuration data described in the metadata from, for example, the voice service manager or like functionality. The actual downloading may occur at a later time or at the time the voice service is being provisioned. Exemplary voice services that may be provisioned at a gateway appliance include dial plan configuration, inbound/outbound routing configuration, and a supported auto-configuration device list. At step 509, provisioning is completed and a notification is sent back to the voice services manager, which is further relayed to the pub/sub server. A voice service initialization procedure is then performed, for example, as now described with respect to FIG. 10B. FIG. 10B illustrates an example of call service initialization as carried out by the appliance and system of the present invention in one embodiment.
At step 510, service provisioning is performed, accessibility testing is optionally performed, and it is optionally determined whether public access is available to the appliance at a public IP address (either with or without network address translation services provided) or whether its determined address is a local IP address with VPN and port forwarding; voice services are then started. At step 514, the appliance sends a voice service access details message including, for example, IQ set, gwId, voice, and signalingPort to the network message router or like functionality, which routes the information to the voice service manager or like functionality, which then forwards the gwId and voice access info, including a “notActive” indication, to the location server that provides location services. Further, voice accessibility test information including accessDetails may be sent by message to a service accessibility tester or like functionality. At step 514, a voice service accessibility test is performed from a public interface utilizing, for example, SIP/RTP, and a voice accessibility test response is received at the voice service manager. Once the accessibility test is successfully performed, the voice service manager updates the location server component with the gwId, voice, and an “active” indication. The voice service access response may then be forwarded back to the message router for routing ultimately back to the gateway appliance at step 517. FIG. 10C illustrates an example of call service provisioning update as carried out by the appliance and system of the present invention in one embodiment. In one embodiment, the update provisioning process may occur after the steps at 520 have been performed: gateway registration, firmware updates, and completion of service initialization.
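The public-versus-private address determination made during initialization (step 510) can be sketched with the standard ipaddress module; the mode names below are illustrative, not the disclosure's terms:

```python
# Hedged sketch: classify the appliance's determined address as public
# (directly reachable, possibly behind NAT) or local/private (reach via VPN
# with port forwarding), per the determination described at step 510.
import ipaddress

def access_mode(addr: str) -> str:
    """Classify the appliance address for voice-service reachability."""
    ip = ipaddress.ip_address(addr)
    if ip.is_private or ip.is_loopback:
        return "vpn_port_forwarding"  # local address: reach via VPN
    return "public"                   # public address: reach directly
```

The result would drive which access details the appliance reports in its voice service access details message.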
At step 523, the voice service manager or like functionality or device may publish a voice service provisioning update to the pub/sub server or like functionality, including, for example, provDetails, voice, applicableGW, and updateRate (notification rate) information for that service. The pub/sub server checks for the gateway appliances that have subscribed for this service provisioning update and may optionally calculate an update notification rate to ensure a sustainable rate. At step 525, the pub/sub server may send a message, destined to all of the gateway appliances that subscribe to this service provisioning update, about the service provisioning update including, for example, the IQ Set, voice, and provDetails, for example, via a signaling control channel, for instance, using XMPP. A gateway appliance then may download the provisioning data needed, for example, via a secure HTTPS connection. The downloading step may occur at the time of receiving the notification or alternatively at another time, as desired. Once the service provisioning data download is complete, the support network may be notified, and the gateway appliance is responsible for the reconfiguring and provisioning of the appliance for the particular voice services. As shown in FIG. 10C, the process is repeated at 527 for each gateway appliance subscribed to that voice service update. With respect to call services, FIG. 11A illustrates an example message flow scenario for providing automatic detection and configuration of SIP devices that communicate through the gateway appliance. As shown at a first step 530, it is assumed that the gateway registration, firmware updates, and service initialization have been completed. At step 533, the gateway appliance has detected that a SIP phone has initiated a connection with the appliance by receiving a Dynamic Host Configuration Protocol (DHCP) request for an IP address.
The gateway appliance responds with an IP address, thus enabling SIP phone connectivity via the appliance. Then, at step 535, the gateway appliance performs a check to determine whether the SIP phone device is new to the network and, further, whether the SIP phone device is supported by the gateway appliance. If it is supported, at step 537 the gateway appliance queries the device as to whether it is already configured and waits to receive registration information from the user of the phone device, which can be forwarded to the network for updates subscription purposes. The gateway appliance submits a message to subscribe with the voice service manager or like functionality for firmware updates for the SIP phone device, which message is forwarded for retention in the pub/sub server at step 539 so that updates can be made to this SIP phone endpoint device in the manner described herein. With respect to call services, FIG. 11B illustrates an example message flow scenario for managing upgrades for SIP devices. As shown at a first step 540, a voice service manager has provided a “SIP device firmware update available” message to the pub/sub server or like functionality, which in response performs a sequence 543 to check for all of the gateway appliances that subscribe to receive that particular service upgrade. In one embodiment, the pub/sub server or like functionality may optionally calculate the update notification rate prior to providing the firmware updates and send the upgrade information details back to a message router or like functionality, which forwards the firmware upgrade information to the appliance, for example, in the form of a data structure that may include the IQ set and upgrade details. The firmware updates may be downloaded from the firmware download server or like functionality, for example, via an HTTPS connection, to the appliance. The downloading may occur at the time of notification or at a later time, as desired.
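The detection-and-subscription sequence of steps 533 through 539 can be sketched as follows; the device models, MAC-based bookkeeping, and return values are invented for illustration:

```python
# Sketch of the appliance's handling of a SIP phone appearing via DHCP:
# check whether the device is new and supported, then subscribe for firmware
# updates for that device model (forwarded to the pub/sub server at 539).
supported_models = {"acme-sip-100", "acme-sip-200"}  # assumed support list
known_devices = set()         # MACs the appliance has already seen
update_subscriptions = set()  # models subscribed at the voice service manager

def on_dhcp_request(mac: str, model: str) -> str:
    """Handle a SIP phone appearing on the LAN; return the action taken."""
    if mac in known_devices:
        return "already_known"
    known_devices.add(mac)
    if model not in supported_models:
        return "unsupported"
    update_subscriptions.add(model)  # retained in the pub/sub server (539)
    return "subscribed_for_updates"
```

A second DHCP request from the same MAC is recognized as a known device and skips the subscription step.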
Once the downloading of the updates is complete, a download complete notification may be sent back to the pub/sub server or like functionality at step 545. It is understood that the pub/sub server may send notifications to the remaining gateway appliances subscribed to receive the SIP phone firmware upgrade in a similar manner. Finally, at step 547, steps are initiated to physically upgrade the firmware at the SIP phone and/or reconfigure the phone. FIG. 21 illustrates an example architectural diagram for providing call (e.g., voice or other media) processing in one embodiment. In-premise or in-home components may include the gateway appliance 2002 of the present disclosure, SIP and FXS endpoints, possible in-home gateways, and software clients. Support network components or like functionality may include billing 2004, provisioning, network management 2008, and session border controllers 2010. Examples of end user functionality may include, but are not limited to: voice services to home owners globally in multiple countries; instant messaging contact list downloads with presence functionality; office or email application contact list downloads; ability to make calls from IM clients to VOIP or PSTN terminations; ability to make calls from IP Phones to IM clients, VOIP or PSTN terminations; ability to make calls using “click-to-call” from a web browser; ability to make video calls; ability to make calls from traditional phones (through a Phone Adapter) to IM clients, VOIP or PSTN terminations; ability for the user to manage services from the TV; if the home owner has subscribed to a PSTN line, PSTN routing can be used as an overflow option or as the primary choice, based on station; whole house voice mail services; ability to make intercom calls based on public numbers or speed dial numbers; ability for the user to manage gateway appliance contact lists and other service attributes; voice services integrated with other services on the same platform; ability to use interactive voice
prompts to manage services; and ability for one user to have multiple gateway appliance devices geographically scattered. Such a gateway appliance sub-network may have common billing and allow the user to move endpoints between gateway appliances. From the support network or management perspective, call services may be managed and provisioned centrally and remotely (e.g., using web browsers or other interfaces), and billing information and statistics records may be collected from the gateway appliances. In one embodiment, if a user has subscribed to a PSTN line, PSTN access may also be provided. For example, a phone adapter/PSTN gateway 2012 may be used to provide access to the PSTN and an interface for traditional PSTN phones. Optionally, PSTN access may be provided through a break-out from the internet at a later point in the network. As shown in FIG. 7, the appliance 2002 resides in-home, connecting to various end-user devices. The devices may be traditional phones 2014 and faxes 2016 (e.g., through a phone adapter/PSTN gateway 2012), IP devices 2018 connected over a wired LAN, or IP devices 2020 connected wirelessly. Appliance 2002 may also support connections to the in-country PSTN 2022 (e.g., through a phone adapter/PSTN gateway or through break-out from a VOIP service provider's network) and to the Internet via broadband access 2024 (e.g., via 802.11x WiFi, WiMax). In one embodiment, the functionality between the RHC and PAPG remains country-independent as much as possible. A user may be able to manage service aspects utilizing a TV display and remote control, for example, using a media adapter (UPnP AV or other). A media adapter may be able to handle context-specific (e.g., in relation to what is displayed on the TV) input from a remote control and communicate this to the appliance 2002. This interface may be used for a user to select menu-driven items. In one embodiment, services may be provided between in-home devices (e.g., 2012 . . . 2018) and remote devices (e.g., 2022 . . . 2026).
In-home devices may include, but are not limited to: SIP endpoints (e.g., a SIP phone connected over ethernet or over 802.11x, a computer connected over ethernet or over 802.11x, or other means); IM endpoints (e.g., devices capable of IM signaling over ethernet or 802.11x or other means, such as computers and wireless phones); and traditional FXS phones and fax machines, for example, connected via a phone adapter/PSTN gateway or like devices, for example, making them appear like SIP endpoints to the gateway appliance 2002. Remote devices may include devices located outside the home, worldwide. The actual end devices may be of multiple types, and it is not required that they be visible to the RHC. For instance, they may interact via SIP on a broadband interface or through the phone adapter/PSTN gateway or like device or functionality. Remote devices may include, but are not limited to: SIP phones (wireless or wireline) associated with a gateway appliance; external IP phones (both gateway appliance-based and non-gateway appliance-based); PSTN phones; and IM clients. In one embodiment, the gateway appliance 2002 may serve as an in-premise server responsible for call processing. PAPG 2012 may connect traditional FXS phones/faxes or the like, the PSTN, and the gateway appliance platform. Media adaptor 2028 may be used to allow a user control of services, such as voicemail, from such devices as a TV. Further, there may be provided media services for conferencing. IM 2030 may provide instant messaging services to users with an IM-capable platform and handle contact lists and associated management. Provisioning 2006 in general provides provisioning services for gateway appliances and users. NMS 2008 may manage and process alarms and other information from the gateway appliance 2002, for example, received via aggregator functionality in the support network. Billing 2004, for instance, collects billing information or billable events and records from the gateway appliance 2002.
SBC 2010 in general serves as an interface to a VOIP service provider or the like. This functionality 2010 may also provide security functions. A SIP directory/redirect server (shown at 2032) may provide routing, and a dynamic DNS (DDNS) server (also shown at 2032) may enable a gateway appliance behind NAT, for instance, by correlating the gateway appliance FQDN to an IP address. A SIP location server (also shown at 2032) may provide location mapping functionalities. Interfaces at the premise interfacing to the gateway appliance 2002 may include wired local area network interfaces including, for example, 10/100 ethernet, multimedia (MOCA), and HomePNA. The appliance 2002 may also support wireless interfaces such as 802.11a, 802.11b, and 802.11g networks, etc. In addition, external interfaces such as a traditional PSTN line interface between the PAPG and CO; a cable, DSL, or fiber-based interface to an ISP; and a wireless broadband (WiMax or any other standard) interface may also be supported. Example services which may be supported by functionalities in the gateway appliance 2002 may include, but are not limited to: calls from one in-home endpoint to another in-home endpoint (e.g., intercom calls); calls from an FXS phone behind a PAPG to the PSTN, VOIP network, or an IM endpoint; calls from a wireline or wireless SIP phone to the PSTN, VOIP network, or an external IM endpoint; calls from an IM client on a computer to the PSTN, VOIP network, or another external IM endpoint; a SIP phone registered in-home or externally from a remote location (such a location may be a wireline network or a wireless (e.g., WiFi) hotspot); and origination using a click-to-call mechanism, where the termination may be selected from a contact list using a browser from a computer, phone, or the like. The origination may be another registered device. Further exemplary voice call scenarios are now described with respect to FIGS. 12A-12G.
In a first example scenario, shown in FIG. 12A, there is demonstrated a PSTN call flow from an origination endpoint communication device, e.g., an analog phone or SIP phone, to the gateway device with a public IP address. At step 550, a SIP invite message is sent from the VoIP gateway device to the system network, which delivers the caller identity of the calling party to the service subscriber on inbound calls. The message is routed to the service network's Session Border Controller (SBC) device, which routes the invite message, including the calling number (pstnNumber) and called number, to the SIP redirect server, which initiates a request to the location server that maps the call from the inbound SIP INVITE message to the gateway device associated with the service subscriber and the SIP port upon which the gateway appliance is listening. At step 553, the location service sends a response message back to the SIP redirect server, which routes it back to the SBC, which then forwards the contents of the inbound SIP INVITE message to the identified gateway device. Then, at step 555, the establishment of the voice communication session with the IM client via the SBC is performed using standard SIP messages such as 100 Trying and 180 Ringing. Then, upon receiving the proper acknowledgements, the gateway device forwards a 200 OK message to the IM client, which is acknowledged and sent back to the appliance. At this point, a media path is established, as illustrated at step 556, by the RTP connection accommodating a media communication session between the endpoint communication device via the VoIP gateway and the gateway appliance. Step 557 occurs when either the origination endpoint communication device or the gateway initiates a disconnect sequence. In this example, the instant messaging client sends signals to terminate the voice communication session, e.g., a BYE message.
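The location lookup behind the FIG. 12A redirect can be sketched as follows; the table contents and the dictionary return shape are assumptions for illustration:

```python
# Sketch of the SIP redirect server's role in FIG. 12A: ask the location
# server to map the called number to the subscriber's gateway device and the
# SIP port on which it is listening, then hand a contact back to the SBC.
location_table = {
    # called PSTN number -> (gateway id, gateway IP, SIP listening port)
    "+15551230001": ("gw-42", "198.51.100.10", 5060),
}

def redirect_invite(called_number: str) -> dict:
    """Return a 302-style contact for the SBC, or a 404-style miss."""
    entry = location_table.get(called_number)
    if entry is None:
        return {"status": 404}
    gw_id, ip, port = entry
    return {"status": 302, "gateway": gw_id,
            "contact": f"sip:{called_number}@{ip}:{port}"}
```

The SBC would then extend the call toward the returned contact, corresponding to forwarding the inbound INVITE to the identified gateway device at step 553.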
In this example call flow, the SBC is a SIP-based session border interface providing a wholesale network interface and billing services for on-net and off-net voice calls. Signaling for SIP-based on-net and off-net calls to and from the gateway appliances traverses the SBC. The media associated with the SIP-based calls may, or for optimization purposes may not, traverse the SBC. The SBC also may provide Lawful Intercept services (CALEA) and security, DoS attack prevention, and signal rate limiting. The SBC may hide the details of the networked support services infrastructure from the wholesale provider for inbound calls and hide the details of the wholesale provider's VoIP network from the networked SC. On calls inbound from the wholesale provider, the SBC sends a SIP invite to the SIP redirect server and subsequently extends the call based on a redirect (3XX) message received from the SIP directory server. On call requests received from the appliances, the SBC sends a SIP invite to the SIP directory server and subsequently extends the call based upon a redirect (3XX) message received from the SIP directory server. Media may or may not be anchored to the SBC, depending on network optimization requirements. The SBC also may record billing records and events for all SIP calls and send these records to the billing collector or system. Public SIP redirection and proxy servers or like functionality in one embodiment may provide SIP proxy/redirect services to public remote SIP phones and devices. The public SIP proxy/redirect servers provide a similar function for SIP requests as the public web servers provide for HTTP requests, described above. The users of these servers may be remote WiFi-based or IP SIP phones that need to register with the “home” gateway appliance or place a call, which routes through the gateway appliance.
The request is resolved by the DNS and directed to the public SIP server; the public SIP server queries the location server and then, depending on the type of request and the accessibility of the gateway appliance, may either proxy or redirect the request. In one embodiment, all remote phone registration requests to the gateway appliance may be proxied by the public SIP server. These servers or like functionalities may have interfaces to the location server, the SBC, the VPN router/server or like functionalities, and the gateway appliances. FIG. 12B illustrates another example call scenario as carried out by the appliance and system of the invention in one embodiment. In the scenario depicted in FIG. 12B, there is demonstrated a PSTN call flow from an origination endpoint communication device, e.g., an analog phone or SIP phone, to the gateway device at a private IP address. This call flow is similar to the call flow of FIG. 12A; however, in response to receipt of the calling number (pstnNumber), the location service responds by providing the gwVPNIP address of the identified gateway device associated with the service subscriber and the SIP port upon which the gateway appliance is listening. The SBC in response may thus maintain separate media paths, as illustrated at step 559, with the RTP connection accommodating a media communication session between the endpoint communication device via the VoIP gateway and the SBC, and between the SBC and the gateway appliance. FIG. 12C depicts another call processing example as carried out by the appliance and system of the invention in one embodiment. In the scenario depicted in FIG. 12C, there is demonstrated a PSTN call flow originating from the gateway appliance at a public IP address. This call flow is initiated by a SIP device routing a SIP call through the gateway appliance.
At a first step 560, a SIP invite message is sent that is routed to the SBC, which routes the invite message with the requested called number (pstnNumber) to the SIP redirect server. In one embodiment, the invite message may also include an authorization token, which the SIP redirect server or like functionality may use to authenticate the invite. The SIP redirect server initiates a request to the location server, which maps the requested PSTN number to a destination VoIP device (VoIPGWIP identifier) and the SIP port upon which the gateway appliance is listening. At step 563, the location service sends a message back to the SIP redirect server, which routes it back to the SBC, which then forwards the pstnNumber contents to the identified VoIP gateway device. At step 565, the establishment of the media communication session with the originating SIP phone via the SBC is performed using standard SIP messages such as 100 Trying and 180 Ringing. Upon receiving the proper acknowledgements, the VoIP gateway device forwards a 200 OK message to the SIP phone, which is acknowledged with an ACK message being sent back to the VoIP gateway device. At this point, a media path is established, as illustrated at step 566, by the RTP connection accommodating a voice communication session between the SIP phone subordinate to the gateway appliance and the telephone via the VoIP gateway. The next step, 567, occurs when the origination SIP phone device initiates a disconnect sequence to terminate the voice communication session by sending a BYE message. FIG. 12D illustrates an example call flow scenario for authenticating gateway-to-PSTN calls as carried out by the appliance and system of the present invention in one embodiment. This example call flow is initiated by a gateway appliance routing a SIP phone call at step 560′, which generates a SIP invite message.
The SIP invite message is routed to the service network's Session Border Controller (SBC) device or like functionality, which routes the invite message with the requested called number (pstnNumber) to the SIP redirect server. In this scenario, the SIP redirect server initiates a 407 authentication (challenge), which is received by the SBC as illustrated at step 564, meaning that the outbound proxy has challenged the SIP messages from the phone. This 407 challenge is forwarded back to the gateway appliance 10 from the SBC, which, at step 568, responds by initiating a message including a session description parameter (“SDP”) of the gateway and credentials for overcoming the challenge. The message with the PSTN called number and the challenge credentials is forwarded from the SBC back to the SIP redirect server, which forwards an authentication request message to the voice service manager. Once an authentication response is received back at the SIP redirect server, the process for performing the gateway-to-PSTN call is initiated at step 569. FIG. 12E illustrates another example of call service processing as carried out by the appliance and system of the present invention in one embodiment. In the scenario depicted in FIG. 12E, there is demonstrated a SIP call flow originating from one gateway appliance (Appliance_2) at a public IP address to a SIP phone subordinate to another gateway appliance 10 (Appliance_1). This call flow is initiated by a SIP phone device at a first step 570 for routing a SIP call through one gateway appliance (Appliance_2) by sending a SIP invite message that is routed to the support network's SBC device or like functionality, which routes the invite message with the calling number (pstnNumber) to the SIP redirect server, which initiates a request to the location server for mapping the call from the inbound SIP INVITE message to the gateway device associated with the service subscriber and the SIP port upon which the gateway appliance is listening.
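The credentials used to overcome the 407 challenge of FIG. 12D are commonly computed with SIP digest authentication; a hedged sketch of the RFC 2617-style computation (without qop), using illustrative credentials rather than anything from the disclosure, follows:

```python
# Sketch of computing the digest response a gateway would place in a
# Proxy-Authorization header after a 407 challenge (RFC 2617 scheme, no qop):
#   HA1 = MD5(user:realm:password), HA2 = MD5(method:uri),
#   response = MD5(HA1:nonce:HA2). Credential values here are made up.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    """Compute the response value for answering a digest challenge."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

The nonce comes from the 407 challenge itself, which is why a retried INVITE carries fresh credentials rather than a fixed token.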
At step 573, the location service or like functionality sends a response message back to the SIP redirect server, which routes it back to the SBC, which then forwards the contents of the inbound SIP invite message, including the gateway's public IP address (gwIP1) and the portNumber on which the gateway appliance is listening. Then, at step 575, the establishment of the media communication session with the SIP phone associated with Appliance_1 (gateway appliance) via the SBC is attempted using standard SIP messages such as 100 Trying and 180 Ringing. Upon receiving the proper acknowledgements, the originating gateway device (Appliance_2) forwards a 200 OK message to the SIP endpoint, which is acknowledged and sent back to the appliance. At this point, a media path is established, as illustrated at step 576, by the RTP connection accommodating a media communication session directly between the endpoint gateway appliances. FIG. 12F depicts another example of call processing as carried out by the appliance and system of the present invention in one embodiment. In the scenario depicted in FIG. 12F, there is demonstrated a SIP call flow originating from a gateway appliance (Appliance_2) at a private IP address to a SIP phone subordinate to another gateway appliance 10 (Appliance_1), also at a private IP address, wherein the private IP addresses are part of a VPN network. The steps described in FIG. 12F are similar to the steps described in FIG. 12E, with the difference being that the session description parameter in the initial SIP invite message indicates the vpnIP_2 address of the gateway Appliance_2 from which the call originates, as indicated at step 570′, and the location response from the location server indicates the gwVpnIP_1 of the recipient gateway Appliance_1, as indicated at step 573′.
The remaining call processing methodology depicted in FIG. 12F is similar to the call processing depicted in FIG. 12E, establishing gateway-appliance-to-gateway-appliance calls via the RTP protocol at step 576′ when both gateway appliances are nodes of the same VPN network. FIG. 12G depicts another example of call processing as carried out by the appliance and system of the present invention in one embodiment. In the scenario depicted in FIG. 12G, there is demonstrated a SIP call flow originating from a gateway appliance (Appliance_2) at a private IP address of a first VPN, at private address IP1, to a SIP phone subordinate to another gateway appliance 10 (Appliance_1) at a private address IP2, wherein the private IP addresses are part of separate VPN networks. The steps described in FIG. 12G are similar to the steps described in FIG. 12E, with the differences being that the session description parameter in the initial SIP invite message indicates the corresponding vpnIP_2 address of the gateway Appliance_2 in its private VPN, from which the call originates, as indicated at step 570″, and that the location response from the location server indicates the gwVpnIP_1 address of the recipient gateway Appliance_1 in its private VPN and the SIP port upon which the gateway appliance is listening, as indicated at step 573″, which is routed to the SBC. The remaining call processing methodology depicted in FIG. 12G is similar to the call processing depicted in FIG. 12F, with the SBC functioning to maintain separate voice RTP communication paths, as illustrated at step 579, with a first RTP connection accommodating a media communication session between the endpoint communication device via the gateway Appliance_2 and the SBC, and between the SBC and the gateway Appliance_1.

Off-Premise Voice Extension

An embodiment of the present invention allows an off-premise phone to register with a gateway appliance as an extension to the phone service provided via the gateway appliance.
In this embodiment, a gateway appliance or device may serve as an IP-based residential Private Branch Exchange (PBX). This PBX may serve as a switchboard to route calls among extensions as well as off-premise extensions, for example, for a phone such as an IP-based WiFi phone, which accesses the public internet through a WiFi connection, or a computer-based soft client, which communicates via voice over IP technology and runs on a computer. One or more functionalities at the support network 50 relay the registration message and the call setup messages, as well as the voice stream, between the home PBX and the off-premise user. As an example, with reference to FIG. 4, when a phone is powered on, it first connects with an available WiFi network, then finds the support network 50, for example, a session border controller (SBC) 93 or like functionality, and sends out the registration message. For example, the call from the phone may be addressed to a particular domain, which may be resolved by DNS and directed to SBC 93. Similarly, a user may start a soft client (soft phone) and key in a user name, password, and the identification of the home PBX. The soft client sends the registration to the support network 50, and the support network 50 determines where the corresponding home PBX is and forwards the registration message. The home PBX records the location information of the phone. All subsequent calls can be established between the phone and the called party through the support network 50 and home PBX. The voice stream is established after the signaling messages are exchanged. In one embodiment, a SIP server (SRS) or like functionality 92 may provide session routing, re-direction, and authentication on a session basis. Routing to gateway appliances 10 may be based on information (e.g., appliance address, etc.), which, for example, may be contained in a database updated by provisioning servers and location servers 68 or like functionality.
The information may be updated on a real-time or near real-time basis. Thus, for example, the location server or like functionality 68 may provide dynamic location data, such as IP addresses and port numbers of gateway appliances, and a presence (e.g., voice service availability) indicator of a gateway appliance. The SRS 92 in addition may have capabilities to log events and generate alarms based on the various processing it performs. The Session Border Controller (SBC) or like functionality 93 may provide secure network border control, for example, for voice and video services. The SBC 93 may act as a back-to-back user agent and may provide varying degrees of topology hiding, call routing, access screening, etc. In one embodiment, the SBC may be relied on to provide the routing towards the various wholesale providers based on the destination address returned by the SRS or like functionality 92. In one embodiment, the information in the SRS 92 may originate from provisioning functionality (subscriber data) and/or operational data (provisioned on the SRS). In one embodiment, a SIP interface is utilized between the SRS 92 and the SBC 93. A SBC may request call routing from the SRS 92. The SRS determines the appropriate routing of the call and returns a response indicating how the SBC should handle the call. In one embodiment, the SRS 92 is capable of receiving register messages from the SBC and locating gateway appliance information for voice service using the gateway appliance ID. If the appliance information is found, the SRS 92 returns the IP address and port number. If the SRS does not find the appliance, the SRS 92 may return a not found message to the SBC. The SRS 92 is also capable of receiving invites from the SBCs 93 and from the appliances 10. The SRS 92 may map the user part of the To address to the destination address key and verify that the domain portion of the destination address is correct.
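The register-message lookup described above can be sketched as follows. This is a hypothetical illustration: the table, field names, and status codes are assumptions, not the patent's data model.

```python
# Hypothetical sketch of the SRS register lookup: given a gateway appliance
# ID, return its IP address and port if the appliance is known and its voice
# service is available, else a "not found" reply for the SBC.

LOCATION_DB = {
    "gw-1001": {"ip": "203.0.113.7", "port": 5060, "available": True},
    "gw-1002": {"ip": "203.0.113.9", "port": 5060, "available": False},
}

def srs_locate(appliance_id):
    entry = LOCATION_DB.get(appliance_id)
    if entry is None or not entry["available"]:
        return {"status": 404, "reason": "not found"}   # SRS "not found" reply
    return {"status": 200, "ip": entry["ip"], "port": entry["port"]}
```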
The SRS 92 also may determine the type of a call, e.g., "support network origination", "off-premise extension origination", or "non-support network origination", for example, by receiving the invites over separate IP/port address combinations. If the call is a "support network origination" call, the SRS 92 may authenticate the originating party. The SRS 92 may use the destination address key to identify whether the number belongs to a gateway appliance or is an outbound call, and try to find the destination IP address and, optionally, port number. If the SRS 92 finds that the address entry exists in the database or the like and that the gateway appliance and the voice service on that gateway appliance are available, it returns the address. If the SRS 92 finds that the address entry does not exist, it may assume that the call is outbound. In this case, the SRS 92 may return the original To address as the contact address. This way, the SBC 93 may determine routing to the appropriate wholesale provider based on the target address in the contact header. The SRS 92 may return a logical identifier in the contact header to identify the logical routing to the SBC 93. For example, a header may be "[email protected]". For a "non-support network origination" call, processing similar to that described above for a "support network origination" call may be performed except, for example, that the originating party is not authenticated. If the call is an "off-premise extension origination", the SRS 92 may map the domain portion of the From address to the destination gateway appliance key and query the database to find the matching gateway appliance record. If it finds a matching record, the SRS 92 returns the IP address and port for the gateway appliance. If the SRS 92 does not find the gateway appliance, it returns a not found response to the SBC 93.
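The invite-routing decision above can be sketched as one function. This is an illustrative reconstruction under assumptions: the call-type strings follow the text, but the function signature, database shape, and response encoding are invented for the sketch; the off-premise extension path (keyed on the From domain) is omitted for brevity.

```python
# Illustrative SRS INVITE handling: authenticate only "support network
# origination" calls, then either return the destination gateway's address
# or, when no gateway matches the destination key, echo the original To
# address so the SBC can route to a wholesale provider from the contact.

def srs_route_invite(call_type, to_user, db, authenticate=lambda: True):
    if call_type == "support_network_origination" and not authenticate():
        return {"status": 403}                    # originating party rejected
    entry = db.get(to_user)
    if entry and entry["available"]:
        return {"status": 302, "contact": (entry["ip"], entry["port"])}
    # No address entry: assume an outbound call and return the To address
    # as the contact so the SBC picks the wholesale route.
    return {"status": 302, "contact": to_user}

db = {"5551234": {"ip": "203.0.113.7", "port": 5060, "available": True}}
```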
The location server or like functionality 68, in one embodiment, generally is responsible for updating support network databases which, for example, may require real-time accessibility information. For instance, although not required, as discussed above, a SRS 92 may comprise a database or the like which it queries for location information. A location server or like functionality 68 in one embodiment interfaces with this database (e.g., via a database-supported interface) to update or load dynamic location data. In addition, the location server or like functionality 68 interfaces with a gateway appliance, for example, using the XMPP control channel. This interface may be used when a gateway appliance updates its information during initialization or when its contact data changes. Thus, some example functionalities at the location server 68 may include, but are not limited to, the following: the location server 68 may receive IP address and port combinations from the gateway appliances as they complete initialization or as they change IP address; access-challenged gateway appliances may send the VPN-accessible IP address and port; the location server 68 may set the availability indicator to "available" when it receives an IP address/port update from a gateway appliance; the location server 68 may be capable of receiving availability updates from the XMPP framework; and, in addition, the location server 68 may receive a service indicator from a gateway appliance that tells to which service the IP address and port apply. Like other network elements of the support network 50, the location server or like functionality 68 may be capable of logging events and statistics and generating alarms based on its various processing. Utilizing an off-premise extension facility in one embodiment disclosed herein, an off-premise phone user thus may initiate/receive external calls as if the user were still home. For example, an out-of-state college student can dial hometown buddies with this off-premise extension phone and vice versa.
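The location-server behaviors listed above (per-service address updates and the availability indicator) can be sketched minimally. The class and field names are assumptions made for illustration only.

```python
# Minimal sketch of the location-server update handling: record each
# gateway's address/port per service as updates arrive over the control
# channel, and set the availability indicator to "available" (True) on any
# address update, per the behavior described in the text.

class LocationServer:
    def __init__(self):
        self.records = {}

    def on_address_update(self, gw_id, service, ip, port):
        rec = self.records.setdefault(gw_id, {"services": {}})
        rec["services"][service] = {"ip": ip, "port": port}
        rec["available"] = True   # any address/port update marks it available

    def on_availability_update(self, gw_id, available):
        # Availability changes may also arrive from the XMPP framework.
        self.records.setdefault(gw_id, {"services": {}})["available"] = available

ls = LocationServer()
ls.on_address_update("gw-1001", "voice", "203.0.113.7", 5060)
```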
An off-premise extension soft client user can call friends and family from overseas using internet access. Further, calls from the off-premise extension phone can be consolidated with the rest of the home extensions and can be reviewed at any time, for example, for billing and parental control. In one embodiment, the existence of the support network 50 ensures constant connectivity between the home PBX and the off-premise extension phone.

Referring now to FIG. 12H, there is demonstrated an off-premise extension SIP registration process carried out by the appliance and system of the present invention in one embodiment. In the scenario illustrated in FIG. 12H, an off-premise SIP phone generates a SIP REGISTER request message, as indicated at step 580. As known, this message is typically destined to be received by a registrar acting as the front end to the location service for a domain, reading and writing mappings based on the contents of REGISTER requests. Thus, for example, the off-premise SIP phone registration message reaches the support network 50 via a SBC and SRS, for example, as described above. The location service (e.g., a database updated by the location server or like functionality) responds by providing the gwIP address (gateway appliance IP address) and the sipPort (gateway appliance port number) upon which the gateway listens, which are forwarded to the SIP registration proxy (e.g., 92). The SIP registration proxy forwards the SIP register information to the gateway appliance at step 583. In the example scenario depicted, the gateway appliance issues a 401-type (unauthorized) response, as the gateway appliance does not wish to accept the credentials sent with the request. The decision whether to send a challenge may be based on statistical factors, such as deciding to challenge requests, for example, 95% of the time.
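The statistical challenge decision above can be sketched as a simple randomized gate. The patent does not specify a mechanism, so this is one plausible illustration; the injectable random source just makes the behavior testable.

```python
# Sketch of the "challenge 95% of the time" decision: draw a uniform random
# number and issue a 401 challenge when it falls under the configured rate.
import random

def should_challenge(rng=random.random, rate=0.95):
    """Return True when this REGISTER should receive a 401 auth challenge."""
    return rng() < rate
```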
This 401 response authentication challenge is forwarded back to the SIP registration proxy, which forwards it to the SBC, which ultimately forwards the challenge back to the SIP phone at step 585. As discussed above, a SBC may be used in cases where there is a SIP service request from an off-premise extension WiFi-SIP or SIP phone and the off-premise extension WiFi phone desires to register with the "home" gateway appliance and subsequently process calls via the "home" gateway appliance. The SBC may provide endpoint anchor services to the off-premise extension IP phones. These anchor services may include NAT/firewall traversal and protocol repair, DOS prevention, signal rate limiting, call admission control, QOS session monitoring, and lawful intercept services. The SBC queries the SIP directory server to determine the appropriate IP address or VPN IP address and port, and may "proxy" the SIP request to the appropriate public IP or VPN IP/port combination. The SBC also may provide termination services for calls that originate on a "home gateway appliance" and are extended to the off-premise extension IP phone. In one embodiment, there may be a SBC dedicated to handling off-premise extensions. In another embodiment, a SBC may handle both off-premise extensions and regular in-premise calls.

Referring now to FIG. 12I, there is demonstrated a SIP authentication process carried out by the appliance and system of the present invention in one embodiment, in follow-up to the off-premise extension SIP registration process scenario depicted in FIG. 12H. In the scenario depicted in FIG. 12I, in response to receiving the SIP 401 auth challenge, the SIP phone responds with the register request message providing the challenge credentials, as indicated at step 587. This is received and forwarded by the SBC, which forwards the request to the SIP registration proxy and the location service.
Then, as indicated at step 588, the challenge credentials are forwarded from the SIP registration proxy to the gateway appliance, and the SIP registration proxy waits for the SIP 200 OK message, which is forwarded back to the requesting SIP phone at step 589.

FIG. 12J illustrates an off-premise extension SIP phone to gateway appliance, station-to-station call process carried out by the appliance and system of the present invention in one embodiment. At step 1202, an off-premise extension SIP phone sends an invite message, which reaches the SBC of the support network. The invite message is routed to the SIP registration proxy, which determines (e.g., using the location service or database) the address information of the destination gateway appliance and returns the information to the SBC. In one embodiment, the SIP registration proxy may directly interface to the database and query the database for location information. In another embodiment, the SIP registration proxy may interface with a location server, which in turn retrieves location information and provides that information to the SIP registration proxy. At step 1204, the SBC sends the invite message to the gateway appliance using the location information and initiates establishment of a call session. At step 1206, a RTP (real-time protocol) call session is established between the appliance and the SBC and between the SBC and the off-premise extension SIP phone. At step 1208, the call ends using SIP bye messaging.

FIG. 12K illustrates a gateway appliance to off-premise extension SIP phone, station-to-station call process carried out by the appliance and system of the present invention in one embodiment. At step 1220, a gateway appliance sends a SIP invite message to the SBC or like functionality to make an outbound call to an off-premise extension SIP phone at "[email protected]". The invite is challenged and authenticated via, for example, the SBC and voice service manager or like functionality.
At step 1222, the SBC forwards the invite message to the destination off-premise extension SIP phone. At step 1224, call establishment is initiated and, at step 1226, a RTP session between the gateway appliance and the SBC, and between the SBC and the off-premise extension SIP phone, is established.

Billing Requirements VoIP

As mentioned, the gateway appliance is an interactive home device that enables the home user to purchase and activate services offered by the service provider. Some of these services are premium services, such as movies and music, whereas others are non-premium services, such as home automation and file sharing. For the VOIP service in particular, call records are generated by the gateway appliance that are maintained not only for billing purposes, but alternately utilized for other purposes such as diagnostics, performance studies, statistics, billing adjustments, etc. The billing collector is responsible for collecting the call records from the gateway appliance and transferring them to the backend billing system. A billing interface is provided between the gateway appliance and the billing collector element of the service center. Particularly, records are generated at the gateway appliance and are transferred to the billing collector via XMPP protocol transfer using an XML file structure.

Accounting Framework for the Gateway Appliance

In one example related to voice services, the gateway appliance captures usage information associated with the VoIP events generated during the voice call. Although the SBC-generated records are utilized for billing the call, if necessary, the gateway appliance-generated event records may be utilized for billing the call. The event records are self-contained in that no correlation of the event records is required for billing purposes.
Generally, associated with the accounting functionality programmed for the gateway appliance, before the event records generated by the gateway appliance are transferred to the billing collector, the following occurs: the gateway appliance is initialized; and the billing collector has established a session with the routing manager. The gateway appliance may initiate the transfer of the generated event records to the billing collector at configurable intervals. The gateway appliance utilizes the XMPP protocol, as defined in IETF RFCs 3920 and 3921, for transferring the files to the billing collector. Hence, the process of transferring the records generated by the gateway appliance shall be defined through the XMPP protocol and the application layer protocol attributes. The following define the attributes of the XMPP protocol: 1) the gateway appliance shall initiate a "message" stanza; 2) the "to" attribute includes the full JID of the billing collector; 3) a stream-unique "ID" shall be assigned to the message; and 4) the body of the message includes the gateway appliance generated event record in a string format. The application layer protocol includes the following data: 1) a unique message ID, which is different from the message ID of the XMPP layer; 2) a message sequence number (1, 2, 3, etc.); and 3) the total number of bytes in the event record contained in the body of the XMPP "message". As mentioned, with respect to the gateway appliance-billing collector record transfer process, the message for the event record transfer from the gateway appliance to the billing collector is a two-way handshake message. The billing collector sends an acknowledgment to the gateway appliance for every message received. If an acknowledgment is not received, then the gateway appliance resends the message. The role of the gateway appliance in the gateway appliance-collector record transfer process is defined as follows.
The gateway appliance formulates a XMPP protocol and application layer protocol message as per the attributes defined hereinabove and starts a timer in the application layer when the message is sent out to the billing collector. For the first message of its type, the value of the timer is equal to a pre-configured value. When the value of the timer has exceeded its pre-set value (e.g., an acknowledgement has not been received from the billing collector), then the gateway appliance shall resend the message. This message shall contain the same value for all the XMPP and application layer attributes defined above, except for the following application layer attributes: a) the message sequence number shall be incremented by 1; and b) the gateway appliance shall restart the wait-timer in the application layer with the value of the timer incremented to, e.g., previous value + 5. After the message is sent to the collector, the gateway appliance waits for the acknowledgment from the billing collector. If an acknowledgment has not been received by the time the wait-timer has exceeded the set value, then the gateway appliance shall repeat steps 4 and 5, each time incrementing the message sequence number by one and the wait-timer by 5 seconds. If after the 5th attempt (i.e., message sequence number = 5) the gateway appliance does not receive an acknowledgement, then the gateway appliance will stop sending the message and generate a critical alarm. If the gateway appliance receives an error response from the collector with a reason value of "Error in Bytes received", then the gateway appliance shall resend the message to the collector. The message shall contain the same message ID (in the application layer) as the previous message, but the message sequence number shall be incremented by 1. The value of the wait-timer shall be set to the same value as the previous message for which the error response was received.
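The stanza attributes and resend policy above can be sketched end to end. RFC 3920 defines the message stanza shape, but the application-layer element and attribute names here are assumptions (the patent names the fields, not their XML encoding), the collector JID is a placeholder, and the transport is stubbed out with an injected send function.

```python
# Sketch of the record-transfer handshake: build_message assembles the
# stanza with the collector JID, stream-unique stanza ID, the event record
# in the body, and the assumed application-layer data (message ID, sequence
# number, byte count). transfer_with_retries applies the stated resend
# policy: each timeout bumps the sequence number by 1 and the wait-timer by
# 5 seconds; a 5th unacknowledged attempt stops sending (critical alarm).
from xml.sax.saxutils import escape

def build_message(collector_jid, stanza_id, msg_id, seq, record):
    return (f"<message to='{collector_jid}' id='{stanza_id}'>"
            f"<body>{escape(record)}</body>"
            f"<record msg-id='{msg_id}' seq='{seq}' "
            f"bytes='{len(record.encode())}'/></message>")

def transfer_with_retries(send, initial_wait=10):
    seq, wait = 1, initial_wait
    while True:
        if send(seq, wait):            # True when the collector ACKs
            return ("acked", seq)
        if seq >= 5:                   # 5th attempt failed: stop, raise alarm
            return ("critical_alarm", seq)
        seq, wait = seq + 1, wait + 5

attempts = []
def flaky_send(seq, wait):
    attempts.append((seq, wait))
    return seq == 3                    # collector ACKs only the 3rd attempt
```

The "Error in Bytes received" path would reuse the same loop with the same application-layer message ID and an unchanged wait-timer, per the text.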
The event records generated by the gateway appliance for the VOIP service are now described. These records are generated in response to significant events detected by the gateway appliance during a call. These events are: 1) start; 2) stop; and 3) inter. The format of the records is mostly based on the Internet Protocol Detail Record (IPDR) standards. The following table specifies the fields that are generated by the gateway appliance. The fields contain a subset of the IPDR, and the fields given in italics are proprietary fields. FIG. 1C depicts the concept of ingress and egress with respect to the gateway appliance, and this concept is used to capture values in the fields described in the gateway appliance CDR. The components, servers, services, etc. described in the present disclosure illustrate logical functionalities and are not meant to be limiting in any way. Rather, they may be implemented as one or more applications, devices, or the like, depending on design and implementation choice. In addition, the various functionalities may be implemented on a distributed or central platform.
Attribute (data type; possible values): description.

1. CDR sequence number
2. Hostname: gateway appliance JID
3. IpAddress: gateway appliance IP address
4. Gateway appliance firmware version
5. ingressAccessStartTime (dateTime, GMT; e.g., 2006-11-3-T22:50:00.000Z): when the first INVITE is received from the calling party
6. ingressStartTime (dateTime, GMT): when ACK is received for 200 OK
7. ingressAccessEndTime (dateTime, GMT): when the first BYE is received/sent
8. ingressEndTime (dateTime, GMT): the same as above
9. egressAccessStartTime (dateTime, GMT): when the first INVITE is received
10. egressStartTime (dateTime, GMT): when the ingress call is answered with 200 OK
11. egressAccessEndTime (dateTime, GMT): when the first BYE is received/sent
12. egressEndTime (dateTime, GMT): the same as above
13. ingressCallDuration (Integer, ms; e.g., 18000): value in milliseconds from ingressStartTime to ingressEndTime
14. egressCallDuration (Integer, ms; e.g., 18000): value in milliseconds from egressStartTime to egressEndTime
15. timeZoneOffset (Integer; e.g., −300): time offset, in minutes, of the local time zone referenced to GMT; the local time zone should reflect the calling party time zone for correct billing
16. ingressCallID (String): unique call ID of the ingress call
17. egressCallID (String): unique call ID of the egress call
18. ingressAni (e.g., [email protected]): Calling Party ID of the ingress call
19. IngressSubscriberID: extension number of the gateway appliance endpoint (if A → B, in an inbound call this will be empty; in an outbound call this will be the ext of A)
20. egressAni (e.g., [email protected]): Called Party ID of the egress call (if A → B, in an inbound call this will be the ext of B; in an outbound call this will be the DN of B)
21. EgressSubscriberID: extension number of the gateway appliance endpoint (if A → B, in an inbound call this will be the ext of A; in an outbound call this will be empty)
22. ingressOriginalDestinationID (e.g., [email protected]): Original Destination ID of the ingress call (if A → B CF C, then this field shall contain B, which is the originally dialed number)
23. ingressDestinationID (e.g., [email protected]): Destination ID of the ingress call (if A → B CF C, then for an inbound call where B is a gateway appliance SIP endpoint this field shall contain C; for an outbound call where A is the gateway appliance SIP endpoint this field shall contain B)
24. egressOriginalDestinationID (e.g., [email protected]): Original Destination ID of the egress call (if A → B CF C, then for an inbound call where B is a gateway appliance SIP endpoint this field shall contain B; for an outbound call where A is the gateway appliance SIP endpoint this field shall contain B)
25. egressDestinationID (e.g., [email protected]): Destination ID of the egress call (if A → B CF C, then for an inbound call where B is a gateway appliance SIP endpoint this field shall contain C; for an outbound call where A is the gateway appliance SIP endpoint this field shall contain B)
26. DNIS: Dialed Number Identification Service for 2-stage dialing
27. PIN: Personal Identification Number
28. Ingress signal type: either WAN or LAN (FIG. 1C)
29. Egress signal type: either WAN or LAN (FIG. 1C)
30. gateway appliance ingressSignalAddress (Note 2): gateway appliance ingress signaling IP address (FIG. 1D)
31. gateway appliance ingressSignalPort (Note 2): gateway appliance ingress signaling IP port (FIG. 1D)
32. gateway appliance egressSignalAddress (Note 2): gateway appliance egress signaling IP address (FIG. 1D)
33. gateway appliance egressSignalPort (Note 2): gateway appliance egress signaling IP port (FIG. 1D)
34. gateway appliance ingressMediaAddress (Note 2): gateway appliance ingress media IP address (FIG. 1D)
35. gateway appliance ingressMediaPort (Note 2): gateway appliance ingress media IP port (FIG. 1D)
36. gateway appliance egressMediaAddress (Note 2): gateway appliance egress media IP address (FIG. 1D)
37. gateway appliance egressMediaPort (Note 2): gateway appliance egress media IP port (FIG. 1D)
38. silenceCompressionMode: On/OFF
39. thirdPartyID (String; e.g., Bob.ros.com): third party ID if the paymentType value is charged to 3rd party
40. callType (String; A (Administrative), I (IVR), N (no answer), V (voice), D (data), F (fax)): type of call
41. paymentType (String; Toll-free, charge_to_calling_party, charge_to_called_party, charge_to_3rd_party): indication of which party pays
42. callProgressState: state to which the call progressed
43. callCompletionCode (String; CC: call completed normally, CAD: abnormal disconnect, UCN: unconnected, network failure, UCI: unconnected, invalid address, CIP: call in progress): final call completion code for billing use; CIP indicates an event-driven IPDR, which is generated during call/connection progress
44. DisconnectReason (String; NormalCallClearing, noAnswer, busy, failure): reason that the call was disconnected, based on the Call Completion Code
45. extendedReasonCode: further disconnect information
46. Disconnect initiator: indicates if the BYE was sent or received by the gateway appliance
47. proprietaryError: gateway appliance-specific use
48. ingressFeature (String; R (Roaming), L (Line), E (Extension)): indicates what type of feature (for an inbound call A → B where A is roaming and B is an extension, this field shall contain R)
49. egressFeature (String; R (Roaming), L (Line), E (Extension)): indicates what type of feature (for an outbound call A → B where B is roaming, this field shall contain R; if A is roaming and B is a DN, this field shall contain L)
50. ingressCodec (String; G711Alaw, G711ulaw, G726, G729, G729a, G.729e, iLBC): CODEC being used
51. egressCodec (String; G711Alaw, G711ulaw, G726, G729, G729a, G.729e, iLBC): CODEC being used
52. supplementaryService (String; call forwarding, call transfer): name of the supplementary service used in this call
53. ingressInboundPacketCount (Integer): number of packets received on ingress
54. ingressOutboundPacketCount (Integer): number of packets sent on ingress
55. egressInboundPacketCount (Integer): number of packets received on egress
56. egressOutboundPacketCount (Integer): number of packets sent on egress
57. ipAddressIngressDevice (String; e.g., 66.226.243.247): SBC's address for an incoming line call
58. ipAddressEgressDevice (String; e.g., 66.226.243.247): SBC's address for an outgoing line call

The following table defines the association between FIG. 1C (signaling type) and FIG. 1D (gateway appliance interface IP address and port). The first two columns give the ingress and egress signaling types (FIG. 1C); the last two give the gateway appliance ingress and egress signaling/media interfaces (FIG. 1D).

Ingress type   Egress type   Ingress interface   Egress interface
LAN            LAN           4                   3
LAN            WAN           4                   2
WAN            LAN           1                   2
WAN            WAN           1                   3

Call Forwarding and Transfer Scenarios

There are two types of call forwarding (CF) scenarios: 1) unconditional; and 2) no answer. A call is considered transferred when A calls B, B answers the phone, and then transfers to C. The following example cases describe the population rules for the corresponding CDR fields for CF and call transfer scenarios.

Call Forwarding Scenarios for the Inbound Call

Consider the scenario A → B CF C. Call forwarding of B can only be performed to an off-net number and not to another extension. Hence, the following example cases 1)-3) are considered: in all the cases, only one CDR is generated by the gateway appliance, and all the egress information will be that of the B → C call leg.
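The FIG. 1C to FIG. 1D association table above can be encoded as a simple lookup. The dict form is purely an illustrative encoding of the table; the interface numbers refer to the FIG. 1D interfaces.

```python
# Lookup encoding of the signaling-type association table: maps the
# (ingress, egress) signaling types from FIG. 1C to the gateway appliance
# (ingress interface, egress interface) pair from FIG. 1D.

INTERFACE_MAP = {
    ("LAN", "LAN"): (4, 3),
    ("LAN", "WAN"): (4, 2),
    ("WAN", "LAN"): (1, 2),
    ("WAN", "WAN"): (1, 3),
}

def interfaces_for(ingress_type, egress_type):
    return INTERFACE_MAP[(ingress_type, egress_type)]
```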
Cases   A party    B party    C party
1       Off-net    In-house   Off-net
2       Roaming    In-house   Off-net
3       Roaming    Roaming    Off-net

Example Case 1:
1) ingressAni: DN of A (offnet number)
2) ingressSubscriberID: empty
3) egressAni: DN of B (DN of gateway appliance)
4) egressSubscriberID: extension of B
5) ingressOriginalDestinationID: B
6) ingressDestinationID: C
7) egressOriginalDestinationID: B
8) egressDestinationID: C (offnet number)
9) ingressFeature: L
10) egressFeature: L (since the call cannot be forwarded to another extension, this value will always be L)
11) ingressSignal type: WAN
12) egressSignal type: WAN
13) supplementary service: call forwarding

Example Case 2:
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of B
4) egressSubscriberID: extension of B
5) ingressOriginalDestinationID: B
6) ingressDestinationID: C (offnet number)
7) egressOriginalDestinationID: B
8) egressDestinationID: C
9) ingressFeature: R
10) egressFeature: L (since the call cannot be forwarded to another extension, this value will always be L)
11) ingressSignal type: WAN
12) egressSignal type: WAN
13) supplementary service: call forwarding

Example Case 3:
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of B
4) egressSubscriberID: extension of B
5) ingressOriginalDestinationID: B
6) ingressDestinationID: C (offnet number)
7) egressOriginalDestinationID: B
8) egressDestinationID: C
9) ingressFeature: R
10) egressFeature: L (since the call cannot be forwarded to another extension, this value will always be L)
11) ingressSignal type: WAN
12) egressSignal type: WAN
13) supplementary service: call forwarding

Call Forwarding Scenarios for the Outbound Call

For the outbound call, the following example cases 1)-5) are considered. All the egress information will be that of the B → C call leg.
Cases   A party    B party    C party
1       In-house   In-house   Off-net
2       In-house   Off-net    Off-net
3       In-house   Off-net    In-house
4       In-house   Off-net    Roaming
5       In-house   Roaming    Off-net

Example Case 1:
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of C
4) egressSubscriberID: empty
5) ingressOriginalDestinationID: extension of B
6) ingressDestinationID: C (offnet number)
7) egressOriginalDestinationID: C
8) egressDestinationID: C
9) ingressFeature: E
10) egressFeature: L (since the call cannot be forwarded to another extension, this value will always be L)
11) ingressSignal type: LAN
12) egressSignal type: WAN
13) supplementary service: call forwarding

Example Case 2: The gateway appliance actually has no knowledge of the B → C call, so the CDR will be populated with the A → B call leg values.
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of A (DN of gateway appliance)
4) egressSubscriberID: extension of A
5) ingressOriginalDestinationID: DN of B (offnet number)
6) ingressDestinationID: DN of B (offnet number)
7) egressOriginalDestinationID: DN of B (offnet number)
8) egressDestinationID: DN of B (offnet number)
9) ingressFeature: E
10) egressFeature: L
11) ingressSignal type: LAN
12) egressSignal type: WAN
13) supplementary service: empty

Example Case 3: This scenario is considered as two independent calls, and hence two CDRs are created, one for the AB call leg and the other for the BC call leg. The AB call leg will be an outbound call and the BC call leg will be an inbound call.
AB Call Leg CDR:
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of A (DN of gateway appliance)
4) egressSubscriberID: extension of A
5) ingressOriginalDestinationID: DN of B (offnet number)
6) ingressDestinationID: DN of B (offnet number)
7) egressOriginalDestinationID: DN of B (offnet number)
8) egressDestinationID: DN of B (offnet number)
9) ingressFeature: E
10) egressFeature: L
11) ingressSignal type: LAN
12) egressSignal type: WAN
13) supplementary service: empty

BC Call Leg CDR:
1) ingressAni: DN of B (DN of gateway appliance)
2) ingressSubscriberID: empty
3) egressAni: DN of B (DN of gateway appliance)
4) egressSubscriberID: empty
5) ingressOriginalDestinationID: DN of C (DN of gateway appliance)
6) ingressDestinationID: DN of C (DN of gateway appliance)
7) egressOriginalDestinationID: DN of C (DN of gateway appliance)
8) egressDestinationID: DN of C (DN of gateway appliance)
9) ingressFeature: L
10) egressFeature: E
11) ingressSignal type: WAN
12) egressSignal type: LAN
13) supplementary service: empty

Example Case 4: This will be considered as two independent calls, and hence two CDRs will be created, one for the AB call leg and the other for the BC call leg. The AB call leg will be an outbound call and the BC call leg will be an inbound call.
AB Call Leg CDR:
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of A (DN of gateway appliance)
4) egressSubscriberID: extension of A
5) ingressOriginalDestinationID: DN of B (offnet number)
6) ingressDestinationID: DN of B (offnet number)
7) egressOriginalDestinationID: DN of B (offnet number)
8) egressDestinationID: DN of B (offnet number)
9) ingressFeature: E
10) egressFeature: L
11) ingressSignal type: LAN
12) egressSignal type: WAN
13) supplementary service: empty

BC Call Leg CDR:
1) ingressAni: DN of B (DN of gateway appliance)
2) ingressSubscriberID: empty
3) egressAni: DN of B (DN of gateway appliance)
4) egressSubscriberID: empty
5) ingressOriginalDestinationID: DN of C (DN of gateway appliance)
6) ingressDestinationID: DN of C (DN of gateway appliance)
7) egressOriginalDestinationID: DN of C (DN of gateway appliance)
8) egressDestinationID: DN of C (DN of gateway appliance)
9) ingressFeature: L
10) egressFeature: R
11) ingressSignal type: WAN
12) egressSignal type: WAN
13) supplementary service: empty

Example Case 5: The ingress CDR includes the information of the AB call leg, and the egress CDR contains the information of the BC call leg.
1) ingressAni: DN of A (DN of gateway appliance)
2) ingressSubscriberID: extension of A
3) egressAni: DN of B
4) egressSubscriberID: extension of B
5) ingressOriginalDestinationID: extension of B
6) ingressDestinationID: C (offnet number)
7) egressOriginalDestinationID: extension of B
8) egressDestinationID: C (offnet number)
9) ingressFeature: E
10) egressFeature: L (since the call cannot be forwarded to another extension, this value will always be L)
11) ingressSignal type: LAN
12) egressSignal type: WAN
13) supplementary service: call forwarding

It is desirable to provide systems and methods that enable enhanced managed services while supporting and managing the emerging digital home.
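The CDR population rules in the call-forwarding cases above can be sketched for one case. The helper below encodes the inbound Example Case 1 (off-net A calls in-house B, forwarded to off-net C); the function and its parameters are illustrative, not the patent's implementation, while the dict keys and values mirror the field rules stated in the text.

```python
# Sketch: populate the single CDR generated for inbound call forwarding,
# Example Case 1 (A off-net, B in-house, C off-net). Per the stated rules,
# the ingress subscriber ID is empty for an off-net caller, and the egress
# feature is always L because forwarding can only target an off-net number.

def populate_cf_inbound_cdr(a_offnet_dn, b_dn, b_ext, c_offnet_dn):
    return {
        "ingressAni": a_offnet_dn,
        "ingressSubscriberID": "",
        "egressAni": b_dn,
        "egressSubscriberID": b_ext,
        "ingressOriginalDestinationID": b_dn,   # B: the originally dialed number
        "ingressDestinationID": c_offnet_dn,
        "egressOriginalDestinationID": b_dn,
        "egressDestinationID": c_offnet_dn,
        "ingressFeature": "L",
        "egressFeature": "L",
        "ingressSignalType": "WAN",
        "egressSignalType": "WAN",
        "supplementaryService": "call forwarding",
    }

cdr = populate_cf_inbound_cdr("9195550100", "9195550111", "101", "9195550122")
```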
It is a feature of the present invention to include a system, such as the gateway appliance thoroughly described herein, that can offer enhanced managed services to its users via incorporation of causation and correlation engine abilities. A causation and correlation engine can be personalized and enable broader services for system/gateway users. The enhanced features can be referred to as being provided by a Personal Causation and Correlation Engine (or "PCCE"). Areas where a PCCE can be used include establishing objectives, tracking performance, and influencing behavior and conformance in areas such as finances, health/medical objectives, exercise, nutrition, advertising, home security, purchasing, and energy conservation. The PCCE enables users and others to establish objectives and preferences with regard to various aspects of a user's condition and behavior, and the PCCE correlates events and data and determines messaging, communication, special offers, rewards, and other incentives to cause the user to comply with certain behavior or actions. Users, coaches, recognized authorities, or others may establish objectives. Based upon a vast array of real-time or near real-time events, stored dynamic data, profile information, and records of the effectiveness of prior causation actions, the PCCE identifies appropriate opportunities and methods of personalized communication to influence causation actions. This communication can be tailored for each user based upon user preferences and what can be known about the user and their environment. The PCCE may also leverage information regarding the user's social media contacts to establish influence and can also enlist the support of social media contacts to influence the user.
The user's social media contacts may be incented to provide influence by sending messages and other communication to the user; these incentives for contacts and friends are personalized and tailored based upon what can be known about each specific social media contact. Information reflecting the correlated data and instigating event(s) is submitted to expert rules-driven applications, and personalized messaging and events are created which cause the user to make decisions and perform actions that are complementary to the user's personalized objectives. Some rules-driven applications (such as home automation) can be controlled partially or exclusively by the user, while others, such as e-health, can be controlled by experts or other third parties. The PCCE can interact with both an advertising server and one or more rules-based applications to formulate causation actions and messaging.

High-Level Processing with the PCCE

When the PCCE determines there is an opportunity or need for communication based upon real-time and near real-time instigating events, the PCCE will determine, based upon prioritization and objectives, the attributes of the desired communication, the method of communication, and the type of communication as expressed by the user and/or determined by the PCCE, and can formulate or update an encoded token, which represents the opportunity and the type and method of desired communication. These token segments can be stored and maintained (that is, updated based upon events) for each individual in the home and for the home in general. The encoded token can be passed to the appropriate rules-based application component. The messaging component can be an application-specific messaging component with rules that are controlled by medical professionals and behavioral scientists, as seen in an e-health app, or the intelligent ads server, which can interoperate with another third-party component such as a rules-based advertising platform.
Based upon the attributes expressed in the PCCE token, the messaging component can return data indicating the messaging identifiers, content, and other messaging attributes. The PCCE agent will process this information, invoke the communication, and direct the communication to the "best" user interface(s) using the "best" (best-rated as most effective) method for that interface, based upon what is known about the user (and an attribute indicating whether the message should be displayed on a public interface (TV, etc.) or a private interface). The PCCE can also determine that a social media contact may be enlisted in the causation (these contacts may be selected and prioritized by the user or other coaches), and the PCCE may express this in the token that can be sent to the messaging components. The messaging component can respond back with data that enables the PCCE agent to invoke communication to the social media contact to cause and incent the social media contact to communicate with the user to influence the user's behavior. The PCCE can also dynamically store and correlate the metadata of the user's responsiveness to prior messaging; these metrics and rankings can be used for subsequent causation decision-making and can be submitted to the rules-based applications. The PCCE can also integrate messaging, advertising, and incentive messaging to provide enhanced encouragement for users and compelling incentives for positive actions. The PCCE can also support two or more users as groups with common objectives and enable sharing of events and actions among the groups. The real-time events, dynamic data, profile information, and other user data from each domain are stored and indexed as domain-specific metadata. This metadata is processed by the correlation component to create token segments, which are then passed to the causation component. The correlation component may interact with service applications to establish thresholds for specific metrics (e.g.,
weight, energy usage, spending levels per month) and request to be informed when these thresholds are met or exceeded. The PCCE may also detect and incorporate information about the location and presence of other family member users when formulating a causation event. The PCCE can enable users and others to establish preferences regarding application/domain-specific items (e.g., preferred food types, types of exercise, timing of exercise, sedentary durations, shopping preferences, etc.). Attributes, meta-data, and token information may be kept private using a data encryption unit 162 and/or other application-level encryption. The user's identity may or may not be exposed to the messaging and advertising components. Messaging and interaction can be directed towards one or more endpoints and may also invoke actions within other applications such as the home and energy management applications. In some cases, the PCCE may enable messaging and interaction on one endpoint, such as a TV or mobile device (smart phone, tablet, etc.), and the user may be prompted to enable supplemental interaction on another endpoint such as a mobile device or TV. The PCCE stores records/logs of the initial interaction and correlates the subsequent interaction events on the second screen with the initial interaction and user. Records of screen-specific interaction can be stored for a configurable period of time, or for a configurable number of interaction events. Upon initialization, initial tokens, in the form of metadata "document" instances, are formulated by the correlation component. The correlation component stores a copy of current token instances and retrieves these documents upon subsequent restart. The configuration of the metadata indexes, which form the token documents to be stored, can be under control of the service management center and based upon the service applications 130 provisioned via the service management and support center 50.
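The threshold registration described above, where the correlation component asks a service application to report when a metric (such as weight, energy usage, or monthly spending) crosses a configured limit, can be sketched as follows. The class, metric names, and callback shape are hypothetical illustrations, not structures prescribed by the specification:

```python
# Hypothetical sketch: the correlation component registers a threshold with a
# service application and is called back when a reading meets or exceeds it.
class ThresholdWatcher:
    def __init__(self):
        self._thresholds = {}   # metric name -> (limit, callback)

    def register(self, metric, limit, callback):
        self._thresholds[metric] = (limit, callback)

    def report(self, metric, value):
        # Called by the service application whenever a new reading arrives.
        limit, callback = self._thresholds.get(metric, (None, None))
        if limit is not None and value >= limit:
            callback(metric, value)

triggered = []
watcher = ThresholdWatcher()
watcher.register("monthly_spending", 500, lambda m, v: triggered.append((m, v)))
watcher.report("monthly_spending", 350)   # below threshold: no event
watcher.report("monthly_spending", 620)   # at/above threshold: event fires
```

In the architecture described here, the callback would route a message indication to the PCCE correlation sub-module rather than append to a list.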
After the tracking indexes are established, the PCCE requests data from each service application and/or API, and also registers to be informed of events and data changes within each service and platform application. Depending upon configuration options controlled by the service management center 50, the user may be able to delete the personal data and metadata and "opt out" of the causation processing. The user may also be able to export the data to a personal device or storage area.

PCCE Component Descriptions:

Real-time and near real-time events—These events are generated by various components and applications such as calendar, weather and forecast, home automation, energy management, e-health, location and proximity, social media, music/media services, and other applications.

Dynamic Data and Trends—These data types reflect changing conditions related to the home, health, finances, etc. The value of a (meta)data element may trigger an event; in addition, complex datasets and content may be processed down to metadata and stored as key-value pairs of meta-data. This metadata is stored, prioritized, and indexed (by the app or the PCCE) and subsequently processed by the correlation component, which will formulate personalized token segments.

Profile—These data types are generally more static and less dynamic than the real-time events or dynamic data. The profile information includes data that is stored on the gateway or can be acquired via web service calls (and cached) or other types of remote access. As the profile data is processed, meta-data is generated and stored, and this meta-data is used by the correlation component, which will formulate personalized token segments.

User/Expert Objectives—These data types are meta-data which represent domain-specific objectives, which could be established by the user or by domain experts. The correlation engine accesses this data and formulates personalized token segments.
Historical Responsiveness—Metadata that the PCCE uses to express previous effectiveness at causation; it can be used by the PCCE in token formulation and also accessed by third parties (analytics, etc.) to track and measure effectiveness.

Correlation Component—Formulates and maintains token segments based upon access to all metadata (events, dynamic data, profile, objectives, etc.), receives triggering events, updates the metadata token segments, and passes the triggering event and token segments to the causation component.

Causation Component—Receives the triggering event and metadata token, determines which rules/apps platforms to submit to, constructs and encodes the token based upon the target rules/apps platforms, and passes it to the causation agent.

Causation Agent—Receives the encoded token, encrypts and routes it to the rules/apps platforms, receives the response from the rules/apps platforms, and formulates API service calls to application interfaces based upon the response from the rules platforms.

Referring to FIG. 8, the drawing depicts a high-level view of the PCCE module 2201, which is associated with the services layer 130 within the application gateway. Application services interfaces 140 operating as part of the PCCE interact with internal components such as the Intelligent Ad Server 130I. Other layers associated with the PCCE are the platform management layer 110, the services framework layer 120, and the services layer 130. Application service interfaces 140 can include: SIP interface 141, Web interface 142, IM interface 144, XMPP interface 145, UPnP interface 147, and Web Services 149.
The services layer 130 can include the following modules: File Share Manager 130a, Backup Server 130b, Home Storage 130c, Network Device Manager 130d, Basic Photo Editor 130e, Home Energy & Automation Controller 130f, Media Service Module 130g, E-health Manager 130n, Intelligent Ads Manager 130l, Parental Control 130k, Presence & Networking 130j, Voice Mail/IVR Server 130i, Call Processing Module 130h, and Personalized Correlation & Causation 130m. The services framework 120 can include the following modules: Billing Manager 120a, Fault Manager 120b, Database Management 120c, Configuration Management 120d, User Management 120e, Statistics Manager 120f, Device Authentication Manager 120g, Control Channel Interface 120h, Service Manager 120i, and REST Client 120j. Platform management 110 can include the following management modules: Platform Manager 110A, Scheduler Manager 110B, Diagnostic Manager 110C, Firmware Upgrades Manager 110D, Resources Manager 110E, Display Manager 110F, and Logger Manager 110G. FIG. 9 provides an expanded view of drawing 21 depicting the PCCE 2201 module within the software architecture of the application gateway 10. Utilizing the framework and interfaces previously described in FIG. 2C, the PCCE can communicate with other internal and external components including, but not limited to, application service interfaces 140, application services 130, services framework modules 120, and platform management components 110. FIG. 10 provides a detailed view of the PCCE 2201, the PCCE sub-module components, and a logical view of the exemplary sources of data and events acquired and utilized by the PCCE 2201. The PCCE 2201 comprises three sub-modules: the correlation sub-module 2201a, the causation sub-module 2201b, and the causation agent sub-module 2201c.
Together, these three sub-modules access data, receive events, and interact with internal and external application services and expert-driven rules platforms to deliver user messaging and coaching, invoke application actions, present incentives, and influence user behavior. The logical view of the data and events depicted in FIG. 10 can include real-time events 2401, dynamic data and trends 2402, and profile data 2403. Real-time events 2401 are events that are communicated from devices and applications such as, but not limited to, the home energy and automation controller and sensor devices that can be deployed throughout a venue or environment, automatically controlled lighting or lamp units, and laptop/mobile devices connecting as clients to the gateway 10. Dynamic data and trend data 2402 represent changing data such as, but not limited to, weather forecasts, stock portfolio information, and e-health monitoring statistics. When this type of data reaches a configurable threshold, a message indication is routed to the PCCE correlation sub-module 2201a. Profile data 2403 is generally more static, less dynamic data such as, but not limited to, user preferences, demographic information, family medical history data, application usage statistics, and the like. Real-time events 2401 may also be communicated via internal or external messaging from other applications, the service management center, or the application gateway itself. The PCCE can poll the internal and external sources of data and events at a configurable interval; the PCCE can also register or subscribe to be informed of application events or data changes. Now turning to the PCCE 2201 and its sub-modules, the correlation sub-module 2201a establishes meta-data indexes based upon application gateway 10 configuration data that is provisioned by the service management center; the configuration data is based upon the application services to which the user subscribes.
These metadata indexes, which can be stored as XML, text string formats, or relational data formats, can contain a category such as "shopping", a primary sub-category such as "women's apparel", a secondary sub-category such as "shoes", and a specific value such as "running shoes". These XML, text string, or relational data are stored in persistent storage, maintained as an in-memory document, and updated as new events and evented data are received. As another example of the type of data that is stored in the meta-data indexes, the correlation sub-module may have meta-data related to e-health such as "e-health", "medication taken=cholesterol", "brand=xyx", "quantity=20", "unit=mg", "frequency=once per day". The event data can be stored with time stamps and an indicator indicating the type of event. A meta-data document instance may be maintained for each user registered within the premise, and another meta-data document instance may be maintained for the home or user premise itself. The meta-data document instance may also contain data relating to a user's social media contacts and their preferences. Depending upon the type of event received, the correlation sub-module 2201a may determine that a token needs to be formulated from one or more of the meta-data instances, with an event indicator and one or more instances of meta-data for a user and/or for the home. The correlation sub-module 2201a will formulate the token and pass the token to the causation sub-module 2201b. The causation sub-module 2201b will receive the token, determine what screens may be available for messaging and what rules-based platforms and social networks should be engaged, and pass this information to the causation agent 2201c along with information on what platforms should be engaged and what screens should subsequently be addressed with messaging that will be received from the rules-based platforms.
It should be noted that, in order to increase performance, the token may be further compressed and passed as an index, with the format of the index being known by the internal or external rules-based platforms. With this method, the events and attributes of the user may be passed to the rules-based and/or advertising platforms, but the actual identity of the user may not be exposed to the third-party platform. This method is a significant departure from the well-known internet browser-based "cookie" method of tracking users and supplying advertising and other services. In the cookie method, a string of data that identifies the user is stored within the user's browser. All attributes and preferences are stored on back-office systems, and the user often has limited knowledge of or control over the data. With the present attribute-based token method, the back-office systems are presented with attributes and determine responses based upon the attributes presented, not necessarily upon the identity of the user. The causation agent sub-module 2201c receives the token from the causation sub-module 2201b and, based upon the attributes of the token and the routing meta-data, optionally encrypts the token, routes the token to the rules-based platforms via the application services interfaces 140, and awaits a response. Note that the causation agent may send and receive more than one token to and from more than one rules-based platform per instigating event. Upon receiving a response(s) from the rules-based platform(s), the causation agent 2201c encapsulates the response and routes the response to all applicable local or remote devices 30a, media adapter elements 35b, set top boxes 35a, TVs and screens as shown in FIG. 1a, and application services 130. The PCCE may also cache messaging and advertisements, including videos, which may be presented to the user at a later time or when the screen becomes active.
The PCCE 2201 may also subsequently receive information regarding the user's responses to the message or advertisement. This user response information is stored and can be used as meta-data in the index document indicating the success rate of different types of messaging. With this method, the historical record of a user's compliance or responsiveness can subsequently be presented to rules-based platforms and can be used when formulating subsequent communications. This user response information may also be retrieved by the service management center, which can aggregate and summarize the relative success of specific messaging, and this summary may also be distributed to other application service gateways 10 and the rules-based platforms. FIG. 11 illustrates an example of meta-data collection and event processing as carried out by the PCCE 2201 appliance and system of the present invention in one embodiment. Step 2510 performs data collection and the formulation and maintenance of the meta-data document instantiation(s), including setting the values of the meta-data indices. At step 2520, the correlation sub-module 2201a receives a triggering or instigating event, updates the meta-data token, and determines whether the event requires the correlation sub-module 2201a to notify the causation sub-module 2201b. These notification criteria are configurable and under the control of the service management center 50. The correlation sub-module 2201a updates the meta-data token and the triggering events and, if the triggering event is found in the event table lookup, prepares this token instance to send to the causation sub-module 2201b. At step 2530, the causation sub-module 2201b receives the events and, based upon the triggering event ID and the meta-data token value, determines what rules-based platforms the token should be directed towards.
This determination of which platform(s) to engage is achieved by means of a local table look-up as configured by the service management center. After the causation sub-module processes the token, optionally further compresses the token, and determines whether the user's identity should be hidden, based upon configuration data received from the service management center, the causation sub-module 2201b passes the token and platform meta-data, including platform IDs, to the causation agent sub-module 2201c. In step 2540, the causation agent receives the token event and, based upon the platform ID(s) meta-data, which contains one or more internal or external rules-based platform identifiers, optionally encrypts the token, routes the token to the application services interface(s) 140, and awaits a response. In step 2550, the causation agent receives a response from the rules-based platform(s) 2204 and, based upon the response from the rules-based platforms 2204 and the current local or remote devices 30a, media adapter elements 35b, set top boxes 35a, TVs and screens, and application services 130 that are in use and available, routes the message and/or uniform resource locator address to the appropriate devices, screens, applications, and the like. Note that although the steps shown in FIG. 11 all represent positive processing and responses, error handling and time-out logic are supported in all the sub-modules and interfaces.
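The table-driven platform selection and optional identity hiding described in the FIG. 11 flow can be sketched as follows. The event IDs, platform names, and table contents are hypothetical stand-ins for the table provisioned by the service management center:

```python
# Hypothetical local look-up table: triggering event ID -> target rules-based
# platform(s), standing in for the table provisioned by the service
# management center.
PLATFORM_TABLE = {
    "bp_reading_high": ["e-health-rules"],
    "location_at_store": ["ad-platform"],
    "medication_unacknowledged": ["e-health-rules", "social-coach"],
}

def route_token(event_id, token, hide_identity=True):
    # Select target platforms by event ID; optionally strip the user's
    # identity so platforms see only attributes, per the attribute-based
    # token method described above.
    platforms = PLATFORM_TABLE.get(event_id, [])
    if hide_identity:
        token = {k: v for k, v in token.items() if k != "user_id"}
    return [(platform, token) for platform in platforms]

routes = route_token("medication_unacknowledged",
                     {"user_id": "u1", "med": "cholesterol", "overdue_min": 45})
```

Note how a single instigating event may fan out to more than one platform, matching the observation that the causation agent may send more than one token per event.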
The PCCE can provide users access to many services and information sources including:
- Ownership/purchasing (e.g., cars, real estate, music, movies, consumer products, electronic products)
- Loyalty card purchase information
- Employment, occupation, employment history, salary, benefits, type of insurance
- Location information (e.g., location now, micro-location now, location patterns, businesses frequented, commuting methods)
- Travel plans, travel history, device presence and status
- Hobbies & community activities (e.g., hobbies, interest groups, church groups, continuing-ed classes, volunteer activities)
- Communications (e.g., phone calls, texts, email, web-based, IM and IM status, Twitter interests, etc.)
- Financial (e.g., stock/bond portfolio and stock/bond performance, checking/savings, bills and obligations, medical bills)
- Intents (e.g., search information, browsing history, app usage, travel itinerary, schedule/appointments calendar, "to-dos")
- Education activities, history and present
- Home/office status (e.g., devices offline and devices online, heating/AC status, alarm system status, presence, lights, garage doors, locks, motion detectors, cameras, etc.)
- Purchase information
- Musical interests
- Social media/network (e.g., social network contacts, interests and intents, social media groups, Twitter followings, focused social media, contacts and their interests and intents)
- Exercise & sleep (e.g., status, history and patterns, performance, user goals, personal coach goals for exercise & sleep (weekly, daily activity level, etc.))
- Medical info status and history (e.g., prescription info/bills, user responsiveness history, user causation preferences, user goals, medical coach goals, injuries, diseases, medical conditions and ailments, medications, providers and visits, schedule, medication status, current medical metrics (BP, weight, glucose, etc.)
and trends)
- Commerce (e.g., advertising goals, purchase information, media, media interest, music, video, movies, games, books, articles, real-time consumption status)

The following are exemplary usage scenarios for the PCCE:
- Blood pressure check—results high: alert personal coach.
- Weight check—results low: celebration action and/or alert personal/medical coach and/or cause a device or home automation action such as flashing of lights.
- Medication reminder event.
- Medication reminder not acknowledged: alert professional coach or personal coach.
- Blood pressure check—results good: offer a reward for the user and their personal coach, or direct the reward to a favorite charity.
- Location alert, at restaurant: send an incentive to a coach or contact to in turn send a recommendation to the user for a "smart food" choice, or alert the user to a "better" restaurant choice based upon location, consumption objectives, profile preferences, advertiser objectives, etc.
- Location alert, at store: based upon previous purchase of a product such as a coffee maker, provide an incentive for purchasing additional coffee, etc.
- Travel itinerary includes an air flight: notify the user of low-impact exercises to try on the plane.
- Media search event: present a recommendation based upon previous viewing or social contacts' current or previous viewing.
- Screen-based call for advertising: present an ad based upon recent intents (intents=searches, phone call meta-data, profile data, user exercise or medical objectives).
- Micro-location event: user is in the living room, medication not acknowledged; send a reminder to the TV.
- Acceptable sedentary duration exceeded: suggestions/reminder sent to run errands, perform other tasks on "to-do" lists, etc.
(after the movie is over, of course); these could also be reminders from incented friends, contacts, etc.
- Calendar alert, user traveling in 3 hours: send a reminder to leave for the airport in 1.5 hours (based upon location) and remind/confirm the user packed medication.
- Call for advertising from a screen: request an ad based upon the user's travel itinerary, objectives, interests, and search history.
- Reminder—garage door open: call for an ad (based upon the token) and send it with the notification.
- Location—near Home Depot and calendar open (25 minutes before picking up the kids): suggest buying the AC filter which needs replacing, or offer to order it online and ship it directly to the house.
- Unexpected expense: offer money-specific saving suggestions and offers to control expenses during the month.
- Weather forecast=rain/snow/ice: adjust the travel schedule to the airport and confirm flight schedules.

Examples of identity detection: login user ID, voice detection, face recognition, facial expression recognition, micro presence detection, macro presence detection (correlated to schedule), activity or sedentary detection, and typing/tapping pattern detection (detect based upon typing/tapping habits).

E-commerce: track purchases, maintain profile information and billing info, cache relevant ads, and make recommendations based on the profile.

Home automation: manage a mix of devices with automated control or controlled by the user, with rules-based processing to trigger an action based on a sensor/event (e.g., thermostat, alarm sensor, door bell) and communication out to the user or a third party.

The PCCE can monitor/track user responsiveness history and user causation preferences. It can help determine what users respond to and what types of stimuli motivate them toward desired results (e.g., friend prompts, authoritarian prompts, positive prompts, negative prompts, special offers, special offers and rewards to friends or causes, frequent reminders, infrequent reminders, etc.).
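The responsiveness-history tracking just described, recording whether the user acted on each type of stimulus and exposing a success rate for later causation decisions, can be sketched as a simple log. The class and message-type names are hypothetical; the specification does not prescribe this structure:

```python
from collections import defaultdict

# Hypothetical sketch of historical-responsiveness tracking: record whether
# the user acted on each message type, and expose a success rate that can be
# fed back into subsequent token formulation.
class ResponsivenessLog:
    def __init__(self):
        self._stats = defaultdict(lambda: [0, 0])   # type -> [responses, sent]

    def record(self, message_type, responded):
        stats = self._stats[message_type]
        stats[1] += 1            # one more message of this type sent
        if responded:
            stats[0] += 1        # user acted on it

    def success_rate(self, message_type):
        responded, sent = self._stats[message_type]
        return responded / sent if sent else 0.0

log = ResponsivenessLog()
log.record("medication_reminder", responded=True)
log.record("medication_reminder", responded=False)
log.record("medication_reminder", responded=True)
```

A per-type rate like this is the kind of metric the service management center could aggregate across gateways, as described earlier.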
It can include configurable notification and causation based upon user goals: financial, consumer, media, medical, physical, safety and security, home maintenance, energy consumption, intents, and informational goals. It can also be based upon other entities' goals: personal coach goals, medical coach goals, advertiser goals, and service provider goals. The present invention has been described with reference to diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each diagram can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified herein. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified herein. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified herein.
While the invention has been particularly shown and described with respect to illustrative and preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and details may be made therein without departing from the spirit and scope of the invention, which should be limited only by the scope of the appended claims.
11943352

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Exponentiation is a common mathematical operation between a base and an exponent, where the exponentiation corresponds to repeated multiplications of the base. Exponentiation is used extensively in many fields, such as biology, physics, and computer science. However, the computation of extremely large exponents is difficult due to the super-linear nature of exponentiation. That is, as the exponent grows, the raw computational difficulty increases at a rate faster than linear. However, there are methods to reduce the difficulty of calculating the results of large exponentiations. For example, the so-called "power rule of exponents" significantly reduces computation when factors of the exponent are known. However, integer factorization is also a known difficult problem. In fact, with sufficiently large integers, there are no efficient known means for factoring. This difficult problem forms the foundation of much of modern cryptography. For example, two large primes may be kept secret (e.g., a private key), while the product of the two large primes may be shared (e.g., a public key) without revealing the secret primes. Exponentiation and prime factorization can also play an important role in private information retrieval (PIR) schemes. PIR schemes allow a user to retrieve data from one or more storage devices while not revealing any knowledge about the user or the retrieved data to a server hosting the one or more storage devices. For example, a server may store a public database of n blocks B1, . . . , Bn of equal length. The user wishes to download a block B. For privacy reasons, the user does not want to reveal to the server (or anyone else) the index of the downloaded data block. That is, the user desires to obliviously retrieve the selected data block. One solution for this problem is to have the client download the entire database.
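As generic background for the super-linear cost noted above, the textbook square-and-multiply routine computes g^E mod M in O(log E) modular multiplications rather than the E-1 multiplications of naive repeated multiplication. This is standard material, not a mechanism claimed by the text:

```python
# Textbook square-and-multiply modular exponentiation: O(log E) modular
# multiplications instead of the E - 1 multiplications of naive repeated
# multiplication.
def mod_pow(g, e, m):
    result = 1
    g %= m
    while e > 0:
        if e & 1:                 # low bit of the exponent is set
            result = (result * g) % m
        g = (g * g) % m           # square the running base
        e >>= 1
    return result
```

Even with this speedup, an exponent of millions of digits remains expensive, which is why the further reductions discussed below matter.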
While effective, the bandwidth cost makes this solution infeasible for any database of significant size. Another approach is to use exponentiation. For example, the server may represent the database as an integer exponent E. Depending on the size of the database, E may be quite large (e.g., thousands or millions of digits). The client obtains a group G that is represented by a modulus M associated with a data block (e.g., a stored query element) on the untrusted server. The client may select a group element g, where g is associated with the data block (e.g., query element) stored on the server that the client wants to download. The client desires to solve g^E mod M, as the client can use the result to obliviously retrieve the selected data block. The client knows the prime factorization of M, but desires to keep the prime factorization secret. In this situation, the client could merely send g and M to the server and allow the server to compute g^E mod M. However, as previously discussed, exponentiation is a super-linear problem, and with a sufficient size of the exponent E (i.e., the size of the database), the computation requirements are prohibitive. The server could greatly simplify the computation with knowledge of the prime factors of the modulus (e.g., by using Fermat's Little Theorem), but the client desires to keep the prime factors secret. Thus, it is advantageous for the client to perform efficient computation using the prime factorization and then outsource the remaining exponentiation to the server, to both avoid the bandwidth costs of transmitting the entire database (i.e., the exponent) and the computation cost of the server performing the full exponentiation without access to the prime factorization of the modulus.
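The shortcut that knowledge of the prime factors unlocks can be illustrated with Euler's theorem (a generalization of Fermat's Little Theorem): knowing M = p*q lets one reduce the exponent modulo phi(M) = (p-1)(q-1) before exponentiating, valid when g is coprime to M. The toy primes below are illustrative only; real moduli use secret primes of thousands of bits:

```python
# With the factorization M = p*q known, Euler's theorem lets the exponent be
# reduced mod phi(M) = (p-1)*(q-1) before exponentiating (valid for
# gcd(g, M) = 1). This is exactly the speedup the client can use but wants
# to withhold from the server.
def pow_with_factors(g, E, p, q):
    phi = (p - 1) * (q - 1)       # Euler totient of M = p*q
    return pow(g, E % phi, p * q)

p, q = 61, 53                     # toy secret primes; M = 3233
g, E = 7, 10**9 + 7               # a large exponent standing in for E
```

The reduced exponent is at most phi(M), so the cost no longer grows with the size of E (i.e., with the size of the database).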
The routine allows for a practical balance between bandwidth and computation by having a client perform efficient exponentiation using a secret prime factorization of a modulus to generate a series of base values without the use of a server-held exponent. The client sends the series of base values to the server for further computation with the server-held exponent without providing the prime factorization. Referring toFIG.1, in some implementations, an example system100includes a user device10(also referred to as a client device10) associated with a respective user or client12and in communication with a remote system111(herein also referred to as a server or as an untrusted server) via a network112. The user device10may correspond to any computing device, such as a desktop workstation, a laptop workstation, or a mobile device (i.e., a smart phone). The remote system111may be a single computer, multiple computers, or a distributed system (e.g., a cloud environment) having scalable/elastic computing resources118(e.g., data processing hardware) and/or storage resources116(e.g., memory hardware). An untrusted data store150(e.g., ‘untrusted storage device’ or ‘untrusted server’) is overlain on the storage resources116to allow scalable use of the storage resources116by one or more of the client or computing resources118. The untrusted data store150is configured to store a plurality of data blocks152,152a-navailable for download by the client device10. For example, the untrusted data store150includes n publicly-known and un-encrypted data blocks (B)152and allows one or more client devices10to use PIR for obliviously retrieving data blocks (B)152to conceal access patterns while preserving search functionalities on the data blocks (B)152by the client devices10. Thus, the client device10may not own the data blocks152and the content of the data blocks152is available to the public in some configurations.
Alternatively, the data blocks152may be private to a specific client12, but the client12still desires to conceal access patterns from the untrusted data store150. The data blocks152may be represented by an integer. That is, the untrusted data store150or the server111may process each data block152to generate a single integer that is representative of all of the data blocks152. For example, the server111may use the Chinese Remainder Theorem to encode each data block152with a distinct small prime. The client device10(e.g., a computer) associated with the client12may include associated memory hardware122and associated data processing hardware124. Each client device10may leverage the associated memory hardware122to hold or store a query instruction130, a group (G)132represented by a modulus (M)134, a positional base (B)136, and a positional count (m)138. In some examples, the data processing hardware124executes a generator160for generating a series of base values162issued to the remote system111, which herein may also be referred to as a server executing in an untrusted environment. The generator160, in some examples, executes the query instruction130to retrieve a query element (i.e., a data block (B)152) stored on the untrusted data store150. To this end, the generator160obtains or receives or generates the modulus (M)134associated with the query element152stored on the untrusted data store150. The modulus134is a product of two or more prime numbers135. The prime numbers135are selected by the user device10and may form the basis of a private key. Thus, the primes135remain secret from all other parties (including the server111). The generator160also obtains or receives or generates a group (G)132that includes one or more group elements (g)133. The group132is represented by the modulus134and each group element133of the group132is a generator of a subgroup associated with a respective one of the prime numbers135.
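The Chinese Remainder Theorem encoding described above can be sketched as follows; the block values and primes are hypothetical toy choices, not parameters from the disclosure:

```python
from math import prod

# Hypothetical toy database: each block is a small integer value.
blocks = [7, 42, 13, 99]

# One distinct prime per block; each prime must exceed its block's value.
primes = [101, 103, 107, 109]

# CRT: find a single integer E with E mod p_i == blocks[i] for every i.
M_total = prod(primes)
E = 0
for b_i, p_i in zip(blocks, primes):
    n_i = M_total // p_i                 # product of all the other primes
    inv = pow(n_i, -1, p_i)              # modular inverse (Python 3.8+)
    E += b_i * n_i * inv
E %= M_total

# Any single block is recovered by reducing E modulo its prime.
assert [E % p for p in primes] == blocks
```

Reducing E modulo one of the primes recovers exactly one block, which is what later lets a result associated with one data block prime single out the queried element.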
Referring now toFIG.2, in some implementations, the data processing hardware124executes a selector200that receives the query instructions130. The query instructions130include one or more data block selections131. Each data block selection131corresponds to a data block152(e.g., query element) to be retrieved obliviously from the untrusted data store150. The selector200is in communication with a data block primes data store210stored at the server111(e.g., at memory hardware116). The data block primes data store210, in some examples, stores a respective data block prime220associated with each data block152stored on the untrusted data store150. That is, the data block primes data store210may provide the data block prime220that corresponds to the data block152to be retrieved from the data store150. The selector200may use the data block selections131from the query instructions130to obtain the data block primes220corresponding to the data blocks152associated with the data block selections131. Using the obtained data block primes220, the selector200selects a prime factorization of the modulus134. The prime factorization includes two or more prime numbers135and at least one of the prime numbers135corresponds to the data block prime220. Similarly, the group element133is selected based on the data block selections131such that both the modulus134and the group element133are associated with the data block prime220associated with the data block152to be retrieved from the untrusted data store150. That is, the group element133is configured to generate a respective one of the prime numbers135of the prime factorization of the modulus (M)134. Referring back toFIG.1, the generator160, in some implementations, also receives or determines the positional base (B)136. The positional base136is the base (also called the radix) of a server-held exponent154.
That is, the exponent154may be represented with any base greater than or equal to two (i.e., binary), and the positional base136is selected by the client device10or the server111to represent the exponent154. The positional base136, in some examples, is selected by the user12, while in other examples the user device10automatically determines the positional base136. In additional examples, the server111determines and relays the positional base136to the client device10. In addition to the positional base136, the generator160may receive a positional count (m)138. As described in more detail below, the positional count138represents a number of digits212(FIG.3) required to represent the exponent154with the positional base136. The generator160, using the prime factors135, generates a series of base values162,162a-n. Each base value162, in some examples, is an exponentiation of the group element133by the positional base136, which corresponds to a value of a digit position in a positional numeral system. Referring now toFIG.3, an exemplary exponent154is equal to 153,729 when the positional base136is equal to ten (i.e., using a decimal positional numeral system). It is understood that any positional numeral system (i.e., the positional base136) may be selected. For example, the positional numeral system may include binary, hexadecimal, or decimal. While typically the exponent154is extremely large, small numbers are used in the example for clarity. The server111, which interprets the exponent154with the selected positional base136(in this case, ten), determines the positional count138. That is, the server111determines the number of digits212required to represent the exponent154with the positional base136. In the example shown, the exponent154(i.e., 153,729) can be represented in decimal (i.e., b=10) with six digits212. Thus, the positional count138is equal to six (in decimal).
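The determination of the digits and the positional count for the example exponent can be sketched as follows; the function name is illustrative:

```python
def digits_base_b(e, b):
    """Return the digits of e in base b, least significant first."""
    out = []
    while e > 0:
        out.append(e % b)
        e //= b
    return out or [0]

E, B = 153729, 10
digs = digits_base_b(E, B)
assert digs == [9, 2, 7, 3, 5, 1]   # digit values E_0 .. E_5
assert len(digs) == 6               # the positional count: six digits in decimal
```

Choosing a larger base shrinks the digit count (and thus the number of base values to transmit) at the price of larger per-digit exponentiations, which is the bandwidth/computation tradeoff described below.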
Regardless of the selected positional base136, the exponent may be represented as: E=E0+E1B^1+E2B^2+ . . . +EmB^m (1) Here the exponent154, in decimal (i.e., the positional base is equal to ten), is represented as 9+(2*10)+(7*100)+(3*1000)+(5*10000)+(1*100000) for a total of 153,729. Referring now toFIG.4, equation (1)470allows for g^E to be rewritten as: g^E=g^E0*(g^B)^E1*(g^B^2)^E2* . . . *(g^B^m)^Em (2) As is apparent from equation (2)480, a portion of g^E may be calculated without the server-held exponent154. That is, the generator160may generate the series of base values162as the series: g^B, g^B^2, . . . , g^B^m (3) It is clear that the selected positional base136affects the number of base values162in the series of base values162a-n(i.e., the variable m), and thus, the positional base136directly affects the communication cost of transmitting the series of base values162. Because larger positional bases136require more difficult computations, selection of the positional base136directly provides the tradeoff between communication cost and computation cost. In some examples, the positional base136(i.e., the positional numeral system) is selected by the client device10or the server111based on a bandwidth limit for communications between the client device10and the server111. In other examples, the positional base136is selected to be approximately half of the number of digits212required to represent the exponent154in the selected positional numeral system. For example, a selection may result in an exponent154that requires 10,000 digits when represented with a positional base of 5,000. In some implementations, the server111transmits the positional count138of the exponent154to the generator160. The generator160may use the positional count138to determine a number of base values162to generate. In the example shown, the generator160may generate six base values162because the value 153,729 is represented with six digits212.
Because the generator160has access to the prime factors135, the generator160generates the series of base values162efficiently without use of the server-held exponent154(e.g., using Fermat's Little Theorem or other techniques such as Euler's Theorem, Carmichael's Theorem, or the Chinese Remainder Theorem). The generator160may transmit the series of base values162to the server111. In some implementations, the series of base values162represents a series of initial base values162A,162Aa-An. That is, in some examples, the generator160generates a series of initial base values162A using the prime factorization of the modulus134and the group element133. The generator160may reduce each initial base value162A by the modulus134(i.e., by modulo M) to generate a series of modulus-reduced base values162B,162Ba-Bn. Thus, each modulus-reduced base value162B includes a respective initial base value162A in the series of initial base values162A reduced by the modulus134. In lieu of the series of initial base values162A, the generator160may transmit the series of modulus-reduced base values162B to the untrusted server111. That is, the series of base values162sent to the server111either includes the series of initial base values162A or the series of modulus-reduced base values162B. Reducing the initial base values162A by modulo M significantly reduces the size of the base values162B sent to the server111and therefore significantly reduces the bandwidth required to transmit the series of base values162B to the server111. Because modulus operations are computationally easier than exponentiation, performing the extra modulus operations may be advantageous. Referring back toFIG.1, in some implementations, the server111executes an exponent multiplier170. The exponent multiplier170receives the series of base values162from the client device10(e.g., the generator160) and receives the exponent154from the untrusted data store150.
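A minimal sketch of the client-side generation of modulus-reduced base values, assuming the Carmichael-function variant of the exponent-reduction techniques named above (toy primes; variable names are illustrative):

```python
from math import gcd

# Hypothetical client-side sketch: generate g^B, g^{B^2}, ..., g^{B^m} mod M
# using the secret factorization, without ever seeing the server-held exponent.
p, q = 104723, 104729          # secret primes (toy sizes)
M = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(M)

g, B, m = 5, 10, 6             # group element, positional base, positional count

series = []
exp = B % lam
for _ in range(m):
    series.append(pow(g, exp, M))   # modulus-reduced base value
    exp = exp * B % lam             # B^{i+1} mod lambda(M) stays small
```

Each transmitted value is at most log(M) bits, and the running exponent never grows beyond λ(M), which is what keeps the client's work cheap relative to the size of the server-held exponent.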
Optionally, the client device10may also provide the group element133and/or the modulus134to the server111for use by the exponent multiplier170in generating a result192. In some examples, the exponent multiplier170(or another module executing on the server111) determines the exponent154from the data blocks152of the untrusted data store150. The exponent multiplier170uses the client-generated series of base values162and the exponent154to compute g^E. In some examples, the server111determines the exponentiation of the group element133with the exponent154stored on the untrusted server111by determining, for each base value162in the series of base values162, an exponentiation of the base value162with a value482(FIG.4) at a respective digit position of the exponent154and multiplying the exponentiations of the base values162with the values482at the respective digit positions of the exponent154together to generate a result192. The result192is associated with the queried element (i.e., the data block152selected by the data block selection131). Because the server111may now generalize this computation into a standard problem of multiplying m bases with m exponents, the server111may utilize a number of algorithms to efficiently determine the result192. For example, the server111may use Pippenger's exponentiation algorithm or other addition-chain exponentiation methods. After determining g^E, the server111may send g^E as the result192back to the user device10. Alternatively, the server111may reduce g^E by modulo M (i.e., the modulus134) to generate the result192, as this will significantly reduce the size of the result192and, as previously discussed, modulus operations are computationally easier than exponentiation. The result192is based on the exponentiation of the group element133with the exponent154stored on the untrusted data store150. That is, the result192corresponds to the value of the data block152selected by the data block selection131.
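The server-side combination step can be sketched as follows; a naive product is shown where a real implementation might use Pippenger's algorithm, and the series is recomputed locally here only so the demo is self-contained:

```python
# Hypothetical server-side sketch: combine the client's base values with the
# digits of the server-held exponent E to produce g^E mod M.
g, B, M = 5, 10, 104723 * 104729
E = 153729                         # stand-in for the large server-held exponent

# Digits E_0..E_m of E in base B, least significant first.
digs = []
e = E
while e:
    digs.append(e % B)
    e //= B

# Client-supplied series g^{B^1}, ..., g^{B^m} (computed honestly for the demo).
series = [pow(g, B ** (i + 1), M) for i in range(len(digs) - 1)]

result = pow(g, digs[0], M)                    # the g^{E_0} term uses g itself
for d, base_val in zip(digs[1:], series):
    result = result * pow(base_val, d, M) % M  # multiply in (g^{B^i})^{E_i}

assert result == pow(g, E, M)
```

Each per-digit exponent is smaller than B, so the server never performs a full-size exponentiation, yet the final product equals g^E mod M.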
The system100offers significantly reduced bandwidth usage over sending the entire exponent154to the client device10, as the system only transmits m base values162, each of which may be represented with log(M) bits. The modulus134(M) is typically much smaller than the exponent154, such that m*log(M) is far smaller than the number of bits required to represent E. Additionally, the system100offers significantly reduced computation over the server fully computing the exponentiation of the group element133because the user device10takes advantage of knowing the prime factorization of the modulus134, which is never revealed to the untrusted server111or the untrusted data store150. Thus, by splitting a large problem (i.e., exponentiation with very large exponents) into several smaller problems (i.e., the smaller base value162exponents), the system100reduces the overall cost (both asymptotic and concrete) of computation without the client device10ever revealing the prime factorization of the modulus134to the untrusted server111. The system100provides a balance between the cost of communication (e.g., bandwidth) and computation and allows both the user device10and the server111to efficiently compute. For example, the system100may improve over more naive approaches (e.g., transmitting the full exponent154or allowing the server to fully compute g^E), using standard costs of computation and communication, by between 10× and 100×. While the examples herein are directed towards PIR, the described methods for outsourcing exponentiation may be advantageous in many other fields (e.g., blockchains). FIG.5is a flowchart of an example method500for outsourcing exponentiation of a private group. The method500starts at operation502with executing, at data processing hardware124of a client device10, a query instruction130to retrieve a query element152stored on an untrusted server111,150by, at operation504, selecting a prime factorization of a modulus134associated with the query element152stored on the untrusted server111,150.
The prime factorization includes two or more prime numbers135. At operation506, the method500includes obtaining a group element133configured to generate a respective one of the two or more prime numbers135of the prime factorization. At operation508, the method500includes generating a series of base values162using the prime factorization of the modulus134and the group element133, and, at operation510, transmitting the series of base values162from the client device10to the untrusted server111,150. The untrusted server111,150is configured to determine an exponentiation of the group element133with an exponent154stored on the untrusted server using the series of base values162. At operation512, the method500includes receiving, at the data processing hardware124, a result192from the untrusted server111,150. The result192is based on the exponentiation of the group element133with the exponent154stored on the untrusted server111,150. FIG.6is a schematic view of an example computing device600that may be used to implement the systems and methods described in this document. The computing device600is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. The computing device600includes a processor610, memory620, a storage device630, a high-speed interface/controller640connecting to the memory620and high-speed expansion ports650, and a low speed interface/controller660connecting to a low speed bus670and a storage device630. Each of the components610,620,630,640,650, and660, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
The processor610can process instructions for execution within the computing device600, including instructions stored in the memory620or on the storage device630to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display680coupled to high speed interface640. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices600may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). The memory620stores information non-transitorily within the computing device600. The memory620may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory620may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device600. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes. The storage device630is capable of providing mass storage for the computing device600. In some implementations, the storage device630is a computer-readable medium. 
In various different implementations, the storage device630may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory620, the storage device630, or memory on processor610. The high speed controller640manages bandwidth-intensive operations for the computing device600, while the low speed controller660manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller640is coupled to the memory620, the display680(e.g., through a graphics processor or accelerator), and to the high-speed expansion ports650, which may accept various expansion cards (not shown). In some implementations, the low-speed controller660is coupled to the storage device630and a low-speed expansion port690. The low-speed expansion port690, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. The computing device600may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server600aor multiple times in a group of such servers600a, as a laptop computer600b, or as part of a rack server system600c. 
Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. A software application (i.e., a software resource) may refer to computer software that causes a computing device to perform a task. In some examples, a software application may be referred to as an “application,” an “app,” or a “program.” Example applications include, but are not limited to, system diagnostic applications, system management applications, system maintenance applications, word processing applications, spreadsheet applications, messaging applications, media streaming applications, social networking applications, and gaming applications. These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. 
The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. 
To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims. | 29,922 |
11943353 | DETAILED DESCRIPTION While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the following description in conjunction with the drawing figures, in which like reference numerals are carried forward. It is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. The present invention provides a novel and efficient method to add randomness to processing systems implementing isogeny-based cryptosystems. By cleverly using elliptic curve properties and formulas, this randomness can be added to an electronic computing device, e.g., an IOT device, with only a small overhead. The primary application for introducing randomness to a cryptosystem is to defend against side-channel analysis attacks to an electronic computing device. In particular, a malicious third party can eavesdrop on a target's power, timing, electromagnetic radiation, error messages, and so on, to break a cryptosystem without breaking its underlying hard problem. Power and timing attacks can be particularly deadly, possibly even revealing any secret keys used by the implementation. Simple power analysis and differential power analysis are two such techniques that target the power side-channel. By using an oscilloscope or other similar tool, these power analysis attacks detect computational patterns to reveal a target's private information. The goal in adding randomness to computations in a cryptosystem is to make detection of these patterns significantly more expensive in terms of time and money. Public-key cryptography is the study of exchanging secrets over an insecure public channel. By using hard problems such as the discrete logarithm problem or isogeny problem, data confidentiality, data integrity, authentication, and non-repudiation can be achieved. 
Given today's technology and future advances, the computational infeasibility of these hard problems means that breaking the cryptosystem would take thousands of years or many orders of magnitude longer. The primary cryptosystem primitives we describe in the following are public key exchange, whereby two parties agree on a shared secret over an insecure channel, and digital signatures, whereby one party digitally signs content with his private key and any other party can digitally verify this signed content with the public key associated with the signer's private key. Other primitives exist, such as authenticated key exchange, public key encryption, and zero-knowledge proofs. In the following, we will describe an instantiation of our invention given known isogeny and elliptic curve cryptosystems. The spirit of the invention is not limited to such an example but expands to any future simultaneous deployment of isogeny and elliptic curve cryptosystems. Isogeny-based cryptography is cryptography based on isogenies, or algebraic morphisms that are surjective and have a finite kernel, between groups. In modern-day cryptography, computing isogenies on elliptic curve groups is thought to be a hard problem, even for quantum computers. As those of skill in the art will appreciate, an isogeny on elliptic curves φ: E1→E2is a non-constant rational map of all points on E1to E2that preserves the point at infinity. Given a finite kernel and E1, it is simple to compute E2, the isogeny of E1using the finite kernel. However, given only E1and E2, it is a computationally intensive task to compute the finite kernel used for the isogeny from E1to E2, which is the foundation of isogeny-based cryptography. Some examples of isogeny-based cryptography include, but are not limited to, the supersingular isogeny Diffie-Hellman (“SIDH”) key exchange protocol, commutative supersingular isogeny Diffie-Hellman (“CSIDH”) key exchange protocol, supersingular isogeny key encapsulation (“SIKE”) mechanism, and SeaSign isogeny signatures.
Each of these are isogeny-based cryptosystems that are based on the hardness of isogenies on elliptic curves. The cryptosystem parameters differ in many ways. However, efficient implementation of these cryptosystems can be designed such that they also provide protections from outside observers. In the following, we describe each of our randomization techniques in terms of, but not limited to, computations shared by both SIDH and SIKE cryptosystems. In these two cryptosystems, secret isogenies are computed in the same manner. Consider the computation φ: E1→E2in the SIDH and SIKE setting. First, a secret kernel that defines the isogeny is computed by the double point multiplication R=[m]P+[n]Q, where R, P, and Q are on E1and m and n are secret scalars. Second, the large-degree isogeny φ is computed by the path defined by R, φ: E1→E2=E1/⟨R⟩. Here, φ, R, m, and n are critical internal values that, if revealed through differential power analysis or other attacks, could break the SIDH and SIKE foundational security problem. The computations in SIDH and SIKE are generally carried out in a sequential manner. Intermediate values are stored in registers to represent intermediate elliptic curves or points. The above isogeny computation is entirely deterministic. Thus, naïve implementations could inadvertently leak private internal information. This innovation adds randomness to these computations to change the computational path taken to reach the final value. In each case, this innovation claims to add the randomness at any point in the isogeny-based system. The most common place to insert randomness is at the beginning of the protocol. However, this does not fit all cases where one computation always converges to the same result. Thus, there are scenarios where any one of these randomness techniques may be applied to a cryptosystem implementation at multiple computational points. One direct example is the SIDH and SIKE setting.
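The kernel computation R=[m]P+[n]Q can be illustrated on a toy short Weierstrass curve; SIDH and SIKE use supersingular curves over much larger fields (typically with Montgomery-form arithmetic), and here Q is simply taken as a multiple of P so the result can be checked:

```python
# Toy double-point multiplication R = [m]P + [n]Q on a short Weierstrass curve
# over a tiny prime field (illustrative only; not SIDH/SIKE parameters).
a, p = 2, 97                                   # curve y^2 = x^3 + 2x + 3 mod 97

def ec_add(P, Q):
    if P is None: return Q                     # None encodes the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                            # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication [k]P.
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

P = (3, 6)                                     # 6^2 = 36 = 27 + 6 + 3 (mod 97)
Q = ec_mul(5, P)                               # a second point (here a multiple of P)
m, n = 11, 7
R = ec_add(ec_mul(m, P), ec_mul(n, Q))         # the kernel point R = [m]P + [n]Q
assert R == ec_mul(m + n * 5, P)               # consistency check since Q = [5]P
```

Every intermediate point in this computation is a candidate location for the randomization techniques described next.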
Here, one may compute the kernel point R=[m]P+[n]Q, convert the projective point back to the affine representation, and then proceed with the large-degree isogeny. This conversion from projective to affine removes the computational randomness from this point forward as the isogeny formulas will follow a deterministic computation order. The first randomness innovation, as best depicted and represented inFIG.1, is to randomly change the projective representation of any intermediate point in the isogeny computation. A simple representation of an elliptic curve point is its affine representation, (x,y). However, for better performance and several other benefits, implementations typically represent an elliptic curve point in projective coordinates, (X:Y:Z), which can be converted back to affine with the formulas x=X/Z, y=Y/Z. Thus, we can change an elliptic curve point to a random projective representation by multiplying X, Y, and Z by a random value. The main innovation here is to apply this randomness technique at any intermediate value in the isogeny computation setting. Some examples include before the double-point multiplication, during the double-point multiplication, during the isogeny computation, and so on. Furthermore, this innovation recognizes various projective-like representations of elliptic curve coordinates, such as Kummer x-only (X:Z) with x=X/Z and Jacobian (X:Y:Z) with x=X/Z^2, y=Y/Z^3. For this first randomization technique as well as the others, it is not immediately clear that this technique will change the computational hierarchy and will still result in the correct answer. This is illustrated inFIG.1. Given correct projective point arithmetic and isogeny formulas, changing the projective representation of an affine point will not alter the final isogeny result. In general, isogeny-based cryptography uses the movement from curve to curve to find a final elliptic curve class that is identified by its j-invariant.
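The first countermeasure, multiplying X, Y, and Z by a random value, can be sketched as follows on a toy field; the point and prime are illustrative:

```python
import random

# Sketch of the first countermeasure: re-randomize the projective
# representation (X : Y : Z) of a point without changing the affine point.
p = 101                       # toy field prime
x, y = 40, 17                 # an affine point (x, y); values are illustrative

X, Y, Z = x, y, 1             # one projective representation
r = random.randrange(1, p)    # random nonzero field element
X, Y, Z = X * r % p, Y * r % p, Z * r % p   # randomized representation

# Converting back to affine recovers the original point: x = X/Z, y = Y/Z.
inv_z = pow(Z, -1, p)
assert (X * inv_z % p, Y * inv_z % p) == (x, y)
```

The register contents (X, Y, Z) now differ from run to run even though the underlying affine point, and hence the final isogeny result, is unchanged.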
A projective point is an extension of an affine point, but can be reduced back to its affine representation. Projective point formulas for elliptic curve arithmetic and isogeny arithmetic can be developed from the geometric relationship of points on curves in their affine representation. Thus, by using the basis extension and arranging terms used in the projective representation, projective point arithmetic and isogeny formulas will produce the same result. Furthermore, each formula updates all coordinates used in a projective representation, which will generally change based on the input projective point. The second randomization technique, as best depicted and represented in FIG. 2, focuses on randomizing the representation of the scalars used in the double-point multiplication, R=[m]P+[n]Q. Since the elliptic curve group is cyclic, adding the order of the point will result in adding the zero element. For instance, instead of m, the scalar for P can be m+r·#P, where r is some random integer and #P is the order of point P. Similarly, some multiple of the order of point Q can be added to n to produce the same resulting point, but with a larger scalar. One general optimization to SIDH and SIKE has been to set m=1, so that R=P+[n]Q. This randomization technique can still be applied to the elliptic curve multiply-and-add operation. For this second randomization technique, this is an application of the cyclic nature of point arithmetic which does not inhibit the isogeny operation. As is shown in FIG. 2, the input scalars of the double-point multiplication have changed, but since the resulting point is the same affine point, the following isogeny operations will still compute the correct result. Furthermore, given the nature of projective coordinate arithmetic, the resulting projective point R will have a different representation depending on what multiple of point P's cardinality was used.
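The scalar-blinding identity [m]P = [m + r·#P]P can be checked on the same toy curve used earlier. The curve, point, scalar and blinding factor below are toy choices, and the brute-force order computation is only workable at these sizes.

```python
# Sketch of scalar blinding: in a cyclic point group, [m]P equals
# [m + r*#P]P, where #P is the order of P, so a random multiple of the
# order masks the secret scalar without changing the result.  The curve
# y^2 = x^3 + 2x + 3 over GF(97) and all values are toy choices.
p, a = 97, 2

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def order(P):
    """Smallest k > 0 with [k]P = infinity (brute force; toy sizes only)."""
    k, R = 1, P
    while R is not None:
        R = add(R, P)
        k += 1
    return k

P = (3, 6)
m, r = 11, 4                      # secret scalar and a random blinding factor
blinded = m + r * order(P)        # m + r*#P
assert mul(m, P) == mul(blinded, P)
```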
The third randomization technique, as best depicted and represented in FIG. 3, randomizes both the representation of the curve and any corresponding points through an elliptic curve isomorphism, ψ: E→E′ and ψ: P→P′. An isomorphism is a type of isogeny that does not change elliptic curve isomorphism classes. Here, we randomly choose a value that defines the isomorphism ψ and push both the curve and points through this isomorphism. As an example, consider the short Weierstrass form of an elliptic curve, E: y^2 = x^3 + ax + b, with point P=(xp, yp). The isomorphism defines the mapping from E to E′, where E′: y^2 = x^3 + a′x + b′. One simple mapping, given a random value u, is to compute P′=(u^2·xp, u^3·yp) and E′: y^2 = x^3 + u^4·a·x + u^6·b. This countermeasure can be adopted at any point of the isogeny computation, such as during kernel point generation or before computing any elliptic curve mapping. Furthermore, this innovation targets any type of elliptic curve point representation, again including the Kummer or Jacobian representations. In this third randomization scenario, an isomorphism is a type of isogeny, but still returns the correct result. An isomorphism does not change the elliptic curve isomorphism class, so any further computations can still return the correct result. FIG. 3 shows an example of double-point multiplication and the large-degree isogeny whereby both computations return the equivalent value. Although the elliptic curve isomorphism class is not changed, the representation of both the elliptic curve and the elliptic curve point are altered, so the computational hierarchy in point arithmetic or isogeny arithmetic has been altered. The fourth randomization technique, as best depicted and represented in FIG. 4, is to switch between isogeny formulas during an isogeny computation.
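The isomorphism mapping above can be verified numerically: pushing P through (x, y) → (u^2·x, u^3·y) lands on the twisted curve with coefficients u^4·a and u^6·b. The prime, coefficients, point and u below are toy values.

```python
# Sketch of the isomorphism countermeasure on E: y^2 = x^3 + ax + b:
# a value u maps P = (x, y) to P' = (u^2*x, u^3*y) on
# E': y^2 = x^3 + (u^4*a)x + (u^6*b).  All parameters are toy values.
p, a, b = 97, 2, 3
x, y = 3, 6                                  # P on E: 36 = 27 + 6 + 3 (mod 97)
assert (y * y - (x**3 + a * x + b)) % p == 0

u = 5                                        # in practice a fresh random value
a2 = a * pow(u, 4, p) % p                    # a' = u^4 * a
b2 = b * pow(u, 6, p) % p                    # b' = u^6 * b
x2 = x * pow(u, 2, p) % p                    # x' = u^2 * x
y2 = y * pow(u, 3, p) % p                    # y' = u^3 * y
assert (y2 * y2 - (x2**3 + a2 * x2 + b2)) % p == 0   # P' lies on E'
```

The check follows from y′^2 = u^6·y^2 = u^6·(x^3 + ax + b) = x′^3 + u^4·a·x′ + u^6·b.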
For instance, if the large-degree isogeny is composed of multiple degree-2 isogenies (where 2 is the fixed isogeny degree), then performing half the isogenies with one set of isogeny formulas and the other half with another set of isogeny formulas will alter the computational tree for the result. FIG. 4 shows two different isogeny computations, one with formula A and the other with formula B. Given a defined kernel, both isogeny formulas will return a curve with the correct isomorphism class. These resultant curves will be isomorphic to each other, but not necessarily the same curve. Throughout the history of isogeny-based cryptography, many different formulas for computing basic isogenies have been proposed and evaluated. Some are more optimized than others and some may use less memory. Based on precomputed values and the isogeny flow, the computations involved in these formulas have evolved, producing different elliptic curves when used.
11943354 | DETAILED DESCRIPTION A computer-implemented application may be configured to display external content, in addition to providing the functionality of the application itself. The effectiveness of a given external content item displayed by an application can be measured by detecting and counting events defined as successful outcomes that follow impressions of the given external content within a predefined period of time. An impression, which indicates that an external content item has been presented to a user of an application, is an event associated with fetching an external content item from its source (a third party system, for example) and displaying it on a screen provided by the application that functions as a delivery system. An event defined as a successful outcome that follows an impression within a predefined period of time is referred to as a conversion. Examples of conversions include events indicating downloading of a third party application, subscribing to a newsletter or enrolling into an on-line class, where such events occurred within a predetermined period of time after an associated external content item was displayed to a user of the application. An attribution process, also referred to as merely attribution, is a process of generating metrics indicative of conversions that follow impressions, such as, for example, how many users that were shown an external content item subsequently performed a certain activity related to the external content item. Attribution employs a private set intersection (PSI) protocol to compute the intersection of the impression dataset and the conversion dataset. PSI is a cryptographic technique that allows two parties holding separate datasets (the impression dataset and the conversion dataset, for example) to compare encrypted versions of those sets in order to compute the intersection in a manner such that neither party exposes any personally identifiable information (PII) to the other party.
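The conversion-counting rule above can be sketched as a simple join: an event counts as a conversion if the same user saw an impression of the same external content item within a fixed window beforehand. The record layout and the 7-day window below are illustrative assumptions, not details from the protocol.

```python
# Minimal sketch of conversion counting: an event is attributed as a
# conversion when it follows an impression of the same content item for
# the same user within a fixed window.  Field layout and the 7-day
# window are illustrative assumptions.
WINDOW = 7 * 24 * 3600            # attribution window in seconds

impressions = [                   # (user, content_item, unix_time)
    ("u1", "item42", 1_000),
    ("u2", "item42", 2_000),
]
events = [                        # candidate successful outcomes
    ("u1", "item42", 5_000),      # follows u1's impression -> conversion
    ("u3", "item42", 6_000),      # no prior impression for u3
]

conversions = sum(
    1
    for eu, ec, et in events
    if any(iu == eu and ic == ec and 0 <= et - it <= WINDOW
           for iu, ic, it in impressions)
)
assert conversions == 1
```

In the privacy safe setting described below, this join cannot be computed over raw user identifiers; the remainder of the description builds the machinery that replaces them.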
Some examples of PII include an email address, an Internet Protocol (IP) address, and a phone number of a user. PSI-size is a secure multiparty computation cryptographic technique that allows two parties to privately join their private datasets and discover the number of identifiers they have in common, or the size of the set intersection. In some examples, PSI-size is executed with respect to a host dataset (a dataset storing user information maintained by the host system configured to display external content that originates from a third party provider) and a partner dataset (a dataset storing user information maintained by a partner system that provides the external content to the host system). The host system assigns respective internal identifiers (host-assigned identifiers) to its users, and the partner system assigns respective internal identifiers (partner-assigned identifiers) to its users. The same user may be identified in the host dataset and in the partner dataset by different internal identifiers. In some examples, an item in a dataset storing user information, whether the host dataset or the partner dataset, is a record comprising an identification of a user (a host-assigned identifier in the host dataset and a partner-assigned identifier in the partner dataset) mapped to a set of PII elements. Examples of PII elements in the dataset records include an email address, an IP address, and a device identification. The PII elements in the dataset records may be hashed. The intersection between the host dataset and the partner dataset is the number of records that represent the same user in both datasets.
The technical problem of matching records in different datasets, such as a host dataset and a partner dataset, while maintaining the privacy of each dataset, is addressed by providing a privacy safe joint identifier protocol that computes anonymous joint identifiers that can be used for matching user records of the host dataset and the partner dataset in a privacy safe manner. An anonymous joint identifier is generated such that the host-assigned and the partner-assigned identifiers that have been determined to represent the same user are mapped to the same anonymous joint identifier. In one example, the privacy safe joint identifier protocol double-encrypts PII elements from the records of both the host dataset and the partner dataset, using both an encryption key provided by the host system and an encryption key provided by the partner system. As a result, the double-encrypted PII elements cannot be mapped to their associated host-assigned or partner-assigned identifiers, because the host system and the partner system are each in possession of only one of the two encryption keys used to produce the double-encrypted PII elements. The double-encrypted PII elements from both datasets are made available to the host system. At the host system, each double-encrypted version of a PII element from the partner dataset is tagged with an encrypted partner-assigned identifier, and each double-encrypted version of a PII element from the host dataset is tagged with a partner-generated anonymous identifier, which means that the host system cannot determine a mapping of any of the double-encrypted PII elements to a host-assigned identifier. If a PII element (a hashed email address, for example) is present in both the host dataset and the partner dataset, the double-encrypted version of that PII element is tagged with both the associated encrypted partner-assigned identifier and the associated partner-generated anonymous identifier.
The double-encrypted PII elements at the host system are then assigned respective anonymous joint identifiers, in such a manner that the double-encrypted PII elements that have been determined to be associated with the same user are assigned the same anonymous joint identifier. The intersection between the host dataset and the partner dataset can be determined as the number of joint identifiers that are used to tag PII elements that have already been tagged with both an associated encrypted partner-assigned identifier and an associated partner-generated anonymous identifier. In some examples, the host system provides respective mappings between the anonymous joint identifiers and the encrypted partner-assigned identifiers to the partner system, so that the partner system can tag its partner-assigned identifiers with respective anonymous joint identifiers. The host system generates a random identifier for each anonymous joint identifier and uses the generated random identifiers to obtain, from the partner system, mappings of random identifiers to respective encrypted host-assigned identifiers. In some examples, the host system discards the double-encrypted PII elements prior to obtaining, from the partner system, the mappings of random identifiers to respective encrypted host-assigned identifiers, so that the host system remains prevented from determining a mapping of any of the double-encrypted PII elements to a host-assigned identifier. Based on the obtained mappings of random identifiers to respective encrypted host-assigned identifiers, the host system derives any existing mappings between a host-assigned identifier and an anonymous joint identifier. After the anonymous joint identifiers are established between the host dataset and the partner dataset, PSI-size can be executed based on the mappings of the host-assigned identifiers and the anonymous joint identifiers.
For example, the size of the intersection between the host dataset and the partner dataset may be calculated as the number of anonymous joint identifiers that are mapped to a host-assigned identifier or a partner-generated anonymous identifier and, also, to an encrypted partner-assigned identifier. In some examples, PSI-size is executed on a predetermined cadence, such as daily or weekly. An example of the operation of the privacy safe joint identifier protocol can be described as follows. A host dataset comprising a mapping between the host-assigned user identifiers and the respective PII elements, which can be denoted as {SAID_i→[PII_{i,1}, PII_{i,2}, . . . ]}, is shown in Table 1 below.

TABLE 1
Host-assigned IDs    PII elements
SAID1                he1, d1
SAID2                he2, d2
SAID3                he3, d3

SAID1, SAID2 and SAID3 are user identifiers assigned by the host system; he1, he2 and he3 are hashed email addresses; d1, d2 and d3 are device identifiers. A partner dataset comprising a mapping between the partner-assigned user identifiers and the respective PII elements, which can be denoted as {AFID_j→[PII_{j,1}, PII_{j,2}, . . . ]}, is shown in Table 2 below.

TABLE 2
Partner-assigned IDs    PII elements
AFID1                   he4, d4
AFID2                   he5, d5
AFID3                   he2, d6, d2

AFID1, AFID2 and AFID3 are user identifiers assigned by the partner system; he4, he5 and he2 are hashed email addresses; d4, d5, d6 and d2 are device identifiers. At the host system, the protocol generates an Elliptic Curve Cryptography (ECC) key denoted as host_key and a lightweight encryption key denoted as l_key_host. At the partner system, the protocol generates an ECC key denoted as partner_key and a lightweight encryption key denoted as l_key_partner. EncH and EncP denote encryption with the host_key and the partner_key, respectively. In some examples, encrypting an identifier using the host_key and the partner_key produces the result expressed in Equation 1 below.
EncH(EncP(identifier)) = EncP(EncH(identifier))   (Equation 1)

EncHL and EncPL denote encryption with the l_key_host and the l_key_partner, respectively. Examples of lightweight encryption include Advanced Encryption Standard (AES), pseudorandom function family (PRF), hashing with salt, and so on. In some examples, at the host system, the host-assigned identifiers from the host dataset are encrypted using the l_key_host, and the PII elements from the host dataset are encrypted using the host_key. The PII elements from the host dataset encrypted using the host_key are tagged with the respective host-assigned identifiers encrypted using the l_key_host. The resulting mappings, which are shown in Table 3 below, can be denoted as: [EncHL(SAID_i)→EncH(PII_{i,1}), EncH(PII_{i,2}), . . . ].

TABLE 3
Encrypted PII elements    Encrypted Host-
from the host dataset     assigned IDs
EncH(he1)                 EncHL(SAID1)
EncH(he2)                 EncHL(SAID2)
EncH(he3)                 EncHL(SAID3)
EncH(d1)                  EncHL(SAID1)
EncH(d2)                  EncHL(SAID2)
EncH(d3)                  EncHL(SAID3)

The PII elements from the host dataset encrypted using the host_key and mapped to respective host-assigned identifiers encrypted using the l_key_host are shuffled and sent to the partner system. At the partner system, the protocol generates a random identifier (referred to as a partner-generated anonymous identifier for the purposes of this description) for each encrypted host-assigned identifier from the host dataset and stores the resulting mappings as shown in Table 4 below.

TABLE 4
Encrypted Host-    Partner-generated
assigned IDs       anonymous IDs
EncHL(SAID1)       NID1
EncHL(SAID2)       NID2
EncHL(SAID3)       NID3

At the partner system, the protocol shuffles the received encrypted PII elements from the host dataset, encrypts them with the partner_key and tags them with the respective partner-generated anonymous identifiers. The mappings of the resulting double-encrypted PII elements from the host dataset to the respective partner-generated anonymous identifiers are shown in Table 5 below.
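The commutative property of Equation 1 can be sketched with exponentiation modulo a prime, a common construction for DH-style PSI. The protocol above specifies ECC keys, which commute in the same way; the modulus, keys and PII value below are toy stand-ins, not a secure group.

```python
# Sketch of the commutative double encryption behind Equation 1, modeled
# as Enc_k(x) = x^k mod p.  Double encryption then commutes because
# (x^a)^b = (x^b)^a = x^(a*b).  All parameters are toy stand-ins.
import hashlib

p = 2**127 - 1                    # toy prime modulus

def to_group(pii: str) -> int:
    """Hash a PII element into the group (simplified encoding)."""
    return int.from_bytes(hashlib.sha256(pii.encode()).digest(), "big") % p

def enc(element: int, key: int) -> int:
    return pow(element, key, p)   # Enc_k(x) = x^k mod p

host_key, partner_key = 0x1234567, 0x89ABCDE   # stand-ins for secret keys

he2 = to_group("hashed-email-2")
assert enc(enc(he2, host_key), partner_key) == enc(enc(he2, partner_key), host_key)
```

It is this property that lets the host system later recognize, without either key being shared, that a PII element encrypted host-first matches the same element encrypted partner-first.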
TABLE 5
Double-encrypted PII elements    Partner-generated
from the Host dataset            anonymous identifiers
EncP(EncH(he1))                  NID1
EncP(EncH(he2))                  NID2
EncP(EncH(he3))                  NID3
EncP(EncH(d1))                   NID1
EncP(EncH(d2))                   NID2
EncP(EncH(d3))                   NID3

The mappings shown in Table 5 can be denoted as: [NID_i→EncP(EncH(PII_{i,1})), EncP(EncH(PII_{i,2})), . . . ]. The mappings of the double-encrypted PII elements from the host dataset to the respective partner-generated anonymous identifiers are sent to the host system. At the partner system, the protocol encrypts the PII elements from the partner dataset with the partner_key and tags them with the respective partner-assigned identifiers encrypted with the l_key_partner. The mappings of the resulting PII elements from the partner dataset encrypted with the partner_key to the respective partner-assigned identifiers encrypted with the l_key_partner are shown in Table 6 below.

TABLE 6
Encrypted PII elements from    Encrypted Partner-
the Partner dataset            assigned identifiers
EncP(he4)                      EncPL(AFID1)
EncP(he5)                      EncPL(AFID2)
EncP(he2)                      EncPL(AFID3)
EncP(d4)                       EncPL(AFID1)
EncP(d5)                       EncPL(AFID2)
EncP(d6)                       EncPL(AFID3)
EncP(d2)                       EncPL(AFID3)

The mappings shown in Table 6 can be denoted as: [EncPL(AFID_j)→EncP(PII_{j,1}), EncP(PII_{j,2}), . . . ]. The mappings of the PII elements from the partner dataset encrypted with the partner_key to the respective partner-assigned identifiers encrypted with the l_key_partner, as shown in Table 6, are sent to the host system. The host system receives the PII elements from the partner dataset encrypted with the partner_key and encrypts them with the host_key. As a result, the host system now has double-encrypted PII elements derived from the partner dataset as well as double-encrypted PII elements derived from the host dataset.
The double-encrypted PII elements derived from the partner dataset are tagged with respective encrypted partner-assigned identifiers, and the double-encrypted PII elements derived from the host dataset are tagged with respective partner-generated anonymous identifiers, as shown in Table 7 below.

TABLE 7
Double-encrypted PII elements     Encrypted       Partner-generated
from the Host dataset and         Partner-        anonymous IDs
from the Partner dataset          assigned IDs
EncP(EncH(he1))                                   NID1
EncP(EncH(he2))                   EncPL(AFID3)    NID2
EncP(EncH(he3))                                   NID3
EncP(EncH(d1))                                    NID1
EncP(EncH(d2))                    EncPL(AFID3)    NID2
EncP(EncH(d3))                                    NID3
EncH(EncP(he4))                   EncPL(AFID1)
EncH(EncP(he5))                   EncPL(AFID2)
EncH(EncP(d4))                    EncPL(AFID1)
EncH(EncP(d5))                    EncPL(AFID2)
EncH(EncP(d6))                    EncPL(AFID3)

As explained above and, also, illustrated by Equation 1, encrypting a PII element first with the partner_key and then with the host_key results in the same value as when that PII element is first encrypted with the host_key and then with the partner_key. In Table 7, the rows shown with strikethrough in the original indicate that EncP(EncH(he2)) is the duplicate of EncH(EncP(he2)), and EncP(EncH(d2)) is the duplicate of EncH(EncP(d2)); the duplicate rows are omitted from the table above. Because the PII elements he2 and d2 appear in both the host dataset and the partner dataset, the associated values EncP(EncH(he2)), EncH(EncP(he2)), EncP(EncH(d2)), and EncH(EncP(d2)) are tagged with both the associated encrypted partner-assigned identifier EncPL(AFID3) and the associated partner-generated anonymous identifier NID2. At the host system, the protocol merges the double-encrypted PII elements into clusters and assigns, to each cluster, an anonymous joint identifier. The merging of the double-encrypted PII elements into clusters may be performed based on various clustering mechanisms and, in some examples, may utilize respective weights assigned to different types of PII elements. In one example, the double-encrypted PII elements tagged with the same partner-generated anonymous identifier are merged into the same cluster.
If a double-encrypted PII element in the cluster is tagged with an encrypted partner-assigned identifier, the double-encrypted PII elements tagged with that encrypted partner-assigned identifier are also merged into that cluster, as shown in Table 8 below. In Table 8, the struck-through duplicate values EncH(EncP(he2)) and EncH(EncP(d2)) from the original table are omitted, reflecting the result of EncP(EncH(he2)) being the same value as EncH(EncP(he2)) and EncP(EncH(d2)) being the same value as EncH(EncP(d2)).

TABLE 8
Clusters of double-encrypted PII        Anonymous
elements from the Host dataset          Joint
and from the Partner dataset            IDs
EncP(EncH(he1)), EncP(EncH(d1))         JID1
EncP(EncH(he2)), EncP(EncH(d2)),        JID2
EncH(EncP(d6))
EncP(EncH(he3)), EncP(EncH(d3))         JID3
EncH(EncP(he4)), EncH(EncP(d4))         JID4
EncH(EncP(he5)), EncH(EncP(d5))         JID5

At the host system, the protocol creates mappings of the anonymous joint identifiers to respective encrypted partner-assigned identifiers, as shown in Table 9.

TABLE 9
Encrypted Partner-    Anonymous
assigned IDs          Joint IDs
EncPL(AFID1)          JID4
EncPL(AFID2)          JID5
EncPL(AFID3)          JID2

The mappings of the anonymous joint identifiers to respective encrypted partner-assigned identifiers, which may be denoted as JID< >EncPL(AFID), are sent to the partner system. The partner system can now derive the mappings between the partner-assigned identifiers and the respective anonymous joint identifiers. At the host system, the protocol generates a random identifier RID for each JID and sends the resulting mappings to the partner system. The mappings between the partner-generated anonymous identifiers, the respective anonymous joint identifiers and the respective random identifiers, which may be denoted as RID< >JID< >NID, are shown in Table 10 below.
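The cluster-merge step can be sketched with a small union-find over the tags: elements sharing a partner-generated anonymous identifier or an encrypted partner-assigned identifier tag end up in one cluster, and each cluster receives a fresh anonymous joint identifier. The element strings below mirror a subset of Table 7, and the clustering policy is the simple tag-sharing rule described here; real deployments may apply weighted clustering instead.

```python
# Sketch of the cluster-merge step: double-encrypted PII elements that
# share an NID tag or an encrypted partner-assigned ID tag are merged
# via union-find, and each cluster gets a fresh anonymous joint ID.
from itertools import count

elements = {                       # element -> (NID tag, AFID tag); None = untagged
    "EncP(EncH(he1))": ("NID1", None),
    "EncP(EncH(he2))": ("NID2", "EncPL(AFID3)"),
    "EncP(EncH(d2))":  ("NID2", "EncPL(AFID3)"),
    "EncH(EncP(d6))":  (None,  "EncPL(AFID3)"),
    "EncH(EncP(he4))": (None,  "EncPL(AFID1)"),
}

parent = {e: e for e in elements}
def find(e):                       # union-find root lookup
    while parent[e] != e:
        e = parent[e]
    return e

owner = {}                         # first element seen for each tag
for e, tags in elements.items():
    for t in tags:
        if t is None:
            continue
        if t in owner:
            parent[find(e)] = find(owner[t])   # merge clusters sharing a tag
        else:
            owner[t] = e

jid, clusters, counter = {}, {}, count(1)
for e in elements:
    root = find(e)
    if root not in clusters:
        clusters[root] = f"JID{next(counter)}"
    jid[e] = clusters[root]

# he2, d2 and d6 all share NID2 / EncPL(AFID3) and land in one cluster
assert jid["EncP(EncH(he2))"] == jid["EncP(EncH(d2))"] == jid["EncH(EncP(d6))"]
assert jid["EncP(EncH(he1))"] != jid["EncH(EncP(he4))"]
```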
TABLE 10
Partner-generated    Anonymous    Random
anonymous IDs        Joint IDs    IDs
NID1                 JID1         RID1
NID2                 JID2         RID2
NID3                 JID3         RID3

The mappings between the random identifiers and the respective partner-generated anonymous identifiers, which may be denoted as RID< >NID, are sent to the partner system. The random identifiers are used in order to prevent the partner system from learning the mapping between the encrypted host-assigned identifiers and the anonymous joint identifiers. The partner system sends the mappings between the random identifiers and the respective encrypted host-assigned identifiers, which may be denoted as RID< >EncHL(SAID), to the host system. In some examples, the double-encrypted PII elements are discarded at the host system before the mappings between the random identifiers and the respective encrypted host-assigned identifiers are received at the host system. At the host system, the protocol derives, from the RID< >EncHL(SAID) mappings, the mappings between the host-assigned identifiers and the respective anonymous joint identifiers, which may be denoted as SAID< >JID. The mappings between the host-assigned identifiers and the respective anonymous joint identifiers are shown in Table 11 below.

TABLE 11
Host-assigned IDs    Anonymous
                     Joint IDs
SAID1                JID1
SAID2                JID2
SAID3                JID3

As explained above, the size of the intersection between the host dataset and the partner dataset may be calculated as the number of anonymous joint identifiers that are mapped to a host-assigned identifier or a partner-generated anonymous identifier and, also, to an encrypted partner-assigned identifier. In the instant example, as can be seen in Table 9 and Table 11, one anonymous joint identifier, JID2, appears in both tables, and thus the size of the intersection is 1. The privacy safe joint identification can be executed on a predetermined cadence, using newly generated joint identifiers for each cadence.
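The final PSI-size count reduces to intersecting the joint-identifier sets from the two mapping tables. The dictionaries below transcribe Table 9 and Table 11 from the worked example.

```python
# Sketch of the PSI-size count: the intersection size is the number of
# anonymous joint IDs reachable both from a host-assigned ID (Table 11)
# and from an encrypted partner-assigned ID (Table 9).
host_to_jid = {"SAID1": "JID1", "SAID2": "JID2", "SAID3": "JID3"}      # Table 11
partner_to_jid = {"EncPL(AFID1)": "JID4",                              # Table 9
                  "EncPL(AFID2)": "JID5",
                  "EncPL(AFID3)": "JID2"}

intersection_size = len(set(host_to_jid.values()) & set(partner_to_jid.values()))
assert intersection_size == 1     # only JID2 appears on both sides
```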
It will be noted that the methodology described herein can be used beneficially in any context for determining the intersection of two datasets in a privacy safe manner. Networked Computing Environment FIG. 1 is a block diagram showing an example networking environment 100 for exchanging data (e.g., messages and associated content) over a network. The networking environment 100 includes multiple instances of a client device 102, each of which hosts a number of applications, including a messaging client 104 and other applications. Each messaging client 104 is communicatively coupled to other instances of the messaging client 104 (e.g., hosted on respective other client devices 102), a messaging server system 108 and third-party servers 110 via a network 112 (e.g., the Internet). A messaging client 104 can also communicate with locally-hosted applications using Application Program Interfaces (APIs). A messaging client 104 is able to communicate and exchange data with other messaging clients 104 and with the messaging server system 108 via the network 112. The data exchanged between messaging clients 104, and between a messaging client 104 and the messaging server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). A messaging client 104 is shown as displaying external content 105, which can be provided by the third-party servers 110. The external content 105, in some examples, is information that may be of interest to a user and/or information that the provider of the third-party servers 110 would like to make available to a broader audience. The messaging server system 108 provides server-side functionality via the network 112 to a particular messaging client 104.
While certain functions of the networking environment 100 are described herein as being performed by either a messaging client 104 or by the messaging server system 108, the location of certain functionality either within the messaging client 104 or the messaging server system 108 may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system 108 but to later migrate this technology and functionality to the messaging client 104 where a client device 102 has sufficient processing capacity. The messaging server system 108 supports various services and operations that are provided to the messaging client 104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client 104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the networking environment 100 are invoked and controlled through functions available via user interfaces (UIs) of the messaging client 104. Turning now specifically to the messaging server system 108, an Application Program Interface (API) server 116 is coupled to, and provides a programmatic interface to, application servers 114. The application servers 114 are communicatively coupled to a database server 120, which facilitates access to a database 126 that stores data associated with messages processed by the application servers 114. Similarly, a web server 128 is coupled to the application servers 114, and provides web-based interfaces to the application servers 114. To this end, the web server 128 processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols.
The Application Program Interface (API) server 116 receives and transmits message data (e.g., commands and message payloads) between the client device 102 and the application servers 114. Specifically, the Application Program Interface (API) server 116 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client 104 in order to invoke functionality of the application servers 114. The Application Program Interface (API) server 116 exposes various functions supported by the application servers 114, including account registration, login functionality, the sending of messages, via the application servers 114, from a particular messaging client 104 to another messaging client 104, the sending of media files (e.g., images or video) from a messaging client 104 to a messaging server 118, and for possible access by another messaging client 104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device 102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client 104). The application servers 114 host a number of server applications and subsystems, including for example a messaging server 118, an attribution server 122, and a social network server 124. The messaging server 118 implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client 104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries).
These collections are then made available to the messaging client 104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server 118, in view of the hardware requirements for such processing. The application servers 114 also include an attribution server 122 configured to facilitate the privacy safe attribution process, using the privacy safe joint identification protocol described herein. The social network server 124 supports various social networking functions and services and makes these functions and services available to the messaging server 118. To this end, the social network server 124 maintains and accesses an entity graph within the database 126. Examples of functions and services supported by the social network server 124 include the identification of other users of the networking environment 100 with which a particular user has relationships or is "following," and also the identification of other entities and interests of a particular user. Returning to the messaging client 104, features and functions of an external resource (e.g., an application installed on the client device 102 or a web view app executing in the web view in the messaging client 104) are made available to a user via an interface of the messaging client 104. The external resource is often provided by a third party but may also be provided by the creator or provider of the messaging client 104. The messaging client 104 receives a user selection of an option to launch or access features of such an external resource. The messaging client 104 can notify a user of the client device 102, or other users related to such a user (e.g., "friends"), of activity taking place in one or more external resources. For example, the messaging client 104 can provide participants in a conversation (e.g., a chat session) in the messaging client 104 with notifications relating to the current or recent use of an external resource by one or more members of a group of users.
One or more users can be invited to join in an active external resource or to launch a recently-used but currently inactive (in the group of friends) external resource. The external resource can provide participants in a conversation, each using respective messaging clients 104, with the ability to share an item, status, state, or location in an external resource with one or more members of a group of users into a chat session. The shared item may be an interactive chat card with which members of the chat can interact, for example, to launch the corresponding external resource, view specific information within the external resource, or take the member of the chat to a specific location or state within the external resource. Within a given external resource, response messages can be sent to users on the messaging client 104. The external resource can selectively include different media items in the responses, based on a current context of the external resource. Also shown in FIG. 1 is a developer tools server 125. The developer tools server 125 maintains one or more software developer kits (SDKs) that permit integration of some of the features provided with the messaging server system across a third party application. These features include, for example, a privacy safe joint identification protocol provided by the attribution server 122. An example of a third party application that benefits from integrating a privacy safe joint identification protocol provided by the attribution server 122 is a partner system that maintains a partner dataset of user records. System Architecture FIG. 2 is a block diagram of a system 200, which embodies a number of subsystems that are supported on the client-side by the messaging client 104 and on the server-side by the application servers 114. These subsystems include, for example, an ephemeral timer system 202, a collection management system 204, an attribution service 208, and an external resource system 214.
The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server118. The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image, video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. External content provided by the third party servers110, in some examples, is displayed by a messaging client104in response to detecting a request to present a particular collection at a messaging client104. The metrics indicative of conversions that follow presentation of an external content item at a messaging client104are generated by the attribution service208. The attribution service208is shown inFIG.2as spanning the application servers114and the third party servers110. The attribution service208provides various functions that facilitate a privacy safe joint identification protocol described herein, which can be utilized by an external resource provided by the third party servers110by means of a Software Development Kit (SDK).
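The timer-based gating performed by the ephemeral timer system can be sketched as follows. This is a minimal illustration only; the class and field names are hypothetical and do not appear in the system described above, which associates duration and display parameters with each message or story.

```python
import time

class EphemeralTimer:
    """Gates access to a message based on its display duration parameter."""

    def __init__(self, posted_at: float, duration_seconds: float):
        self.posted_at = posted_at
        self.duration_seconds = duration_seconds

    def is_accessible(self, now=None) -> bool:
        # The message is viewable only within its display window;
        # outside that window, access is selectively disabled.
        now = time.time() if now is None else now
        return self.posted_at <= now < self.posted_at + self.duration_seconds

# A message posted at t=1000 with a 60-second display window:
timer = EphemeralTimer(posted_at=1000.0, duration_seconds=60.0)
print(timer.is_accessible(now=1030.0))  # True: within the window
print(timer.is_accessible(now=1100.0))  # False: the window has elapsed
```

A story-level timer would work the same way, with the window spanning the duration of the associated event (e.g., a music concert).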
In some examples, an SDK configured to support a privacy safe joint identification protocol is provided by the external resource system214. An external resource, such as a partner system described above, utilizes the SDK to perform operations described above as being performed by the partner system. In order to integrate the functions of the SDK into an external resource, the SDK is downloaded by a third-party server110from the messaging server118or is otherwise received by the third-party server110. Once downloaded or received, the SDK is included as part of the application code of an external resource, such as a partner system. The code of the external resource can then call or invoke certain functions of the SDK to integrate features of the attribution service208into the external resource. Process Flow FIG.3is a flowchart of a method300for privacy safe anonymized identity matching, in accordance with some examples. The method300may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software, or a combination of both. In one example, some of the processing logic resides at the application servers114ofFIG.1. Although the described flowchart can show operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a procedure, an algorithm, etc. The operations of methods may be performed in whole or in part, may be performed in conjunction with some or all of the operations in other methods, and may be performed by any number of different systems, such as the systems described herein, or any portion thereof, such as a processor included in any of the systems. The method300commences with operation310. 
At operation310, a host system stores a host dataset (for example, a dataset storing user information maintained by the host system configured to display external content that originates from a third party provider), and a partner system associated with the third party provider stores a partner dataset (a dataset storing user information maintained by the partner system that provides the external content to the host system). As explained above, the host system assigns respective host-assigned identifiers to its users, and the partner system assigns respective partner-assigned identifiers to represent their users. The same user may be identified in the host dataset and in the partner dataset by different internal identifiers. An item in a dataset storing user information, whether the host dataset or the partner dataset, is a record comprising an identification of a user (a host-assigned identifier in the host dataset and a partner-assigned identifier in the partner dataset) mapped to a set of PII elements. Examples of PII elements in the dataset records include an email address, an IP address, and a device identification. The PII elements in the dataset records may be hashed. At operation320, a host encryption key is generated at the host system, and a partner encryption key is generated at the partner system. At operation330, double-encrypted PII elements are produced by double-encrypting the respective PII elements from records of the host dataset and from records of a partner dataset, using the host encryption key and the partner encryption key. The double-encrypted PII elements derived from the partner dataset are tagged with respective associated encrypted partner-assigned identifiers. The double-encrypted PII elements derived from the host dataset are tagged with respective associated partner-generated anonymous identifiers.
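The record structure described above can be illustrated with a short sketch. The record layout, field names, and example values below are hypothetical; the only properties taken from the description are that each record maps an internal identifier to a set of PII elements (email address, IP address, device identification) and that the PII elements may be hashed.

```python
import hashlib

def hash_pii(value: str) -> str:
    # PII elements are stored hashed rather than in the clear.
    return hashlib.sha256(value.lower().strip().encode()).hexdigest()

# Hypothetical host-side record: a host-assigned identifier mapped to hashed PII.
host_record = {
    "host_id": "H-001",
    "pii": {
        "email": hash_pii("alice@example.com"),
        "ip": hash_pii("192.0.2.1"),
        "device_id": hash_pii("device-1234"),
    },
}

# The partner system identifies the same user under a different internal identifier,
# and some PII elements (here, the IP address) may differ between the two datasets.
partner_record = {
    "partner_id": "P-913",
    "pii": {
        "email": hash_pii("alice@example.com"),
        "ip": hash_pii("198.51.100.7"),
        "device_id": hash_pii("device-1234"),
    },
}

# Identical underlying PII hashes to the same digest, so records for the same
# user can be matched without either side exchanging the raw value.
print(host_record["pii"]["email"] == partner_record["pii"]["email"])  # True
print(host_record["pii"]["ip"] == partner_record["pii"]["ip"])        # False
```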
At operation340, respective anonymous joint identifiers are established for each of the double-encrypted PII elements based on the associated encrypted partner-assigned identifiers and the associated partner-generated anonymous identifiers. At operation350, the respective anonymous joint identifiers are used to calculate an intersection size of the host dataset and the partner dataset. The intersection size of the host dataset and the partner dataset is the number of records that represent the same user in both datasets. Machine Architecture FIG.4is a diagrammatic representation of the machine400within which instructions408(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine400to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions408may cause the machine400to execute any one or more of the methods described herein. The instructions408transform the general, non-programmed machine400into a particular machine400programmed to carry out the described and illustrated functions in the manner described. The machine400may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine400may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
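The double-encryption and intersection-size steps of operations 330-350 can be sketched with a toy commutative cipher. This is an illustrative assumption, not the protocol's specified construction: it models double encryption as E_k(x) = H(x)^k mod p, which commutes because (H(x)^a)^b = (H(x)^b)^a, and it omits the identifier-tagging and anonymous-joint-identifier bookkeeping. A real deployment would use a proper hash-to-group mapping and a cryptographically sized group.

```python
import hashlib
import secrets

# Toy commutative cipher over the multiplicative group mod a Mersenne prime.
# Encrypting with the host key then the partner key gives the same result as
# the reverse order, so neither side ever sees the other's singly-encrypted
# values in the clear alongside its own key.
P = 2**127 - 1  # prime; small enough for a demonstration only

def to_group(pii: str) -> int:
    # Map a (hashed) PII element into the group.
    return int.from_bytes(hashlib.sha256(pii.encode()).digest(), "big") % P

def encrypt(element: int, key: int) -> int:
    return pow(element, key, P)

host_key = secrets.randbelow(P - 2) + 1      # generated at the host system
partner_key = secrets.randbelow(P - 2) + 1   # generated at the partner system

host_pii = {"alice@example.com", "bob@example.com", "carol@example.com"}
partner_pii = {"alice@example.com", "carol@example.com", "dan@example.com"}

# Each side encrypts its own elements with its own key, exchanges the result,
# and the other side applies its key; commutativity makes the order irrelevant.
host_double = {encrypt(encrypt(to_group(x), host_key), partner_key) for x in host_pii}
partner_double = {encrypt(encrypt(to_group(x), partner_key), host_key) for x in partner_pii}

# The intersection size is the number of records representing the same user
# in both datasets, computed without revealing which users those are.
intersection_size = len(host_double & partner_double)
print(intersection_size)  # 2: the two shared addresses match
```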
The machine400may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions408, sequentially or otherwise, that specify actions to be taken by the machine400. Further, while only a single machine400is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions408to perform any one or more of the methodologies discussed herein. The machine400, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine400may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine400may include processors402, memory404, and input/output I/O components438, which may be configured to communicate with each other via a bus440. In an example, the processors402(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor406and a processor410that execute the instructions408. 
The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.4shows multiple processors402, the machine400may include a single processor with a single-core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory404includes a main memory412, a static memory414, and a storage unit416, all accessible to the processors402via the bus440. The main memory412, the static memory414, and storage unit416store the instructions408embodying any one or more of the methodologies or functions described herein. The instructions408may also reside, completely or partially, within the main memory412, within the static memory414, within machine-readable medium418within the storage unit416, within at least one of the processors402(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine400. The I/O components438may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components438that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components438may include many other components that are not shown inFIG.4. In various examples, the I/O components438may include user output components424and user input components426.
The user output components424may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components426may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components438may include biometric components428, motion components430, environmental components432, or position components434, among a wide array of other components. For example, the biometric components428include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components430include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope). 
The environmental components432include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example.
The position components434include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components438further include communication components436operable to couple the machine400to a network420or devices422via respective coupling or connections. For example, the communication components436may include a network interface component or another suitable device to interface with the network420. In further examples, the communication components436may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices422may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components436may detect identifiers or include components operable to detect identifiers. For example, the communication components436may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals).
In addition, a variety of information may be derived via the communication components436, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory412, static memory414, and memory of the processors402) and storage unit416may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions408), when executed by processors402, cause various operations to implement the disclosed examples. The instructions408may be transmitted or received over the network420, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components436) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions408may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices422. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. 
A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
“Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. 
Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. 
In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors402or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. 
Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. Software Architecture FIG.5is a block diagram500illustrating a software architecture504, which can be installed on any one or more of the devices described herein. The software architecture504is supported by hardware such as a machine502that includes processors520, memory526, and I/O components538.
In this example, the software architecture504can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture504includes layers such as an operating system512, libraries510, frameworks508, and applications506. Operationally, the applications506invoke API calls550through the software stack and receive messages552in response to the API calls550. The operating system512manages hardware resources and provides common services. The operating system512includes, for example, a kernel514, services516, and drivers522. The kernel514acts as an abstraction layer between the hardware and the other software layers. For example, the kernel514provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services516can provide other common services for the other software layers. The drivers522are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers522can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries510provide a common low-level infrastructure used by the applications506. The libraries510can include system libraries518(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. 
In addition, the libraries510can include API libraries524such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries510can also include a wide variety of other libraries528to provide many other APIs to the applications506. The frameworks508provide a common high-level infrastructure that is used by the applications506. For example, the frameworks508provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks508can provide a broad spectrum of other APIs that can be used by the applications506, some of which may be specific to a particular operating system or platform. In an example, the applications506may include a home application536, a contacts application530, a browser application532, a book reader application534, a location application542, a media application544, a messaging application546, a game application548, and a broad assortment of other applications such as a third-party application540. The applications506are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications506, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). 
In a specific example, the third-party application540(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application540can invoke the API calls550provided by the operating system512to facilitate functionality described herein. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smartphone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics device, game console, set-top box, or any other communication device that a user may use to access a network. 
“Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. 
A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. 
It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. 
In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors402or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
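The memory-mediated communication described above can be sketched as follows. This is an illustrative toy, with a plain dictionary standing in for the memory device the two components share:

```python
# Illustrative sketch: two components configured at different times communicate
# through a shared memory structure. The first stores its output; the second,
# at a later time, retrieves and processes it. The dictionary stands in for a
# memory device to which both components are communicatively coupled.

shared_memory = {}

def component_a(values):
    # First hardware component: performs an operation and stores the output.
    shared_memory["output"] = sum(values)

def component_b():
    # Second component, instantiated later: retrieves and processes the output.
    stored = shared_memory["output"]
    return stored * 2

component_a([1, 2, 3])   # stores 6
processed = component_b()  # retrieves 6 and processes it
```

No direct call connects the two components; only the stored value does, which is the property the paragraph above describes.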
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Ephemeral message” refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. 
Specific examples of machine-storage media, computer-storage media, and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
DETAILED DESCRIPTION For purposes of the description hereinafter, it is to be understood that the disclosure may assume alternative variations and step sequences, except where expressly specified to the contrary. It is also to be understood that the specific devices and processes illustrated in the attached drawings and described in the following specification are simply exemplary aspects of the disclosure. Hence, specific dimensions and other physical characteristics related to the aspects disclosed herein are not to be considered as limiting. As used herein, the term “coupled” should be understood to include any connection between two things, including, and without limitation, a physical connection (including, and without limitation, a wired or mechanical connection), a non-physical connection (including, and without limitation, a wireless connection), or any combination thereof. Furthermore, the words “comprising” (and any form of comprising, such as “comprise” and “comprises”), “having” (and any form of having, such as “has” and “have”), “including” (and any form of including, such as “includes” and “include”) or “containing” (and any form of containing, such as “contains” and “contain”) are to be understood as inclusive and open-ended and do not exclude additional, unrecited elements or method steps. Additionally, terms indicating a DAO role followed by the word “wallet” (for example, and without limitation, “project developer wallet(s)”, “auditor wallet(s)”, and “steward wallet(s)”) should be understood to denote a virtual wallet owned by an entity (that is, an individual or organization) acting as the denoted role. 
For example, and without limitation, the term “one or more project developer wallets” is to be understood as having the same meaning as “one or more virtual wallets owned by one or more entities acting as one or more project developers.” It is to be understood that a single entity may take on multiple roles; accordingly, a single wallet may be associated with multiple roles. Furthermore, as used herein, the term “wallet” refers to a virtual wallet having software elements, hardware elements, or any combination thereof. As used herein, the term “at least one of” is synonymous with “one or more of.” For example, the phrase “at least one of A, B, and C” means any one of A, B, and C, or any combination of any two or more of A, B, and C. For example, “at least one of A, B, and C” includes one or more of A alone; or one or more of B alone; or one or more of C alone; or one or more of A and one or more of B; or one or more of A and one or more of C; or one or more of B and one or more of C; or one or more of all of A, B, and C. Similarly, as used herein, the term “at least two of” is synonymous with “two or more of.” For example, the phrase “at least two of D, E, and F” means any combination of any two or more of D, E, and F. For example, “at least two of D, E, and F” includes one or more of D and one or more of E; or one or more of D and one or more of F; or one or more of E and one or more of F; or one or more of all of D, E, and F. FIG.1is a flow chart illustrating an example method for distributing a digital currency in exchange for fiat currency or desirable actions by DAO personnel, according to one or more embodiments.FIG.1shows virtual tokens102being acquired in three exemplary ways. First, a quantity of virtual tokens102may be acquired in exchange for a currency (for example, and without limitation, U.S. dollars, other fiat currency, or other virtual tokens). 
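The "at least one of" and "at least two of" enumerations above can be checked mechanically. The following sketch (illustrative only, not part of the disclosure) counts the combinations each phrase covers:

```python
# Enumerate the combinations covered by "at least one of A, B, and C"
# (every non-empty subset of {A, B, C}) and by "at least two of D, E, and F"
# (every subset of {D, E, F} with two or more members).
from itertools import combinations

at_least_one = [set(c) for r in range(1, 4) for c in combinations("ABC", r)]
at_least_two = [set(c) for r in range(2, 4) for c in combinations("DEF", r)]
```

This yields the seven cases listed for "at least one of A, B, and C" and the four cases listed for "at least two of D, E, and F".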
Second, a quantity of virtual tokens102may be acquired in exchange for the performance of a desirable action.FIG.1depicts carbon sequestration (“CS”)104or carbon mitigation106as the desirable action, but other desirable actions may be rewarded without departing from the scope of the present disclosure. Third, a quantity of virtual tokens102may be acquired in exchange for the performance of an alternative desirable action; in this case, production of renewable energy (“RE”)106.FIG.1shows Clean Energy Coins (CECs) being received as the virtual tokens102, but the methods disclosed herein may apply to any virtual tokens. ThoughFIG.1shows only three means by which virtual tokens may be acquired, those skilled in the relevant art will understand that any means of acquiring virtual tokens may be used without departing from the scope of the present disclosure. For example, in certain embodiments, tokens may be received by performing actions useful to one or more functions of the DAO. Examples include, without limitation: (1) processing transactions; (2) facilitating smart contracts; (3) voting and/or other decision making; (4) assessing projects; and (5) providing protocol support. CS104is an example process by which carbon dioxide (or other greenhouse gases) is pulled from the atmosphere and stored in the earth to slow (or even reverse) climate change. RE106is the production of energy with either zero or near-zero greenhouse gas emissions. ThoughFIG.1shows only two desirable actions by which virtual tokens102may be acquired, those skilled in the relevant art will understand that any desirable action may be rewarded by the DAO without departing from the scope of the present disclosure. 
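The exchange of desirable actions for fractional token quantities might be sketched as below. The reward rates are hypothetical assumptions, since the disclosure does not specify them:

```python
# Hedged sketch: awarding fractional virtual tokens for desirable actions.
# Both reward rates below are hypothetical values chosen for illustration.

TOKENS_PER_TONNE_CO2 = 0.5   # assumed rate for carbon sequestration (CS)
TOKENS_PER_MWH = 0.25        # assumed rate for renewable energy (RE)

def tokens_for_sequestration(tonnes_co2: float) -> float:
    # Quantities need not be integers; fractions of a token may be received.
    return tonnes_co2 * TOKENS_PER_TONNE_CO2

def tokens_for_renewable_energy(mwh_produced: float) -> float:
    return mwh_produced * TOKENS_PER_MWH

cs_award = tokens_for_sequestration(3.5)    # 1.75 tokens
re_award = tokens_for_renewable_energy(10)  # 2.5 tokens
```

In a deployed system these calculations would run inside a smart contract rather than an ordinary function, but the proportional, non-integer award is the point being illustrated.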
These include, but are not limited to, such actions as the support and encouragement of (1) improved carbon capture technologies; (2) carbon re-use technologies (including, and without limitation, GHG batteries and waste-to-energy); (3) renewable energy technologies (including direct carbon mitigation techniques (such as improved solar radiation capture efficiency or reduced transfer loss) and indirect carbon mitigation techniques (such as increased solar-power equipment durability, longevity, and/or usage)); and (4) other carbon mitigation techniques (including, and without limitation, the removal and/or replacement of GHG-producing substances, such as petcoke). In certain embodiments, virtual tokens102may be exchanged for CS104, RE106, or other desirable actions through smart contracts. A smart contract is a contract wherein code is used to render one or more terms of the contract as self-executing. The number of virtual tokens acquired in exchange for currency100, CS104, or RE106need not be an integer; rather, any quantity of virtual tokens (including fractions less than or exceeding one token) may be received. In certain embodiments, the process ofFIG.1may be used in a DAO. FIG.2is a flow chart illustrating a method for delineating roles within a DAO, according to one or more embodiments. A master node226may process transactions in several ways. In certain embodiments, a trader200may enter into transactions involving virtual tokens102through the use of a virtual wallet214. In certain embodiments, a project developer202may file a new project form216to request that a new project be funded by the DAO. In certain embodiments, a steward204or representative may vote using a project voting application218to determine whether the DAO should fund a project. In certain embodiments, an auditor206may verify the identity and/or credentials of a project developer, auditor, steward, and/or representative. 
In certain embodiments, an auditor206may assess a requested project and/or create a contract220(for example, and without limitation, a smart contract). In certain embodiments, an auditor206may also provide identity verification, credential verification, and/or project output verification services. The examples provided in this paragraph are purely exemplary; the master node226may process any transactions within the scope of a DAO. Traders200, project developers202, stewards204, representatives, and auditors206may enter into transactions with the master node226to allow the DAO to function. Similarly, validators208, governors210, and protocol developers212may perform system operations to support the master node226.FIG.2shows the delineation between system-operations roles and other roles within the DAO via a hashed background. In certain embodiments, a validator208may verify one or more transactions. In certain embodiments, a protocol developer212may code one or more new protocols on a test node224. In certain embodiments, a governor210may use a protocol voting application222to authorize the development of a new protocol. Upon approval by a governor210, the new protocol may be migrated from the test node224to the master node226. In any of the operations shown inFIG.2, one or more entities may work together to perform an action (for example, and without limitation, two or more governors210may vote as to whether to approve a new protocol). In certain embodiments, one or more distributed applications may be used within the processes shown inFIG.2. For example, and without limitation, a new project form216, a project voting application218, a contract application220, and/or a protocol voting application may be a distributed application. A distributed application is an application that runs on two or more computers within a network. A distributed application may be hosted on servers and/or delivered via cloud computing. 
A digital application (that is, software) is an application that runs on one or more computers. A distributed application is a form of digital application. The present application explicitly discloses no fewer than ten roles that a member of a DAO may take on: trader200, project developer202, steward204, representative (which is a form of steward204), auditor206, validator208, delegator (which is a form of validator208in a manner analogous to the role of a representative as compared to that of a steward204), governor210, protocol developer212, and bounty hunter (not shown). This list is not exclusive; other roles may be incorporated into a DAO without departing from the scope of the present disclosure. Moreover, in certain embodiments, a subset of the roles listed herein may be incorporated into a DAO without incorporating all roles listed herein. A single entity may take on one or more roles. The roles listed in this paragraph are discussed in more detail below. Trader200: Any entity who holds a quantity of virtual tokens may be considered a trader200. Traders200are responsible for moving currency around via exchanges, transactions, or other contracts. In certain embodiments, a digital application such as a digital wallet may facilitate these responsibilities. A digital wallet may comprise software, hardware, or any combination thereof. A trader may view its virtual tokens as an investment, a currency, or both. Project Developer202: A project developer202is an entity within a DAO that introduces one or more projects in the hopes of gaining funding approval. In certain embodiments, a project developer202may use a digital application to submit one or more projects to a DAO. In certain embodiments, a digital application may guide a project developer202through the project approval process. 
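A trader's digital wallet, as described above, might look like the following minimal sketch. The class and its methods are illustrative assumptions, not the disclosed implementation:

```python
# Minimal, illustrative sketch of a trader's digital wallet holding a
# quantity of virtual tokens and moving currency via simple transactions.

class DigitalWallet:
    def __init__(self, owner: str, balance: float = 0.0) -> None:
        self.owner = owner
        self.balance = balance  # fractional token quantities are allowed

    def send(self, other: "DigitalWallet", amount: float) -> None:
        # A real wallet would sign this and submit it as a network
        # transaction to be verified by validators; here it is local only.
        if amount > self.balance:
            raise ValueError("insufficient tokens")
        self.balance -= amount
        other.balance += amount

alice = DigitalWallet("alice", balance=10.5)
bob = DigitalWallet("bob")
alice.send(bob, 2.5)
```

The balance check stands in for the verification a validator208would perform before a transaction is added to the ledger.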
In certain embodiments, project developers202may be required to pay a fee to support the various tasks associated with the evaluation of the project developer202, the project, and/or contract formation. Once a project is submitted by a project developer202, stewards204and/or representatives may review the project and the requested funding and payback contract(s) before voting on the project. If accepted, the project developer202may be paid in full once the treasury has the appropriate funds and may be expected to pay back funds per the contract. Steward204: A steward204is an entity within a DAO that votes on whether a project should be approved for funding. In certain embodiments, a steward204may exchange a quantity of virtual tokens for a quantity of VT, which the steward204may use to vote as to whether a project should be approved. In certain embodiments, in exchange for voting on projects, stewards204may be paid interest proportional to their staked VT. In certain embodiments, in exchange for voting on projects, stewards204may be paid a proportional yield associated with proceeds from current project returns. Simply holding VT may not be enough to accrue interest and/or yield. Stewards204may choose to contract with a representative to vote on their behalf in exchange for a portion of interest and/or yield accrued. Any entity with virtual tokens may become a steward204through a voting digital application by exchanging virtual tokens for VT. In certain embodiments, a steward204may divest its VT through the same voting digital application. Representative: A representative is an entity within a DAO that may vote on projects on behalf of stewards204. In certain embodiments, a steward204may become a representative. In certain embodiments, a representative may be a type of steward204. In certain embodiments, a representative may receive a portion of the interest/yield accrued by the staked VT of the stewards204whom the representative represents. 
In certain embodiments, a reputation score may be assigned to a representative. The reputation score may be a value that increases when projects voted upon by the representative are approved and/or perform well. Conversely, the reputation score may decrease when projects voted upon by the representative fail to garner approval and/or perform poorly. In certain embodiments, the reputation score may exist within a numerical range—for example, and without limitation, the reputation score may vary between 0 and 2. In certain embodiments, stewards204may choose to become representatives by going through a verification process. A steward204may have its identity verified and may have its credentials verified by one or more auditors206. Auditor206: An auditor206is an entity within a DAO that may verify the identity and/or credentials of entities and/or projects. Furthermore, in certain embodiments, an auditor206may construct and/or negotiate contracts. In certain embodiments, auditors206may vie for work via a staking process. Once approved for a given task, an auditor206may verify the identity or credentials of an entity, validate whether a project developer202owns property (physical or virtual) or assets it claims to own, and/or provide independent assessment of one or more of a project's plan, viability, and returns. In certain embodiments, an auditor206may verify the correctness of assumptions, risks, and/or calculations associated with the proposal. In certain embodiments, auditors206may stake virtual tokens to vie for work. In certain embodiments, an automated system may assess the vying auditors' stakes, reputations, and/or credentials to determine a winning auditor to perform the work of the contract. Auditors206may be paid a fee for work performed. If an auditor's work is deemed inadequate by the requester (for example, and without limitation, a project developer202), then the requester may get back a portion of its funds. 
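The reputation-score behavior described for representatives and auditors (rising on successful outcomes, falling on poor ones, bounded to a range such as 0 to 2) can be sketched as follows. The step size is an assumed parameter; the disclosure specifies only the range and the direction of change:

```python
# Sketch of the reputation-score mechanics: the score increases on success,
# decreases on failure, and is clamped to the example range of 0 to 2.
# The per-outcome step size is a hypothetical choice.

REPUTATION_MIN, REPUTATION_MAX = 0.0, 2.0
STEP = 0.1  # assumed adjustment per outcome

def update_reputation(score: float, success: bool) -> float:
    score += STEP if success else -STEP
    # Clamp so the score stays within the numerical range.
    return min(REPUTATION_MAX, max(REPUTATION_MIN, score))

score = 1.0
score = update_reputation(score, success=True)   # rises toward 2.0
score = update_reputation(score, success=False)  # falls back near 1.0
floor = update_reputation(0.0, success=False)    # cannot drop below 0.0
```

The same clamped update would apply whether the inputs are project outcomes (for a representative) or task-adequacy judgments and requester feedback (for an auditor206).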
If the work is deemed inadequate by the DAO system, then the payment may be withheld until such time as the work becomes adequate or may be returned to the requester. If the work is found to be faulty or incorrect, the auditor206may lose its stake. In certain embodiments, a reputation score may be assigned to an auditor206. The reputation score may be a value that increases when the auditor206successfully completes tasks. Conversely, the reputation score may decrease when the auditor206performs tasks inadequately. The reputation score may be impacted by one or more of the correctness of contracts, the completeness of contracts, and feedback from requesters. In certain embodiments, the reputation score may exist within a numerical range—for example, and without limitation, the reputation score may vary between 0 and 2. In certain embodiments, an auditor206may hold credentials showing expertise in a particular field. In certain embodiments, an auditor may have its credentials confirmed by the DAO system. Having these credentials confirmed may increase the likelihood that a given auditor206is awarded a contract which utilizes the relevant field's skillset. Validator208: A validator208is an entity within a DAO that verifies transactions and/or smart contracts that are added to the blockchain or directed acyclic graph (“DAG”). Validators208may operate network and computational hardware (that is, servers) to facilitate the processing of transactions in the system. A validator208may be expected to operate system protocols, facilitate digital applications, and install/implement approved system changes. There may be hardware requirements and/or geopolitical requirements for operating a node. In certain embodiments, each validator208may be expected to register its server with all other nodes on the system. 
In certain embodiments, validators208may further be expected to process transactions via a validation staking process in exchange for an interest payment for the operation of the server(s). In certain embodiments, validators208may further be expected to process transactions via a validation staking process in exchange for a proportional yield of transaction fees for the operation of the server(s). Realization of a validator's interest may be subject to a confirmation of uptime/availability of the node to facilitate transactions. In certain embodiments, validators208may be awarded time slots to add transactions to the Blockchain/DAG. For each transaction correctly added in the awarded time slot, the validator may receive a transaction fee. In certain embodiments, the number of time slots awarded to a validator may be based on the amount staked and/or the number of times transactions were correctly added to the Blockchain/DAG. Realization of a validator's interest may be subject to the reputation (correctness) of the node. In certain embodiments, a validator208or its node may maintain a reputation score associated with its performance based on the realization criteria for its interest. A validator's or node's reputation score may be used by one or more of governors210, other validators208, delegates, or other entities within the DAO system. In certain embodiments, any entity may become a validator208by (1) setting up a server, (2) installing the requisite software, and (3) being approved for the role. In certain embodiments, more or fewer requirements may be imposed to become a validator208. Delegator: A delegator is an entity within a DAO that stakes coin in association with a validator208. In exchange for staking coin, the delegator may be paid a negotiated percentage of the validator's208profits proportional to the delegator's staked coin. 
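The stake-based award of time slots might be sketched as below. The proportional formula and the correctness weight are assumptions consistent with, but not specified by, the description above:

```python
# Assumed sketch: time slots for adding transactions to the Blockchain/DAG
# are awarded in proportion to each validator's stake, weighted by a
# correctness factor (a stand-in for the node's reputation).

def award_time_slots(validators: dict, total_slots: int) -> dict:
    # validators maps name -> (staked amount, correctness weight in [0, 1])
    weights = {name: stake * correctness
               for name, (stake, correctness) in validators.items()}
    total = sum(weights.values())
    # Truncate each share to a whole number of slots.
    return {name: int(total_slots * w / total) for name, w in weights.items()}

slots = award_time_slots(
    {"v1": (100, 1.0), "v2": (50, 1.0), "v3": (50, 0.5)},
    total_slots=7,
)
```

Here v3's lower correctness weight halves its effective stake, so it receives fewer slots than v2 despite staking the same amount.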
The validator's208reputational score may be used by a trader to determine likely profitable delegation agreements between parties. Governor210: A governor210is an entity within a DAO that may operate as the ruling body of the DAO, in a manner similar to that of a member of a board of directors. Governors210may propose changes to the system, vote on proposed changes, and facilitate the manifestation of changes via protocol developers212through a process known as “Governance.” In certain embodiments, governors210may be responsible for ensuring the system remains fair, balanced, trusted, and open. In certain embodiments, requirements may be imposed to become a governor210. For example, and without limitation, an entity may be required to be an existing member of a DAO before being given the role of governor210. Furthermore, in certain embodiments, another member of the DAO may need to nominate the proposed governor210before the role of governor210is assigned. Additionally, existing governors210may vote to determine whether another entity becomes a governor210. Protocol Developer212: A protocol developer212is an entity within a DAO that facilitates changes to the DAO's software systems. For example, and without limitation, a protocol developer212may facilitate changes to one or more of (1) digital applications which make up the various systems, (2) the templates used within, (3) rates or calculations used within the system, (4) protocols, and (5) other changes. In certain embodiments, protocol developers212may be contracted specifically for a given task. In certain embodiments, a protocol developer212may be rewarded for providing the best implementation of a proposed change. Changes may be presented to the governance and testing systems for approvals. Once the software changes pass a governance vote for approval, the protocol developer(s)212may be paid from the reserve and/or treasury. 
Validators208may then be notified of the newly approved software and may be expected to implement the changes. In certain embodiments, there may be no requirements to become a protocol developer212. An entity is especially likely to become a protocol developer212without any requirements in the case of open development for rewards. In certain embodiments, proposed changes may be reviewed and approved by governance before a protocol developer212acts on the proposed changes; once the change is implemented and tested, governance may approve payment. Bounty Hunter (not shown): A bounty hunter is an entity who reviews interactions between various nodes and ledgers to discover discrepancies and malicious actors within the DAO system. In certain embodiments, bounty hunters may also review one or more of protocols, digital applications, and contracts to find bugs within the system. Bounty hunters may or may not be direct members of the DAO community. In certain embodiments, a bounty hunter may be paid a reward in currency (for example, and without limitation, virtual tokens) for finding malicious entities (for example, and without limitation, entities partaking in front running, sandwiching, and/or vote manipulation), inconsistencies (for example, and without limitation, contract loopholes, ledger mismatches, invalid credentials, and/or invalid identities), and/or software flaws (for example, and without limitation, bugs). In certain embodiments, a bounty hunter's rewards may be determined ahead of time and may correspond to the severity of the issue identified. FIG.3is a flow chart illustrating a method for distributing funds throughout the phases of a steward-approved project plan within a DAO architecture, according to one or more embodiments. A project304may be proposed by a project developer202(not shown). A voting group300(for example, and without limitation, a group of one or more stewards204and/or one or more representatives) may vote to approve a project304.
Upon approval, the project304may be funded via funds from the DAO treasury302. The project may produce financial gains in the form of currency306. Currency306may be reinvested into the DAO treasury302. In certain embodiments, currency306reinvested into the DAO treasury302may be used to fund future projects304upon approval by a voting group300. The DAO treasury302may contain one or more fiat currencies and/or a quantity of virtual tokens. In certain embodiments, whether a given project will receive funds from the DAO treasury or DAO reserve may be determined via voting among stewards204. In certain embodiments, the exchange rate between virtual tokens and VT may be 1:1. In certain embodiments, a steward204may be permitted to exchange its VT for virtual tokens at any point using the standard exchange rate (for example, and without limitation, 1:1). In certain embodiments, there may be a small fee applied to exchanges from VT to virtual tokens. Such a fee may exist to discourage vote-and-run behavior. Moreover, in certain embodiments, stewards204who convert their VT to virtual tokens may not be eligible to receive accumulated interest. In certain embodiments, a voting goal may be created for each votable project. In certain embodiments, the voting goal for a given project may be a dynamically calculated value which indicates the number of votes that must be attained for the project to be accepted for funding. 
For example, and without limitation, the following equations may be used to calculate a voting goal:

BSMax = NOS·VP·[log_LF(VT/(VP·NOS) + 1) - RC]^PF + 1
BSMed = NOS·[log_LF(VT/NOS) - RC]^PF
BSMin = NOS·[log_LF(VT - (NOS - 1)) - RC]^PF + 1
BV = (BSMin - 1)·100/BSMax
CM = IF PC > Treasury THEN CVMax ELSE [PC/(Reserve/2 + Treasury/(2·PT))]·(CVMax - CVMin) + CVMin
CVMax = (SD + 0.5)·100
DM = IF (PD + PC)/(DR·PT) < DSMin/100 THEN DSMin/100 ELSE min[(PD + PC)/(DR·PT), DSMax/100]
DSMax = 500
DSMin = 90
ERR = 0.12
ESS = 0.9
GM = IF PDGS/(PC/(2·PDGS)) > 1 THEN 0 ELSE GSMax - [PDGS/(PC/(2·PDGS))]·GSMax
GSMax = 10
LF = 8
Mechanics = Σ(CM, GM, RM, SM)·DM + BV
PF = 3
PT = 183
RC = -0.4
RM = max(0, min((PRR/ERR)·(RS/2), RS))
RS = 7
SM = IF PES ≥ ESS THEN 0 ELSE min(SS, (ESS - PES)·)
SS = 3
SD = std(StewardVT)/avg(StewardVT)
Votes = [log_LF(VT + 1) - RC]^PF + 1
Voting Goal = (Mechanics/100)·max(BSMin, BSMin + BSMed/SD)

WHEREIN:

BSMax: Maximum Base Supply (representing the maximum vote supply)
BSMed: Median Base Supply (representing the median vote supply)
BSMin: Minimum Base Supply (representing the minimum vote supply)
BV: Base Vote
CM: Cost Mechanic
CVMax: Maximum Cost Votes (representing the maximum amount associated with a project's cost)
CVMin: Minimum Cost Votes (representing the minimum amount associated with a project's cost)
DM: Debt Mechanic
DR: Daily Revenue
DSMax: Maximum Debt Scalar (representing the maximum importance of debt)
DSMin: Minimum Debt Scalar (representing the minimum importance of debt)
ERR: Expected Return Rate
ESS: Expected Success Rate (representing the decimal-form percentage of a standard acceptable success project)
GM: Do Good Mechanic (representing a vote percentage associated with the amount of good a project provides)
GSMax: Maximum Do Good Scalar (representing the percentage that good affects the vote calculations)
LF: Log Factor
Mechanics: Represents various vote scaling weights to be applied to a project
NOS: Number of Stewards 204 in the DAO system
PC: Project Cost (representing the requested amount for the project in U.S. dollars)
PD: Pipeline Debt (representing the remaining amount to fund for approved projects in U.S. dollars)
PDGS: Project Do Good Score (representing an amount of U.S. dollars determined by an assessor to be equivalent to the amount of good the project will accomplish)
PES: Project Expected Success (representing the percentage chance in decimal form that a project will complete development and produce up to expectations)
PF: Power Factor
PRR: Project Return Rate (representing the anticipated return rate of a project in decimal form as verified by an assessor)
PT: Project Timeout (representing the number of days a project may be on the voting block, e.g., half a year)
RC: Reduction Constant (representing a constant to better transition low token rates)
Reserve: The amount of currency (for example, and without limitation, fiat currency) in the DAO reserve
RM: Revenue Mechanic (representing the effect the return of the project has on the votes needed to pass)
RS: Revenue Scalar (representing the vote acceptance weight associated with the revenue returns a project may produce)
SD: Supply Distribution (representing the distribution of voting power among stewards 204)
SM: Success Mechanic (representing the effect a project's likelihood of success has on the votes needed to be approved)
SS: Success Scalar (representing a vote acceptance weight associated with the likelihood of project completion)
StewardVT: The number of voting tokens held by each individual steward
Treasury: The amount of currency (for example, and without limitation, fiat currency) in the DAO treasury
Votes: The number of votes cast by stewards 204 for a given project
Voting Goal: The number of votes needed for a project to be approved
VP: Votable Projects (representing the number of projects that may currently be voted upon)
VT: Voting Tokens

Note that the above equations are purely exemplary, and any appropriate voting mechanism may be used without departing from the scope of the present disclosure.
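The sub-linear vote curve at the heart of the exemplary equations can be sketched directly. The code below computes the Votes and BSMed quantities using the stated constants (LF = 8, RC = -0.4, PF = 3); since the equation set is expressly exemplary, this is an illustration only, and the function names are assumptions.

```python
import math

# Sketch of the per-steward vote curve from the exemplary voting-goal
# equations: Votes = [log_LF(VT + 1) - RC]^PF + 1, with LF=8, RC=-0.4, PF=3.

LF, RC, PF = 8, -0.4, 3

def votes(vt):
    """Votes produced by vt staked voting tokens (grows sub-linearly in vt)."""
    return (math.log(vt + 1, LF) - RC) ** PF + 1

def median_base_supply(total_vt, num_stewards):
    """BSMed: the vote supply if all VT were split evenly among stewards."""
    return num_stewards * (math.log(total_vt / num_stewards, LF) - RC) ** PF
```

The sub-linear curve means that, for example, a steward staking 63 VT receives far fewer than 63 votes, which limits the control a single large holder can exert.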
In certain embodiments, a project with a completed funding contract in a “ready to vote” state (that is, a project that has gone through the necessary approvals) may be voted upon. Stewards204may vote by staking VT to one or more projects through a digital application. In certain embodiments, using VT to vote may not initially produce a one-to-one vote. In certain embodiments, an algorithmic voting mechanism may be used to reduce the control that large entities may otherwise have over the decision-making process. In certain embodiments, stewards204may be free to allocate or de-allocate their VT at any time. In certain embodiments, VT may be given greater weight the longer it has been allocated to a given project. In certain embodiments, there may be a maximum weight to which a single VT may accrue. In certain embodiments, a VT's conviction and vested conviction may be determined according to the below equations:

α = 0.64
CD = 5
VS = NOS·[log_LF(VT/NOS) - RC]^PF
Today's Conviction = IF (Days Committed ≥ CD) THEN Votes·α·(VS - VS·Votes/VS)^(Days Committed + 1) ELSE 0
Today's Vested Conviction = IF (Days Committed > PT) THEN 0 ELSE IF (Days Committed = 0) THEN 0 ELSE IF (Yesterday's Vested Conviction + Today's Conviction) > 0 THEN Yesterday's Vested Conviction + Today's Conviction ELSE 0

WHEREIN the definitions of the above table apply, and:

α: Scalar for conviction
CD: The number of days before a conviction yield is added to a vote
VS: Median base vote supply
Days Committed: The number of days a given VT has been committed to a given project
Today's Conviction: Conviction yield associated with the current day
Today's Vested Conviction: Accumulated conviction through the current day

Note that the above equations are purely exemplary, and any appropriate voting mechanism may be used without departing from the scope of the present disclosure. In certain embodiments, removing VT from an allocation may not remove the conviction votes for the allocation.
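The vested-conviction accumulation described above may be sketched as follows. Because parts of the published conviction formula are ambiguous, the daily conviction yield is taken here as an input; the accumulation rules (no yield before CD days, reset past the PT timeout, never negative) follow the exemplary equations, and the function name is an assumption.

```python
# Sketch of vested-conviction accumulation for a single VT allocation,
# following the exemplary rules: yield starts after CD committed days,
# the total resets to zero past the PT timeout, and never goes negative.

CD = 5    # days before a conviction yield is added to a vote
PT = 183  # project timeout in days

def vested_conviction(daily_yields):
    """Accumulate conviction day by day; daily_yields[i] is day i+1's yield."""
    vested = 0.0
    for day, today in enumerate(daily_yields, start=1):
        if day > PT:
            return 0.0                      # allocation timed out
        conviction = today if day >= CD else 0.0
        vested = max(vested + conviction, 0.0)
    return vested
```

For example, a constant daily yield contributes nothing for the first CD - 1 days, then accrues each day thereafter until the project times out.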
In certain embodiments, removing VT from an allocation may begin a decay cycle, wherein the value of the previous conviction yield allocation decays at an increasing rate as compared to the dedication rate. In certain embodiments, the algorithms that manage DAO voting may not allow a situation in which transferring in and out of the same allocation increases the power of the vote across outcomes. In certain embodiments, a project may be reassigned to an “actively funded status” and may be added to a list of distributable projects once it has met its voting goal. In certain embodiments, a DAO may have a crediting cycle by which it allocates funds to distributable projects. In certain embodiments, once a project has met its voting goal, VT may be de-allocated from the project and returned to the corresponding steward(s)204. In certain embodiments, projects may undergo a time decay function once they have been added to a voting list to ensure that after a given date, the projects are no longer considered viable. In certain embodiments, if a project developer202wishes to proceed after its project has timed out, the project may be given an expedited path back to voting. FIGS.4A-4Bare flow charts illustrating a method for managing funds and personnel within a DAO, according to one or more embodiments. A DAO is a type of decentralized system. A trader200may trade virtual tokens for one or more of goods, services, other virtual tokens, and fiat currency. In certain embodiments, a trader200may become a steward204. In certain embodiments, the process by which a trader200may become a steward204may comprise three steps. First, a trader200may exchange a quantity of virtual tokens (for example, and without limitation, CEC) for a quantity of VT. Second, a trader200may assign or delegate VT to a proposed project. These steps do not have to occur in order; any order of these steps falls within the scope of the present disclosure. 
Moreover, steps may be subtracted or added without departing from the scope of the present disclosure. A trader200may become a steward204by exchanging or swapping virtual tokens with voting tokens (VT). In certain embodiments, a project developer202may pay a negotiated fee for services to an auditor206. In certain embodiments, a project developer202may receive investment funds from the DAO treasury400in the form of a loan. The loan may be repaid with interest and may be repaid from the profits of a project. In certain embodiments, a project developer may pay a project listing fee to the DAO reserve402to have the project voted upon by one or more stewards and/or representatives204. In certain embodiments, an auditor may stake virtual tokens to the DAO reserve402in exchange for job opportunities. The DAO reserve402may reimburse the entire stake unless the job is failed, in which case a lesser percentage of the stake may be reimbursed (for example, and without limitation, 80%). In certain embodiments, a protocol developer212may be paid a predetermined rate or fee from the DAO reserve402in exchange for executing one or more software protocol changes. In certain embodiments, one or more stewards204may stake virtual tokens to the DAO reserve402to vote on one or more projects. In certain embodiments, the DAO reserve402may repay the entire stake; however, in certain embodiments, the DAO reserve402may withhold a percentage of the stake if the steward204withdraws its stake before a predetermined time period has elapsed. Anyone may be a liquidity provider404. In certain embodiments, a liquidity provider404may exchange proceeds with one or more of the DAO treasury400, a trader200, and another entity. In certain embodiments, a liquidity provider404may purchase fiat currency. In certain embodiments, a liquidity provider may contribute fiat currency and/or virtual currency to a liquidity pool.
In certain embodiments, the DAO treasury400may transfer a percentage of interest earned and/or profit earned (for example, and without limitation, 20%) to the DAO reserve402. In certain embodiments, the DAO reserve402may transfer a periodic stake-proportional yield to one or more stewards204. In certain embodiments, the DAO reserve402may mint virtual currency. The virtual currency may be sent to a liquidity provider404to exchange with traders200. In certain embodiments, one or more of the DAO treasury400and the DAO reserve402may act as a liquidity provider404. In certain embodiments, the DAO reserve402may also buy back virtual tokens from a liquidity pool. Thus, the DAO reserve402may remove virtual tokens and fiat currency from the liquidity pool. In certain embodiments, a transaction fee (for example, and without limitation, of all transactions) may be paid by traders200to the DAO reserve402in order to pay validators208. In certain embodiments, a validator208may stake virtual tokens to the DAO reserve402in exchange for the right to participate in one or more DAO operations. The DAO reserve402may return the validator's208stake in its entirety; however, in certain embodiments, the DAO reserve402may withhold part or all of the validator's stake if the validator208acts maliciously. In certain embodiments, the DAO reserve402may compensate non-malicious validators208with funds proportional to the yield of transaction fees earned during a period. In certain embodiments, a trader200may become a project developer202by having its identity and/or credentials verified by an auditor206. Additional steps may be taken without departing from the scope of the present disclosure. In certain embodiments, a trader200may be required to pay a fee to become a project developer202. A trader200may also become a validator208. In certain embodiments, a trader200may be required to have its identity and/or credentials verified by an auditor206before becoming a validator208.
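The stake-proportional compensation of non-malicious validators described above may be sketched as follows; the function name and the exact proportional rule are assumptions for illustration, since the disclosure says only that compensation may be proportional to the fee yield for a period.

```python
# Illustrative sketch: the reserve distributing a period's transaction-fee
# pool across validators in proportion to stake, skipping malicious ones
# (whose stake may instead be partly or wholly withheld).

def distribute_fees(fee_pool, validators):
    """validators maps id -> (stake, malicious); returns id -> payout."""
    honest = {v: stake for v, (stake, malicious) in validators.items()
              if not malicious}
    total_stake = sum(honest.values())
    return {v: fee_pool * stake / total_stake for v, stake in honest.items()}
```

A malicious validator thus receives nothing from the pool, and its share is redistributed among the honest validators.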
In certain embodiments, a trader may be required to pay a fee to become a validator208. In certain embodiments, a validator may operate a node and/or distribute proceeds to other delegators. A validator208, auditor206, or any other role may become a trader by acquiring digital tokens. In certain embodiments, a trader200may use a liquidity provider404to exchange currency for virtual tokens (for example, and without limitation, CEC). A coin offering (“CO”)406(for example, and without limitation, an initial coin offering (“ICO”)) may be used to transfer additional virtual tokens to the DAO treasury400. In certain embodiments, a DAO may invest currency in one or more projects. Currency may be held in the DAO treasury400. When approved projects produce revenue, a project developer202may return currency to the DAO treasury400(for example, and without limitation, one or more of fiat currency and virtual tokens). In certain embodiments, the DAO treasury400may mint (that is, create) a quantity of virtual tokens for a CO406. In certain embodiments, the DAO treasury may mint (that is, create) a quantity of virtual tokens and transfer the quantity of virtual tokens to the DAO reserve402. In certain embodiments, the DAO reserve402may mint its own virtual tokens and/or receive currency in the form of a fee from one or more project developers202in exchange for allowing a project to be listed on a registrar for approval. In certain embodiments, one or more auditors may stake virtual tokens (that is, transfer virtual tokens to one or more of the DAO treasury400or the DAO reserve402) in exchange for the right to perform services. Once some or all of the auditor's services are performed, the auditor206may receive virtual tokens from one or more of the DAO treasury400or the DAO reserve402. Similarly, in certain embodiments, one or more validators208and/or one or more delegators may stake virtual tokens in exchange for the right to perform services.
Once some or all of the validator's services are performed, the validator208may receive virtual tokens from one or more of the DAO treasury400or the DAO reserve402. Similarly, in certain embodiments, one or more individuals or entities may stake virtual tokens in exchange for the right to vote. The one or more stewards204or representatives may receive virtual tokens if the project is approved and the project produces revenue. Alternatively, the one or more stewards204or representatives may be reimbursed their staked virtual tokens if the project is rejected. In certain embodiments, the one or more stewards204or representatives may be reimbursed a quantity of virtual tokens greater than their original staked virtual tokens due to interest accrued while the virtual tokens were staked. The DAO treasury400may receive currency in exchange for virtual tokens. For example, and without limitation, the DAO treasury may receive one or more of fiat currency (for example, and without limitation, U.S. dollars), stable coins (for example, and without limitation, USDT), or other cryptocurrencies (for example, and without limitation, Ethereum and/or Bitcoin) from the sale of newly issued virtual tokens. In certain embodiments, the DAO treasury400may retain a first portion of the received currency; a second portion of the received currency may be transferred to the DAO reserve402to provide stabilization and/or to assist in future transactions. In certain embodiments, individual investors and/or traders200may be able to purchase a DAO's virtual tokens from the DAO itself or from crypto exchanges, centralized exchanges, and/or decentralized exchanges. In certain embodiments, the DAO treasury400and/or reserve402may operate as a trader200. Entities may hold virtual tokens in a virtual wallet (for example, and without limitation, Metamask).
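The first-portion/second-portion split of sale proceeds described above may be sketched as follows. The 80/20 split is an assumed example; the disclosure does not fix the portions, and the names are illustrative.

```python
# Illustrative sketch: splitting currency received from a token sale between
# the treasury (retained first portion) and the reserve (second portion,
# for stabilization and future transactions). The 20% figure is assumed.

RESERVE_PORTION = 0.20  # assumed share routed to the DAO reserve

def allocate_sale_proceeds(amount):
    """Return (treasury_portion, reserve_portion) for the amount received."""
    reserve = amount * RESERVE_PORTION
    treasury = amount - reserve
    return treasury, reserve
```

For instance, a sale raising 1,000 units of currency would leave 800 in the treasury and route 200 to the reserve under this assumed split.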
In certain embodiments, liquidity providers may procure a DAO's virtual tokens and other crypto-currencies in order to execute market-making functions on one or more cryptocurrency exchanges to facilitate currency swaps by investors and/or traders. In certain embodiments, the DAO treasury400may deliver funds to a project developer's virtual wallet as approved by the DAO after a successful voting process by stewards204. In return, the project developer202may be required to deliver some or all of the funds produced by the project to the DAO treasury400per a funding contract. In certain embodiments, one or more virtual wallets may be associated with one or more private keys. A private key may mathematically obfuscate access to one or more wallets in order to increase the cryptographic security of one or more of sending, receiving, and spending virtual tokens. In certain embodiments, a DAO may maintain a reserve402of virtual tokens produced from ICOs, COs, or other fund-incurring processes. Periodically, the DAO may approve the minting of new virtual tokens to replenish the DAO reserve402in order to provide a reward for staking by project validators208, delegators, stewards204, and/or representatives. In certain embodiments, the DAO reserve402may be used to pay protocol developers212to implement changes to the DAO that have been approved by governors210. In certain embodiments, the DAO reserve402may be used to pay auditors206for their services. In certain embodiments, the process of selecting an auditor206to perform work may occur as a function based on one or more of the amount of virtual token(s) staked, the amount of time staked, the reputation of the auditor, matching credentials of the auditor and task requested, and randomness. In certain embodiments, auditors206who have recently performed tasks may have a lower chance of receiving additional tasks; in this way, a semi-equitable distribution of work among auditors206may be maintained.
In certain embodiments, the highest selection factor in determining an auditor206to receive work may be the amount of virtual tokens staked. In certain embodiments, auditors206who have spent more time staked without being selected may have a higher chance of being assigned work. In certain embodiments, the following algorithm may be used to select an auditor206to perform a task:

AW = AM·CM·RM·SM
CM = 1 + (# of Matching Credentials)/9
RM = max(0.1, ln(2·AR))
SM = AS/TS

WHEREIN:

AM: Age Mechanic (representing the number of days an auditor 206 has been staked without being selected to receive work)
AR: Auditor's Reputation (representing an auditor's reputation score as a value between 0 and 2 with a default of 1)
AS: Auditor's Stake (representing the auditor's quantity of staked virtual tokens)
AW: Auditor's Weight (representing the weight/likelihood for a given auditor 206 to be selected)
CM: Credential Mechanic (representing the degree to which an auditor's credentials match with a given task)
RM: Reputation Mechanic
SM: Staking Mechanic
TS: Total Staked (representing the total amount of staked virtual tokens in the pool)

In certain embodiments, auditors206may not be allowed to perform tasks requested by themselves. In certain embodiments, an auditor's staked currency may be lost if the task completed was deemed to be incorrect. In certain embodiments, a non-fungible indicator associated with an account and/or wallet may be generated when the account holder's identity is verified. In certain embodiments, the non-fungible indicator may identify the entity identified, the source(s) of the verification, and/or a hash/checksum for matching the verification information with the identity information. In certain embodiments, the verification may no longer be valid if key identity information in the account or wallet is changed. In certain embodiments, an auditor206may receive an increase in reputation score specific to identity verification upon successfully verifying an actor's identity.
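The exemplary auditor-selection weights above may be sketched in code. The weight computation follows AW = AM·CM·RM·SM (reading the reputation term as ln(2·AR)); the weighted random draw at the end is an assumption about how the weights would be combined with the randomness factor the disclosure mentions.

```python
import math
import random

# Sketch of the exemplary auditor-selection algorithm: each auditor's weight
# multiplies an age, credential, reputation, and staking mechanic, and one
# auditor is then drawn with probability proportional to its weight.

def auditor_weight(days_staked, matching_credentials, reputation, stake, total_staked):
    am = days_staked                         # Age Mechanic
    cm = 1 + matching_credentials / 9        # Credential Mechanic
    rm = max(0.1, math.log(2 * reputation))  # Reputation Mechanic
    sm = stake / total_staked                # Staking Mechanic
    return am * cm * rm * sm

def select_auditor(auditors, rng=random):
    """auditors maps id -> the 5 weight inputs; returns one selected id."""
    ids = list(auditors)
    weights = [auditor_weight(*auditors[a]) for a in ids]
    return rng.choices(ids, weights=weights, k=1)[0]
```

With the default reputation of 1, the reputation mechanic is ln(2) ≈ 0.693, and an auditor holding half the staked pool with a full credential match has twice the weight of an otherwise identical auditor with no matching credentials.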
Conversely, in certain embodiments, an auditor may receive a decrease in reputation score upon incorrectly verifying an actor's identity or failing to verify an actor's identity. In certain embodiments, reputation scores may be consolidated across an actor's tasks; in certain other embodiments, reputation scores may be unique to each of an actor's tasks. In certain embodiments, an auditor206may receive one or more reputational adjustments directly tied to the completion of one or more contracted tasks (including, and without limitation, one or more of verification, certification, project validation, contract negotiation, and assessment) acceptably or unacceptably (wherein a task is completed “acceptably” if it is confirmed by the other contracted party or other auditor for correctness and within the expected time). In certain embodiments, a reputational score may be specific to a category of work (that is, a task type) and may not transfer to other types. For example, and without limitation, an auditor206may have a high score for verification and a low score for assessment. In certain embodiments, the amount of reputational change may be dependent upon one or more of the type of contract and the reason(s) for marking a task as acceptable or unacceptable. In certain embodiments, a reputational change corresponding to a failure may be lessened by the replication of the same result from other auditors206. Such a mechanism may reduce the effects of unhelpful or unwilling contract participants. In certain embodiments, post-contract reputation scores may be adjusted through downstream processes, such as a feedback loop for various project contracts from the users of the data (stewards and representatives) and from the contract initiators. In certain embodiments, an auditor206may receive an increase in reputation score (including a general reputation score and/or a credential-verification-specific reputation score) upon successfully verifying an actor's credentials. 
Conversely, in certain embodiments, an auditor may receive a decrease in reputation score upon incorrectly verifying an actor's credentials or failing to verify an actor's credentials. In certain embodiments, a project assessment process may be used to evaluate and verify a project plan for purposes of investment. A project assessment process may generate a series of weighted or unweighted scores associated with one or more points of interest. In certain embodiments, auditors206may determine the project assessment scores. In certain embodiments, stewards204or representatives may use one or more project assessment scores to assist in determining whether to vote for a project. In certain embodiments, projects may be verified. In certain embodiments, project verification may only be permitted on behalf of verified entities. In certain embodiments, project verification may provide a non-fungible indicator. The non-fungible indicator may indicate one or more of whether the plan has been verified, what the sources of the verification were, what project assessment scores are associated with the project (if any), and a hash/checksum for matching the verification information with the credential information. In certain embodiments, the verification may no longer be valid if key identity information in the plan is changed. Any relevant aspect of a project may be assigned a project assessment score without departing from the scope of the present disclosure. For example, and without limitation, return rate may be included as a project assessment score. Return rate may be defined as the average projected rate of return per time period over the lifespan of the payback. Alternatively, a contractually determined number may be used as the return rate. In certain embodiments, the return rate may be provided by the project developer202. In certain embodiments, the project developer202may further provide the period and/or frequency relevant to the return rate.
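The hash/checksum idea behind the non-fungible verification indicator described above may be sketched as follows: the indicator stores a digest of the key identity fields, so a later change to any of those fields makes the stored digest stop matching and the verification can be treated as invalid. The field names and JSON canonicalization are illustrative assumptions.

```python
import hashlib
import json

# Sketch: a deterministic digest of key identity fields lets a verification
# indicator detect any later change to those fields without storing them.

def identity_checksum(identity):
    """SHA-256 digest of the identity fields in a canonical (sorted) form."""
    canonical = json.dumps(identity, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verification_still_valid(indicator, current_identity):
    """A verification holds only while the identity fields are unchanged."""
    return indicator["checksum"] == identity_checksum(current_identity)
```

For example, an indicator minted against {"name": "A", "id": 1} stops validating as soon as the name field changes.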
In certain embodiments, an auditor206may be required to independently calculate and/or confirm one or more values provided by a project developer202. In certain embodiments, project expected success may be included as a project assessment score. Project expected success may be a percentage value representing the likelihood that the project will complete development and reach its expected return rate. In certain embodiments, project expected success may be calculated based on one or more of the project's utility score, the project developer's history and/or credentials, the project's development plan, the relevant geopolitical climate for the project, and other factors. In certain embodiments, utility may be included as a project assessment score. Utility may be used as a score of risk aversion. Furthermore, utility may provide insight for the project expected success score. In certain embodiments, project cost may be included as a project assessment score. In certain embodiments, a do-good value may be included as a project assessment score. A do-good value may be a financial representation (at the time of calculation) of the social value of the project. In certain embodiments, a do-good value may be thought of in terms of the value of carbon mitigation and/or carbon sequestration (per market credit rates) of the project. In certain embodiments, a do-good value may include communal and/or social factors which may be added and valued in economic terms. For example, and without limitation, the production of clean water as a byproduct of energy generation may be valued based on the difference in time (that is, the economic gain) that the community saves. In certain embodiments, do-good values may be accompanied by additional documentation to explain the rationale of the value. In certain embodiments, the process for generating a project assessment score may be carried out by multiple auditors206. 
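The assessment dimensions named above (return rate, expected success, cost, and do-good value) may be combined as sketched below. The weights and the composite score are assumptions for illustration; the disclosure states only that the scores may be weighted or unweighted.

```python
# Illustrative sketch: per-point project assessment scores a steward could
# inspect, plus an assumed weighted composite. The weight values are not
# from the disclosure.

def assessment_scores(return_rate, expected_success, cost, do_good_value):
    """Return the individual scores and an assumed weighted composite."""
    scores = {
        "return_rate": return_rate,            # e.g., 0.12 for 12% per period
        "expected_success": expected_success,  # decimal-form probability
        "do_good_ratio": do_good_value / cost if cost else 0.0,
    }
    weights = {"return_rate": 2.0, "expected_success": 3.0, "do_good_ratio": 1.0}
    scores["composite"] = sum(scores[k] * w for k, w in weights.items())
    return scores
```

A steward or representative could then compare composites across the votable projects, or inspect the individual scores directly.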
In certain embodiments, a fee may be paid to one or more auditors206in exchange for project assessment score services. A standard calculation may be used to determine a value of the fee based upon information provided about the project. In certain embodiments, an auditor's reputation score (including a general reputation score and/or an assessment-specific reputation score) may be impacted by the quality of its work in producing project assessment scores. In certain embodiments, contract creation may commence after project assessment has completed. In certain embodiments, contract creation may culminate in the creation of a deterministic smart contract for the funding and proceeds distributions of a project. One or more project developers202and one or more auditors206may work together toward a mutually acceptable contract. In certain embodiments, an auditor206may have access to a series of templates provided by the DAO contracting system to ease the creation of the contract. In certain embodiments, a contract may be evaluated by an algorithm of the DAO to guarantee the contract's deterministic nature. In certain embodiments, an auditor206may function in a dual role; an auditor may (1) create a deterministic smart contract; and (2) provide knowledge as to what project and/or contract traits are likely to garner votes from stewards204and representatives. A contract may establish an expected funding amount, which may be payable to the project developer202. The payment of funds may be structured in a lump sum or a tiered fashion based upon deterministic milestones. In certain embodiments, a return mechanism may be implemented, allowing for the return of funds back to the DAO treasury400. In certain embodiments, a project may be assigned a “ready to vote” status upon completion of contract creation. Upon completion of contract creation, an auditor206may be paid some or all of the contract fee. 
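The tiered, milestone-based fund release and the return mechanism described above may be sketched as follows; the class and milestone structure are illustrative assumptions, not the disclosure's contract format.

```python
# Illustrative sketch: a funding contract that pays out per deterministic
# milestone and can return all unreleased funds to the treasury.

class FundingContract:
    def __init__(self, milestones):
        # milestones: list of (description, amount) pairs, paid in order.
        self.milestones = list(milestones)
        self.next_milestone = 0

    def release_next(self):
        """Pay out the next milestone; returns the amount released."""
        _desc, amount = self.milestones[self.next_milestone]
        self.next_milestone += 1
        return amount

    def return_remaining(self):
        """Return mechanism: send all unreleased funds back to the treasury."""
        remaining = sum(a for _, a in self.milestones[self.next_milestone:])
        self.next_milestone = len(self.milestones)
        return remaining
```

A lump-sum payment is the degenerate case of a single milestone; the return mechanism covers the case where a project stops partway through its tiers.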
In certain embodiments, an auditor206may review the contract upon its completion. In certain embodiments, the relevant project developer202may review the contract after the auditor's (or auditors') reviews are completed. A project developer202may have the option to terminate the contract, accept the contract, or send the contract back to an auditor206for changes. If a project is terminated, the relevant project developer202may be refunded the remaining contract fee. If the contract is accepted, the project may be made available for voting and the relevant auditor(s)206may receive the remaining contract fee. In certain embodiments, a project developer202may provide a score based on the performance of the auditor(s)206, which may impact the reputation score(s) of the auditor(s)206. In certain embodiments, a project developer202may provide a segmented score, which may be subdivided into multiple parts. For example, and without limitation, a project developer202may provide separate scores as to an auditor's performance for credentials verification, identity verification, contract negotiation, contract drafting, etc. In certain embodiments, a contract may include time-based restrictions such that the contract cannot remain in an indeterminate state in excess of a predetermined period of time. A project developer202may not be permitted to block the final tiered milestone of funding in order to avoid initiation of the return process. FIG.5is a block diagram illustrating components of an example control server, according to one or more embodiments. Control server500may refer to any computing system for performing the algorithms and communications described herein, and may include processor502, storage508, interface506, and memory504. In some embodiments, control server500may refer to any suitable combination of hardware and/or software implemented in one or more modules to process data and provide the described functions and operations.
In some embodiments, the functions and operations described herein may be performed by a pool of control servers500. The algorithms described herein may be performed by one or more control servers, such as the control server illustrated inFIG.5. Memory504may refer to any suitable device capable of storing and facilitating retrieval of data and/or instructions. Examples of memory504include computer memory (for example, Random Access Memory (“RAM”) or Read Only Memory (“ROM”)), mass storage media (for example, a hard disk), removable storage media (for example, a Compact Disk (“CD”) or a Digital Video Disk (“DVD”)), database and/or network storage (for example, a server), and/or any other volatile or non-volatile, non-transitory computer-readable memory devices that store one or more files, lists, tables, or other arrangements of information. AlthoughFIG.5illustrates memory504as internal to control server500, memory504may be internal or external to control server500, depending on particular implementations. Also, memory504may be separate from or integral to other memory devices to achieve any suitable arrangement of memory devices for use in the DAO. Memory504is generally operable to store one or more applications510. Application(s)510generally refer to logic, rules, algorithms, code, tables, and/or other suitable instructions for performing a particular application described herein. Processor502is communicably coupled to memory504. Processor502is generally operable to execute application510stored in non-transitory form in memory504. Processor502may comprise any suitable combination of hardware and software to execute instructions and manipulate data to perform the described functions for control server500. In some embodiments, processor502may include, for example, one or more computers, one or more central processing units (“CPUs”), one or more microprocessors, one or more applications, and/or other logic. Storage508is communicably coupled to processor502.
In some embodiments, storage508may refer to any suitable device capable of storing and/or facilitating retrieval of data and/or instructions. Examples of storage508include computer memory (for example, RAM or ROM), mass storage media (for example, a hard disk), removable storage media (for example, a CD or a DVD), database and/or network storage (for example, a server), and/or any other volatile or non-volatile, non-transitory computer-readable memory devices that store one or more files, lists, tables, or other arrangements of information. Storage508may store data, such as contract data, device performance data, etc. In some embodiments, interface506is communicably coupled to processor502and may refer to any suitable device operable to receive input for control server500, send output from control server500, perform suitable processing of the input or output or both, communicate to other devices, or any combination of the preceding. Interface506may include appropriate hardware (for example, and without limitation, a modem, network interface card, etc.) and software, including protocol conversion and data processing capabilities, to communicate through a network or other communication system that allows control server500to communicate to other components of the DAO. Interface506may include any suitable software operable to access data from various devices such as components of nodes or other components such as energy trading/pricing/forecasting platforms. FIG.6is a stack diagram illustrating a software system for a DAO, according to one or more embodiments. In certain embodiments, the DAO may run on a TCP/IP protocol. The base layer600may comprise a blockchain or a DAG core608. The blockchain or DAG core608may be responsible for one or more of node consensus determination, transaction validation, ledger maintenance and/or creation, and peer-to-peer propagation.
The second layer602may be responsible for one or more protocols for off-chain solutions614, including, and without limitation, smart contracts and/or oracles. The application programming interface604(“API”) may be responsible for one or more API backends616(for example, one or more digital applications) and/or one or more decentralized protocols618(for example, and without limitation, one or more voting protocols). On-chain transactions applications and protocols612may be handled by second layer networks602and/or APIs604, and may transact with the blockchain and/or DAG core610. User and/or market applications606may be responsible for one or more exchanges620, one or more enterprise solutions622(for example, and without limitation, one or more virtual wallets), and/or one or more user applications624(for example, and without limitation, the front end of a digital application). The individual layers ofFIG.6's stack diagram are discussed in more detail below. Layer 1: Base Layer Running on top of TCP/IP, the base layer may be comprised of the platform core software. In certain embodiments, the base layer may be developed as an open-source software that may include one or more of a blockchain and/or DAG core, a proof-of-stake-based transaction validation, and one or more peer-to-peer propagation functions that run the DAO network. In certain embodiments, modification to the DAO's core software may be driven through one or more improvement proposals. In certain embodiments, improvement proposals may be voted on and approved by the governance process. Layer 2: On-Chain and Off-Chain Transactions Token transactions are activities that take place on the main blockchain or DAG. For example, and without limitation, transactions of virtual tokens between the DAO treasury and an entity's wallet may be token transactions. Solution transactions are activities that take place outside the main blockchain or DAG. 
For example, and without limitation, smart contracts used to implement agreements between the project developers202and the DAO may be solution transactions. Layer 3: APIs In certain embodiments, the API layer may be comprised of one or more of a server/client interface to backend contract generators, digital applications, and oracles. For example, and without limitation, an API may connect a project profiling engine to a smart contract generator. Layer 4: User and Market Applications In certain embodiments, the user-and-market-applications layer may comprise one or more high-level user applications. In certain embodiments, the one or more high-level user applications may support the market cases and/or use cases of the DAO. For example, and without limitation, wallets, payment processors, escrow, reserves, exchanges, and/or other applications may operate on the user-and-market-applications layer. In certain embodiments, a DAO may be initialized by a founding organization that creates a first plurality of virtual tokens within a virtual treasury. In certain embodiments, the founding organization may then initiate an ICO, thereby allowing public investment in the DAO. In certain embodiments, a decentralized application may be utilized to allow one or more traders200to interface with the virtual reserve402. The traders200may acquire virtual tokens by exchanging another currency (for example, and without limitation, a fiat currency or another virtual currency) for the reserve's virtual tokens. The reserve's virtual tokens need not be acquired in whole integers; rather, a trader may acquire a fraction of a virtual token. In certain embodiments, it may be necessary for a trader200to register a virtual wallet before acquiring the virtual currency. In certain embodiments, a trader200may send the virtual tokens to one or more other traders200in one or more transactions.
In certain embodiments, a trader200may exchange one or more voting shares for virtual tokens. A liquidity provider may allow a trader to exchange virtual tokens for another currency (for example, and without limitation, a fiat currency or another virtual currency). In certain embodiments, projects may be funded from the virtual treasury400via a voting process. A vote may be cast by exchanging virtual tokens for VT with the DAO's reserve in a process called “staking.” Any trader200who stakes their virtual tokens to acquire VT is considered a steward204. The act of staking increases the seriousness and gravity of the voting process, as stewards' virtual tokens will be illiquid throughout the staking process. In certain embodiments, the exchange rate between virtual tokens and VT may be based on a logarithmic algorithm to discourage a plutocracy. In certain embodiments, exchanges between virtual tokens and VT may be accompanied by a tax in order to discourage Sybil attacks. Once staked, a steward204may commit their VT to any project that is “ready for vote” via a funding contract. In certain embodiments, a registry of one or more projects may be accessible to one or more stewards204. In one or more embodiments, the value of the VT may appreciate over time once it is assigned to a project. As VT is assigned to a project, its voting weight and illiquid monetary value may increase over time. This mechanism of increasing the weight and value of project-assigned VT may be referred to as “conviction voting,” and it may further discourage Sybil attacks. In certain embodiments, stewards204may be free to unassign their VT at any time; however, withdrawn VT may retain only its principal value (that is, the value originally assigned by a steward204), and the accumulated weight and value from conviction voting may be lost.
In other embodiments, the accumulated weight and value from conviction voting may not be immediately lost upon withdrawal of VT from assignment; rather, it may depreciate over time. In certain embodiments, withdrawn VT may depreciate at a rate faster than the rate of conviction voting appreciation. In certain embodiments, a steward204may go through an identification verification process to become a representative. A steward204seeking to become a representative may be able to skip the identification verification process if prior verification has already been performed (for example, and without limitation, if the steward204was previously verified to become an auditor206). In certain embodiments, the governors210may approve a steward204to become a representative. The decentralized application may add confirmed representatives to a registrar. In certain embodiments, a steward204may be required to pay virtual tokens to become a representative. Representatives are stewards204that may be delegated VT from other stewards204to vote on the staking stewards' behalf; in this way, the voting process of the DAO may resemble a republic (that is, a system in which members elect representatives to vote on their behalf), a direct democracy (that is, a system in which members vote directly on behalf of themselves), or any combination thereof. In certain embodiments, the mechanism by which representatives vote on stewards' behalf may be referred to as “liquid democracy.” In certain embodiments, a representative may be assigned a reputation score based on the historic performance of projects for which the representative has voted in the past. In certain embodiments, a representative may be removed from representative status based on one or more criteria (for example, and without limitation, a poor reputation score or other undesirable behavior); in other embodiments, representative status may be permanent. 
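The staking exchange and the conviction-voting appreciation/depreciation described above might be sketched as follows. All constants (the fee, growth, and decay rates) and function names are illustrative assumptions, not values from the specification; the key properties are that the exchange curve is concave (diminishing voting power per token, discouraging plutocracy) and that withdrawn VT decays faster than assigned VT appreciates.

```python
import math

# Illustrative sketch (assumptions, not the specification's algorithm)
# of two mechanisms described above: a logarithmic virtual-token -> VT
# exchange with a flat per-exchange tax, and "conviction voting," under
# which assigned VT appreciates and withdrawn VT depreciates back
# toward its principal at a faster rate.

EXCHANGE_FEE = 10.0   # assumed flat per-exchange tax, in virtual tokens
GROWTH_RATE = 0.01    # assumed per-period appreciation while assigned
DECAY_RATE = 0.03     # assumed per-period depreciation after withdrawal

def stake_to_vt(tokens_staked: float) -> float:
    """Convert staked virtual tokens to VT on a concave (log) curve."""
    after_fee = max(tokens_staked - EXCHANGE_FEE, 0.0)
    return math.log1p(after_fee)

def assigned_weight(principal: float, periods_assigned: int) -> float:
    """Voting weight of VT after being assigned to a project."""
    return principal * (1 + GROWTH_RATE) ** periods_assigned

def withdrawn_weight(weight: float, principal: float, periods: int) -> float:
    """Withdrawn VT decays toward (but never below) its principal."""
    return max(weight * (1 - DECAY_RATE) ** periods, principal)

# Diminishing returns: ten times the stake yields far less than 10x VT.
assert stake_to_vt(10_000) < 10 * stake_to_vt(1_000)

# Conviction grows while assigned and decays faster once withdrawn.
w = assigned_weight(100.0, 50)
assert w > 100.0
assert withdrawn_weight(w, 100.0, 100) == 100.0
```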
In certain embodiments, a project may be approved upon acquiring an amount of VT equal to or exceeding a predetermined funding goal. In certain embodiments, approved projects may be removed from a votable list of projects held in a registrar. One or more stewards204and/or representatives who voted to approve a project that was ultimately approved may be recorded for the purpose of modifying their reputation score. In certain embodiments, a steward204who assigned VT to approve a project may receive the principal amount of VT assigned upon approval of the project. In certain other embodiments, a steward204who assigned VT to approve a project may receive the full, accumulated value of VT (including value gained from conviction voting) upon approval of the project. In certain embodiments, stewards204who approved a project may receive virtual tokens as an interest payment proportional to the VT staked or as a proportional yield associated with proceeds from project returns. In certain embodiments, a steward204may be permitted to exchange its VT for an equivalent value of virtual tokens at any point. In certain embodiments, when a steward204votes through a representative for a project that is later approved, a portion of the rewards associated with the voting may go to the representative and a portion of the rewards associated with the voting may go to the steward204. In certain embodiments, the steward's share of rewards may be larger than the representative's share. In certain embodiments, new projects may be proposed by project developers202. In certain embodiments, project developers may pay virtual tokens to request a funding contract. Auditors206may determine whether the funding contract should be granted. In certain embodiments, project developers202may pay additional virtual tokens to have their requests for funding contracts reviewed by auditors206in an expedited fashion. 
In certain embodiments, two or more auditors206may be required to approve a funding contract. In certain embodiments, auditors206may determine whether to approve a project by using one or more criteria (for example, and without limitation, an auditor206may verify the project and project developer's credentials, perform a project feasibility check, and/or review documentation for problems). In certain embodiments, auditors206and project developers202may work together to adjust project funding contracts. Once the assigned auditors206and the project developer202agree that a funding contract is acceptable, the project may be listed on a registrar as a “fundable project.” In certain embodiments, auditors206may be compensated via a fee arrangement that pays out at successive milestones. For example, and without limitation, an auditor206may be paid 15% of its fee after verifying the relevant project developer's identity, 40% after completing a feasibility study, 15% after creating a contract, and 30% after the contract is accepted. In certain embodiments, a relevant project developer202may pay virtual tokens to repeat the audit process in the event that an auditor206has rejected a proposed funding contract. In certain embodiments, projects associated with sub-projects or parent projects that have already undergone a successful audit may be allowed to skip one or more phases of the audit process. In certain embodiments, an oracle (that is, an off-chain resource used for verification and validation of contracts) may be used to communicate data to one or more smart contracts. An oracle may collect data itself or receive data from one or more third-party sources. In certain embodiments, data from a project may be communicated by an oracle to a smart contract. In certain embodiments, the data may be a metric by which the success of the project is determined for the purpose of generating a representative's reputation score. 
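The milestone fee schedule from the example above (15% after identity verification, 40% after the feasibility study, 15% after contract creation, 30% after contract acceptance) can be sketched as a cumulative payout calculator. The function name and integer-percentage representation are assumptions for illustration.

```python
# The example milestone fee schedule described above, expressed as
# integer percentages per milestone, in completion order:
# identity verification, feasibility study, contract creation,
# contract acceptance.
FEE_SCHEDULE = [15, 40, 15, 30]

def auditor_paid_so_far(total_fee: float, milestones_done: int) -> float:
    """Cumulative fee owed after completing milestones in order."""
    return total_fee * sum(FEE_SCHEDULE[:milestones_done]) / 100

assert auditor_paid_so_far(1000.0, 2) == 550.0   # identity + feasibility
assert auditor_paid_so_far(1000.0, 4) == 1000.0  # full fee after acceptance
```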
The data may be any useful data, including, and without limitation, an amount of energy produced, a level of vegetation density, video surveillance footage or other evidence of carbon sequestration, documentation of land ownership, documentation of resource production, other useful data, or any combination thereof. In certain embodiments, if payment for a project is insufficient to cover costs, the DAO may transition the project to an “insufficient payment” state. The DAO may update the project status to indicate the insufficient payment. A project developer202of an approved project may provide additional payment to the smart contract associated with the insufficient payment to resolve the situation. In certain embodiments, a trader200may begin the process of becoming an auditor206by completing an associated form and/or paying a fee with virtual tokens. In certain embodiments, an existing auditor206may initiate an identification process to determine the identity of the trader200seeking to become an auditor206. In certain embodiments, the existing auditor206may provide identification information to one or more smart contracts. In certain embodiments, two or more auditors206may be needed to approve a trader's request to become an auditor206. In certain embodiments, if the two auditors' identification results match, each auditor206may be paid a portion (for example, and without limitation, 25%) of the auditors' fee. In certain embodiments, if the two auditors' identification results do not match, then a third auditor206may be assigned to verify the identity. If the third auditor's identification results match one of the first two auditors' identification results, the auditors206with matching identification results may be paid the portion of the auditors' fee and the nonmatching auditor206may lose its right to a fee.
If the third auditor's identification results do not match either of the first two auditors' identification results, subsequent auditors may be added until two auditors' identification results match. After identification, the trader200may confirm the identification. Following identification of a trader200seeking to become an auditor206, the trader may undergo a certification process. Once certification is complete, certification data may be provided to one or more smart contracts. In certain embodiments, two or more auditors206may be needed to approve a trader's request to become an auditor206. In certain embodiments, if the two auditors' certification results match, each auditor206may be paid a portion (for example, and without limitation, 25%) of the auditors' fee. In certain embodiments, if the two auditors' certification results do not match, then a third auditor206may be assigned to certify the trader200. If the third auditor's certification results match one of the first two auditors' certification results, the auditors206with matching certification results may be paid the portion of the auditors' fee and the nonmatching auditor206may lose its right to a fee. Auditors206are responsible for validating, verifying, and assessing projects and entities within the DAO. To ensure fairness, auditors206must stake virtual tokens in a proof-of-stake method to win the chance to work on contracts. In certain embodiments, an auditor206may be assigned to work on a smart contract based on a weighted system with one or more criteria. In certain embodiments, the criteria may comprise one or more of stake size, auditor score, certifications associated with the relevant request, and a randomly generated value. A randomly generated value may be included within the selection algorithm in order to facilitate an equitable distribution of work.
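The auditor-consensus rule (add auditors until two results agree; matching auditors are paid, non-matching auditors forfeit) and the weighted work-assignment criteria described above might be sketched as follows. All names and weights are illustrative assumptions, and the weighting presumes stake and auditor score are normalized to [0, 1].

```python
import random

# Sketch of two mechanisms described above (assumptions, not the
# specification's implementation):
#  1. resolve_by_matching: auditors report results in order until two
#     agree; matching auditors are paid, non-matching auditors forfeit.
#  2. select_auditor: work is awarded by a weighted score over stake
#     size, auditor score, relevant certifications, and a random value
#     that helps spread work equitably.

def resolve_by_matching(results):
    """results: ordered (auditor, result) pairs. Returns
    (agreed_result, paid_auditors, forfeiting_auditors)."""
    seen = {}        # result value -> auditor who first reported it
    consulted = []   # auditors consulted so far, in order
    for auditor, result in results:
        if result in seen:
            paid = [seen[result], auditor]
            forfeits = [a for a in consulted if a != seen[result]]
            return result, paid, forfeits
        seen[result] = auditor
        consulted.append(auditor)
    raise ValueError("no two auditors agreed; assign another auditor")

def select_auditor(candidates, required_cert, rng=random.random):
    """Return the name of the highest-weighted candidate."""
    best_name, best_weight = None, float("-inf")
    for c in candidates:
        weight = (0.4 * c["stake"]                       # stake size
                  + 0.3 * c["score"]                     # reputation
                  + 0.2 * (required_cert in c["certs"])  # certification
                  + 0.1 * rng())                         # equitable spread
        if weight > best_weight:
            best_name, best_weight = c["name"], weight
    return best_name

# Two auditors disagree; a third matches the first, so the first and
# third are paid and the second forfeits:
outcome, paid, forfeits = resolve_by_matching(
    [("A1", "id-x"), ("A2", "id-y"), ("A3", "id-x")])
assert outcome == "id-x" and paid == ["A1", "A3"] and forfeits == ["A2"]
```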
An auditor may stake virtual tokens to indicate its level of interest in processing contracts and in order to increase its odds of being awarded one or more smart contracts to validate, verify, and/or assess. In certain embodiments, auditors206may be assigned a reputation score based on their performance in validating, verifying, and/or assessing projects and/or entities. A DAO may comprise one or more servers. In certain embodiments, a DAO may comprise two servers. A first server may be operable to store a plurality of first virtual tokens, wherein each one of the plurality of first virtual tokens may be associated with fiscal value. Fiscal value may take the form of one or more fiat currencies (for example, and without limitation, U.S. dollars) and/or one or more other virtual tokens. Those skilled in the art will understand that the values of the first virtual tokens need not directly follow the value of a fiat currency; rather, the first virtual tokens may be associated with one or more fiat currencies insofar as one or more fiat currencies may be used to purchase a quantity of the first virtual tokens. A second server may be operable to store a plurality of second virtual tokens, wherein each one of the plurality of second virtual tokens may correspond to a unit of voting power (for example, and without limitation, VT). In certain embodiments, a DAO may operate through a plurality of nodes. In certain embodiments, any role within a DAO may have one or more nodes associated with that role. 
For example, and without limitation, auditors206may be associated with one or more auditor nodes; project developers202may be associated with one or more project developer nodes; validators208may be associated with one or more validator nodes; stewards204may be associated with one or more steward nodes; protocol developers212may be associated with one or more protocol developer nodes; governors210may be associated with one or more governor nodes; and/or other roles may be associated with one or more other nodes. FIG.7is a process diagram illustrating a method for initiating a project within a DAO, according to one or more embodiments. In certain embodiments, a project developer202may request an account using a digital application700. The project developer202may fill out project details with supporting documentation702. An auditor206may verify the developer's identity using a second digital application704. In certain embodiments, step702and step704may occur at or around the same time. The project developer202may then submit a completed project form706. The project developer202may respond to assessment queries708from an auditor206while the auditor assesses the project710. Once the auditor206has completed project assessment712, the project developer202may request creation of one or more smart contracts714. The project developer202may respond to assessment queries718from an auditor206while the auditor produces one or more smart contracts716. Once the auditor206has completed the one or more smart contracts720, the project developer202may review the one or more smart contracts and request funding722. Voters (such as stewards and representatives204) may vote to approve the project, the DAO may verify the votes, and the DAO treasury400may fund the project724. Once funding is received, the project developer202may begin development using the new funds726. The steps listed in this paragraph are purely exemplary and non-limiting.
It is within the ability of one of ordinary skill in the art and with the benefit of the present disclosure to select one or more appropriate steps for initiating a project. Moreover, steps may be added, omitted, or performed in a different sequence without departing from the scope of the present disclosure. FIGS.8A-8Dare example user interfaces for DAO project management, according to one or more embodiments. In certain embodiments, an overview page800may be used as a base page from which a user may view, enter, and/or search for information about a DAO project. In certain embodiments, a user may view, enter, and/or search for information regarding the status of a DAO project using a development status page802. In certain embodiments, a user may view, enter, and/or search for information regarding expenses related to operations and maintenance of a DAO project using an operations and maintenance page804. In certain embodiments, a user may view, enter, and/or search for information regarding project financing using a finance details page806. User interfaces800,802,804, and806are purely exemplary and non-limiting. Other user interfaces may be used without departing from the scope of the present disclosure, and it is within the ability of one skilled in the art and with the benefit of the present disclosure to select a suitable user interface. While various embodiments of a DAO were provided in the foregoing description, those skilled in the art may make modifications and alterations to these aspects without departing from the scope and spirit of the invention. For example, it is to be understood that this disclosure contemplates that, to the extent possible, one or more features of any aspect can be combined with one or more features of any other aspect. Accordingly, the foregoing description is intended to be illustrative rather than restrictive. 
The invention described hereinabove is defined by the appended claims, and all changes to the invention that fall within the meaning and the range of equivalency of the claims are to be embraced within their scope. | 77,774 |
11943356 | DETAILED DESCRIPTION Embodiments of the present invention allow for linked user identity for persistent login. Such linked identity may allow different entities to maintain control over their respective sets of user data, while providing a streamlined user experience that avoids much of the repetitive need to log in to different services with different login credentials (e.g., during periods of heavy use). FIG.1Aillustrates an exemplary network environment in which a system of linking identity for persistent login may be implemented. The illustrated network environment includes a communication network105and the respective systems (110A and110B) of two different entities (Entity1and Entity2), as well as identity management services140A-N and user devices145A-N. Further, each entity system110A-B may include web server(s)115A-B, web application(s)120A-B, policy database(s)125A-B, OAuth proxy(ies)130A-B, and persistent login proxy(ies)135A-B. Each communication (e.g., between systems110, services140, and devices145) can occur over one or more communication network(s)105. Any combination of open or closed networks can be included in the communication network105. Examples of suitable networks include the Internet, personal area networks, local area networks (LAN), wide area networks (WAN), wireless local area networks (WLAN), and other networks known in the art. The communication network105can further be inclusive of intranets, extranets, and combinations thereof. Examples of communication network service providers are the public switched telephone network, a cable service provider, a provider of digital subscriber line (DSL) services, or a satellite service provider. Communications network105allows for communication between the various components of network environment. Each of the different systems110A-B may be controlled by a specific entity.
As used herein, entity may refer to a business (including online business), organization, brand, or other type of body that exercises control over a respective system110. Entity systems110A-B may vary by entity and include any number and combination of computer system components, servers, apparatuses, and devices associated with performing operations of the associated entity. As illustrated inFIG.1A, however, entity systems110A-B may include components associated with implementing persistent login. Web server115may be inclusive of any web server or servers known in the art, as well as other computing devices that may include standard hardware computing components such as network and media interfaces, non-transitory computer-readable storage (memory), and processors for executing instructions or accessing information that may be stored in memory. The functionalities of multiple servers may be integrated into a single server or may be distributed over multiple servers and associated devices. In an exemplary embodiment, web server115may support a variety of websites, webpages, and web-based services associated with the respective entity. For example, an entity that is an online business may offer users access to business-related data and services via online sites, pages, and portals provided by web server115. Web application120may be inclusive of any type of application known in the art for executing functions in an online or web setting. Web application120may be installed on and/or executable on web server115or other associated device, system, or server to perform specified web or online operations for the associated entity system110. In exemplary embodiments, web application120may be executable to manage client or user login for the entity system110. As mentioned above, the entity may be an online business, and such online business may wish to provide individual users with access to their respective data in a secure manner.
A web application120may be used to elicit user credentials (e.g., user name, password) in accordance with login policies and to authenticate such credentials before providing access to certain data or services. Policy database125may be inclusive of any storage structure maintained in one or more memories, which may be local, remote, or distributed. In exemplary embodiments, policy database125may include any variety of policies (e.g., rules, requirements, etc.) associated with account linking and persistent login. An exemplary policy may specify certain authentication requirements in order to link accounts, to access certain account data, or to call specific services and APIs. Such policies may be determined by the associated entity, such that each entity may maintain different custom sets of policies governing the extent of access and types of authentication requirements over their respective data and services. As such, the ability to access and execute certain data and services may be controlled in accordance with the policies specified by the associated entity and maintained in policy database125. The entity may further update and implement policies dynamically and in real-time by updating policy database125. While different existing access authorization and delegation systems may be used, one specific example (OAuth) is referenced herein. As such,FIGS.1A-Billustrate an OAuth proxy130, which may be used in accordance with, inter alia, OAuth 2.0 standards to link two different user accounts, each associated with one of two systems110A-B (e.g., controlled by different entities). OAuth proxy130allows different systems110A-B to use the OAuth framework to exchange access tokens, which are the basis of linking the associated user accounts. In exemplary embodiments, different access tokens may be exchanged between two different entities in relation to accounts associated with the same user.
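The per-entity policy lookup described above might be sketched as follows. The dictionary layout, resource names, and authentication-factor labels are illustrative assumptions; the point is that each entity's own policy database determines the authentication requirements for each protected resource, with denial as the default.

```python
# Hedged sketch of consulting a per-entity policy database before
# granting access. Field names and policy contents are assumptions.

POLICY_DB = {
    "account-balance": {"required_auth": {"password", "otp"}},
    "public-profile": {"required_auth": set()},
}

def access_allowed(resource: str, satisfied_auth: set) -> bool:
    """Grant access only if every required factor has been satisfied."""
    policy = POLICY_DB.get(resource)
    if policy is None:
        return False  # no policy on record -> deny by default
    return policy["required_auth"] <= satisfied_auth

assert access_allowed("public-profile", set())
assert not access_allowed("account-balance", {"password"})
assert access_allowed("account-balance", {"password", "otp"})
```

Because each entity controls its own `POLICY_DB` equivalent, updating that store is enough to change access requirements dynamically, as the description notes.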
Each access token may be associated with a different scope of authorization (e.g., types of data or services). The access token(s) may thereafter be used by the first entity system110A to call one or more APIs (application programming interfaces) associated with the second entity system110B in order to access certain data and services available to the requesting user (e.g., user of a user device145). Persistent login proxy135A-B may use the account linkage established by OAuth proxy130in a manner that persists over time. Whereas access tokens may be associated with a period of expiration, persistence of the login may be achieved by refreshing the access token. As described in detail herein, such refresh may rely on a refresh token issued by the entity system110B being linked to another entity system110A. Rather than issuing a single access token, therefore, persistent login proxy135B may issue and manage a set of tokens, which may include at least an access token, a refresh token, and an identity token. The set of tokens may be provided to the first entity system110A. A user (e.g., user of user device145) that is authenticated may therefore link their user account associated with a first entity system110A with their user account associated with a second entity system110B, such that certain data and services available from the second entity system110B may be accessible via a user landing page or portal of the first entity system110A. Persistent login proxy135may further be configured to encrypt and decrypt tokens (and associated strings) exchanged between different entity systems110. The network environment above may be deployed between a first entity system110A and second entity system110B. Such deployment may include registering the first entity system110A (and associated endpoint(s)) with the second entity system110B (and associated endpoint(s) and API(s)).
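The refresh-based persistence described above follows the general shape of the OAuth 2.0 refresh-token grant: hold a token set (access, refresh, identity), and shortly before the access token expires, present the refresh token to the issuing entity for a new access token. The sketch below is an assumption-laden illustration; the class, function names, and the stubbed token endpoint are hypothetical, and in practice `issue_tokens` would be an HTTPS call to the second entity's registered token endpoint.

```python
import time

# Hedged sketch of the persistent-login refresh described above. The
# issuing server is stubbed; all names here are illustrative.

class TokenSet:
    def __init__(self, access, refresh, identity, expires_at):
        self.access, self.refresh, self.identity = access, refresh, identity
        self.expires_at = expires_at  # epoch seconds

def refresh_if_needed(tokens, issue_tokens, now=None, skew=60):
    """Return a valid token set, refreshing shortly before expiry."""
    now = time.time() if now is None else now
    if now < tokens.expires_at - skew:
        return tokens  # still valid; no round trip needed
    # issue_tokens stands in for the second entity's token endpoint,
    # which exchanges the refresh token for a fresh access token.
    return issue_tokens(tokens.refresh)

def fake_endpoint(refresh_token):
    # Stub: a real endpoint would validate the refresh token first.
    return TokenSet("access-2", refresh_token, "id-1", expires_at=2000)

t1 = TokenSet("access-1", "refresh-1", "id-1", expires_at=1000)
assert refresh_if_needed(t1, fake_endpoint, now=500).access == "access-1"
assert refresh_if_needed(t1, fake_endpoint, now=990).access == "access-2"
```

The `skew` margin reflects a common design choice: refreshing slightly before expiry avoids failed API calls racing against the expiration instant.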
The first entity (e.g., the entity that makes the call for data from another entity) may add or build the features and components (e.g.,115-135) into their respective system110A to support account linkage and persistent login, as well as provide certain data (e.g., user interface, screen displays) to the second entity. In some embodiments, code snippets or software development kits (SDKs) may be provided to facilitate build or deployment of persistent login implementations. Identity management services140A-N may be inclusive of any service provider systems known in the art for managing identity. Some implementations may include identity management services140A-N specific to one or more entities. In some embodiments, however, identity management services140A-N may be operated by third party identity management entities. Such identity management services140A-N may be used to manage identity in accordance with OAuth standards. User devices145A-N may include any type of computing device or system used by a user to communicate with entities1and2. Such user devices145A-N may include one or more integrated circuits, input devices, output devices, data storage devices, and/or network interfaces, among other things. The integrated circuits can include, for example, one or more processors, volatile memory, and/or non-volatile memory, among other things. The input devices can include, for example, a keyboard, a mouse, a key pad, a touch interface, a microphone, a camera, and/or other types of input devices. The output devices can include, for example, a display screen, a speaker, a haptic feedback system, a printer, and/or other types of output devices. A data storage device, such as a hard drive or flash memory, can enable the computing device to temporarily or permanently store data. A network interface, such as a wireless or wired interface, can enable the computing device to communicate with a network.
Examples of user devices145A-N may include desktop computers, laptop computers, server computers, hand-held computers, tablets, smart phones, personal digital assistants, digital home assistants, as well as machines and apparatuses in which a computing device has been incorporated. FIG.1Billustrates exemplary information flows within a specific implementation of the system ofFIG.1A. As illustrated inFIG.1B, the network environment may include a first entity site (e.g., “Client.com site” that may be provided by web server115) in communication with a web application120(e.g., “Responsive Login Web App”) and an e-commerce application programming interface (e.g., “Ecom API CustID”) associated with a second entity110B. As noted in relation toFIG.1A, communications within the network environment may use presently available communication networks105, including local, proprietary networks (e.g., an intranet) and/or may be a part of a larger wide-area network (e.g., Internet). In some embodiments, a persistent login proxy135A-B may be associated with the first entity and the second entity. Exemplary systems may further use architectures for delegating access authority (e.g., OAuth); such systems may therefore include a proxy130(e.g., OAuth Proxy) that facilitates associated functions of the architectures. The workflows illustrated inFIGS.1A-Bare discussed in further detail with respect to the screenshots presented inFIGS.2A-2Band the method ofFIG.3. FIGS.2A-2Gillustrate a variety of screenshots that may be generated and presented to a user during the method illustrated inFIG.3, which is a flowchart illustrating an exemplary method of persistent login. Such method may include linking the accounts of two (or more) different entities that each control their respective security processes and resources and then providing for persistent login based on the linked accounts.
The persistent login and associated account linkage allows a user to access and manage their account information from different accounts maintained by different entities in a streamlined and secure manner. Moreover, each entity may continue to establish and enforce their own respective security policies as to the different types of account information and functions. The example illustrated inFIGS.2A-Gallows for a user to link accounts associated with an e-commerce website and a funding source (e.g., bank). FIG.2Ais an exemplary screenshot of a webpage associated with a first entity. In the illustrated example, the first entity may be an online business. The illustrated webpage may be presented to a user that has already logged in and been authenticated by the first online business (“Amazon.com”). In particular, the webpage illustrated in the screenshot ofFIG.2Aincludes a link (“Link your Amazon.com and Synchrony account”) that offers the user the option of linking their e-commerce account with their respective account associated with a second entity (or online business) that may have partnered with the first online business. In some embodiments, the user may be presented with options for linking to multiple different accounts (of different entities). In some embodiments, an entity may be associated with multiple brands and platforms (e.g., mobile application, mobile website, desktop website). The first entity may wish to manage such brands or platforms together (e.g., universal control) or separately. In some instances, the second entity may require separate channels to be used for each brand or platform. In such cases, accounts associated with each brand or platform may be separately linked to a second entity account; such account linking may therefore be controlled at the brand or platform level, and persistent login may be separately controlled in relation to specific brands or platforms. 
Terminating persistent login for one brand or platform, for example, may be accomplished without terminating persistent login for other brands or platforms associated with the terminating entity. Upon activation of the link, the user may be transported to another website. The screenshot ofFIG.2Billustrates an exemplary notification to the user regarding their departure from the first online business website115A and transport to the second online business website115B or web application120B. As provided in the notification, the transport to the second online business website allows the user to provide authentication credentials at the second online business, which therefore does not have to rely entirely on the first online business to manage user identity. FIG.2Cillustrates an exemplary login webpage associated with the second online business. The URL that transported the user from the webpage ofFIG.2Ato the present webpage ofFIG.2Cmay include metadata that conveys information to the second online business about the user account with the first online business. In some instances, additional security measures or checks may be implemented to ensure the correct accounts are linked. For example, the first online business may need to provide certain account data (e.g., last 4 digits of an account number of the second business) before the linking process is allowed to proceed. The webpage illustrated in the screenshot ofFIG.2Casks the user to provide credentials (e.g., user ID and password) that are associated with the second online business (e.g., Synchrony Bank). Such credentials are specific to the second online business and may have been established when the user opened or otherwise registered an account with the second online business. In some embodiments, the second online business may perform any variety of security checks known in the art for authenticating and verifying the user and their authorizations (e.g., device checks, validate accounts, rules enforcement). 
For example, the second entity system110B may perform a device check of user device145and develop a risk profile in real-time based on different factors present during the call to link accounts. Depending on the level of risk indicated by the risk profile, additional security measures (e.g., multi-factor authentication) may be required before proceeding with linking the accounts. Risk factors considered may include whether the device has been used before in association with access to the second entity account, geographic location, prior logins, login frequency, and other risk factors known in the art. Whether a risk profile calls for additional security measures (and which ones) may be defined by the second entity (e.g., and maintained in policy database125B). As such, the second entity may continue to update and tune the rules over time. In some embodiments, data regarding the risk factors and risk profiles may be tracked over time, along with associated rules adjustments. Artificial intelligence (AI) and machine learning (ML) techniques may further be applied to such data in order to refine rules adjustments in real time based on patterns and trends detected within the risk factors and risk profiles. For example, historical data may be tracked as to risk associated with user devices145from different locations, and comparisons may indicate a spike in persistent login requests coming from user devices145in a particular region. Such spike may be indicative of higher levels of risk, and the rules may be adjusted in real-time to require stepped-up authentication requirements in response to the same. While the user may be presented with the screenshots presented inFIGS.2A-2G, the components ofFIG.1(e.g., persistent login proxies135A-B and OAuth proxy130) may be operating in conjunction to implement account linkage and persistent login in accordance with the method illustrated inFIG.3.
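The real-time risk profiling described above can be sketched as a weighted score over the named factors, with an entity-defined threshold that triggers step-up authentication. The factor names, weights, and threshold below are all invented for illustration; a production system would define and tune these through its policy rules (and possibly the ML techniques mentioned).

```python
# Toy risk-scoring sketch of the real-time device check described above.
# Weights and the step-up threshold are illustrative assumptions, not
# values from any actual deployment.
def risk_score(known_device: bool, geo_matches_history: bool,
               recent_failed_logins: int) -> int:
    score = 0
    if not known_device:
        score += 40          # unrecognized device raises risk
    if not geo_matches_history:
        score += 30          # unusual geographic location
    score += min(recent_failed_logins, 5) * 10   # capped contribution
    return score

def requires_step_up(score: int, threshold: int = 50) -> bool:
    """Above the (entity-defined) threshold, demand e.g. multi-factor auth."""
    return score >= threshold
```

Keeping the threshold a parameter mirrors the point made above: the second entity, not the scoring code, decides when additional security measures apply, and can adjust that decision over time.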
As discussed herein, there may be time limitations on persistent login so as to avoid security risks and otherwise enforce and maintain compliance with security requirements, rules, and policies of the different entities. The method for persistent login allows, however, for logins to persist under certain circumstances, while reducing the number of times the user is required to re-authenticate and/or log in again. In step310ofFIG.3, an authorization code may be generated based on the authentication of the user in relation to the first entity. An authorization code may be generated (e.g., in accordance with standard OAuth 2.0 specifications) by web application120or other component (e.g., persistent login proxy135A) associated with the first entity. Such authorization code may be valid for a specified time period (e.g., five seconds). The authorization code may be provided (e.g., via OAuth proxy130) to the second entity (e.g., persistent login proxy135B). In step320ofFIG.3, the second entity may issue a set of tokens in response to the authorization code. Such set of tokens may include an access token, refresh token, and identity token. The token set may be provided to the first entity when the first entity website115A makes a service call to the second entity. Such service call may include exchanging the authorization code for the set of tokens, which may thereafter be validated and stored by the first entity in association with the user account (e.g., Amazon.com user account). The first entity may thereafter use the issued (and validated) tokens to access and retrieve information maintained by the second entity. Such access may be streamlined, as the user may no longer be required to enter and re-enter credentials for the second entity as often. Based on the tokens linking the respective user accounts of the first and second entities, the user may be presented with a webpage corresponding to the screenshot ofFIG.2D.
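Steps 310-320 can be sketched as an in-memory stand-in for the code-for-tokens exchange: a short-lived, single-use authorization code is swapped for the token set. Real deployments would perform this over the OAuth 2.0 token endpoint; the function names and in-memory code store below are assumptions made for the sketch.

```python
# Sketch of the authorization-code exchange of steps 310-320. A code is valid
# for a few seconds (five, per the description) and may be exchanged exactly
# once for a set of tokens. All names are illustrative.
import secrets
import time

_pending_codes = {}   # code -> issue time (in-memory stand-in for the proxy's state)
CODE_TTL_S = 5        # e.g., five seconds

def issue_authorization_code(now=None) -> str:
    """Step 310: generate a short-lived code after the user authenticates."""
    code = secrets.token_urlsafe(24)
    _pending_codes[code] = time.time() if now is None else now
    return code

def exchange_code_for_tokens(code: str, now=None) -> dict:
    """Step 320: swap a valid, unexpired code for a token set; codes are single-use."""
    now = time.time() if now is None else now
    issued = _pending_codes.pop(code, None)   # pop enforces single use
    if issued is None or now - issued > CODE_TTL_S:
        raise PermissionError("authorization code invalid or expired")
    return {
        "access_token": secrets.token_urlsafe(32),
        "refresh_token": secrets.token_urlsafe(32),
        "identity_token": secrets.token_urlsafe(32),
    }
```

The two properties the sketch enforces (tight expiry and single use) are what make the authorization code safe to pass through a browser redirect: even if observed, it is useless moments later or after its one exchange.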
As illustrated, the screenshot ofFIG.2Dindicates that the first online business account (e.g., Amazon.com user account for the user) has been linked to the second online business account (e.g., Synchrony user account for the user). In addition, secured, real-time information from the respective user accounts may be used to populate a webpage associated with the first entity. The linked accounts may further allow the user to access certain secured services (e.g., adding another authorized user) associated with the second online business. As noted before, the webpage may be associated with the first online business, such as a user account webpage that may be presented to the user who is logged into the first online business. Notwithstanding and unlike prior implementations (e.g., traditional SSO), the second online business retains a degree of control over the types and extent of the information and services made available. Over a period of time, the user may pause or cease interacting with the webpage (e.g., goes to another website). To allow the user to access the same set of data again (e.g., from both accounts) and to maintain streamlined access, the accounts may remain linked for a predetermined period of time based on the validity of the issued tokens. Each token may have a respective period of time during which the token is valid. After expiration of the associated period of time, however, the tokens are no longer valid and may need to be replaced or otherwise refreshed. In step330ofFIG.3, an access request (including the access token) may be sent from the first online business to the second online business. Such access request may be sent when a user at the first online business website seeks to access information associated with the second online business account. In an exemplary workflow, the first online business may verify that the current secured session is still active (e.g., whether the user remains logged in at Amazon.com).
If not, the user may be requested to log into the first online business account by providing the credentials specific to the first online business. Once the user has been authenticated with the first online business, the website may make a call (associated with the access token) for certain information or services from the second online business. In step335ofFIG.3, it may be determined whether the access token is valid. Such check may be performed once the user has been confirmed as having logged back in with the first online business. The first online business may check the validity of the access token (associated with the second online business). For example, the first entity (e.g., Amazon.com) server may communicate with a second (e.g., Synchrony) endpoint server to confirm that the access token associated with linking the two accounts is valid. No other information regarding the user may need to be exchanged other than confirmation that the access token remains valid. If the access token is valid, the method may proceed to step360. If the access token is not valid, the method may proceed to step340ofFIG.3in which a refresh request including the refresh token may be sent. In some embodiments, the access token may be associated with a predetermined time period (e.g., one hour), after which the access token may expire and become invalid. If the access token has become invalid, step340may include the first online business server thereafter pinging the same or different endpoint of the second online business with the associated refresh token. In some embodiments, persistent login may be governed by a different set of rules than the rules governing account linking.
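The branching in steps 335-350 can be sketched as a single function: reuse the access token while it is valid, otherwise present the refresh token for a new pair, and fail only when the refresh token itself has lapsed. Timestamps are passed explicitly so the flow is easy to test; the dictionary layout and TTL values are assumptions made for the sketch.

```python
# Sketch of steps 335-350: check access-token validity, refresh when expired.
# Lifetimes mirror the example durations in the description (one hour access
# token, ninety-day refresh token); all names are illustrative.
import secrets

ACCESS_TTL_S = 3600          # e.g., one hour
REFRESH_TTL_S = 90 * 86400   # e.g., ninety days

def ensure_valid_access(tokens: dict, now: float) -> dict:
    if now - tokens["access_issued_at"] < ACCESS_TTL_S:
        return tokens                       # step 335: still valid, proceed to 360
    if now - tokens["refresh_issued_at"] >= REFRESH_TTL_S:
        raise PermissionError("refresh token expired; accounts must be re-linked")
    # steps 340-350: exchange the refresh token for a new access + refresh pair
    return {
        "access_token": secrets.token_urlsafe(32),
        "access_issued_at": now,
        "refresh_token": secrets.token_urlsafe(32),
        "refresh_issued_at": now,           # refreshing also renews the window
    }
```

Renewing `refresh_issued_at` on each successful refresh is what gives the "use it at least once every ninety days and the link persists" behavior described below; only full inactivity past the refresh window forces the user to re-link.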
As such, depending on the risk profile and the rules governing persistent login in association with indicated risk level, different security checks may be performed in order to determine whether stepped-up security measures (e.g., multi-factor authentication) may be required before issuing a new access token or refreshing the access token. In step350, a new (or refreshed) access token, as well as a new refresh token, may be issued and sent to the first entity. Like the access token, the refresh token may also be configured with an expiration date (e.g., ninety (90) days from generation date). As long as the user continues to use the linked accounts within the timeframe during which the refresh token is valid (e.g., at least once every ninety days), the refresh token continues to be refreshed and valid, thereby allowing for the linkage to persist over time. As such, users who may frequently access the webpage(s) populated with data obtained using persistent login are not required to re-enter credentials of the second online entity. The period of expiration may be set, adjusted, or otherwise updated by the second entity in accordance with their respective policies (e.g., from policy database125B). Such adjustments may be specific to the first entity, such that subsequent access tokens (and refresh tokens and authorization codes) issued for linking accounts between the first entity and the second entity may have adjusted longevity. Once the access token has been confirmed as valid (or refreshed so as to be valid), the first online business server may generate an encrypted payload (e.g., URL safe string) based on the validated access token and the associated identity token in step360ofFIG.3. 
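The encrypted, URL-safe payload of step 360 (and its validation in step 370 below) can be illustrated with a small round-trip. For a dependency-free sketch this signs the token pair with an HMAC rather than truly encrypting it; a real deployment would use an authenticated cipher with key material shared between the two entities. The shared key and the field names are assumptions.

```python
# Stand-in for the URL-safe payload of steps 360-370: the first entity packs
# the access and identity tokens into a tamper-evident, URL-safe string, and
# the second entity validates and unpacks it. HMAC signing substitutes for
# real encryption in this sketch; key and names are illustrative only.
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-shared-between-entities"   # illustrative only

def build_payload(access_token: str, identity_token: str) -> str:
    """Step 360 (first entity side): produce a URL-safe, authenticated string."""
    body = json.dumps({"at": access_token, "it": identity_token}).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + body).decode()

def open_payload(payload: str) -> dict:
    """Step 370 (second entity side): validate the payload and recover the tokens."""
    raw = base64.urlsafe_b64decode(payload.encode())
    tag, body = raw[:32], raw[32:]   # SHA-256 tag is 32 bytes
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, body, hashlib.sha256).digest()):
        raise PermissionError("payload failed validation")
    return json.loads(body)
```

Note the use of `hmac.compare_digest` for the comparison: validating the payload in constant time avoids leaking tag bytes through timing differences.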
In step370, such payload may be directed and handed off to the second online business server (e.g., a Synchrony server), which may thereafter decrypt the payload (e.g., by way of a back-end service), validate the access token and identity tokens, confirm that both of the provided tokens are associated with the same user account of the second online business, and pull data (e.g., user identifier) and look-up related data from the user account based on the identity token. In some embodiments, use of the access token may be limited to one attempt. As such, regardless of whether the access token is deemed valid or not, the first online business server may be required to refresh the access token as described herein in order to conduct a next persistent login attempt successfully. In some instances, an entity may specify a quota as to how many times the access token may be used for a specific service or over a specific period of time. Meanwhile, the second online business may further apply all of the rules, policies, etc., in accordance with their specific requirements. Various additional security checks may be performed regarding the user device or account status, for example. If the security checks are successful, the user may be taken to a landing page such as illustrated in the screenshot ofFIG.2E. Such landing page may correspond to what the user is normally presented with if the user had logged in with the second online business directly. As described herein, however, the user may only need to supply credentials once (e.g., upon account linkage) and the link may persist as long as the user continues to access the linked accounts during the time period in which the refresh token remains valid. In some embodiments, an API may be provided that allows for the data of the landing page to be presented via a webpage associated with the first online business. Rather than presenting the user with a landing page associated with the second online business (e.g. 
of Synchrony Bank), therefore, the user may be presented instead with a webpage associated with the first online business (e.g., Amazon.com) but that presents the data obtained from the second online business. In other words, the webpage in the screenshot ofFIG.2Emay be presented by either the servers of the first online business (e.g., on Amazon.com) or of the second online business (e.g., on Amazon.SYF.com). In some embodiments, certain types of data may be subject to higher security standards. For example, certain types of user data (e.g., credit line increase, adding an authorized user) may be associated with higher risk than other data (e.g., check status, available credit, current balance, next payment due date). Such data may therefore be categorized (e.g., by risk), and the access token may be associated with a scope of risk. As such, the first online business may make service calls to various endpoints of the second business in accordance with the respective scope of risk of the access token being used. Additional or step-up security requirements may be imposed, for example, where the access token is associated with a low-risk scope only, but the requested service is considered high-risk. Such higher risk service may require involvement of different endpoint servers of the second online business. Attempts to use a low-risk access token to call upon the endpoints associated with higher-risk transactions may fail based on the mismatch between the scope of the access token and the category of the requested service or data associated therewith. A different workflow may be initiated, however, to obtain stepped-up credentials or other security measures in order to change the scope of the current access token or issue a new access token having the appropriate scope to access the requested service or data. FIG.2Gis a screenshot of an exemplary webpage associated with full service management associated with the second online business.
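The scope-of-risk matching described above reduces to a simple comparison: a token may call a service only if its scope covers the service's risk tier. The service names and two-tier model below are invented for illustration; an actual deployment could define any number of tiers and categories in its policies.

```python
# Sketch of the scope/risk matching described above: a low-risk access token
# cannot call a high-risk endpoint. Tiers, service names, and categories are
# illustrative assumptions.
RISK_TIERS = {"low": 0, "high": 1}

SERVICE_RISK = {
    "check_balance": "low",
    "next_payment_due": "low",
    "add_authorized_user": "high",
    "credit_line_increase": "high",
}

def can_call(token_scope: str, service: str) -> bool:
    """A token may call a service only if its scope covers the service's tier."""
    return RISK_TIERS[token_scope] >= RISK_TIERS[SERVICE_RISK[service]]
```

When `can_call` returns `False`, the description above suggests the mismatch is not simply a dead end: a step-up workflow can upgrade the token's scope (or issue a new token) so the higher-risk call may then proceed.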
Such a screen may be presented when the user of user device145chooses to go to full servicing by clicking on “Manage at Synchrony Bank” after the accounts have been linked. The user may then be authenticated again before landing on a landing page associated with the second entity (e.g., SYF servicing summary page). The user may also be given the option to de-link accounts. For example,FIG.2Fillustrates an option button for de-linking the first and second online business accounts of the specific user. As such, if the user wishes to access the second online business by way of the first online business website, the user may be requested to undergo the linking process again and resupply credentials in order to link the accounts and use persistent login. As discussed, users may use any number of different electronic user devices to initiate transactions within the network environment, such as general purpose computers, mobile phones, smartphones, personal digital assistants (PDAs), portable computing devices (e.g., laptop, netbook, tablets), desktop computing devices, handheld computing devices, or any other type of computing device capable of communicating over a communication network. User devices may also be configured to access data from other storage media, such as memory cards or disk drives as may be appropriate in the case of downloaded services. User devices may include standard hardware computing components such as network and media interfaces, non-transitory computer-readable storage (memory), and processors for executing instructions that may be stored in memory.
As example and not by way of limitation, the computer system may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, the computer system may include one or more computer systems; be unitary or distributed; span multiple locations; span multiple machines; and/or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. Further, the entities discussed herein—including the respective websites and services—may perform the disclosed actions using any type of computing device or server known in the art, including standard hardware computing components such as network and media interfaces, non-transitory computer-readable storage (memory), and processors for executing instructions or accessing information that may be stored in memory. In different embodiments, the functions may be distributed over multiple network devices, or the functionalities of multiple servers may be integrated into a single server. 
Any of the aforementioned servers (or an integrated server) may take on certain client-side, cache, or proxy server characteristics. These characteristics may depend on the particular network placement of the server or certain configurations of the server. FIG.4is a block diagram of an exemplary computing system400that may be used to implement an embodiment of the present invention. An example computing system can include a processor410(e.g., a central processing unit), memory (including non-volatile memory)420-440, interface devices450-460, display device470, and peripheral(s)470. The memory420, mass storage430, and portable storage440may store data and/or one or more code sets, software, scripts, etc. The components of the computer system can be coupled together via a bus490or through some other known or convenient device. The processor410may be configured to carry out all or part of methods described herein for example by executing code for example stored in memory. One or more of a user device or computer, a provider server or system, or a suspended database update system may include the components of the computing system or variations on such a system. The processor410may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or Motorola PowerPC microprocessor. One of skill in the relevant art will recognize that the terms “machine-readable (storage) medium” or “computer-readable (storage) medium” include any type of device that is accessible by the processor410. The memory and storage420-440can be coupled to the processor410by, for example, a bus490. The memory420-440can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory420-440can be local, remote, or distributed. The bus490can also couple the processor410to the non-volatile memory and drive unit.
The non-volatile memory is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software in the computer. The non-volatile storage can be local, remote, or distributed. The non-volatile memory is optional because systems can be created with all applicable data available in memory. A typical computer system will usually include at least a processor410, memory, and a device (e.g., a bus) coupling the memory to the processor410. Software can be stored in the non-volatile memory and/or the drive unit. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory herein. Even when software is moved to the memory for execution, the processor410can make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers), when the software program is referred to as “implemented in a computer-readable medium.” A processor410is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor410. The bus490can also couple the processor410to the network interface device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system.
The interface can include an analog modem, Integrated Services Digital Network (ISDN) modem, cable modem, token ring interface, satellite transmission interface (e.g., “direct PC”), or other interfaces for coupling a computer system to other computer systems. The interface can include one or more input and/or output (I/O) devices450-460. The I/O devices450-460can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other input and/or output devices, including a display device470. The display device470can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. In operation, the computer system can be controlled by operating system software that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, WA, and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux™ operating system and its associated file management system. The file management system can be stored in the non-volatile memory and/or drive unit and can cause the processor410to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile memory and/or drive unit. Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or “generating” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within registers and memories of the computer system into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods of some examples. The required structure for a variety of these systems will appear from the description below. 
In addition, the techniques are not described with reference to any particular programming language, and various examples may thus be implemented using a variety of programming languages. In various implementations, the system operates as a standalone device or may be connected (e.g., networked) to other systems. In a networked deployment, the system may operate in the capacity of a server or a client system in a client-server network environment, or as a peer system in a peer-to-peer (or distributed) network environment. The system may be a server computer, a client computer, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, or any system capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that system. While the machine-readable medium or machine-readable storage medium is shown, by way of example, to be a single medium, the terms “machine-readable medium” and “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The terms “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the system and that cause the system to perform any one or more of the methodologies or modules disclosed herein.
In general, the routines executed to implement the implementations of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processing units or processors in a computer, cause the computer to perform operations to execute elements involving the various aspects of the disclosure. Moreover, while examples have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various examples are capable of being distributed as a program object in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution. Further examples of machine-readable storage media, machine-readable media, or computer-readable (storage) media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others, and transmission type media such as digital and analog communication links. In some circumstances, operation of a memory device, such as a change in state from a binary one to a binary zero or vice-versa, for example, may comprise a transformation, such as a physical transformation. With particular types of memory devices, such a physical transformation may comprise a physical transformation of an article to a different state or thing.
For example, but without limitation, for some types of memory devices, a change in state may involve an accumulation and storage of charge or a release of stored charge. Likewise, in other memory devices, a change of state may comprise a physical change or transformation in magnetic orientation or a physical change or transformation in molecular structure, such as from crystalline to amorphous or vice versa. The foregoing is not intended to be an exhaustive list of all examples in which a change in state from a binary one to a binary zero or vice-versa in a memory device may comprise a transformation, such as a physical transformation. Rather, the foregoing is intended to provide illustrative examples. A storage medium typically may be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium may include a device that is tangible, meaning that the device has a concrete physical form, although the device may change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state. The above description and drawings are illustrative and are not to be construed as limiting the subject matter to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. As used herein, the terms “connected,” “coupled,” or any variant thereof, when applied to modules of a system, mean any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or any combination thereof.
Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number, respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, or any combination of the items in the list. Those of skill in the art will appreciate that the disclosed subject matter may be embodied in other forms and manners not shown below. It is understood that the use of relational terms, if any, such as first, second, top and bottom, and the like are used solely for distinguishing one entity or action from another, without necessarily requiring or implying any such actual relationship or order between such entities or actions. While processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, substituted, combined, and/or modified to provide alternative or sub-combinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges. The teachings of the disclosure provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further examples.
Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the disclosure can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further examples of the disclosure. These and other changes can be made to the disclosure in light of the above Detailed Description. While the above description describes certain examples, and describes the best mode contemplated, no matter how detailed the above appears in text, the teachings can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the subject matter disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosure with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the disclosure to the specific implementations disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the disclosure encompasses not only the disclosed implementations, but also all equivalent ways of practicing or implementing the disclosure under the claims. While certain aspects of the disclosure are presented below in certain claim forms, the inventors contemplate the various aspects of the disclosure in any number of claim forms. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words “means for”. Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the disclosure.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Certain terms that are used to describe the disclosure are discussed above, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using capitalization, italics, and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same element can be described in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various examples given in this specification. Without intent to further limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the examples of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains.
In the case of conflict, the present document, including definitions, will control. Some portions of this description describe examples in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some examples, a software module is implemented with a computer program object comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Examples may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
Furthermore, any computing systems referred to in the specification may include a single processor or may employ architectures with multiple processor designs for increased computing capability. Examples may also relate to an object that is produced by a computing process described herein. Such an object may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any implementation of a computer program object or other data combination described herein. The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of this disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the examples is intended to be illustrative, but not limiting, of the scope of the subject matter, which is set forth in the following claims. Specific details were given in the preceding description to provide a thorough understanding of various implementations of systems and components for a contextual connection system. It will be understood by one of ordinary skill in the art, however, that the implementations described above may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
It is also noted that individual implementations may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function. Client devices, network devices, and other devices can be computing systems that include one or more integrated circuits, input devices, output devices, data storage devices, and/or network interfaces, among other things. The integrated circuits can include, for example, one or more processors, volatile memory, and/or non-volatile memory, among other things. The input devices can include, for example, a keyboard, a mouse, a key pad, a touch interface, a microphone, a camera, and/or other types of input devices. The output devices can include, for example, a display screen, a speaker, a haptic feedback system, a printer, and/or other types of output devices. A data storage device, such as a hard drive or flash memory, can enable the computing device to temporarily or permanently store data. A network interface, such as a wireless or wired interface, can enable the computing device to communicate with a network. Examples of computing devices include desktop computers, laptop computers, server computers, hand-held computers, tablets, smart phones, personal digital assistants, digital home assistants, as well as machines and apparatuses in which a computing device has been incorporated. 
The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like. The various examples discussed above may further be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable storage medium (e.g., a medium for storing program code or code segments). A processor(s), implemented in an integrated circuit, may perform the necessary tasks. 
Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices.
If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves. The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for implementing a suspended database update system. The foregoing detailed description of the technology has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology, its practical application, and to enable others skilled in the art to utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims.
DETAILED DESCRIPTION Embodiments of the present invention allow for calculating a risk resulting from a network of networks that includes unknown relationships in a privacy preserving and/or federated manner, which does not require any party to divulge/share sensitive information. Embodiments of the present invention calculate risk from a business network while preserving privacy of parties of the business network. Additional embodiments of the present invention apply privacy protected computation techniques to a network of networks in a cascading manner. Further embodiments of the present invention utilize a calculated risk of a business network to identify optimization scenarios for mitigating risk of a party (i.e., choosing a reduced risk upstream network). Some embodiments of the present invention recognize that participants in business networks seek to continuously meet demands of clients and that participants rely on network members to meet the demands of the clients. Additionally, embodiments of the present invention recognize that the challenge is to be able to calculate risk, and mitigate the risk, while taking into account such unknown relationships of network members while preventing exposure of sensitive information of the network members. For example, managing risk involved in supply and demand commitments is complicated due to the dependent nature of relationships between different network members that are not known to each individual participant. In one scenario, Client A should not be aware of the relationship between Member B and the other parties (e.g., unknown relationship). However, Client A's ability to calculate and mitigate this “secondary” risk due to the unknown relationship of Member B is crucial due to its direct impact on Client A.
Furthermore, similar principles apply to various other multi-entity Bayesian networks (MEBNs) such as manufacturing, financial networks including banks, customers, etc., or a healthcare network with patients and treatments. Various embodiments of the present invention remedy such challenges by calculating the risk resulting from an entire network of networks including unknown relationships while not requiring any participant to divulge sensitive information. As a result, a client can reduce the risk imposed from network members or at least have a better understanding of the risk imposed. Embodiments of the present invention recognize that business networks experience data security issues (e.g., exposure of sensitive information) while transmitting messages to and from members of the business networks due to members attempting to gather additional information from the messages. Various embodiments of the present invention can operate to increase data security of business networks by utilizing privacy preserving algorithms to prevent members from divulging sensitive information. Implementation of embodiments of the invention may take a variety of forms, and exemplary implementation details are discussed subsequently with reference to the Figures. The present invention will now be described in detail with reference to the Figures.FIG.1is a functional block diagram illustrating a distributed data processing environment, generally designated100, in accordance with one embodiment of the present invention.FIG.1provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made by those skilled in the art without departing from the scope of the invention as recited by the claims.
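The patent does not specify here which privacy preserving algorithm is used; one common technique consistent with the description is additive secret sharing, in which each member reveals only randomized shares of its risk score, so the aggregate can be computed without exposing any individual value. The sketch below is an illustration of that general technique under stated assumptions (integer-scaled scores, an honest-but-curious setting, and function names of our choosing), not the patent's disclosed method:

```python
import random

PRIME = 2**61 - 1  # modulus for the share arithmetic

def share(value, n_parties):
    """Split an integer risk score into n_parties additive shares mod PRIME.
    Any n_parties - 1 shares look uniformly random and reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Sum the i-th share held by each party, then combine the partial sums.
    Only the total of all scores is revealed, never an individual score."""
    n = len(all_shares[0])
    partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
    return sum(partial_sums) % PRIME

# Three members with private risk scores (scaled to integers)
scores = [12, 40, 7]
shared = [share(s, 3) for s in scores]
print(aggregate(shared))  # prints 59, the total risk, with no score disclosed
```

In practice a cryptographically strong randomness source (e.g., Python's `secrets` module) would replace `random`, and the same share-and-sum step can be repeated tier by tier, matching the cascading application to a network of networks described above.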
The present invention may contain various accessible data sources, such as database144, that may include personal data, content, or information the user wishes not to be processed. Personal data includes personally identifying information or sensitive personal information as well as user information, such as tracking or geolocation information. Processing refers to any operation or set of operations, automated or unautomated, such as collection, recording, organization, structuring, storage, adaptation, alteration, retrieval, consultation, use, disclosure by transmission, dissemination, or otherwise making available, combination, restriction, erasure, or destruction performed on personal data. Risk program200enables the authorized and secure processing of personal data. Risk program200provides informed consent, with notice of the collection of personal data, allowing the user to opt in or opt out of processing personal data. Consent can take several forms. Opt-in consent can require the user to take an affirmative action before personal data is processed. Alternatively, opt-out consent can require the user to take an affirmative action to prevent the processing of personal data before personal data is processed. Risk program200provides information regarding personal data and the nature (e.g., type, scope, purpose, duration, etc.) of the processing. Risk program200provides the user with copies of stored personal data. Risk program200allows the correction or completion of incorrect or incomplete personal data. Risk program200allows the immediate deletion of personal data. Distributed data processing environment100includes server140, client device120, and member server(s)130, all interconnected over network110.
Network110can be, for example, a telecommunications network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), such as the Internet, or a combination thereof, and can include wired, wireless, or fiber optic connections. Network110can include one or more wired and/or wireless networks capable of receiving and transmitting data, voice, and/or video signals, including multimedia signals that include voice, data, and video information. In general, network110can be any combination of connections and protocols that will support communications between server140, member server(s)130, and client device120, and other computing devices (not shown) within distributed data processing environment100. For example, network110can not only be a supply and/or logistics chain but also a computer network, a telecommunications network, transportation network, or power grid. Client device120can be one or more of a laptop computer, a tablet computer, a smart phone, smart watch, a smart speaker, virtual assistant, or any programmable electronic device capable of communicating with various components and devices within distributed data processing environment100, via network110. In general, client device120represents one or more programmable electronic devices or combination of programmable electronic devices capable of executing machine readable program instructions and communicating with other computing devices (not shown) within distributed data processing environment100via a network, such as network110. Client device120may include components as depicted and described in further detail with respect toFIG.4, in accordance with embodiments of the present invention. Client device120includes user interface122and application124. In various embodiments of the present invention, a user interface is a program that provides an interface between a user of a device and a plurality of applications that reside on the client device.
A user interface, such as user interface122, refers to the information (such as graphic, text, and sound) that a program presents to a user, and the control sequences the user employs to control the program. A variety of types of user interfaces exist. In one embodiment, user interface122is a graphical user interface. A graphical user interface (GUI) is a type of user interface that allows users to interact with electronic devices, such as a computer keyboard and mouse, through graphical icons and visual indicators, such as secondary notation, as opposed to text-based interfaces, typed command labels, or text navigation. In computing, GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces which require commands to be typed on the keyboard. The actions in GUIs are often performed through direct manipulation of the graphical elements. In another embodiment, user interface122is a script or application programming interface (API). Application124is a computer program designed to run on client device120. An application frequently serves to provide a user with similar services accessed on personal computers (e.g., web browser, playing music, e-mail program, or other media, etc.). In one embodiment, application124is mobile application software. For example, mobile application software, or an “app,” is a computer program designed to run on smart phones, tablet computers and other mobile devices. In another embodiment, application124is a web user interface (WUI) and can display text, documents, web browser windows, user options, application interfaces, and instructions for operation, and include the information (such as graphic, text, and sound) that a program presents to a user and the control sequences the user employs to control the program. In another embodiment, application124is a client-side application of risk program200. 
In various embodiments of the present invention, member server(s)130may be a desktop computer, a computer server, or any other computer system known in the art. In general, member server(s)130is representative of any electronic device or combination of electronic devices capable of executing computer readable program instructions. Member server(s)130may include components as depicted and described in further detail with respect toFIG.4, in accordance with embodiments of the present invention. Member server(s)130can be a standalone computing device, a management server, a web server, a mobile computing device, or any other electronic device or computing system capable of receiving, sending, and processing data. In one embodiment, member server(s)130can represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment. In another embodiment, member server(s)130can be a laptop computer, a tablet computer, a netbook computer, a personal computer (PC), a desktop computer, a personal digital assistant (PDA), a smart phone, or any programmable electronic device capable of communicating with client device120, server140, and other computing devices (not shown) within distributed data processing environment100via network110. In another embodiment, member server(s)130represents a computing system utilizing clustered computers and components (e.g., database server computers, application server computers, etc.) that act as a single pool of seamless resources when accessed within distributed data processing environment100. In another embodiment, member server(s)130represents one or more members of a supply chain network that provide information (e.g., goods, resources, services, inventory, etc.) to client device120. In this embodiment, one or more member instances of member server(s)130can be a participant of a first network of client device120, which is known to a user of client device120.
Additionally, the one or more instances of member server(s)130can be a participant of a second network of an instance of member server(s)130that participates in the first network of client device120, which is unknown to the user of client device120. Additionally, risk program200utilizes the one or more instances of member server(s)130to determine risk factors that are utilized to compute an overall risk of a network with respect to client device120. In various embodiments of the present invention, server140may be a desktop computer, a computer server, or any other computer system known in the art. In general, server140is representative of any electronic device or combination of electronic devices capable of executing computer readable program instructions. Server140may include components as depicted and described in further detail with respect toFIG.4, in accordance with embodiments of the present invention. Server140can be a standalone computing device, a management server, a web server, a mobile computing device, or any other electronic device or computing system capable of receiving, sending, and processing data. In one embodiment, server140can represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment. In another embodiment, server140can be a laptop computer, a tablet computer, a netbook computer, a personal computer (PC), a desktop computer, a personal digital assistant (PDA), a smart phone, or any programmable electronic device capable of communicating with client device120, member server(s)130, and other computing devices (not shown) within distributed data processing environment100via network110. In another embodiment, server140represents a computing system utilizing clustered computers and components (e.g., database server computers, application server computers, etc.) that act as a single pool of seamless resources when accessed within distributed data processing environment100.
Server140includes storage device142, database144, and risk program200. Storage device142can be implemented with any type of storage device, for example, persistent storage405, which is capable of storing data that may be accessed and utilized by client device120and server140, such as a database server, a hard disk drive, or a flash memory. In one embodiment, storage device142can represent multiple storage devices within server140. In various embodiments of the present invention, storage device142stores numerous types of data which may include database144. Database144may represent one or more organized collections of data stored and accessed from server140. For example, database144includes risk values, mitigation actions, conditions, protocols, etc. In one embodiment, data processing environment100can include additional servers (not shown) that host additional information that is accessible via network110. Generally, risk program200calculates a risk resulting from a network of networks that includes unknown relationships in a privacy preserving manner. Additionally, risk program200can convert insights from a risk calculation into actionable recommendations for risk mitigation. In one embodiment, risk program200identifies a set of conditions corresponding to a user of client device120. Also, risk program200determines compliance of member server(s)130of network110with the set of conditions of the user of client device120. Additionally, risk program200utilizes the compliance of member server(s)130to compute an overall risk value for network110with respect to member server(s)130. Furthermore, risk program200utilizes the compliance of member server(s)130and the overall risk to perform mitigation/optimization actions for network110and member server(s)130.
FIG.2is a flowchart depicting operational steps of risk program200, a program that calculates a risk resulting from a network of networks that includes unknown relationships in a privacy preserving manner, in accordance with embodiments of the present invention. In one embodiment, risk program200initiates in response to a user connecting client device120to risk program200through network110. For example, risk program200initiates in response to a user registering (e.g., opting-in) a laptop (e.g., client device120) with risk program200via a WLAN (e.g., network110). In another embodiment, risk program200is a background application that continuously monitors client device120. For example, risk program200is a client-side application (e.g., application124) that initiates upon booting of a laptop (e.g., client device120) of a user and monitors a business network for triggering events. In step202, risk program200determines a set of network conditions corresponding to a user. In one embodiment, risk program200determines a set of conditions corresponding to a user of client device120. For example, risk program200determines a set of conditions ‘C(t)’ corresponding to a user of a computing device (e.g., client device120) that one or more members (e.g., member server(s)130) of a network of networks (e.g., network110, member server(s)130, business network, etc.) must fulfill at time ‘t’. In this example, risk program200can derive the set of conditions using external requirements (e.g., policies, regulations, and/or restrictions independent of constraints of a business network) and/or business network constraints (e.g., response times, deadlines, etc.). FIG.3Adepicts business network310, which is an example illustration of an instance of a structure (e.g., supply chain) of relationships between members of a network (e.g.,FIG.1), in accordance with example embodiments of the present invention. Business network310includes Party A311, Party B313, Party C317, and Party D315. 
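The set of conditions 'C(t)' determined in step202can be sketched with a simple data structure. The sketch below is illustrative only: the `Condition` fields and the deadline-based filter are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    name: str        # e.g., a material a member of the network must supply
    quantity: float  # required amount
    deadline: float  # the time 't' by which the condition must be fulfilled

def conditions_at(all_conditions, t):
    """Return the subset C(t): conditions still binding at time t (deadline not yet passed)."""
    return [c for c in all_conditions if c.deadline >= t]

cs = [Condition("Compound A", 10, 5.0), Condition("Compound B", 10, 2.0)]
print([c.name for c in conditions_at(cs, 3.0)])  # only the condition still active at t=3
```

A real implementation would derive such conditions from external requirements (policies, regulations) and business network constraints (response times, deadlines), as the text describes.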
Party A311is a root node of business network310, which corresponds to client device120ofFIG.1. Party B313, which corresponds to member server(s)130ofFIG.1, is a child node of Party A311of business network310. Party D315, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network310. Party C317, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network310and is a child node of an additional network not visible to Party A311. In an example embodiment with respect toFIG.3A, Party A311depends on Party B313for the supply of a component (e.g., assets, inventory, etc.). Party B313generates the component using materials (e.g., resources, etc.) from Party C317, and Party A311is unaware of Party C317. As a result, risk program200can determine constraints (e.g., set of conditions) from relationships of parties of business network310. In this example embodiment, risk program200can determine a response time (e.g., delivery timeframe) for Party B313based on a dependency relationship with Party A311. FIG.3Bdepicts business network320, which is an example illustration of an instance of a structure (e.g., supply chain) of relationships between members of a network (e.g.,FIG.1) that includes an external requirement, in accordance with example embodiments of the present invention. Business network320includes Party A311, Party B313, Party C317, Party D315, and regional constraint322. Party A311is a root node of business network320, which corresponds to client device120ofFIG.1. Party B313, which corresponds to member server(s)130ofFIG.1, is a child node of Party A311of business network320. Party D315, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network320. Party C317, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network320and is a child node of an additional network not visible to Party A311.
Regional constraint322is an external requirement that Party A311must comply with. In an example embodiment with respect toFIG.3B, Party A311depends on Party B313for a compound (e.g., assets, inventory, etc.) in the manufacture of a component. Party B313generates the compound of the component using Material C (e.g., resources, etc.) from Party C317and Party D315, and Party A311is unaware of Party C317. As a result, risk program200can determine constraints (e.g., set of conditions) from external requirements independent of business network320. In this example embodiment, risk program200identifies a regional exposure constraint based on regional constraint322related to Material C that is manufactured in a certain region/country by Party C317and Party D315, but the total exposure of Party A311to the certain region/country should not be more than 'x' % for Material C. In step204, risk program200generates a subset of requirements corresponding to the set of network conditions. In one embodiment, risk program200identifies one or more constraints corresponding to the set of conditions corresponding to a user of client device120. For example, risk program200determines one or more core requirements 'Ci' of a set of conditions 'C(t)' corresponding to a user of a computing device (e.g., client device120) that one or more members (e.g., member server(s)130) of a network of networks (e.g., network110, member server(s)130, business network, etc.) must fulfill at time 't'. In one scenario, 'C(t)' corresponds to resources of two or more servers (e.g., member server(s)130) that a computing device (e.g., client device120) of a user must utilize to process a workload. In this example, risk program200identifies an amount and type of resources required to process the workload with respect to the two or more servers. In another scenario, risk program200determines that a user needs to produce one thousand (1000) units of Material A (e.g., 'C(t)').
Additionally, risk program200determines that one thousand (1000) units of Material A requires ten (10) kilograms of Compound A and ten (10) kilograms of Compound B (e.g., ‘Ci’, ‘C1’, ‘C2’, etc.), and risk program200identifies one or more suppliers (e.g., member server(s)130) of a supply chain (e.g., network110) responsible for supplying Compound A and Compound B. In step206, risk program200determines compliance of members of a business network associated with the user. In various embodiments of the present invention, a user of client device120may need to know whether member server(s)130can fulfill a set of conditions the user requires. Additionally, member server(s)130would likely not want to expose a response to the user via network110due to exposure risk of sensitive information (e.g., business level info, inventory, etc.) when sharing a response. Furthermore, upstream parties of a business network are assumed to be semi-honest (i.e., the parties follow a protocol but can attempt to infer and/or gather additional information from the messages that are exchanged with other parties). In one embodiment, risk program200publishes a request to member server(s)130via network110. For example, risk program200can define each condition of a set of conditions of a user in a form that allows parties (e.g., member server(s)130) to respond. In this example, risk program200transmits a defined request to each of the parties of a business network and collects responses to determine compliance with the set of conditions (i.e., identify risk factors). FIG.3Cdepicts business network330, which is an example illustration of an instance of a structure (e.g., supply chain) of relationships between members of a network (e.g.,FIG.1) that includes a plurality of privacy preserving protocols, in accordance with example embodiments of the present invention. Business network330includes Party A311, Party B313, Party C317, Party D315, MPC1332, and MPC2334. 
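The requirement derivation of step204(1,000 units of Material A requiring ten kilograms each of Compound A and Compound B) can be sketched as a bill-of-materials expansion. The `BOM` table and function name below are hypothetical illustrations, not the patent's data model:

```python
# Hypothetical bill-of-materials table: kilograms of each compound consumed
# per 1,000 units of the finished material (ratios taken from the example above).
BOM = {"Material A": {"Compound A": 10, "Compound B": 10}}  # kg per 1,000 units

def core_requirements(material, units):
    """Derive the core requirements Ci {compound: kg} for producing `units` of `material`."""
    return {compound: kg * units / 1000 for compound, kg in BOM[material].items()}

print(core_requirements("Material A", 1000))  # {'Compound A': 10.0, 'Compound B': 10.0}
```

Each derived requirement would then be mapped to the supplier(s) responsible for the corresponding compound.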
Party A311is a root node of business network330, which corresponds to client device120ofFIG.1. Party B313, which corresponds to member server(s)130ofFIG.1, is a child node of Party A311of business network330. Party D315, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network330. Party C317, which corresponds to member server(s)130ofFIG.1, is a child node of Party B313of business network330and is a child node of an additional network not visible to Party A311. MPC1332and MPC2334are protocols of methods for parties to jointly compute a function over respective inputs while keeping those inputs private from each other. In an example embodiment, risk program200can generate a relevant MPC protocol for each condition 'Ci' of 'C(t)', capable of computing a risk assessment for the underlying condition 'Ci' between the upstream parties (e.g., Party B313and Party A311, Party B313and Party D315, etc.). In another embodiment, risk program200determines whether member server(s)130complies with a set of conditions corresponding to a user of client device120. For example, risk program200utilizes privacy preserving algorithms (e.g., multi-party computation (MPC), zero knowledge proof (ZKP), differential privacy, secret sharing, etc.) to allow parties of a business network to respond to a published request. In this example, risk program200identifies a response of parties (e.g., member server(s)130) of a business network (e.g., network110) allowing the parties to share responses without revealing any sensitive information beyond extracted risk values relevant to computation of risk as it relates to 'C(t)'. In an alternative example, risk program200can enforce privacy preserving protocols of client device120and member server(s)130when sharing information. In step208, risk program200determines an overall risk of the business network associated with the user. In one embodiment, risk program200calculates a risk for a user of client device120.
For example, risk program200determines an overall risk based on fulfilled conditions of a set of conditions corresponding to a user. In this example, risk program200determines a risk factor for each relationship of a party (e.g., child node) of a business network (i.e., determines the risk of known parties/relations, and no additional information is being divulged except for the risk value provided in the response to a published request). Additionally, risk program200aggregates the collected risk factors derived from the request corresponding to a subset of conditions to generate an overall risk for the business network. In another embodiment, risk program200can calculate a risk for each relationship of client device120with member server(s)130via network110. For example, risk program200can determine a risk factor corresponding to an isolated segment (e.g., relationship between parties) of a business network (i.e., risk can be defined by conditions 'Ci'). Additionally, risk program200can cascade the risk factor collected across one or more networks. Referring now toFIG.3A, in one scenario, Party C317is unknown to Party A311. In this scenario, a risk of Party B313not meeting commitments to Party A311depends a great deal on Party C317being able to provide the material to Party B313(i.e., if risk program200determines a risk that C cannot meet commitments to B, then a risk exists that B will not be able to meet commitments to A). Additionally, Party A311needs to know that business network310(e.g., a supply chain network (SCN)) can hold a certain amount of inventory of material required to generate a product (e.g., condition), thus risk program200calculates a risk with respect to a risk value provided by Party B313, which does not disclose inventory (e.g., sensitive information) of Party B313or Party C317, and can utilize the risk value to determine a total network risk with respect to other child nodes of Party A311that provide the material.
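One way to sketch the cascading of risk factors described above is to treat a party's own failure risk and its suppliers' risks as independent events: the party meets its commitment only if it succeeds and every supplier it depends on succeeds. This combination rule is an illustrative assumption, not a formula prescribed by the disclosure:

```python
def cascade(own_risk, upstream_risks):
    """Effective risk of a party toward its downstream customer: the party fails
    if it fails on its own OR any supplier it depends on fails (independence assumed)."""
    p_success = 1.0 - own_risk
    for r in upstream_risks:
        p_success *= (1.0 - r)
    return 1.0 - p_success

# FIG. 3A: Party C's risk is folded into Party B's effective risk, which is all
# Party A ever sees — Party C's identity and inventory stay hidden.
risk_c = 0.10                     # risk that Party C cannot supply the material to B
risk_b = cascade(0.05, [risk_c])  # Party B's effective risk toward Party A
print(round(risk_b, 3))  # 0.145
```

Because only the single aggregated value `risk_b` crosses the network boundary, Party A learns the risk of its known relationship without learning anything about the unknown upstream party.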
Referring now toFIG.3C, parties are assumed to comply with central coordination and are semi-honest as the parties fulfill a common protocol and uphold a cascading risk policy. Risk program200enables generated MPCs corresponding to each underlying condition 'Ci' of 'C(t)' to protect risks shared by Party C317and Party D315by allowing the parties to "secretly-share" risk values within MPC1332without revealing any sensitive information to one another. Additionally, risk program200enables parties of MPC1332and MPC2334to calculate in isolation instances of underlying conditions 'Ci' of 'C(t)' and cascade computation through business network330. In an example embodiment, risk program200allows Party B313(e.g., a central trusted party) to compute risk for MPC1332(e.g., Party B313's underlying network). In this example, risk program200allows Party B313to cascade risk of the underlying network to MPC2334due to participation in both networks of business network330(i.e., for each party, based upon the overall risk calculation, provide the risk of every relation known by each party with each upstream party). As a result, risk program200prevents the ability of third parties to infer implicitly or explicitly any sensitive information about relationships between other participants. In another embodiment, risk program200identifies an event that triggers computation of risk for a user of client device120. For example, risk program200can trigger computation of risk of a business network (e.g., a supply chain network (SCN)) upon identifying one or more events. In this example, risk program200can trigger calculation of risk of the business network upon determining a change occurred in a regulation (e.g., external requirement), onboarding of a new supplier (e.g., new participants), or an existing participant reporting a change in the business network (e.g., upstream chain change).
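The "secret-sharing" of risk values within MPC1332can be illustrated with additive secret sharing, a standard MPC building block: each party splits its risk value into random shares that sum to the value modulo a public constant, so participants can add locally and only the aggregate is ever reconstructed. The modulus and names below are illustrative assumptions:

```python
import secrets

Q = 2**61 - 1  # public modulus all MPC participants agree on (illustrative)

def share(value, n):
    """Split an integer risk value into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)  # last share makes the total come out right
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Party C and Party D each secret-share a risk value (say, in basis points)
# among three participants; no single participant learns either input.
shares_c, shares_d = share(120, 3), share(340, 3)
partial = [(c + d) % Q for c, d in zip(shares_c, shares_d)]  # local addition only
print(reconstruct(partial))  # 460 — only the aggregate risk is revealed
```

Any n-1 shares are uniformly random, so nothing about an individual risk value leaks until the aggregate is deliberately reconstructed, matching the semi-honest model described above.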
In step210, risk program200performs mitigation actions associated with the overall risk of the business network associated with the user. In various embodiments of the present invention, risk program200can utilize risk calculations to run an optimization problem, which identifies options for new or different relationships to satisfy a set of conditions. As a result of utilizing the risk calculations, the options would be identified while still preserving privacy of sensitive information of participants. In one embodiment, risk program200determines a set of mitigation actions based on a set of conditions corresponding to a user of client device120. For example, risk program200determines a set of mitigation actions based on the set of conditions of a user. In this example, risk program200can identify different relationships within a business network with existing parties to satisfy each condition of the set of conditions based on a cost associated with a new relationship as well as costs associated with the risk materializing (e.g., as indicated by a response or the overall risk). In another embodiment, risk program200performs a mitigation action. In one scenario, risk program200determines that a supplier (e.g., member server(s)130) adds another upstream member to a business network of a computing device (e.g., client device120) of a user and calculates a risk indicating that the supplier would not be able to satisfy a condition of a set of conditions of the computing device of the user. Then, risk program200can perform an optimization to determine whether adding a second supplier to the business network or utilizing an existing participant (e.g., member server(s)130) of the business network satisfies the condition. As a result, risk program200automatically adds the second supplier to the business network.
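The optimization of step210— weighing the cost of a new relationship against the cost of the risk materializing — can be sketched as minimizing expected cost over candidate mitigation actions. The option data and the linear cost model below are illustrative assumptions, not the disclosed objective function:

```python
def best_option(options, failure_cost):
    """Pick the mitigation action minimizing expected cost: the cost of the
    relationship plus the risk-weighted cost of the risk materializing."""
    return min(options, key=lambda o: o["cost"] + o["risk"] * failure_cost)

options = [
    {"name": "keep current supplier",  "cost": 0,   "risk": 0.30},
    {"name": "add second supplier",    "cost": 50,  "risk": 0.05},
    {"name": "switch to new supplier", "cost": 120, "risk": 0.02},
]
print(best_option(options, failure_cost=1000)["name"])  # add second supplier
```

With a high cost of failure, adding a second supplier wins (expected cost 100 versus 300 for doing nothing and 140 for switching), mirroring the scenario in which risk program200automatically adds the second supplier.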
Also, risk program200can utilize external requirements and/or business network constraints (as discussed in step202) to identify suppliers, which can be an existing participant in the business network or a new participant. In an alternative example, risk program200can perform mitigation actions to reduce costs of a business network, which improves (e.g., optimizes) performance of the business network. In this example, risk program200can generate recommendations to streamline a business network, such as eliminating relationships, adding relationships, eliminating duplicative data, improving visibility, etc. FIG.4depicts a block diagram of components of client device120, member server(s)130, and server140, in accordance with an illustrative embodiment of the present invention. It should be appreciated thatFIG.4provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made. FIG.4includes processor(s)401, cache403, memory402, persistent storage405, communications unit407, input/output (I/O) interface(s)406, and communications fabric404. Communications fabric404provides communications between cache403, memory402, persistent storage405, communications unit407, and input/output (I/O) interface(s)406. Communications fabric404can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, communications fabric404can be implemented with one or more buses or a crossbar switch. Memory402and persistent storage405are computer readable storage media. In this embodiment, memory402includes random access memory (RAM).
In general, memory402can include any suitable volatile or non-volatile computer readable storage media. Cache403is a fast memory that enhances the performance of processor(s)401by holding recently accessed data, and data near recently accessed data, from memory402. Program instructions and data (e.g., software and data410) used to practice embodiments of the present invention may be stored in persistent storage405and in memory402for execution by one or more of the respective processor(s)401via cache403. In an embodiment, persistent storage405includes a magnetic hard disk drive. Alternatively, or in addition to a magnetic hard disk drive, persistent storage405can include a solid state hard drive, a semiconductor storage device, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, or any other computer readable storage media that is capable of storing program instructions or digital information. The media used by persistent storage405may also be removable. For example, a removable hard drive may be used for persistent storage405. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer readable storage medium that is also part of persistent storage405. Software and data410can be stored in persistent storage405for access and/or execution by one or more of the respective processor(s)401via cache403. With respect to client device120, software and data410includes data of user interface122and application124. With respect to server140, software and data410includes data of storage device142and risk program200. Communications unit407, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit407includes one or more network interface cards. Communications unit407may provide communications through the use of either or both physical and wireless communications links. 
Program instructions and data (e.g., software and data410) used to practice embodiments of the present invention may be downloaded to persistent storage405through communications unit407. I/O interface(s)406allows for input and output of data with other devices that may be connected to each computer system. For example, I/O interface(s)406may provide a connection to external device(s)408, such as a keyboard, a keypad, a touch screen, and/or some other suitable input device. External device(s)408can also include portable computer readable storage media, such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Program instructions and data (e.g., software and data410) used to practice embodiments of the present invention can be stored on such portable computer readable storage media and can be loaded onto persistent storage405via I/O interface(s)406. I/O interface(s)406also connect to display409. Display409provides a mechanism to display data to a user and may be, for example, a computer monitor. The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. 
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiment, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 43,726 |
11943358

SUMMARY

In some embodiments, methods and systems facilitate the identification of anonymized participants on distributed ledger-based networks (DLNs). For example, the methods and systems facilitate the auditing of the ownership of an account on a ZKP-enabled DLN. In such embodiments, the methods may include the steps of: sending, from a device of a sender on a network of computing nodes to a user of a distributed ledger on the network, a request for information about an access the user has to an account on the distributed ledger; receiving, in response to the request and on the distributed ledger, a ZKP that the user has access to the account and a public input for use in verifying the ZKP; causing a self-executing code segment on the distributed ledger to perform a computation of the received ZKP using the public input to verify the user has access to the account, the computation of the received ZKP occurring without any interaction between the user and the sender after the ZKP is received; and generating a confirmation verifying the user has access to the account based on a result of the computation of the ZKP.

DETAILED DESCRIPTION

In some embodiments, participants of a distributed ledger-based network (DLN) (also referred to herein as a blockchain network) may use the network to conduct a variety of activities without the supervision of a central authority, such activities including but not limited to exchanging digital assets, managing the transfer of real world physical assets between the participants by using asset tokens as representations of the assets and their transfers on the DLNs, etc.
For example, token commitments can be used to represent a physical off-the-blockchain asset on the DLN, and the transfer of the physical asset from a first to a second participant of the DLN can be represented on the DLN by the invalidation of the token commitment that assigned ownership of the physical asset to the first participant and the registration on the DLN of a new token commitment that assigns ownership of the physical asset to the second participant. In some embodiments, the participants may maintain one or more accounts, addresses or wallets (referred to hereinafter as “accounts”) on the DLN for use in sending and/or receiving token commitments as well as other tokens such as but not limited to physical asset tokens, security tokens, utility tokens, etc. The accounts may be identified on the DLN by public addresses or public keys (of asymmetric key pairs that each includes the public key and a private key, for example) that are available to anyone with access to the DLN (or even the public at large). For instance, using the public key of an account, any other account on the DLN may be able to send tokens to the account of the public key without prior permission from the account owner (i.e., accounts may be configured to receive tokens without prior permission from account owners). To obtain access to an account (e.g., to be able to send tokens from an account to other accounts on the DLN), however, one may need to be in possession of the private key of the asymmetric pair to which the public key of the account belongs. Owing to the decentralized nature of the DLN, in some implementations, the sending and/or receiving of tokens may be accomplished without the management of a central authority. In some embodiments, a self-executing code or program (e.g., a smart contract) may be used to manage transactions between DLN participants on the DLN.
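By way of illustration only (not part of the claimed embodiments), the asymmetric key pair described above can be sketched with a toy discrete-log construction. The group parameters and function names here are assumptions for the sketch; production DLNs use elliptic-curve keys with far larger parameters.

```python
import secrets

# Toy multiplicative group (illustration only; real networks use
# elliptic-curve groups with ~256-bit parameters).
P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def generate_account():
    """Return a (private_key, public_key) pair for a toy DLN account."""
    w = secrets.randbelow(Q - 1) + 1   # private key w: held only by the owner
    x = pow(G, w, P)                   # public key x: published as the account id
    return w, x

w, x = generate_account()
# Anyone may send tokens to the account identified by x; spending from the
# account requires knowledge of the private key w.
```

The asymmetry is the point: x is derived from w by a one-way operation, so publishing x reveals nothing practical about w.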
In particular, smart contracts may be used to facilitate transactions that depend on the fulfillment of certain conditions for the transaction to be consummated. For example, parties participating in a transaction for a sale of a digital music file can use a smart contract on the DLN to manage the sale of the music file. The self-executing code or smart contract can regulate the exchange of the music file and the correct payment for the file between the parties without involvement from a third party (e.g., central authority). For instance, the smart contract may allow the payment to be disbursed after the delivery of the music file is verified. Throughout the instant disclosure, in some embodiments, the terms “self-executing code” or “self-executing code segment” and “smart contract” may be used interchangeably. Similarly, in some embodiments, the terms “distributed ledger-based network” and “blockchain network” may be used interchangeably. As noted above, in some embodiments, the trust that DLNs provide with no need for a central authority derives from the decentralized nature of the networks as well as the transparency of the networks to at least all the participants of the networks (and in the case of public networks, to the public at large). In some implementations, multiple computing nodes may make up a DLN, and actions undertaken on the DLN, such as transactions between participants of the DLN, can be recorded on all or nearly all ledgers that are stored on the multiple computing nodes after at least a substantial number of the computing nodes (or their owners) agree on the validity of the transactions. The distributed ledgers can be immutable or nearly immutable in the sense that to alter the distributed ledgers, at least a substantial portion of the computing nodes would have to agree, which can be increasingly difficult when the number of computing nodes is large (and the distributed ledger gets longer). 
In some implementations, the trust in DLNs, engendered at least partly due to the decentralization thereof, may be augmented with the transparency of the DLNs. For example, DLNs allow any interested person with access to the DLNs to inspect the distributed ledgers on the networks to obtain detailed information on all transactions that are recorded on the ledgers since the inception of the DLNs (e.g., as the ledgers are, as discussed above, largely immutable in at least most cases). In some implementations, the detailed information may not include the identity of the entity or person that owns or has access to an account on the DLNs (e.g., whether the account was involved in a transaction or not). For example, an account may be identified by a public address or a public key, and the identity of the entity or person that has access to the account via a private key may not be apparent from the information gleaned from the distributed ledgers. Throughout the instant disclosure, in some embodiments, “ownership of an account” may refer to having access to or possession of the private key of the account, which may or may not include legal ownership of the account. In some embodiments, it may be desirable for the ownership of DLN accounts to be verified by an auditor, who may be one of the participants of the DLN. For example, a person or a business entity may claim that they own an account, and an auditor may be tasked with verifying the ownership of the account for a variety of legal and/or business reasons. An “auditor” herein refers to a participant of a DLN that is attempting to confirm the ownership of an account on the DLN, examples of such “auditors” including financial, legal or business auditors, members of law enforcement agencies, etc. An auditor may employ a variety of techniques to verify the ownership of an account on the DLN. For example, the auditor may request the private key from the auditee.
As another example, the auditor may request that the auditee engage in a test transaction from the account that is being audited or respond to a cryptographic challenge as a demonstration or proof of ownership of the account. One or more of these techniques, however, may be undesirable or impractical to implement in DLNs as ways of performing audits of DLN accounts. If an auditor receives a private key from a person or entity purporting to be the owner of an account, the auditor may check to see if the private key provides the auditor access to the account, and if true, may confirm that the person or entity is in fact the owner of the DLN account. An auditee, however, may not be willing to share private keys of accounts on DLNs for security reasons, in particular in view of increased cyber threats (e.g., due to company policy, auditee's insurance or other contractual conditions and warranties, etc.). Further, the auditor may not be willing or able to assume the liability that comes from possessing a private key to somebody else's account. An auditor may also consider a test transaction conducted from an account on the DLN by a person or entity purporting to be the owner of the account to be a satisfactory demonstration of ownership of the account. For example, the test transaction may be in the form of a small amount of tokens transferred from the account being audited to an account of the auditor, and the execution of the transaction in the manner (e.g., the amount of the tokens, the timing of the transfer, etc.) as stipulated beforehand by the auditor and the person or entity may be deemed satisfactory evidence that the person or entity in fact owns the account (e.g., has access to or ownership of the private key of the account). The auditor may also pose a cryptographic challenge to the purported owner, and if the challenge is met with success, then the auditor may confirm the purported owner as the real owner of the account being audited.
For example, the cryptographic challenge may be created using the public key of the account on the DLN, and may be designed to be solved only with the use of the private key of the same account (i.e., the private key of the asymmetric key pair of the account). If the purported owner manages to solve the challenge, then that may be considered as a proof that the purported owner has the private key in his/her possession (and as such, owns the account). Requesting a test transaction from purported owners and/or presenting purported owners with cryptographic challenges to prove ownership of an account on the DLN may not, however, be desirable or practical for several reasons. First, both techniques, being interactive, would require a response from the auditee, who may not be able to, or be willing to, engage in the auditing process. Further, the techniques may increase the cost of the audits. For example, either the auditor and/or the auditee would have to absorb the transaction fee that may be generated when a test transaction is made on the DLN. Accordingly, one or more of the embodiments disclosed herein describe methods and systems that facilitate the auditing of ownership of accounts on DLNs without requiring an input or response from the purported owners of the accounts during the audit process. Further, the methods and systems may also allow the audits to be completed without the security and privacy of the owners and their account being compromised during the audit processes (e.g., without the private keys of the accounts being exposed on the DLN or publicly, without the identity of the owners being exposed on the DLN or to the public (except the auditor, in some cases), etc.). One or more of the disclosed embodiments provide enhanced security, privacy and convenience to the account audit process in the DLNs via the implementation of ZKPs in the DLNs.
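By way of illustration only, one concrete form such an interactive cryptographic challenge could take is a Schnorr identification round trip; the toy group and function names here are assumptions for the sketch, not the disclosure's own construction. The round trip makes the drawback discussed above visible: the auditee must be online and respond to the auditor's challenge.

```python
import secrets

# Toy Schnorr identification (illustration only; real deployments use
# elliptic curves with ~256-bit parameters).
P, Q, G = 2039, 1019, 4   # safe prime P = 2*Q + 1; G generates the order-Q subgroup

def prover_commit():
    """Auditee picks a random nonce r and sends the commitment t = G^r mod P."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)

def prover_respond(r, w, c):
    """Auditee answers the challenge c; this step requires the private key w."""
    return (r + c * w) % Q

def auditor_verify(x, t, c, s):
    """Auditor accepts iff G^s == t * x^c (mod P)."""
    return pow(G, s, P) == (t * pow(x, c, P)) % P

# One interactive round trip: commit, challenge, respond, verify.
w = secrets.randbelow(Q - 1) + 1   # auditee's private key
x = pow(G, w, P)                   # public key identifying the account
r, t = prover_commit()
c = secrets.randbelow(Q - 1) + 1   # auditor's random (nonzero) challenge
s = prover_respond(r, w, c)
assert auditor_verify(x, t, c, s)
```

Without knowledge of w there is no way to produce a valid s for a fresh challenge c, which is why a successful response evidences possession of the private key.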
FIG.1shows a schematic of an audit of the ownership of an account on a ZKP-enabled DLN, according to some embodiments. In some embodiments, the ZKP-enabled DLN or blockchain network100includes multiple computing nodes102a-102econfigured to communicate amongst each other via a peer-to-peer (P2P) connection. In some implementations, the computing nodes102a-102ecan be computing devices including but not limited to computers, servers, processors, data/information processing machines or systems, and/or the like, and may include data storage systems such as databases, memories (volatile and/or non-volatile), etc. In some implementations, the P2P connections may be provided by wired and/or wireless communications systems or networks (not shown) such as but not limited to the internet, intranet, local area networks (LANs), wide area networks (WANs), etc., utilizing wireless communication protocols or standards such as WiFi®, LTE®, WiMAX®, and/or the like. In some embodiments, the ZKP-enabled DLN100may include self-executing codes or smart contracts that are configured to execute upon fulfillment of conditions that are agreed upon between parties transacting or interacting on the ZKP-enabled DLN100(e.g., an auditor and an auditee). For example, some or all of the computing nodes102a-102emay include copies of a self-executing code that self-execute upon fulfillment of the conditions. In some implementations, the computing nodes102a-102emay communicate amongst each other with the results of the executions of their respective self-executing codes, for example, to arrive at a consensus on the results. In some implementations, one or a few of the computing nodes102a-102emay have self-executing codes that self-execute, and the results would be transmitted to the rest of the computing nodes102a-102efor confirmation.
In some embodiments, a self-executing code or a smart contract can facilitate the completion of transactions or interactions on the ZKP-enabled DLN100by providing the participating parties confidence that the other party would deliver the promised product or action. For example, a smart contract can be used to verify a proof provided by one participant (e.g.,110b) of an interaction or a transaction, allowing a second participant (e.g.,110a) to proceed with the interaction or transaction to completion. For instance, an auditee110b, in response to a request112from an auditor110asent via a computing node102a, may generate and provide to a smart contract residing on the ZKP-enabled DLN100(e.g., a smart contract residing on each or nearly each of the computing nodes102a-102ethat make up the ZKP-enabled DLN100) a proof (e.g., a ZKP)114that the auditee110bis in fact the owner of an account on the ZKP-enabled DLN100(e.g., the auditee owns, has access to or possesses a private key of the account). In such implementations, the smart contract may compute the proof to verify that the auditee110bis in fact an owner of the account on the ZKP-enabled DLN100. In some cases, the computation of the proof may occur when activated or initiated by the auditor110a, or when a certain condition (e.g., date, time, etc.) is fulfilled. In some embodiments, the ZKP-enabled DLN100may be linked to one or more oracles (not shown) or data feeds that provide external data to the ZKP-enabled DLN100. For example, an oracle can be a hardware (e.g., computing node) or software (stored and/or executing on hardware) that is configured to gather or receive data from systems external to the ZKP-enabled DLN100(e.g., sensors, information sources such as the internet (via a web API, for example), etc.) and provide the collected data or information to a smart contract on the ZKP-enabled DLN100.
In some implementations, as discussed above, self-executing codes or smart contracts can automatically execute upon realization of some conditions of a transaction, and the oracles may provide the data that can be used to evaluate whether the conditions are met. For example, a transaction may be contingent on the price of a stock, a weather condition, date, time, etc., and an oracle may provide the requisite information to the smart contract facilitating the transaction. The smart contract, upon receiving the information, may self-execute after determining that the condition for the transaction has been fulfilled. In some embodiments, the oracles may enable the smart contracts to send data out to external systems. For example, a smart contract may be configured to verify a ZKP provided by an auditee at a certain date and time, and send out the verification results to an auditor's device or system when the verification is complete. In some implementations, an oracle may serve as a transit hub for the data including the verification results during its transmission to the auditor device or system. In some embodiments, at least a substantial number of the computing nodes102a-102ecan include copies of a distributed ledger104a-104eonto which transactions that occur on the network are recorded. The recording of the transactions on the distributed ledger104a-104emay occur when some substantial proportion of the computing nodes102a-102e, or a subset thereof, agree on the validity of the transactions. The distributed ledger104a-104ecan be immutable or nearly immutable in the sense that to alter the distributed ledger104a-104e, at least this substantial portion of the computing nodes102a-102ewould have to agree, which can be increasingly difficult when the number of computing nodes102a-102eis large (and the distributed ledger104a-104egets longer).
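The tamper-evidence of the distributed ledger described above can be sketched with a minimal hash-chained structure; the field names and functions here are assumptions for illustration, not the disclosure's ledger format.

```python
import hashlib
import json

def block_hash(block):
    """Hash the canonical JSON encoding of a block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block whose prev_hash commits to the entire prior chain."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    """A single altered block breaks every later prev_hash link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, ["A pays B 5 tokens"])
append_block(ledger, ["B pays C 2 tokens"])
assert is_valid(ledger)
ledger[0]["transactions"] = ["A pays B 500 tokens"]   # tampering...
assert not is_valid(ledger)                            # ...is detectable
```

Rewriting history therefore requires recomputing every subsequent block, and on a real DLN also persuading a substantial portion of the nodes to accept the rewrite.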
As noted above, one or more of the disclosed embodiments provide enhanced security, privacy and convenience to account audit processes in DLNs via the implementation of ZKPs in the DLNs. In some embodiments, ZKPs can be used by a first entity, the “prover” or “provider” of the proofs, to convince a second entity, the “verifier” of the proofs, that a statement about some secret information is truthful without having to reveal the secret information to the verifier. For example, the first entity can be an auditee110bclaiming to own an account on a ZKP-enabled DLN, the second entity can be an auditor110aattempting to determine ownership of the account, the secret information can be the private key of the account the possession of which indicates ownership of the account, and the statement can be a statement stating that the auditee owns, has access to or possesses the private key. In such cases, ZKPs can be used by the auditee110bto prove to the auditor110athat the auditee110bis in fact the owner of the account without having to disclose the secret information (e.g., the private key) to the auditor110a(or anyone else on the ZKP-enabled DLN or publicly). ZKPs can be interactive, i.e., require interaction from the prover for the verifier to verify the truthfulness of the statement. In some embodiments, the ZKPs can be non-interactive, requiring no further interaction from the prover for the verifier to verify the statement. Examples of non-interactive ZKPs include zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) proof, zero-knowledge scalable transparent argument of knowledge (zk-STARK) proof, etc. Discussions related to the use of ZKPs to provide privacy to participants of ZKP-enabled DLNs interacting on the networks (e.g., using the ZKP-enabled DLNs to represent the transfer of assets between the participants) can be found in U.S. Provisional Application No. 62/719,636, filed Aug. 
18, 2018, entitled “Methods and Systems of ZKP-Based Secure PE Transactions on Public Networks,” and U.S. Provisional Application No. 62/748,002, filed Oct. 19, 2018, entitled “Methods and Systems of ZKP-Based Secure Private Enterprise Transactions on Public Networks,” both of which are incorporated herein by reference in their entireties. With reference toFIG.1, in some embodiments, an auditor110amay present a request112to an auditee110bpurporting to be an owner of an account on a ZKP-enabled DLN100for proof of ownership of the account. For example, the account may be identified on the ZKP-enabled DLN100with a public key that is part of an asymmetric key pair of the account, and the auditor110amay request that the auditee110bpresent a proof showing ownership, access to or possession of the private key of the public key-private key pair. In such embodiments, the auditee110bmay generate a proof that proves to the auditor110a, when verified, that the auditee110bis an owner of the account without having to necessarily reveal the private key to either the auditor110a, other participants of the ZKP-enabled DLN100or the public at large. For example, in some implementations, the auditee110bmay generate a ZKP (e.g., zk-SNARK proof)114of ownership of the account on the ZKP-enabled DLN100. In some embodiments, the auditor110amay not accept a ZKP from the auditee110bthat is generated using a setup other than a trusted setup (e.g., a ZKP generating system set up by the auditor110aor another entity entrusted by the auditor110a). For example, the auditor110amay not accept a ZKP from the auditee110bunless the ZKP is generated in accordance with the following procedure. In some embodiments, the ZKP114provided by the auditee110bto prove to the auditor110athat the auditee110bowns an account on the ZKP-enabled DLN100may be generated as follows. Initially, a function C that is configured to take two inputs and return Boolean results (e.g., “true” or “false” outputs) may be generated.
In some implementations, the function C may have the property where C(x, w) has a “true” output when x is the public key and w is the private key of the asymmetric key pair (and has a “false” output otherwise). The C function may be generated either by the auditor110a(e.g., using the computing node102a), the auditee110b(e.g., using the computing node102b) and/or any other entity. In some instances, the C function may be available to and/or validated by both the auditor110aand the auditee110b. In some instances, the C function may be generated on the ZKP-enabled DLN100. In some embodiments, the C function may be generated on a computing node (not shown) that is not part of the ZKP-enabled DLN100(e.g., a computing node that is off-chain). In some embodiments, after the generation of the C function, a key generator algorithm G (e.g., a zk-SNARK key generator algorithm) may be used to generate a proving key Pkand/or a verification key Vkthat can be used to generate the ZKP. For example, the key generator algorithm G may be such that when a parameter L and the function C are plugged into the key generator algorithm G, the results are the proving key Pkand/or the verification key Vk, as follows: (Pk, Vk)=G(L, C). In some implementations, the proving key Pkand/or the verification key Vkmay be generated, using a computing device, by a third party that is different from the auditor110aand/or the auditee110b. For example, the third party may be a trusted entity that is trusted by the auditor110aand/or the auditee110bto not disclose the parameter L publicly or at least to the auditee110b. In some instances, the computing device used to generate the proving key Pkand/or the verification key Vkmay be different from the computing node102aof the auditor110aand the computing node102bof the auditee110b. In some implementations, the proving key Pkand/or a verification key Vkmay be generated by the auditor110ausing the computing node102a. 
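By way of illustration only, the Boolean function C described above can be sketched directly. The discrete-log toy group and the name GEN are assumptions for the sketch; real DLN accounts typically use elliptic-curve key pairs, and a real zk-SNARK toolchain would compile C into an arithmetic circuit rather than call it as a Python function.

```python
# Toy group parameters (assumed for illustration).
P, Q, GEN = 2039, 1019, 4

def C(x, w):
    """Return True exactly when w is the private key matching public key x,
    i.e. when x == GEN^w (mod P); False otherwise."""
    return pow(GEN, w, P) == x

# C is the statement a key generator G would compile into keys:
# (Pk, Vk) = G(L, C), with the parameter L kept from (or destroyed
# before reaching) the proof generator.
assert C(pow(GEN, 7, P), 7)       # matching key pair
assert not C(pow(GEN, 7, P), 8)   # wrong private key
```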
In some implementations, the proving key Pkand/or a verification key Vkmay not be generated by the auditee110b(e.g., using the computing node102b). In some instances, the proving key Pkand/or the verification key Vkmay be generated on the ZKP-enabled DLN100. In yet other instances, the proving key Pkand/or the verification key Vkmay be generated on a computing node (not shown) that is not part of the ZKP-enabled DLN100(e.g., a computing node that is off-chain). In some implementations, the parameter L may be destroyed after the generation of the ZKP114without having been made available to the auditee110b(i.e., to the generator of the ZKP). In some embodiments, after the generation of the proving key Pkand/or a verification key Vk, the proving key Pkmay be provided to the auditee110bat the computing node102band the verification key Vkmay be provided to the auditor110aat the computing node102a. In some embodiments, the auditee110bmay use the proving key Pk, the public key x and/or the private key w of the auditee110bon the ZKP-enabled DLN100to generate the ZKP114that the auditee110bowns the account (e.g., the auditee110bknows or owns the private key w). For example, the auditee110b, using the computing node102b, may apply an algorithm P on the proving key Pk, the public key x and/or the private key w to return the ZKP114prf=P(Pk, x, w). In some implementations, the auditee110b, using the computing node102b, may generate the ZKP114prf on the ZKP-enabled DLN100. In some implementations, the auditee110bmay use a computing node (not shown) that is not part of the ZKP-enabled DLN100(e.g., a computing node that is off-chain) to generate the ZKP114prf. In some embodiments, the auditee110b, via the computing node102b, may make the ZKP114prf available to the auditor110ato facilitate the audit process to determine whether the auditee110bowns the account identified by the public key x. 
For example, the auditee110b, using the computing node102b, may provide the ZKP114prf to the auditor110aat the computing node102a, and/or may provide the ZKP114prf to the smart contract executing on the ZKP-enabled DLN100. For instance, the auditee110b, via the computing node102b, may provide the ZKP114prf to the auditor110aat the computing node102ain response to a request112from the auditor110a. In some implementations, the auditee110b, using the computing node102b, may provide the ZKP114prf to the smart contract on the ZKP-enabled DLN100for later use by the auditor110awhen the auditor110ais ready to audit the ownership of the account. In some embodiments, after the ZKP114prf is made available to the auditor110a(e.g., via the computing node102a) and/or provided to the smart contract, the auditor110amay proceed with the verification of the ZKP114prf to determine if in fact the auditee110bowns the account on the ZKP-enabled DLN100. For example, the auditor110a, using the computing node102a, may proceed with applying an algorithm V on the verification key Vk, the public key x and/or the ZKP114prf (i.e., V(Vk, x, prf)) that returns “true” only when the ZKP114prf is a valid proof. In other words, V(Vk, x, prf)=“true” only when the private key w is the private key of the asymmetric key pair of the account the ownership of which is being audited (and the account identified by the public key of the public key-private key pair). In some implementations, the auditor110amay apply, using the computing node102a, the algorithm V on the verification key Vk, the public key x and/or the ZKP114prf on the ZKP-enabled DLN100and/or off of the ZKP-enabled DLN100. In some implementations, the auditor110amay use a computing node (not shown) that is not part of the ZKP-enabled DLN100(e.g., a computing node that is off-chain) to apply the algorithm V on the verification key Vk, the public key x and/or the ZKP114prf.
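By way of illustration only, the call shapes (Pk, Vk)=G(L, C), prf=P(Pk, x, w) and V(Vk, x, prf) can be sketched end to end. The sketch is NOT an actual zk-SNARK: a Fiat-Shamir-transformed Schnorr proof of knowledge of the private key stands in for the proving and verification algorithms, and the names keygen_G, prove_P and verify_V, the toy group, and the hash-based challenge are all assumptions made for the sketch.

```python
import hashlib
import secrets

# Toy group (assumed for illustration).
P, Q, GEN = 2039, 1019, 4

def keygen_G(L, C_description):
    """Stand-in for the key generator: (Pk, Vk) = G(L, C). Here both keys
    are just the public group parameters; a real zk-SNARK generator would
    compile the circuit C using the secret parameter L."""
    del L  # the trusted party destroys L after key generation
    return (P, Q, GEN), (P, Q, GEN)

def challenge(x, t):
    """Fiat-Shamir challenge: hashing replaces the verifier's interaction."""
    return int.from_bytes(hashlib.sha256(f"{x}:{t}".encode()).digest(), "big") % Q

def prove_P(Pk, x, w):
    """prf = P(Pk, x, w): prove knowledge of w with x == GEN^w, without revealing w."""
    p, q, g = Pk
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    c = challenge(x, t)
    return (t, (r + c * w) % q)

def verify_V(Vk, x, prf):
    """V(Vk, x, prf): returns True only for a valid proof."""
    p, q, g = Vk
    t, s = prf
    c = challenge(x, t)
    return pow(g, s, p) == (t * pow(x, c, p)) % p

Pk, Vk = keygen_G(L=secrets.randbelow(Q), C_description="x == GEN^w")
w = secrets.randbelow(Q - 1) + 1    # auditee's private key
x = pow(GEN, w, P)                  # audited account's public key
prf = prove_P(Pk, x, w)
assert verify_V(Vk, x, prf)         # auditor's non-interactive check
```

Note the non-interactive shape: once prf is published, verification needs only Vk, the public key x and prf, with no further response from the auditee.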
In some implementations, when the auditor110aapplies the algorithm V (e.g., using the computing node102aor an external computing node (not shown)) and a “true” result is returned, the auditor110amay consider the result as evidence or confirmation that the auditee110bhas access to the private key w and owns the account on the ZKP-enabled DLN100that is being audited. In some embodiments, after the auditee110bmakes the ZKP114prf available to the auditor110a(e.g., by providing, via the computing node102b, the ZKP114prf to the auditor110aand/or the smart contract), there may not be any further interaction between the auditee110band the auditor110aand/or between the computing node102band the computing node102a. In other words, the audit process of the account may not be interactive between the auditor110aand the auditee110bafter the ZKP114prf is generated and made available by the auditee110b. For example, the auditor110aor the computing node102amay not provide a cryptographic challenge to the auditee110bor to the computing node102b, and/or the auditee110bor the computing node102bmay not initiate a test transaction (e.g., as a demonstration that the auditee110bowns the account) as part of the audit process (e.g., after the ZKP114prf is made available by the auditee110b(e.g., using the computing node102b) or after the auditee110b, at the computing node102b, receives the request112from the auditor110a). In some implementations, the auditee110bmay not reveal the private key of the account to the auditor110aduring the audit process (e.g., after the ZKP114prf is made available by the auditee110bor after the auditee110breceives the request112from the computing node102aof the auditor110a). For example, the auditor110amay not have access to or knowledge of the private key of the account at least until after the verification of the ZKP114prf (e.g., the application of the algorithm V returning “true”).
In some embodiments, the verification of the ZKP114prf may be performed without any identifying information of the auditor110a, the auditee110b, the account and/or contents of the account being revealed or made public as a result of the audit process or the verification process. FIG.2shows a flow chart illustrating the generation and use of a ZKP in auditing the ownership of an account on a ZKP-enabled DLN, according to some embodiments. In some embodiments, a first participant110a(hereinafter referred to as an “auditor”) of the ZKP-enabled DLN100may be an auditor retained for determining whether a second participant110b(e.g., a client, hereinafter referred to as an “auditee”) owns an account on the ZKP-enabled DLN100that the auditee claims to own. For example, the auditee110bmay claim that an account on the ZKP-enabled DLN100that is identified by the public key of an asymmetric key pair of the account belongs to the auditee, and an auditor110amay be tasked to determine whether the auditee110bin fact owns the account. For instance, the auditor110amay be tasked to determine if the auditee110bowns, has access to or possesses the private key of the asymmetric key pair of the account. In some implementations, ownership or possession of the private key may be deemed as a proof of ownership of the account. In some cases, the account may be identified by a blockchain network public address which may be obtained by hashing, amongst other things, the public key. In some embodiments, in response to a request at202by an auditor110a, the auditee110bmay use the computing device102bto generate, at204, a ZKP that the auditee110bowns the account on the ZKP-enabled DLN100. In some embodiments, the auditee110bmay use the computing device102bto generate the ZKP without necessarily having been requested by the auditor110a.
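By way of illustration only, deriving a public address by hashing the public key can be sketched as follows. The function name and the choice of SHA-256 are assumptions for the sketch: Ethereum-style networks, for example, use Keccak-256 and keep the last 20 bytes, but Keccak is not in the Python standard library, so SHA-256 stands in here.

```python
import hashlib

def account_address(public_key: int) -> str:
    """Derive a short public address from an account's public key
    (illustrative stand-in: SHA-256, last 20 bytes of the digest)."""
    digest = hashlib.sha256(public_key.to_bytes(32, "big")).digest()
    return "0x" + digest[-20:].hex()

addr = account_address(0x1234)
# The address is deterministic in the public key but reveals neither the
# private key nor the owner's identity.
```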
For example, after an auditor-auditee relationship is established between the auditor110aand the auditee110b, the auditee may use the computing device102bto generate the ZKP for later use by the auditor110a. In some embodiments, the ZKP may include the proof that the auditee110bowns, has access to or possesses the private key of an asymmetric key pair of an account on the ZKP-enabled DLN100identified by the public key of the asymmetric key pair. In some embodiments, at206, the auditee110bmay use the computing device102bto provide the ZKP to the auditor110aand/or the smart contract of the ZKP-enabled DLN100. In some implementations, the auditee110bmay provide or publish the ZKP to the smart contract anonymously. After the ZKP is provided to the auditor110aand/or the smart contract, in some embodiments, the auditor110amay proceed with having the smart contract verify the ZKP to determine whether the auditee110bin fact owns the account. For example, the auditor110amay cause the smart contract to start the computation of the ZKP to determine the validity of the ZKP. In some implementations, the smart contract may be programmed to initiate (e.g., automatically) the computation of the ZKP based on a pre-determined condition so that the auditor110acan use the results of the computation or verification of the ZKP when auditing the auditee110b's ownership of the account. For example, the smart contract may compute the ZKP at pre-determined dates and times (e.g., the date and time information obtained by the smart contract from an oracle, as discussed above). In some embodiments, at208, the smart contract of the ZKP-enabled DLN100may verify the ZKP (e.g., verify that the ZKP is valid) and the auditor110amay use the computing device102ato generate a confirmation that the auditee110bis an owner of the account.
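The pre-determined-condition trigger described above can be sketched with a toy stand-in; the contract class, the verifier callback, and the oracle interface below are invented placeholders rather than any particular smart-contract platform's API:

```python
from datetime import datetime, timezone

class ToyAuditContract:
    """Placeholder smart contract: stores published proofs and verifies them
    once a pre-determined date (read from an 'oracle') has passed."""

    def __init__(self, verify_fn, audit_after):
        self._verify = verify_fn        # stand-in for the ZKP verification algorithm V
        self._audit_after = audit_after # pre-determined trigger condition
        self._proofs = []               # proofs published (possibly anonymously)
        self.results = []

    def publish_proof(self, public_key, proof):
        # the auditee publishes the ZKP; no further interaction is required
        self._proofs.append((public_key, proof))

    def maybe_run_audit(self, oracle_now):
        # the current date/time comes from an oracle, as discussed above
        if oracle_now >= self._audit_after:
            self.results = [(pk, self._verify(pk, prf))
                            for pk, prf in self._proofs]
        return self.results
```

The auditor would then read `results` off the contract rather than challenging the auditee directly.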
In some implementations, the verification of the ZKP and/or the generation of the confirmation may occur without any further interaction between the auditor110aand the auditee110bafter the auditee110bprovided the ZKP to the auditor110a(e.g., using the computing device102b) and/or the smart contract. For example, the ZKP may be verified and/or the confirmation may be generated without the auditee110bor the auditee's computing device102bresponding to a cryptographic challenge (e.g., from the auditor110a) or being engaged in a test transaction (e.g., with the auditor110a). In some implementations, a test transaction can be a transaction originating from the account being audited and supposedly undertaken by the auditee110bas a demonstration of the auditee's110bownership of the account. In some implementations, the verification of the ZKP and/or the generation of the confirmation may occur without any identifying information of the auditor110a, the auditee110b, the account and/or contents of the account being revealed to the other participants of the ZKP-enabled DLN100or to the public at large as a result of the audit process. For example, the verification of the ZKP and/or the generation of the confirmation may occur without the private key of the account being revealed to the auditor110aand/or to the other participants of the ZKP-enabled DLN100. While various embodiments have been described and illustrated herein, one will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein.
More generally, one will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. One will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the disclosure, including the appended claims and equivalents thereto, disclosed embodiments may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, tool, element, component, and/or method described herein. In addition, any combination of two or more such features, systems, articles, elements, components, and/or methods, if such features, systems, articles, elements, components, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. The above-described embodiments can be implemented in any of numerous ways. For example, embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be stored (e.g., on non-transitory memory) and executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, a netbook computer, or a tablet computer.
Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a smart phone, smart device, or any other suitable portable or fixed electronic device. Also, a computer can have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer can receive input information through speech recognition or in another audible format. Such computers can be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN), or the Internet. Such networks can be based on any suitable technology and can operate according to any suitable protocol and can include wireless networks, wired networks or fiber optic networks. The various methods or processes outlined herein can be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software can be written using any of a number of suitable programming languages and/or programming or scripting tools, and also can be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
In this respect, various disclosed concepts can be embodied as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other non-transitory medium or tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the disclosure discussed above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present disclosure as discussed above. The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present disclosure need not reside on a single computer or processor, but can be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the disclosure. Computer-executable instructions can be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules can be combined or distributed as desired in various embodiments. Also, data structures can be stored in computer-readable media in any suitable form. 
For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships can likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey relationships between the fields. However, any suitable mechanism can be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships between data elements. Also, various concepts can be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments can be constructed in which acts are performed in an order different than illustrated, which can include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety. All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined.
Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in claims, shall have its ordinary meaning as used in the field of patent law. As used herein, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc. All transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. | 44,632 |
11943359 | DETAILED DESCRIPTION The present disclosure will now be described with reference to the figures, which in general relate to secure compute network devices, systems, and methods. A network device (also referred to herein as a “network node”) may include, but is not limited to, a router or a switch. Secure compute (also known as privacy-preserving compute) is a subfield of cryptography with the goal of creating methods for computing a result for some input(s) while keeping the input(s) private. One type of a secure compute is a zero-knowledge proof (also referred to as a zero-knowledge protocol). In one embodiment, a network node acts as a “verifier” by verifying whether a zero-knowledge proof that was generated by an input node is correct. Three requirements are met by a zero-knowledge protocol: completeness, soundness, and zero-knowledge. In this context, completeness means that if the input is true, the zero-knowledge proof always returns “true.” Soundness means that if the input is false, it is not possible to trick the zero-knowledge proof into returning “true.” Zero knowledge means that if the statement is true, the verifier learns nothing from the proof other than the fact that the statement is true. Another type of a secure compute is a secure multi-party computation (MPC). An MPC protocol allows n “players” to compute a function of their data while maintaining the privacy of their data. A network node computes some function of the data, without the input nodes revealing their data, in one embodiment. For example, two entities that have confidential patient data could perform joint research on the patient data without revealing the confidential patient information. Thus, the network node performs a secure MPC, in one embodiment. Embodiments are not limited to the type of a secure compute being a zero-knowledge proof or a secure multi-party computation. Other types of privacy preserving computes can be implemented.
The one or more parties (which are referred to herein as “input nodes”) encrypt information that is desired to be kept private before sending the encrypted information to one or more network nodes, in one embodiment. The one or more network nodes perform the secure compute without decrypting the encrypted information, in one embodiment. Therefore, the one or more network nodes may determine a result of the secure compute while maintaining the privacy of the one or more parties' information. A significant challenge in implementing a secure compute, such as a zero-knowledge proof or an MPC, is that it may take a relatively large amount of computer memory and/or computation time, which can make it difficult to provide a practical implementation. Embodiments in which the secure compute is performed by a network node move the burden of performing the secure compute away from electronic devices that either lack the computation power or for which performing the secure compute would be too burdensome. Another significant challenge in implementing a secure compute is to prevent dishonest entities from attempting to produce an incorrect result of the secure compute. A secure compute has one correct result, as defined herein. However, dishonest entities could attempt to force an incorrect result. The one or more network nodes that perform the secure compute provide some mechanism to ensure that the result of the secure compute can be trusted to be a correct result, in one embodiment. Herein, a “trusted result” means that the result of the secure compute can be trusted to be a correct result.
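One well-known way a node can compute on data it cannot decrypt is additively homomorphic encryption. The sketch below is a toy Paillier cryptosystem with tiny hardcoded primes, given only to illustrate the idea; the disclosure does not mandate Paillier or any particular scheme, and real deployments would use keys thousands of bits long:

```python
import math
import secrets

# Toy Paillier parameters (tiny hardcoded primes; illustrative only).
p, q = 101, 113
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    # ciphertext c = g^m * r^n mod n^2 for random r coprime to n
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n
    u = pow(c, lam, n2)
    return (((u - 1) // n) * mu) % n

def add_encrypted(c1, c2):
    """Homomorphic addition: the node never sees the plaintexts."""
    return (c1 * c2) % n2
```

Multiplying two ciphertexts modulo n² yields a ciphertext of the sum of the plaintexts, so a network node can combine encrypted inputs and return an encrypted result while the private key stays with the input nodes.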
Examples of mechanisms to ensure that the result of the secure compute can be trusted to be a correct result include, but are not limited to, multiple network nodes acting as peer nodes to generate a trusted result, notarizing results of the secure compute from multiple network devices to produce the trusted result, and/or performing the secure compute in a trusted execution environment (TEE). Additionally, routing traffic can be reduced by selection of which network node(s) perform the secure compute. It is understood that the present embodiments of the disclosure may be implemented in many different forms and that the scope of the claims should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details. FIG.1illustrates a communication system100in which embodiments may be practiced. The communication system100includes one or more input nodes102(also referred to as “input device”), one or more result nodes (also referred to as “result device”)104, and a number of network nodes (also referred to as “network devices”)106. The network nodes106reside in one or more networks110. The network nodes106may be routers, switches, etc. At least one of the network nodes106is configured to perform a secure compute.
A secure compute has one or more inputs and at least one result, in one embodiment. An input node102is a node that provides an input to the secure compute. One or more input nodes102may provide different inputs to the secure compute, when there are two or more inputs. A result node104is a node that receives a result of the secure compute. Thus, the terms “input node” and “result node” refer to the role that the respective nodes play in the secure computes. A node might be an input node with respect to one secure compute and a result node with respect to another secure compute. It is possible for a node to be both an input node and a result node with respect to the same secure compute, when there are more than one input nodes for that secure compute. An input node102or a result node104could be user equipment (UE) including, but not limited to, wireless transmit/receive unit (WTRU), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable devices or consumer electronics device. An input node102or a result node104could be a web server. An input node102may also be referred to herein as an input device, which is understood to be an electronic device such as a UE or web server. A result node104may also be referred to herein as a result device, which is understood to be an electronic device such as a UE or web server. The network nodes106are configured to receive and process routing requests in the network110. A network node106may include, for example, a router or a switch. In one embodiment, a network node106routes packets between two nodes (e.g., between two input nodes102, between an input node102and a result node104, between two result nodes104) in accordance with an Open System Interconnection (OSI) network layer (or layer 3) protocol.
In one embodiment, a network node106delivers frames between two nodes in accordance with an Open System Interconnection (OSI) link layer (or layer 2) protocol. The network nodes106are not limited to these examples for receiving and processing routing requests in the network110. For example, in a multi-protocol label switching (MPLS) embodiment, an MPLS header can be added between the network layer header (i.e., the layer 3 header) and the link layer header (i.e., the layer 2 header) of the Open Systems Interconnection model (OSI model). Because MPLS often operates at a layer that is generally considered to lie between the traditional definitions of OSI Layer 2 (data link layer) and Layer 3 (network layer), MPLS is often referred to as a layer 2.5 protocol. At least one of the network nodes106is configured to perform a type of secure compute, in one embodiment. The secure compute may also be referred to herein as “privacy-preserving compute.” One type of a secure compute is a zero-knowledge proof (also referred to as a zero-knowledge protocol). Zero-knowledge protocols are known to those of ordinary skill in the art. It is not required that all of the network nodes106be able to perform the secure compute. In one embodiment, an input node102or a result node104sends a request (or query) into the network110to look for one or more network nodes106that are capable of performing the type of secure compute. The sender of the query (e.g., input node102or result node104) selects one or more of network nodes106to perform the secure compute. Note that there may be many types of zero-knowledge proof. For example, there may be many different protocols for computing and verifying a zero-knowledge proof. Hence, the sender of the query may look for one or more network nodes106that are capable of performing the particular type of zero-knowledge protocol. 
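The query-and-select step described above, in which a sender looks for network nodes capable of a particular type of secure compute, can be sketched with a toy in-memory registry; the node names and protocol labels below are invented for illustration:

```python
# Hypothetical capability registry: each node advertises which secure-compute
# protocols it supports (names are invented for illustration).
NODE_CAPABILITIES = {
    "node-a": {"schnorr-nizk", "secure-sum"},
    "node-b": {"secure-sum"},
    "node-c": {"schnorr-nizk", "garbled-circuit"},
}

def find_capable_nodes(protocol, registry=NODE_CAPABILITIES):
    """Return the nodes able to run the requested type of secure compute."""
    return sorted(node for node, caps in registry.items() if protocol in caps)
```

The sender of the query (an input node or result node) would then select one or more of the returned nodes to perform the secure compute.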
A zero-knowledge proof may be considered to be a secure two-party computation, where the verifier (e.g., network node106) verifies the validity of a statement by the prover (e.g., input node102). This concept can be extended to secure multiparty computation (MPC). An MPC protocol allows n “players” to compute a function of their inputs while maintaining the privacy of their inputs. One example of a secure multiparty computation is referred to as a secure sum protocol, which allows multiple parties to compute a function of their individual data without revealing the data to one another. As one example, party A has data x1, party B has data x2, and party C has data x3. Each of the parties may be a different input node102. A network node106computes some function of the data, without the input nodes102revealing their data. For example, two entities that have confidential patient data could perform joint research on the patient data without revealing the confidential patient information. Thus, the network node106performs a secure multiparty computation, in one embodiment. Techniques for secure multiparty computation are known to those of ordinary skill in the art. As noted, the secure compute may have one or more inputs, from a corresponding one or more input nodes102. Each of the input nodes that has an input for the secure compute sends its input to each of the selected network nodes106, in one embodiment. After the one or more selected network nodes106perform the secure compute, the result of the secure compute is provided to the result node(s)104. The network nodes106are configured to ensure that the result of the secure compute is trusted in one embodiment. In other words, the network nodes106are configured to ensure that the result of the secure compute is the correct result. Further details of how the network nodes106ensure that the result is trusted are described herein. The following example of a secure compute will be used for purpose of illustration.
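A minimal secure-sum sketch using additive secret sharing (one common way to realize the protocol described above; the disclosure does not fix the sharing scheme): each input node splits its value into random shares that sum to the value modulo a public constant, so no single share reveals anything about the underlying data:

```python
import secrets

M = 2**61 - 1  # public modulus; individual shares are uniform mod M

def share(x, n_parties):
    """Split x into n additive shares that sum to x (mod M)."""
    shares = [secrets.randbelow(M) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % M)
    return shares

def secure_sum(inputs):
    n = len(inputs)
    # each party i sends share j of its input to party j
    all_shares = [share(x, n) for x in inputs]
    # each party locally sums the shares it received and publishes the subtotal
    subtotals = [sum(all_shares[i][j] for i in range(n)) % M for j in range(n)]
    # the subtotals reveal only the total, not any individual input
    return sum(subtotals) % M
```

Here x1, x2, and x3 from the example above would be the `inputs`; the node combining the subtotals learns the sum but none of the individual values.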
An input node102might be a computer laptop and a result node104might be a web server. A user of the computer laptop might be making a purchase on an e-commerce server that includes the web server. One of the network nodes106may perform a secure compute related to the purchase. An example of the secure compute is for the network node106to verify that the user of the laptop has sufficient monetary funds to make the purchase. This transaction could involve blockchain technology; however, the use of blockchain technology is not required. The network node(s)106could perform any type of secure compute. The secure compute allows the input node102(or nodes) to keep some input information private, while still allowing the result node104to be certain with respect to the input information. For example, the input node102could keep private the total amount of money that a user has in an account, while still assuring the result node104that the user has sufficient funds to make a purchase. In one embodiment, the input nodes102keep their information private by encrypting the information before sending it to the network node(s)106. The network node(s)106are able to generate a result for the secure compute without decrypting the information, in one embodiment. In some cases, the secure compute involves more than one input node102. An example of this is what is sometimes referred to as “the millionaire's problem.” In one version of the millionaire's problem, two or more people want to determine who has more wealth without revealing their exact wealth to anyone. More generally, this can be viewed as the problem of comparing two numbers to determine which is larger without revealing the actual value of either number. There are many practical applications of this problem. One practical example is if a buyer only wishes to pay a certain amount and a seller wants to sell for some minimum amount, but neither wishes to reveal their respective amounts. 
One or more of the network nodes106performs a secure compute with respect to these or other problems, which keeps certain inputs from the input nodes102private. For example, a network node106could determine which of two people has more wealth without either person revealing their wealth to the network node106. In one embodiment, a single network node106performs the secure compute. The network node106has a trusted execution environment (TEE), in one embodiment. The secure compute is performed in the TEE, in one embodiment. A TEE is a secure area of a main processor, in one embodiment. A TEE guarantees code and data loaded inside are protected with respect to confidentiality and integrity, in one embodiment. Examples of TEE include, but are not limited to, AMD Platform Security Processor (PSP), ARM TrustZone® Technology, Intel Trusted Execution Technology, and MultiZone® Security Trusted Execution Environment. The network node106may send the result to a result node104. Thus, in this manner a network node106may provide the result to the result node104. Moreover, the result node104can trust the result to be the correct result due to the use of, for example, the TEE. Note that a security mechanism other than a TEE can be used in conjunction with the network node106. In one embodiment, two or more network nodes106perform the same secure compute. Each network node106may provide its result of the secure compute to one of the nodes, which may serve as a “notary node”. The notary node may compare the results to determine a final result. For example, if all of the results match, the results may be considered to be the final result, in one embodiment. If the results do not match, the results may be discarded, or a majority of the results may be considered to be the final result in one embodiment. The notary node is one of the network nodes106, in one embodiment. 
The notary node could be a node that is not a network node106, in which case, two or more network nodes106send their respective results to the notary node to be notarized, in one embodiment. The notary node could be the result node104, in which case, two or more network nodes106send their respective results to the result node104to be notarized, in one embodiment. Moreover, the result node104can trust the result to be the correct result due to, for example, the consensus mechanism provided by the notary node. In one embodiment, two or more network nodes106act as peer nodes (or more simply “peers”) to ensure that the result of the secure compute is trusted to be the correct result. Each node may perform the secure compute. The network nodes106may act as peers and compare their results. For example, two or more network nodes106act as peers in a blockchain network. A blockchain network is comprised of peer nodes, each of which can hold copies of the ledger. Thus, a network node106holds a copy of a blockchain ledger, in one embodiment. The result of the secure compute is recorded in a blockchain ledger, in one embodiment. The result node104obtains the result from the blockchain ledger, in one embodiment. Thus, in this manner one or more network nodes106may provide the result to the result node104. Moreover, the result node104can trust the result to be correct due to, for example, the consensus mechanism provided by the network nodes106acting as peers. AlthoughFIG.1illustrates one example of a communication system, various changes may be made toFIG.1. For example, the communication system100could include any number of input nodes102, network nodes106, result nodes104, or other components in any suitable configuration. FIG.2illustrates an embodiment of a network node (e.g., router) in accordance with embodiments of the disclosure.
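The notarization rule described above (a final result on unanimity, optionally a majority result, otherwise the results are discarded) can be sketched as follows; the function name is invented for illustration:

```python
from collections import Counter

def notarize(results):
    """Combine per-node results of the same secure compute into a final result.

    All results matching -> that result; a strict majority -> the majority
    result; otherwise the results are discarded (None), per the embodiments
    described above.
    """
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    if count == len(results) or count > len(results) // 2:
        return value
    return None
```

A notary node (or the result node104acting as notary) would run this over the results received from the two or more network nodes106.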
The network node106may comprise a plurality of input/output ports210/230and/or receivers (Rx)212and transmitters (Tx)232for receiving data from and transmitting data to other nodes, a processor220, including a storage222, to process data and determine to which node to send the data. Although illustrated as a single processor, the processor220is not so limited and may comprise multiple processors. The processor220may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. The processor220may be configured to implement any of the schemes described herein using any one or combination of steps described in the embodiments. In some embodiments, the processor220is configured to perform a secure compute. Moreover, the processor220may be implemented using hardware, software, or both. The storage222(or memory) may include cache224and long-term storage226, and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage222may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data). The storage222contains instructions that may be executed on the processor220in order to perform a secure compute, in one embodiment. FIG.3illustrates high level block diagram of a computing system300. The computing system300may be used to implement an input node102, a result node104, or a network node106. Specific devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. The computing system300may comprise one or more input/output devices, such as network interfaces, storage interfaces, and the like. The computing system300may include a central processing unit (CPU)310, a memory320, a mass storage device330, and an I/O interface360connected to a bus370. The bus may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus or the like. The CPU310may comprise any type of electronic data processor. The CPU310may be configured to implement any of the schemes described herein, using any one or combination of steps described in the embodiments. The memory320may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory320may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs. In embodiments, the memory320is non-transitory. The memory320is a non-transitory computer-readable medium storing computer instructions that, when executed by the CPU310, cause the CPU310to perform various functionality described herein. In one embodiment, the memory320comprises software modules that may be executed on the CPU310. The mass storage device330may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus. The mass storage device330may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
The computing system300also includes one or more network interfaces350, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks110. The network interface350allows the computing system300to communicate with remote units via the network110. For example, the network interface350may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, computing system300is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like. Herein, the term "network interface" will be understood to include a port. The components depicted in the computing system ofFIG.3are those typically found in computing systems suitable for use with the technology described herein, and are intended to represent a broad category of such computer components that are well known in the art. Many different bus configurations, network platforms, and operating systems can be used. FIG.4illustrates a high level block diagram of a computing system400that may be used in one embodiment of a network node106. The computing system400has some elements in common with computing system300, which will not be described in detail. For example, the computing system400has a CPU410, memory440(a.k.a., mass storage), a network interface450, and an I/O interface460connected to a bus470. The memory440stores various modules. The routing module440A is configured to receive and process routing requests in a network, such as network110. In one embodiment, the routing module440A is configured to route packets in accordance with an Open System Interconnection (OSI) layer 3 protocol. In one embodiment, the routing module440A is configured to deliver frames in accordance with an Open System Interconnection (OSI) layer 2 protocol.
The capability qualifier module440B is configured to inform other nodes (typically not another network node106) that the network device106is able to perform a particular type (or types) of secure compute. The type of secure compute may indicate a type of zero-knowledge proof, a type of MPC, or some other privacy preserving compute. The capability qualifier module440B is configured to add information to a packet header that indicates that the network device106is able to perform a particular type of secure compute, in one embodiment. The packet header may be for one or more of the packets that are routed by the network device106. The capability qualifier module440B is configured to receive a query, via a network, whether the network device106is able to perform a particular type of secure compute, in one embodiment. The capability qualifier module440B is configured to respond to the query to indicate that the network device is able to perform the particular type of secure compute, in one embodiment. The secure compute module440C is configured to perform a secure compute. As has been mentioned above, a secure compute (also known as privacy-preserving compute) is a subfield of cryptography with the goal of creating methods for computing a result for some input(s) while keeping the input(s) private. In one embodiment, the secure compute includes a zero-knowledge proof (also referred to as a zero-knowledge protocol). In one embodiment, the secure compute includes a secure MPC. The trust providing module440D is configured to perform an operation that ensures that the result of the secure compute is the correct result (e.g., is a trusted result). In one embodiment, trust providing module440D is configured to communicate with other network devices106that performed the same secure compute. This may allow, for example, the network device106to act as a peer with other network devices in the network that performed the secure compute.
In one embodiment, the network device106is configured to act as a peer node in a blockchain network. By acting as a peer node in a blockchain network when performing the secure compute, the network device106helps to ensure that the result of the secure compute is trusted to be correct. The result providing module440E is configured to provide the result of the secure compute to a result device104connected to the network110. In one embodiment, the result is sent from the network device106to the result device104. In one embodiment, the network device106records the result in a blockchain ledger. The blockchain ledger is stored on the network device106, in one embodiment. For example, the blockchain ledger could be stored in memory440(a.k.a., mass storage). The result device104obtains the result from the blockchain ledger, in one embodiment. The secure compute may require substantial computation power, in some embodiments. The secure compute accelerator480is configured to accelerate the secure compute, in one embodiment. The secure compute accelerator480may contain hardware (e.g., a processor and possibly additional memory). The secure compute accelerator480comprises a hardware accelerator, in one embodiment. The hardware accelerator is configured to perform a secure compute, in one embodiment. Thus, the secure compute accelerator480is specifically designed to perform the secure compute, in one embodiment. Hence, the secure compute accelerator480is able to perform the secure compute faster than a general purpose processor, in one embodiment. The secure compute accelerator480is not required. Hence, in one embodiment, the secure compute is performed on the CPU410. FIG.5illustrates example user equipment. The user equipment500may be used to implement an input node102or a result node104. 
The user equipment (UE) may for example be a mobile telephone, but may be other devices in further examples such as a desktop computer, laptop computer, tablet, hand-held computing device, automobile computing device and/or other computing devices. As shown in the figure, the UE500includes at least one processor504. The processor504implements various processing operations of the UE500. For example, the processor504may perform signal coding, data processing, power control, input/output processing, or any other functionality enabling the UE500to operate in the system100(FIG.1). The processor504may include any suitable processing or computing device configured to perform one or more operations. For example, the processor504may include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. The UE500also includes at least one transceiver502. The transceiver502is configured to modulate data or other content for transmission by at least one antenna510. The transceiver502is also configured to demodulate data or other content received by the at least one antenna510. Each transceiver502may include any suitable structure for generating signals for wireless transmission and/or processing signals received wirelessly. Each antenna510includes any suitable structure for transmitting and/or receiving wireless signals. It is appreciated that one or multiple transceivers502could be used in the UE500, and one or multiple antennas510could be used in the UE500. Although shown as a single functional unit, a transceiver502may also be implemented using at least one transmitter and at least one separate receiver. The UE500further includes one or more input/output devices508. The input/output devices508facilitate interaction with a user. 
Each input/output device508includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen. In addition, the UE500includes at least one memory506. The memory506stores instructions and data used, generated, or collected by the UE500. For example, the memory506could store software or firmware instructions executed by the processor(s)504and data used to reduce or eliminate interference in incoming signals. Each memory506includes any suitable volatile and/or non-volatile storage and retrieval device(s). Any suitable type of memory may be used, such as random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and the like. FIG.6is a flowchart of one embodiment of a process600of secure computing. The process600is performed by a network node106, in one embodiment. Step602includes receiving and processing routing requests in a network110. Step602includes routing packets in accordance with an Open System Interconnection (OSI) layer 3 protocol, in one embodiment. Step602includes delivering frames in accordance with an Open System Interconnection (OSI) layer 2 protocol, in one embodiment. Step602includes receiving and processing routing requests in accordance with an MPLS protocol, in one embodiment. Step604includes sending an indication into the network110that the network device106is able to perform a type of secure compute. In one embodiment, step604includes the network device106adding information to a header of a packet that indicates that the network device106is able to perform the type of secure compute. For example, as packets are being processed in step602, the network device106may add information to a header of one or more of the packets. 
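The header-based advertisement of step604can be sketched with a hypothetical encoding; the one-byte option format and the type codes below are assumptions made for illustration only, not a standardized header layout:

```python
# Hypothetical capability advertisement (illustrative, not a real header
# format): a network node appends a one-byte option to a packet header
# advertising which secure-compute types it supports, and an input node
# or result node inspects that byte (Option B) to discover the capability.
ZK_PROOF = 0x01   # assumed bit flag: zero-knowledge proof
MPC      = 0x02   # assumed bit flag: secure multi-party computation

def advertise(header: bytes, capabilities: int) -> bytes:
    """Network node: append a capability byte to an existing header."""
    return header + bytes([capabilities])

def supports(header: bytes, capability: int) -> bool:
    """Input/result node: test the last header byte for a capability."""
    return bool(header[-1] & capability)

hdr = advertise(b"\x45\x00", ZK_PROOF | MPC)
print(supports(hdr, ZK_PROOF))  # -> True
print(supports(hdr, MPC))       # -> True
```

A bit-flag byte is used here only because it keeps the sketch short; an implementation could equally carry the capability in any extensible option field of the routing protocol in use.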
The packet may be a packet that is delivered between the input node102and the result node104; however, this is not required. Thus, the network device106is one of the network devices106that processes a packet transmitted between the input node102and result node104, in one embodiment. Other techniques may be used for the network node106to indicate that it is capable of performing the type of secure compute. In one embodiment, the network device106responds to a query from, for example, the input node102or the result node104about the capabilities of the network node106. For example, the network device106may receive a query of whether the network device106is able to perform a type of secure compute. This request is received from a node that will be providing input to the secure compute (or an input node102), in one embodiment. This request is received from a node that will be receiving the result of the secure compute (or a result node104), in one embodiment. Step606includes a network node106performing the type of secure compute based on an input from the input node(s)102. The network node106may receive an input to the secure compute from each of one or more input nodes102prior to step606, in one embodiment. In one embodiment, the network node106performs the secure compute based on an input from a single input node102. In one embodiment, the network node106performs the secure compute based on a separate input from each of two or more input nodes102. In one embodiment, the network node106is one of two or more network nodes106that perform the same secure compute. In one embodiment, the secure compute includes a zero-knowledge proof. In this case, the input node102might provide a proof and a verification key to the network node106. The network node106may verify the correctness of the zero-knowledge proof, based on the verification key. Step608includes the network node106ensuring that the result of the secure compute is trusted to be correct.
There are numerous ways that the network node106may ensure that the result is trusted to be correct. In one embodiment, the network node106acts as a peer with other network devices in the network110that performed the secure compute to ensure that the result is trusted to be correct. The network node106acts as a peer in a blockchain network to ensure that the result is trusted to be correct, in one embodiment. The network node106notarizes results of the secure compute from a set of network devices, in one embodiment. The network node106performs the secure compute in a TEE to ensure that the result is trusted to be correct, in one embodiment. Step610includes the network node106providing a result of the secure compute to the result node104. Step610includes providing the result either directly from the network node106to the result node104or indirectly from the network node106to the result node104. In one embodiment, the network node106provides the result directly to the result node104. In one embodiment, the network node106provides the result directly to a notary node, which provides the final result to the result node104. In one embodiment, the network node106adds the result to a blockchain ledger in order to indirectly provide the result to the result node104. That is, the result node104obtains the result from the blockchain ledger in order to indirectly obtain the result from the network node106. FIG.7is a flowchart of one embodiment of a process700of secure computing. The process700may be performed in communication system100, but is not limited thereto. In process700, there may be one or more input nodes102, one or more result nodes104, and one or more network nodes106may perform the secure compute. Step702includes nodes communicating a desire to perform a secure compute. In one embodiment, a result node104sends a request to an input node102to perform a zero-knowledge proof. 
This request passes through the network110, and may be processed by one or more network nodes106that are capable of performing secure computes. However, it is possible that none of the network nodes106that process the request are capable of performing secure computes. Steps704and705describe different options for learning which one or more network nodes106are capable of performing secure computes. Either or both steps may be performed. Step704(Option A) includes either a result node104or an input node102sending one or more requests into the network110to look for a network node106that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes106that processed the request of step702. However, the request(s) may be sent to network nodes106that did not process the request of step702. One or more network nodes106may respond to the request of step704, indicating that the respective network node106is capable of performing secure computes. Step705(Option B) includes either a result node104or an input node102examining a header of one or more received packets to determine which network node (or nodes)106is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step702. For example, a result node104or an input node102may examine the header of a packet it received from the other node in step702. However, the one or more packets are not required to be associated with the communication between the nodes in step702. Step706includes selecting one or more network nodes106that is/are capable of performing the secure compute. This selection is based on the network nodes106that respond to the request of step704, in one embodiment. This selection is based on examining the headers of packets in step705, in one embodiment. In one embodiment, a single network node106is selected. In one embodiment, multiple network nodes106are selected. 
The selection is made by a result node104, in one embodiment. The selection is made by an input node102, in one embodiment. The selection in step706is based on the distance to the network node106from the input node102, in one embodiment. The selection in step706is based on the distance to the network node106from the result node104, in one embodiment. For example, in either case a network node106having the shortest distance may be selected. The selection in step706is based on the route between the input node102and the result node104. For example, the network node106can be selected to achieve the shortest route between the input node102and the result node104. The selection in step706is based on capabilities of the network node106, in one embodiment. For example, there may be different algorithms for performing the secure compute. The selection could be made based on which algorithm(s) a network node106is able to perform. The selection in step706could be based on more than one factor. For example, the selection could be based on both distance (e.g., distance to the network node106, routing distance between input node102and result node104, etc.) and capability (e.g., secure compute algorithms which network node106is able to perform). In one embodiment, the network node106is selected based on a matrix of factors. Step708includes one or more input nodes102preparing the input for the secure compute. Step708includes each input node102encrypting the information that is desired to be kept private, in one embodiment. In one embodiment, an input node102prepares a zero-knowledge proof. Further details of an input node102preparing a zero-knowledge proof are discussed in connection with process1200inFIG.12(see, for example, step1208). The input in step708is not limited to being a zero-knowledge proof. Step708may include each node of an MPC preparing their respective inputs. 
For example, process700might be used to allow several parties to place a bid without revealing the actual bid by, for example, encrypting the bids. Step710includes the one or more input nodes102sending their respective input to each of the one or more selected network nodes106. Step712includes each of the one or more selected network nodes106performing the secure compute. In one embodiment, each network node106verifies the zero-knowledge proof. In one embodiment, each network node106performs an MPC. Step714includes ensuring that the result of the secure compute is trusted to be correct. In one embodiment, the trust is ensured by a security mechanism on the network node106. For example, the network node106may perform the secure compute in a TEE. In one embodiment, the trust is ensured by multiple network nodes106sending the result of the secure compute to a notary node, which notarizes the result. By "notarizing the result," it is meant that the notary compares the results from the multiple network nodes106to verify that the result can be trusted to be correct. In one embodiment, the result can be trusted if all of the results are the same. In one embodiment, the trust is ensured by multiple network nodes106acting as peers in a blockchain network, and recording the result into the blockchain ledger. Step716includes one or more result nodes104obtaining the result. In one embodiment, a result node104obtains the result directly from a network node106that performed the secure compute. In one embodiment, a result node104obtains the result indirectly from a notary node that notarized the result. In one embodiment, a result node104obtains the result from a blockchain ledger, into which the result has been recorded. FIGS.8,9, and10describe three different embodiments of secure compute.
These are three different embodiments of process700.FIG.8describes an embodiment in which a single network node106performs the secure compute.FIG.9describes an embodiment in which multiple network nodes106perform the secure compute.FIG.10describes an embodiment in which the result of the secure compute is recorded in a blockchain ledger. Referring now toFIG.8, a flowchart of one embodiment of a process800of secure computing using a single network node106is depicted. The process800may be performed in communication system100, but is not limited thereto. In process800, there may be one or more input nodes102, and one or more result nodes104. Step802includes nodes communicating a desire to perform a secure compute. In one embodiment, a result node104sends a request to an input node102to perform a zero-knowledge proof. Step804(Option A) includes either a result node104or an input node102sending one or more requests into the network110to look for a network node106that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes106that processed the request of step802. However, the request(s) may be sent to network nodes106that did not process the request of step802. One or more network nodes106may respond to the request of step804, indicating that the respective network node106is capable of performing secure computes. Step805(Option B) includes either a result node104or an input node102examining a header of one or more received packets to determine which network node (or nodes)106is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step802. However, the one or more packets are not required to be associated with the communication between the nodes in step802. Step806includes selecting a network node106to perform the secure compute. 
This selection is based on the network nodes106that respond to the request of step804, in one embodiment. This selection is based on examining the headers of packets in step805, in one embodiment. For example, a result node104or an input node102may examine the header of a packet it received from the other node in step802. Step808includes one or more input nodes102preparing the input to the secure compute. In one embodiment, an input node102prepares a zero-knowledge proof. Step808may include each node of an MPC preparing their respective inputs. Step810includes the one or more input nodes102sending their respective input to the selected network node106. Step812includes the selected network node106performing the secure compute. In one embodiment, the network node106performs the secure compute in a TEE, which is one embodiment of step714(ensuring that the result is trusted). Other trust mechanisms can be used at the network node106to ensure that the result is trusted. Step814includes the selected network node106providing the result to the result node104. In one embodiment, the selected network node106sends the result directly to the result node104. Referring now toFIG.9, a flowchart of one embodiment of a process900of secure computing using multiple network nodes106is depicted. The process900may be performed in communication system100, but is not limited thereto. In process900, there may be one or more input nodes102, and one or more result nodes104. Step902includes nodes communicating a desire to perform a secure compute. In one embodiment, a result node104sends a request to an input node102to perform a zero-knowledge proof. Step904(Option A) includes either a result node104or an input node102sending one or more requests into the network110to look for a network node106that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes106that processed the request of step902.
However, the request(s) may be sent to network nodes106that did not process the request of step902. One or more network nodes106may respond to the request of step904, indicating that the respective network node106is capable of performing secure computes. Step905(Option B) includes either a result node104or an input node102examining a header of one or more received packets to determine which network node (or nodes)106is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step902. For example, a result node104or an input node102may examine the header of a packet it received from the other node in step902. However, the one or more packets are not required to be associated with the communication between the nodes in step902. Step906includes selecting multiple network nodes106that are capable of performing the secure compute. In the event that only one network node106responds that it is capable of performing the secure compute, process800could be performed. Step908includes one or more input nodes102preparing the input. In one embodiment, an input node102prepares a zero-knowledge proof. Step908may include each node of an MPC preparing their respective inputs. Step908also includes the one or more input nodes102sending their respective input to each of the selected network nodes106. Step910includes each of the selected network nodes106performing the secure compute. Step912includes the selected network nodes106providing their respective results to a notary node. The notary node could be the result node, but that is not required. Step914includes the notary node notarizing the results. In one embodiment, the result from each of the network nodes106should match for the result to be valid. If the result is valid (step916), then the notary node sends the valid result to the one or more result nodes104, in step918.
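One way the notary node might compare the results received in step912, including the "majority rules" tally with ties dropped that is discussed in connection with steps914-920, is sketched below. This is an illustration under assumed names (the notarize function and TRUE/FALSE result values are not taken from any particular implementation):

```python
from collections import Counter

# Illustrative "majority rules" notarization: the notary tallies the
# TRUE/FALSE results reported by the network nodes, returns the most
# prevalent result, and drops the result (returns None) on a tie.
def notarize(results):
    tally = Counter(results)
    (top, n), *rest = tally.most_common()
    if rest and rest[0][1] == n:   # tie between the leading results
        return None                # notary drops the result
    return top

print(notarize([True, True, False]))  # -> True  (majority)
print(notarize([True, False]))        # -> None  (tie, dropped)
```

With the stricter all-must-match rule also described above, the check would instead be that the tally contains exactly one distinct result.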
If the result is not valid, then the notary node drops the result (step920). Note that the notary node can optionally be a result node. Steps914-920are one embodiment of step714(ensuring that the result is trusted). In one embodiment, the notary node performs a "majority rules" algorithm in which the notary node selects as the final result whichever result is most prevalent. For example, if each network node provides a TRUE or FALSE result, the notary node determines whether there are more TRUE or FALSE results. In one embodiment, an odd number of network nodes are used to prevent ties. In another embodiment, an even number of network nodes are permitted, with the results being discarded in the event of a tie. Referring now toFIG.10, a flowchart of one embodiment of a process1000of secure computing using multiple network nodes106and a blockchain is depicted. The process1000may be performed in communication system100, but is not limited thereto. In process1000, there may be one or more input nodes102, and one or more result nodes104. Step1002includes nodes communicating a desire to perform a secure compute. In one embodiment, a result node104sends a request to an input node102to perform a zero-knowledge proof. Step1004(Option A) includes either a result node104or an input node102sending one or more requests into the network110to look for a network node106that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes106that processed the request of step1002. However, the request(s) may be sent to network nodes106that did not process the request of step1002. One or more network nodes106may respond to the request of step1004, indicating that the respective network node106is capable of performing secure computes. Step1005(Option B) includes either a result node104or an input node102examining a header of one or more received packets to determine which network node (or nodes)106is/are capable of performing secure computes.
In one embodiment, the one or more received packets are associated with the communication between the nodes in step1002. For example, a result node104or an input node102may examine the header of a packet it received from the other node in step1002. However, the one or more packets are not required to be associated with the communication between the nodes in step1002. Step1006includes selecting multiple network nodes106that are capable of performing the secure compute. In the event that only one network node106responds that it is capable of performing the secure compute, process800could be performed. Step1008includes one or more input nodes102preparing the input. In one embodiment, an input node102prepares a zero-knowledge proof. Step1008may include each node of an MPC preparing their respective inputs. Step1008also includes the one or more input nodes102sending their respective input to each of the selected network nodes106. Step1010includes each of the selected network nodes106performing the secure compute. Step1012includes the selected network nodes106acting as peers in a blockchain network to verify the result. Step1014includes at least one of the selected network nodes106recording the verified result in a blockchain ledger. The blockchain ledger may be stored on one or more of the peers. In one embodiment, all of the blockchain peers store a copy of the blockchain ledger. Steps1012and1014are one embodiment of step714(ensuring that the result is trusted). Step1016includes the result node(s)104obtaining the result from the blockchain ledger. In some embodiments, the secure compute that is performed by a network node106is a zero-knowledge proof.
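A minimal sketch of the recording and retrieval in steps1014and1016follows, assuming a simple hash-chained append-only log. A real blockchain network would additionally involve peer replication, signatures, and a consensus protocol among the network nodes106, all of which are omitted here:

```python
import hashlib
import json

# Illustrative hash-chained ledger: a network node appends the verified
# result as a block whose hash covers the previous block's hash, and the
# result node reads the latest entry. Field names are assumptions.
class Ledger:
    def __init__(self):
        self.blocks = []  # list of (block_hash, payload_json)

    def record(self, payload: dict) -> str:
        prev = self.blocks[-1][0] if self.blocks else "0" * 64
        data = json.dumps(payload, sort_keys=True)
        block_hash = hashlib.sha256((prev + data).encode()).hexdigest()
        self.blocks.append((block_hash, data))
        return block_hash

    def latest_result(self):
        return json.loads(self.blocks[-1][1]) if self.blocks else None

ledger = Ledger()
ledger.record({"compute_id": 7, "result": True})  # network node writes
print(ledger.latest_result()["result"])           # result node reads -> True
```

Chaining each block's hash over its predecessor is what makes tampering with an earlier recorded result detectable by anyone holding a later block hash.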
The zero-knowledge proof is a non-interactive zero-knowledge proof, in one embodiment.FIGS.11-16show details of embodiments in which the secure compute that is performed by a network node106is a zero-knowledge proof.FIG.11depicts an embodiment of a communication system1100in which a single network node106verifies a zero-knowledge proof.FIG.12describes a flowchart of one embodiment of a process1200that may be used in communication system1100.FIG.13depicts an embodiment of a communication system1300in which multiple network nodes106verify a zero-knowledge proof, and in which there is a notary node to notarize the results.FIG.14depicts a flowchart of one embodiment of a process1400that may be used in communication system1300.FIG.15depicts an embodiment of a communication system1500in which multiple network nodes106, which act as peers in a blockchain network, verify a zero-knowledge proof.FIG.16describes a flowchart of one embodiment of a process1600that may be used in communication system1500. Referring now toFIG.11, a communication system1100in which embodiments may be practiced is depicted. The communication system1100includes an input node102, a result node104, and several network nodes106in a network1110. Six network nodes106a-106fare explicitly depicted. There may be other network nodes106in a portion of the network1110a, which are not depicted. The depicted network nodes106a-106fare considered to reside in one or more networks. In communication system1100, network node106ehas been selected to verify a zero-knowledge proof. In general, input node102performs a zero-knowledge proof (Prf), and sends the proof to the verifier network node106e. The verifier network node106everifies the proof, and sends a result of the proof to a result node104. The zero-knowledge proof comprises a mathematical algorithm (“P” or “proof algorithm”), in one embodiment. 
The proof algorithm inputs a proving key (“pk”), a random input (“x”), and a private statement (“w”), in one embodiment. The private statement is what the prover (e.g., person operating the input node 102) wishes to keep private. The proving key and the random input are publicly available information, in one embodiment. The proof serves to encrypt the private statement (“w”), in one embodiment. The verifier network node 106e performs a verification of the proof. To do so, the verifier network node 106e executes a mathematical algorithm (“V” or “verification algorithm”), in one embodiment. The verification algorithm (V) inputs the random input (“x”), a verifying key (“vk”) and the proof (prf), in one embodiment. The verifying key is publicly available, in one embodiment. The verification algorithm (V) outputs a binary value (e.g., TRUE, FALSE), in one embodiment. The verifier network node 106e sends the result (e.g., TRUE, FALSE) to the result node 104. In one embodiment, the zero-knowledge proof is based on zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). zk-SNARKs are a type of non-interactive zero-knowledge proof. By non-interactive it is meant that the proof itself can be used by the verifier without any further interaction from the prover. A zk-SNARK protocol may be based on three algorithms (G, P, V). The generator (G) algorithm may be used to generate the proving key (pk) and the verifying key (vk). The input to the generator (G) algorithm is a parameter lambda and a program. The parameter lambda is kept secret. The proving key (pk) and the verifying key (vk) may each be distributed publicly. In one embodiment, the network node 106e serves as the generator node to generate the proving key (pk) and the verifying key (vk). However, the generator node could be some other node. The generator node is not required to be a network node 106. The zk-SNARK protocol is one example of a protocol for zero-knowledge proofs.
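The three-algorithm structure just described can be sketched as follows. This is a toy illustration of the (G, P, V) interfaces only, not a real zk-SNARK and not cryptographically sound: here the proving and verifying keys are the same keyed-hash secret, whereas a real construction derives distinct public keys from the secret lambda. All function bodies are assumptions made for illustration.

```python
# Toy sketch of the (G, P, V) interfaces: G derives keys from a secret
# lambda and a program (the relation being proven), P turns (pk, x, w)
# into a proof that does not reveal w, and V returns a binary result
# from (vk, x, prf). NOT cryptographically sound -- structure only.
import hashlib
import hmac
import os

def G(lam: bytes, program):
    """Generator: derive the proving and verifying keys from lambda.
    Toy simplification: pk == vk; real zk-SNARKs use distinct keys."""
    pk = hmac.new(lam, b"key-material", hashlib.sha256).digest()
    vk = pk
    return pk, vk

def P(pk: bytes, x: bytes, w: bytes, program) -> bytes:
    """Prover: emit a proof only if the private statement w satisfies
    the program for public input x. The proof depends on x, not on w."""
    if not program(x, w):
        return os.urandom(32)  # invalid witness yields a garbage proof
    return hmac.new(pk, x, hashlib.sha256).digest()

def V(vk: bytes, x: bytes, prf: bytes) -> bool:
    """Verifier: outputs a binary value (TRUE/FALSE)."""
    expected = hmac.new(vk, x, hashlib.sha256).digest()
    return hmac.compare_digest(expected, prf)

# Example relation: "I know w whose SHA-256 hash equals x."
program = lambda x, w: hashlib.sha256(w).digest() == x
pk, vk = G(os.urandom(32), program)
w = b"private statement"          # kept secret by the prover
x = hashlib.sha256(w).digest()    # public random input
prf = P(pk, x, w, program)
print(V(vk, x, prf))  # True
```

In this sketch, as in the text, the verifier never sees "w": it checks the proof against only the public values x and vk.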
Other protocols for zero-knowledge proofs may be used. In one embodiment, the network node 106 advertises (see, for example, step 604 of FIG. 6) the type(s) of protocol that the network node 106 is capable of performing for a zero-knowledge proof. Referring now to FIG. 12, a flowchart of one embodiment of a process 1200 of verifying a zero-knowledge proof is depicted. The process 1200 may be used in communication system 1100. Step 1202 includes a result node 104 sending a zero-knowledge proof request to an input node 102. This request refers to a request for the input node 102 to perform the zero-knowledge proof. This request could be initiated in response to a user of the input node 102 wanting to prove some piece of information to an entity controlling the result node 104, without the user revealing the piece of information. Step 1204 (Option A) includes either a result node 104 or an input node 102 sending one or more requests into the network 110 to look for a network node 106 that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes 106 that processed the request of step 1202. However, the request(s) may be sent to network nodes 106 that did not process the request of step 1202. One or more network nodes 106 may respond to the request of step 1204, indicating that the respective network node 106 is capable of performing secure computes. Step 1205 (Option B) includes either a result node 104 or an input node 102 examining a header of one or more received packets to determine which network node (or nodes) 106 is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step 1202. However, the one or more packets are not required to be associated with the communication between the nodes in step 1202. Step 1206 includes selecting a verifier network node 106. The selection may be based on the location of network nodes 106.
For example, a network node 106 that is close to the input node 102 and/or the result node 104 may be selected. Step 1208 includes the input node 102 generating the zero-knowledge proof. The zero-knowledge proof has as inputs a proving key “pk” and a statement “w”, in one embodiment. The proving key (“pk”) is publicly available, in one embodiment. The statement “w” is what an entity (e.g., user) at the input node 102 wants to prove but to keep secret. The zero-knowledge proof has as inputs a proving key “pk”, a random parameter “x”, and a statement “w”, in one embodiment. The random parameter “x” is publicly available, in one embodiment. Step 1210 includes the input node 102 sending the zero-knowledge proof to the selected verifier network node 106e. The input node 102 may also send a verification key “vk” to the selected verifier network node 106e. Since the verification key “vk” may be publicly available, it is not required that the input node 102 send the verification key “vk” to the selected verifier network node 106e. The input node 102 may also send a random parameter “x” to the selected verifier network node 106e. Since the random parameter “x” may be publicly available, it is not required that the input node 102 send the random parameter “x” to the selected verifier network node 106e. Step 1212 includes the selected verifier network node 106e verifying the zero-knowledge proof. In one embodiment, the selected verifier network node 106e executes a verifier algorithm that inputs the zero-knowledge proof, the verification key “vk”, and the random parameter “x”. The verifier algorithm outputs a binary value (e.g., TRUE, FALSE), in one embodiment. Step 1214 includes a decision of whether the zero-knowledge proof is correct. This decision is made based on whether the verifier algorithm outputs TRUE or FALSE, in one embodiment. If the zero-knowledge proof is correct (e.g., TRUE), then step 1216 may be performed.
Step 1216 includes the selected verifier network node 106e sending a result of TRUE to the result node 104. If the zero-knowledge proof is incorrect (e.g., FALSE), then step 1218 may be performed. Step 1218 includes the selected verifier network node 106e sending a result of FALSE to the result node 104. Referring now to FIG. 13, a communication system 1300 in which embodiments may be practiced is depicted. The communication system 1300 includes an input node 102, a result node 104, and several network nodes 106. Six network nodes 106a-106f are explicitly depicted in network 1310. There may be other network nodes 106 in portion 1310a of the network, which are not depicted. In communication system 1300, network nodes 106a, 106e, and 106f have been selected to verify a zero-knowledge proof. Input node 102 performs a zero-knowledge proof (Prf), and sends the proof to each of the selected network nodes 106a, 106e, and 106f. Each selected network node 106a, 106e, and 106f verifies the proof. The selected network nodes 106a, 106e, and 106f act as peers to verify the result, in one embodiment. In one embodiment, one of the network nodes 106 acts as a notary node to notarize the results. For example, network node 106e may receive Result_a from network node 106a and Result_f from network node 106f. Network node 106e may send a final result (e.g., Result_final) to the result node 104, after notarizing the results. The zero-knowledge proof comprises a mathematical algorithm that inputs a proving key (“pk”), a random input (“x”), and a private statement (“w”), in one embodiment. The verification algorithm (V) inputs the random input (“x”), a verifying key (“vk”) and the proof (prf), in one embodiment. In one embodiment, the zero-knowledge proof is based on zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). Other protocols for zero-knowledge proofs may be used. Referring now to FIG. 14, a flowchart of one embodiment of a process 1400 of verifying a zero-knowledge proof is depicted.
The process 1400 may be used in communication system 1300. Step 1402 includes a result node 104 sending a zero-knowledge proof request to an input node 102. This request refers to a request for the input node 102 to perform the zero-knowledge proof. This request could be initiated in response to a user of the input node 102 wanting to prove some piece of information to an entity controlling the result node 104, without the user revealing the piece of information. Step 1404 (Option A) includes either a result node 104 or an input node 102 sending one or more requests into the network 110 to look for a network node 106 that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes 106 that processed the request of step 1402. However, the request(s) may be sent to network nodes 106 that did not process the request of step 1402. One or more network nodes 106 may respond to the request of step 1404, indicating that the respective network node 106 is capable of performing secure computes. Step 1405 (Option B) includes either a result node 104 or an input node 102 examining a header of one or more received packets to determine which network node (or nodes) 106 is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step 1402. However, the one or more packets are not required to be associated with the communication between the nodes in step 1402. Step 1406 includes selecting multiple verifier network nodes 106. The selection may be based on the location of network nodes 106. For example, a network node 106 that is close to the input node 102 and/or the result node 104 may be selected. For the sake of discussion, network nodes 106a, 106e, and 106f are selected. Step 1408 includes the input node 102 generating the zero-knowledge proof. The zero-knowledge proof has as inputs a proving key “pk” and a statement “w”, in one embodiment.
The zero-knowledge proof has as inputs a proving key “pk”, a random parameter “x”, and a statement “w”, in one embodiment. Step 1410 includes the input node 102 sending the zero-knowledge proof to each of the selected verifier nodes (e.g., network nodes 106a, 106e, and 106f). The input node 102 may also send a verification key “vk” to the selected verifier nodes. Since the verification key “vk” may be publicly available, it is not required that the input node 102 send the verification key “vk” to the selected verifier nodes. The input node 102 may also send a random parameter “x” to the selected verifier nodes. Since the random parameter “x” may be publicly available, it is not required that the input node 102 send the random parameter “x” to the selected verifier nodes. Step 1412 includes the selected verifier nodes (e.g., network nodes 106a, 106e, and 106f) verifying the zero-knowledge proof. In one embodiment, each selected verifier node executes a verifier algorithm that inputs the zero-knowledge proof, the verification key “vk”, and the random parameter “x”. The verifier algorithm outputs a binary value (e.g., TRUE, FALSE), in one embodiment. Thus, for the sake of discussion, each selected network node 106a, 106e, and 106f generates a separate result. Step 1414 includes the verifier network nodes sending their respective results to a notary node. The notary node may be one of the selected network nodes that verified the zero-knowledge proof. However, the notary node is not required to have verified the zero-knowledge proof. The notary node may be a network node, but is not required to be a network node. For the sake of discussion, network node 106e acts as the notary node, in one embodiment. Thus, network node 106a sends Result_a to network node 106e, and network node 106f sends Result_f to network node 106e, in one embodiment. These results are a binary value (e.g., TRUE, FALSE), in one embodiment. Step 1416 includes the notary node notarizing the results to generate a trusted result.
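One way the notarization of step 1416 could be implemented is sketched below, assuming binary TRUE/FALSE results from the verifier nodes: if all results match, that value is the notarized result; otherwise a "majority rules" fallback picks the most prevalent result and flags outliers, with a tie treated as a notarization failure. The function and variable names are illustrative, not from the disclosure.

```python
# Sketch of a notary node's step 1416, assuming binary verifier results.
from collections import Counter

def notarize(results: dict[str, bool]):
    """results maps a verifier node id (e.g., '106a') to its TRUE/FALSE
    result. Returns (notarized_result, outlier_nodes); raises on a tie."""
    values = list(results.values())
    if all(values) or not any(values):
        return values[0], []                 # all results match
    counts = Counter(values)                 # mixed results: majority rules
    (top, top_n), (_, second_n) = counts.most_common(2)
    if top_n == second_n:
        # Tie: discard and ask the result node to retry with other nodes.
        raise ValueError("tie: notarization failed")
    outliers = [node for node, r in results.items() if r != top]
    return top, outliers

# Matching results from the three selected verifiers:
print(notarize({"106a": True, "106e": True, "106f": True}))   # (True, [])
# One outlier: majority rules picks TRUE and flags node 106f:
print(notarize({"106a": True, "106e": True, "106f": False}))  # (True, ['106f'])
```

An odd number of verifiers, as the text suggests, makes the tie branch unreachable; with an even number the raised error corresponds to discarding the results.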
For example, network node 106e compares its result with Result_a and Result_f to verify that all results match (e.g., are all TRUE or all FALSE). Assuming that the results match, then the notary node sends the notarized result to the result node 104. The notarized result is either “TRUE” or “FALSE”, in one embodiment. If the results do not match (e.g., a mix of TRUE and FALSE), then the notary node implements a procedure for a notarization failure, in one embodiment. This may include determining whether there is a network node 106 that had an outlier result. For example, it may include determining whether one network node 106 produced a different result from all other network nodes 106. The notary node may also send a message to the result node 104 that the process of notarizing the results failed and that the zero-knowledge proof should be retried, but with a different set of network nodes. In one embodiment, the notary node performs a “majority rules” algorithm in which the notary node selects as the final result whichever result is most prevalent. For example, if each network node provides a TRUE or FALSE result, the notary node determines whether there are more TRUE or FALSE results. In one embodiment, an odd number of network nodes are used to prevent ties. In another embodiment, an even number of network nodes are permitted, with the results being discarded in the event of a tie. Referring now to FIG. 15, a communication system 1500 in which embodiments may be practiced is depicted. The communication system 1500 includes an input node 102, a result node 104, and several network nodes 106. Six network nodes 106a-106f are explicitly depicted in network 1510. There may be other network nodes 106 in portion 1510a of the network, which are not depicted. In communication system 1500, network nodes 106a, 106e, and 106f have been selected to verify a zero-knowledge proof. Input node 102 performs a zero-knowledge proof (Prf), and sends the proof to each of the selected network nodes 106a, 106e, and 106f.
Each selected network node 106a, 106e, and 106f is a peer in a blockchain network, in one embodiment. Each selected network node 106a, 106e, and 106f verifies the proof. Each selected network node 106a, 106e, and 106f stores the result 1504 in a blockchain ledger. For example, network node 106a stores the result in blockchain ledger 1502a; network node 106e stores the result 1504 in blockchain ledger 1502e; and network node 106f stores the result 1504 in blockchain ledger 1502f. The reference number 1502 may be used to refer to the blockchain ledger in general, without reference to a specific copy of the blockchain ledger. Thus, a different copy of the blockchain ledger 1502 is stored on each of the selected network nodes 106a, 106e, and 106f, in one embodiment. The result node 104 is able to obtain the result 1504 from the blockchain ledger 1502 that is stored on at least one of the selected network nodes 106. The zero-knowledge proof comprises a mathematical algorithm that inputs a proving key (“pk”), a random input (“x”), and a private statement (“w”), in one embodiment. The verification algorithm (V) inputs the random input (“x”), a verifying key (“vk”) and the proof (prf), in one embodiment. In one embodiment, the zero-knowledge proof is based on zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). Other protocols for zero-knowledge proofs may be used. Referring now to FIG. 16, a flowchart of one embodiment of a process 1600 of verifying a zero-knowledge proof is depicted. The process 1600 may be used in communication system 1500. Step 1602 includes a result node 104 sending a zero-knowledge proof request to an input node 102. This request refers to a request for the input node 102 to perform the zero-knowledge proof. This request could be initiated in response to a user of the input node 102 wanting to prove some piece of information to an entity controlling the result node 104, without the user revealing the piece of information.
Step 1604 (Option A) includes either a result node 104 or an input node 102 sending one or more requests into the network 110 to look for a network node 106 that is capable of performing secure computes. In one embodiment, the one or more requests are sent to network nodes 106 that processed the request of step 1602. However, the request(s) may be sent to network nodes 106 that did not process the request of step 1602. One or more network nodes 106 may respond to the request of step 1604, indicating that the respective network node 106 is capable of performing secure computes. Step 1605 (Option B) includes either a result node 104 or an input node 102 examining a header of one or more received packets to determine which network node (or nodes) 106 is/are capable of performing secure computes. In one embodiment, the one or more received packets are associated with the communication between the nodes in step 1602. However, the one or more packets are not required to be associated with the communication between the nodes in step 1602. Step 1606 includes selecting multiple verifier network nodes 106. The selection may be based on the location of network nodes 106. For example, a network node 106 that is close to the input node 102 and/or the result node 104 may be selected. For the sake of discussion, network nodes 106a, 106e, and 106f are selected. Step 1608 includes the input node 102 generating the zero-knowledge proof. The zero-knowledge proof has as inputs a proving key “pk” and a statement “w”, in one embodiment. The zero-knowledge proof has as inputs a proving key “pk”, a random parameter “x”, and a statement “w”, in one embodiment. Step 1610 includes the input node 102 sending the zero-knowledge proof to each of the selected verifier nodes (e.g., network nodes 106a, 106e, and 106f). The input node 102 may also send a verification key “vk” to the selected verifier nodes.
Since the verification key “vk” may be publicly available, it is not required that the input node 102 send the verification key “vk” to the selected verifier nodes. The input node 102 may also send a random parameter “x” to the selected verifier nodes. Since the random parameter “x” may be publicly available, it is not required that the input node 102 send the random parameter “x” to the selected verifier nodes. Step 1612 includes the selected verifier nodes (e.g., network nodes 106a, 106e, and 106f) verifying the zero-knowledge proof. In one embodiment, each selected verifier node executes a verifier algorithm that inputs the zero-knowledge proof, the verification key “vk”, and the random parameter “x”. The verifier algorithm outputs a binary value (e.g., TRUE, FALSE), in one embodiment. Thus, for the sake of discussion, each selected network node 106a, 106e, and 106f generates a separate result. Step 1614 includes the verifier network nodes (e.g., network nodes 106a, 106e, and 106f) acting as peer nodes in a blockchain network. The peer nodes may compare the results from the respective verifier network nodes to ensure that the result can be trusted to be correct. For example, the peer nodes may determine whether all of the results match (e.g., all TRUE or all FALSE). In the event that the results do not all match, then the process may perform a failure process. In one embodiment, the peer nodes perform a “majority rules” algorithm in which the peer nodes select as the final result whichever result is most prevalent. For example, if each peer node provides a TRUE or FALSE result, the peer nodes determine whether there are more TRUE or FALSE results. In one embodiment, an odd number of peer nodes are used to prevent ties. In another embodiment, an even number of peer nodes are permitted, with the results being discarded in the event of a tie. Step 1616 includes at least one of the verifier network nodes 106 recording the result in the blockchain ledger 1502.
In one embodiment, each of the verifier network nodes 106 that verified the zero-knowledge proof stores the result in a copy of the blockchain ledger stored on the respective network node 106. For example, network node 106a stores the result in blockchain ledger 1502a, network node 106e stores the result in blockchain ledger 1502e, and network node 106f stores the result in blockchain ledger 1502f. Step 1618 includes the result node 104 obtaining the result from the blockchain ledger 1502. The technology described herein can be implemented using hardware, software, or a combination of both hardware and software. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer readable storage media is an example of a non-transitory computer-readable medium. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. A computer readable medium or media does (do) not include propagated, modulated or transitory signals.
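The ledger steps described above (each verifier peer recording the result in its own copy of the blockchain ledger at step 1616, and the result node reading it back at step 1618) can be sketched as follows. The hash-chained block layout and helper names are illustrative assumptions, not taken from the disclosure.

```python
# Sketch of steps 1616/1618: each verifier peer appends the verified
# result to its own hash-chained ledger copy; the result node can then
# obtain the result from any peer's copy.
import hashlib
import json

class Ledger:
    def __init__(self):
        # Genesis block anchors the chain.
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

    def append(self, data) -> None:
        # Link the new block to the hash of the previous block.
        prev_hash = hashlib.sha256(
            json.dumps(self.blocks[-1], sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(
            {"index": len(self.blocks), "prev": prev_hash, "data": data}
        )

    def latest_result(self):
        return self.blocks[-1]["data"]

# Step 1616: peers 106a, 106e, and 106f each record the verified result.
peers = {name: Ledger() for name in ("106a", "106e", "106f")}
result = {"proof_result": True, "tx": "zkp-verification"}
for ledger in peers.values():
    ledger.append(result)

# Step 1618: the result node obtains the result from any peer's copy.
print(peers["106e"].latest_result())  # {'proof_result': True, 'tx': 'zkp-verification'}
```

Because every peer appends the same result to a structurally identical chain, all three ledger copies remain in agreement, matching the text's statement that each selected node stores a different copy of the same ledger 1502.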
Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as radio frequency (RF) and other wireless media. Combinations of any of the above are also included within the scope of computer readable media. In alternative embodiments, some or all of the software can be replaced by dedicated hardware control circuit components. For example, and without limitation, illustrative types of hardware control circuit components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces. It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art.
Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. 
The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated. For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device. Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
11943360

While the embodiments described herein are amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the particular embodiments described are not to be taken in a limiting sense. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.

DETAILED DESCRIPTION

Aspects of the present disclosure relate generally to the field of blockchain data administration, and more specifically to blockchain data administration through generative cryptograms. Blockchain data and transaction management can be inefficient. The management of a mature blockchain requires a resource heavy processing framework. Additional considerations, including data privacy, additional channel states, and transaction context, further increase resource consumption. Due to the nature of blockchains, each node within the blockchain is required to maintain data and ensure the processed transaction is verified for the world state and verification of every channel in which it participates. Occasionally, the data associated with maintaining the blockchain can result in a phenomenon known as “blockchain bloat”, in which the blockchain becomes very large, consuming a tremendous amount of memory and storage. Further, blockchain bloat leads to performance and scalability issues due to the heavy demands on computing, interconnection, and input/output requirements. Embodiments of the present disclosure appreciate the need for increased computational efficiency through continued decentralization of data. Further, embodiments of the present disclosure appreciate the need for a reduction in repeat verification and validation of blockchains, thus reducing the storage and computational requirements. In an embodiment, a blockchain transaction may be identified.
A block proposal may be detected by a node in the blockchain. The proposed transaction may be validated. A cryptogram capturing verification of the transaction may be generated. The transaction with the cryptogram can be committed to the ledger. A cryptogram is an encrypted representation of particular information. A cryptogram can be used by a node to determine if verification of a processed block has occurred. In an embodiment, a cryptogram can be an encrypted representation of a nonce, a timestamp for the transaction, a transaction context or root, and the storage address scheme of the ledger storage or file handles. In an embodiment, the cryptogram is the only version of a storage pointer for the blockchain and is utilized in subsequent transaction processing. In an embodiment, the storage address scheme of the ledger storage can be a decentralized storage file handling (e.g., the InterPlanetary File System (“IPFS”)) or a higher level storage framework (e.g., Filecoin®, Storj®, etc.). The file handling captured by the cryptogram can be content based or storage based. For example, a decentralized storage network model for file sharing may include segmenting files into different parts and storing the parts across a network of nodes. The files are tracked by a hash code (HashID). The parts can be called on and reassembled to create the original file. For example, 8990A is the HashID of a file that will be segmented into four parts. Each segment of the file will receive its own hash. Each node peer ID will reference the file segment by the file's HashID, which is 8990A. Therefore:

File HashID: 8990A
Node 1: Chunk 1 ID g(12)
Node 2: Chunk 2 ID g(34)
Node 3: Chunk 3 ID g(56)
Node 4: Chunk 4 ID g(78)
Root Hash: g(12345678)

In an embodiment, an example of a content based storage system can be as follows. A distributed hash table can be used for file system storage and retrieval. Files on the blockchain can be stored as key-value pairs.
The data can be broken up into chunks (e.g., 256 KB, 1 MB, 2 MB, etc.) and stored on separate nodes across a blockchain network. The chunks can be identified by a HashID. If the data is requested, it is retrieved by HashID rather than by the file itself. This prevents changing the database if the location of a chunk for a file is moved to a different node. Before turning to the Figures, it will be readily understood that the instant components, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Accordingly, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached Figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments.
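The content-based chunk storage just described can be sketched as follows: a file is split into fixed-size chunks, each chunk is stored under the hash of its contents (its HashID), and a root hash over the chunk HashIDs stands in for the file, mirroring the 8990A example above. The chunk size, the single in-memory store standing in for the network of nodes, and the function names are illustrative assumptions.

```python
# Sketch of content-addressed chunk storage: chunks are keyed by the
# hash of their contents, so relocating a chunk to a different node
# never changes its identifier, and retrieval goes by HashID.
import hashlib

CHUNK_SIZE = 4  # tiny for the example; the text mentions 256 KB - 2 MB

def store_file(data: bytes, store: dict) -> str:
    """Split data into chunks, store each under its HashID, and return
    a root HashID computed over the ordered chunk HashIDs."""
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()  # chunk HashID
        store[cid] = chunk                        # could live on any node
        chunk_ids.append(cid)
    root = hashlib.sha256("".join(chunk_ids).encode()).hexdigest()
    store[root] = chunk_ids                       # root maps to chunk list
    return root

def fetch_file(root: str, store: dict) -> bytes:
    """Reassemble the original file from its chunks, by HashID only."""
    return b"".join(store[cid] for cid in store[root])

store = {}
root = store_file(b"hello, blockchain!", store)
print(fetch_file(root, store))  # b'hello, blockchain!'
```

Here the `store` dict plays the role of the distributed hash table: because keys are content hashes, the key-value view is identical no matter which node actually holds each chunk.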
For example, if a mobile device is shown sending information, a wired device could also be used to send the information. In addition, while the term “message” may have been used in the description of embodiments, the application may be applied to many types of networks and data. Furthermore, while certain types of connections, messages, and signaling may be depicted in exemplary embodiments, the application is not limited to a certain type of connection, message, and signaling. Detailed herein are a method, system, and computer program product that utilize specialized blockchain components to utilize generative cryptograms in blockchain data management. In some embodiments, the method, system, and/or computer program product utilize a decentralized database (such as a blockchain) that is a distributed storage system, which includes multiple nodes that communicate with each other. The decentralized database may include an append-only immutable data structure resembling a distributed ledger capable of maintaining records between mutually untrusted parties. The untrusted parties are referred to herein as peers or peer nodes. Each peer maintains a copy of the database records and no single peer can modify the database records without a consensus being reached among the distributed peers. For example, the peers may execute a consensus protocol to validate blockchain storage transactions, group the storage transactions into blocks, and build a hash chain over the blocks. This process forms the ledger by ordering the storage transactions, as is necessary, for consistency. In various embodiments, a permissioned and/or a permission-less blockchain can be used. In a public, or permission-less, blockchain, anyone can participate without a specific identity (e.g., retaining anonymity). Public blockchains can involve native cryptocurrency and use consensus based on various protocols such as Proof of Work or Proof of Stake. 
Whereas, a permissioned blockchain database provides secure interactions among a group of entities that share a common goal but which do not fully trust one another, such as businesses that exchange funds, goods, (private) information, and the like. Further, in some embodiments, the method, system, and/or computer program product can utilize a blockchain that operates arbitrary, programmable logic, tailored to a decentralized storage scheme and referred to as “smart contracts” or “chaincodes.” In some cases, specialized chaincodes may exist for management functions and parameters which are referred to as system chaincode (such as managing access to a different blockchain, a bridging blockchain client, etc.). In some embodiments, the method, system, and/or computer program product can further utilize smart contracts that are trusted distributed applications that leverage tamper-proof properties of the blockchain database and an underlying agreement between nodes, which is referred to as an endorsement or endorsement policy. An endorsement policy allows chaincode to specify endorsers for a transaction in the form of a set of peer nodes that are necessary for endorsement. When a client sends the transaction to the peers (e.g., endorsers) specified in the endorsement policy, the transaction is executed to validate the transaction. After validation, the transactions enter an ordering phase in which a consensus protocol is used to produce an ordered sequence of endorsed transactions grouped into blocks. In some embodiments, the method, system, and/or computer program product can utilize nodes that are the communication entities of the blockchain system. A “node” may perform a logical function in the sense that multiple nodes of different types can run on the same physical server. Nodes are grouped in trust domains and are associated with logical entities that control them in various ways. 
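The endorsement policy described above (a set of peer nodes whose endorsements are necessary) can be sketched as a simple set check. This is an illustrative sketch only; the function name is assumed, and a real system would also cryptographically verify each signature rather than treating its presence as sufficient.

```python
def endorsement_satisfied(policy_peers, endorsements):
    """An endorsement policy names the peer nodes required for a transaction.
    `endorsements` maps peer id -> signature; the policy is satisfied when every
    required peer has endorsed (signature validity is checked elsewhere)."""
    return set(policy_peers) <= set(endorsements)
```

A transaction whose response set lacks any peer named in the policy would fail this check and be rejected before ordering.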
Nodes may include different types, such as a client or submitting-client node that submits a transaction-invocation to an endorser (e.g., peer), and broadcasts transaction-proposals to an ordering service (e.g., ordering node). Another type of node is a peer node which can receive client submitted transactions, commit the transactions, and maintain a state and a copy of the ledger of blockchain transactions. Peers can also have the role of an endorser, although it is not a requirement. An ordering-service-node or orderer is a node running the communication service for all nodes, and which implements a delivery guarantee, such as a broadcast to each of the peer nodes in the system when committing/confirming transactions and modifying a world state of the blockchain, which is another name for the initial blockchain transaction which normally includes control and setup information. In some embodiments, the method, system, and/or computer program product can utilize a ledger that is a sequenced, tamper-resistant record of all state transitions of a blockchain. State transitions may result from chaincode invocations (e.g., transactions, transfers, exchanges, etc.) submitted by participating parties (e.g., client nodes, ordering nodes, endorser nodes, peer nodes, etc.). Each participating party (such as a peer node) can maintain a copy of the ledger. A transaction may result in a set of asset key-value pairs being committed to the ledger as one or more operands, such as creates, updates, deletes, and the like. The ledger includes a blockchain (also referred to as a chain) which is used to store an immutable, sequenced record in blocks. The ledger also includes a state database that maintains a current state of the blockchain. 
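The ledger model just described, where transactions commit key-value operands (creates, updates, deletes) and the state database holds the latest values, can be sketched as replaying an ordered transaction log. The operand tuple shape and function names are illustrative assumptions.

```python
def apply_operands(state, operands):
    """Apply one transaction's key-value operands to the world state."""
    for op, key, value in operands:
        if op in ("create", "update"):
            state[key] = value
        elif op == "delete":
            state.pop(key, None)

def world_state(transaction_log):
    """The state database is simply the latest values implied by the ordered log,
    so it can be regenerated from the chain at any time."""
    state = {}
    for operands in transaction_log:
        apply_operands(state, operands)
    return state
```

This mirrors why the state database can be rebuilt on peer startup: it is an indexed view into the transaction log, not an independent source of truth.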
In some embodiments, the method, system, and/or computer program product described herein can utilize a chain that is a transaction log that is structured as hash-linked blocks, and each block contains a sequence of N transactions where N is equal to or greater than one. The block header includes a hash of the block's transactions, as well as a hash of the prior block's header. In this way, all transactions on the ledger may be sequenced and cryptographically linked together. Accordingly, it is not possible to tamper with the ledger data without breaking the hash links. A hash of a most recently added blockchain block represents every transaction on the chain that has come before it, making it possible to ensure that all peer nodes are in a consistent and trusted state. The chain may be stored on a peer node file system (e.g., local, attached storage, cloud, etc.), efficiently supporting the append-only nature of the blockchain workload. The current state of the immutable ledger represents the latest values for all keys that are included in the chain transaction log. Since the current state represents the latest key values known to a channel, it is sometimes referred to as a world state. Chaincode invocations execute transactions against the current state data of the ledger. To make these chaincode interactions efficient, the latest values of the keys may be stored in a state database. The state database may be simply an indexed view into the chain's transaction log, it can therefore be regenerated from the chain at any time. The state database may automatically be recovered (or generated) upon peer node startup, and before transactions are accepted. Some benefits of the instant solutions described and depicted herein include a method, system, and computer program product for implementing new, novel blockchain components that utilize generative cryptograms in blockchain data management. 
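The hash-linked block structure described above (each header committing to its own transactions and to the prior header) can be sketched as follows. The JSON serialization, genesis sentinel, and field names are illustrative assumptions; the point is that tampering with any block breaks the hash links after it.

```python
import hashlib
import json

GENESIS_PREV = "0" * 64  # sentinel predecessor for the first block

def _sha256(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_header_hash):
    """A block header includes a hash of the block's transactions and of the prior header."""
    header = {"tx_hash": _sha256(transactions), "prev": prev_header_hash}
    header["hash"] = _sha256({"tx_hash": header["tx_hash"], "prev": header["prev"]})
    return {"header": header, "transactions": transactions}

def chain_is_valid(chain):
    """Walk the chain: any altered transaction list or header breaks a link."""
    prev = GENESIS_PREV
    for block in chain:
        h = block["header"]
        if (h["prev"] != prev
                or h["tx_hash"] != _sha256(block["transactions"])
                or h["hash"] != _sha256({"tx_hash": h["tx_hash"], "prev": h["prev"]})):
            return False
        prev = h["hash"]
    return True
```

The hash of the most recent header therefore commits to every transaction before it, which is how peers can confirm they share a consistent, trusted state.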
The exemplary embodiments solve the issues of blockchain data management (e.g., blockchain bloat) and the occurrences of interruptions involving said blockchains. It is noted that blockchain is different from a traditional database in that blockchain is not a central storage, but rather a decentralized, immutable, and secure storage, where nodes may share in changes to records in the storage. Some properties that are inherent in blockchain and which help implement the blockchain include, but are not limited to, an immutable ledger, smart contracts, security, privacy, decentralization, consensus, endorsement, accessibility, and the like, which are further described herein. According to various aspects, the system described herein is implemented due to immutable accountability, security, privacy, permitted decentralization, availability of smart contracts, endorsements and accessibility that are inherent and unique to blockchain. In particular, the example embodiments provide numerous benefits over a traditional database. For example, through the blockchain, the embodiments provide for immutable accountability, security, privacy, permitted decentralization, availability of smart contracts, endorsements and accessibility that are inherent and unique to the blockchain. Meanwhile, a traditional database could not be used to implement the example embodiments because it does not bring all parties on the network, it does not create trusted collaboration and does not provide for an efficient commitment of transactions involving verifiable credentials. The traditional database does not provide for tamper proof storage and does not provide for preservation of asset related costs (e.g., computing costs, such as processing power, fees, etc.) if an asset exchange is interrupted. Thus, the proposed embodiments described herein utilizing blockchain networks cannot be implemented by the traditional database. 
Turning now toFIG.1A, illustrated is a blockchain architecture100, in accordance with embodiments of the present disclosure. In some embodiments, the blockchain architecture100may include certain blockchain elements, for example, a group of blockchain nodes102. The blockchain nodes102may include one or more blockchain nodes, e.g., peers104-110(these four nodes are depicted by example only). These nodes participate in a number of activities, such as a blockchain transaction addition and validation process (consensus). One or more of the peers104-110may endorse and/or recommend transactions based on an endorsement policy and may provide an ordering service for all blockchain nodes102in the blockchain architecture100. A blockchain node may initiate a blockchain authentication and seek to write to a blockchain immutable ledger stored in blockchain layer116, a copy of which may also be stored on the underpinning physical infrastructure114. The blockchain configuration may include one or more applications124which are linked to application programming interfaces (APIs)122to access and execute stored program/application code120(e.g., chaincode, smart contracts, etc.) which can be created according to a customized configuration sought by participants and can maintain their own state, control their own assets, and receive external information. This can be deployed as a transaction and installed, via appending to the distributed ledger, on all blockchain nodes104-110. The blockchain base or platform112may include various layers of blockchain data, services (e.g., cryptographic trust services, virtual execution environment, etc.), and underpinning physical computer infrastructure that may be used to receive and store new transactions and provide access to auditors which are seeking to access data entries. 
The blockchain layer116may expose an interface that provides access to the virtual execution environment necessary to process the program code and engage the physical infrastructure114. Cryptographic trust services118may be used to verify transactions such as asset exchange transactions and keep information private. The blockchain architecture100ofFIG.1Amay process and execute program/application code120via one or more interfaces exposed, and services provided, by blockchain platform112. The application code120may control blockchain assets. For example, the application code120can store and transfer data, and may be executed by peers104-110in the form of a smart contract and associated chaincode with conditions or other code elements subject to its execution. As a non-limiting example, smart contracts may be generated to execute the transfer of assets/resources, the generation of assets/resources, etc. The smart contracts can themselves be used to identify rules associated with authorization (e.g., asset transfer rules, restrictions, etc.), access requirements (e.g., of a datastore, of an off-chain datastore, of who may participate in a transaction, etc.), and/or usage of the ledger. For example, the verifiable credentials126may be processed by one or more processing entities (e.g., virtual machines) included in the blockchain layer116. The result128may include a plurality of linked shared documents (e.g., with each linked shared document recording the issuance of a smart contract in regard to the verifiable credentials126being committed by a selected group of peers based on an asset exchange schema, issuer policy, etc.). In some embodiments, the physical infrastructure114may be utilized to retrieve any of the data/information/assets/etc. described herein. A smart contract may be created via a high-level application and programming language, and then written to a block in the blockchain. 
The smart contract may include executable code that is registered, stored, and/or replicated with a blockchain (e.g., a distributed network of blockchain peers). A transaction is an execution of the smart contract code that can be performed in response to conditions associated with the smart contract being satisfied. The executing of the smart contract may trigger a trusted modification(s) to a state of a digital blockchain ledger. The modification(s) to the blockchain ledger caused by the smart contract execution may be automatically replicated throughout the distributed network of blockchain peers through one or more consensus protocols. The smart contract may write data to the blockchain in the format of key-value pairs. Furthermore, the smart contract code can read the values stored in a blockchain and use them in application operations. The smart contract code can write the output of various logic operations into the blockchain. The code may be used to create a temporary data structure in a virtual machine or other computing platform. Data written to the blockchain can be public and/or can be encrypted and maintained as private. The temporary data that is used/generated by the smart contract is held in memory by the supplied execution environment, then deleted once the data needed for the blockchain is identified. A chaincode may include the code interpretation of a smart contract, with additional features. As described herein, the chaincode may be program code deployed on a computing network, where it is executed and validated by chain validators together during a consensus process. The chaincode receives a hash and retrieves from the blockchain a hash associated with the data template created by use of a previously stored feature extractor. If the hashes of the hash identifier and the hash created from the stored identifier template data match, then the chaincode sends an authorization key to the requested service. 
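The hash-matching authorization step described above (the chaincode compares a received hash against one stored on the blockchain and, on a match, issues an authorization key) can be sketched as follows. The dictionary standing in for the ledger, the template identifiers, and the key format are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets

stored_template_hashes = {}  # stand-in for hashes previously recorded on the ledger

def store_template(template_id, template_data):
    """Record the hash of a data template (e.g., from a feature extractor) on the 'ledger'."""
    stored_template_hashes[template_id] = hashlib.sha256(template_data).hexdigest()

def authorize(template_id, presented_hash):
    """If the presented hash matches the stored one, send an authorization key;
    otherwise the requested service receives nothing."""
    stored = stored_template_hashes.get(template_id)
    if stored is not None and hmac.compare_digest(stored, presented_hash):
        return secrets.token_hex(16)  # authorization key for the requesting service
    return None
```

`hmac.compare_digest` is used for the comparison to avoid leaking match information through timing, a common precaution when comparing secret-derived hashes.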
The chaincode may write to the blockchain data associated with the cryptographic details (e.g., thus committing a transaction associated with assets, etc.). FIG.1Billustrates an example of a blockchain transactional flow150between nodes of the blockchain in accordance with an example embodiment. Referring toFIG.1B, the transaction flow may include a transaction proposal191sent by an application client node160to an endorsing peer node181(e.g., in some embodiments, the transaction proposal191may include a schema that prescribes a selected set of peers [peer nodes181-184] to be used for a specific transaction). The endorsing peer node181may verify the client signature and execute a chaincode function to initiate the transaction. The output may include the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response192is sent back to the client node160along with an endorsement signature, if approved. The client node160assembles the endorsements into a transaction payload193and broadcasts it to an ordering service node184. The ordering service node184then delivers ordered transactions as blocks to all peer nodes181-183on a channel. Before committal to the blockchain, each peer node181-183may validate the transaction. For example, the peers may check the endorsement policy to ensure that the correct allotment of the specified peers have signed the results and authenticated the signatures against the transaction payload193(e.g., all the specified peers from the schema have validated and approved commitment of the transaction to the blockchain). Referring again toFIG.1B, the client node160initiates the transaction proposal191by constructing and sending a request to the peer node181, which in this example is an endorser. 
The client node 160 may include an application leveraging a supported software development kit (SDK), which utilizes an available API to generate a transaction proposal 191. The proposal is a request to invoke a chaincode function so that data can be read and/or written to the ledger. The SDK may reduce the package of the transaction proposal 191 into a properly architected format (e.g., protocol buffer over a remote procedure call (RPC)) and take the client's cryptographic credentials to produce a unique signature for the transaction proposal 191. In response, the endorsing peer node 181 may verify (a) that the transaction proposal 191 is well formed, (b) the transaction has not been submitted already in the past (replay-attack protection), (c) the signature is valid, and (d) that the submitter (client node 160, in the example) is properly authorized to perform the proposed operation on that channel. The endorsing peer node 181 may take the transaction proposal 191 inputs as arguments to the invoked chaincode function. The chaincode is then executed against a current state database to produce transaction results including a response value, read set, and write set. However, no updates are made to the ledger at this point. In some embodiments, the set of values, along with the endorsing peer node's 181 signature, is passed back as a proposal response 192 to the SDK of the client node 160, which parses the payload for the application to consume. In response, the application of the client node 160 inspects/verifies the endorsing peers' signatures and compares the proposal responses to determine if they are the same. If the chaincode only queried the ledger, the application would inspect the query response and would typically not submit the transaction to the ordering service node 184. 
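The four endorsement checks described above, (a) well formed, (b) not a replay, (c) valid signature, (d) authorized submitter, can be sketched as follows. This is a toy sketch: an HMAC over a shared key stands in for the client's real cryptographic credentials (which would typically be certificate-based), and the submitter list and identifiers are invented for illustration.

```python
import hashlib
import hmac
import json

seen_proposals = set()                  # (b) replay-attack protection
authorized_submitters = {"client-160"}  # (d) channel authorization list (hypothetical)

def sign_proposal(proposal, client_key):
    """Produce a unique signature over the serialized proposal (HMAC stands in
    for the client's cryptographic credentials)."""
    payload = json.dumps(proposal, sort_keys=True).encode()
    return hmac.new(client_key, payload, hashlib.sha256).hexdigest()

def endorse(proposal, signature, client_key):
    """Run checks (a)-(d); on success, simulate chaincode execution and return
    a read set and write set without updating the ledger."""
    if not {"submitter", "chaincode", "args"} <= proposal.keys():
        return None                                   # (a) not well formed
    proposal_id = sign_proposal(proposal, client_key)
    if proposal_id in seen_proposals:
        return None                                   # (b) replay detected
    if not hmac.compare_digest(signature, sign_proposal(proposal, client_key)):
        return None                                   # (c) invalid signature
    if proposal["submitter"] not in authorized_submitters:
        return None                                   # (d) not authorized
    seen_proposals.add(proposal_id)
    return {"read_set": [], "write_set": [tuple(proposal["args"])]}
```

Note that a successful endorsement returns results only; as the text says, no ledger update happens at this stage.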
If the client application intends to submit the transaction to the ordering service node184to update the ledger, the application determines if the specified endorsement policy has been fulfilled before submitting. Here, the client may include only one of multiple parties to the transaction. In this case, each client may have their own endorsing node, and each endorsing node will need to endorse the transaction. The architecture is such that even if an application selects not to inspect responses or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by peers and upheld at the commit validation phase. After successful inspection, in the transaction payload step193, the client node160assembles endorsements into a transaction and broadcasts the transaction proposal191and response within a transaction message to the ordering node184. The transaction may contain the read/write sets, the endorsing peers signatures and a channel ID (e.g., if a specific [off-chain] datastore is to be utilized). The ordering node184does not need to inspect the entire content of a transaction in order to perform its operation, instead the ordering node184may simply receive transactions from all channels in the network, order them chronologically by channel, and create blocks of transactions per channel. The blocks of the transaction are delivered from the ordering node184to all peer nodes181-183on the channel. The transactions194within the block are validated to ensure any endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid. Furthermore, in steps195each peer node181-183appends the block to the channel's chain, and for each valid transaction the write sets are committed to current state database. 
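The ordering step described above, where the orderer receives transactions from all channels, orders them chronologically by channel, and cuts blocks per channel without inspecting their contents, can be sketched as follows. The transaction record fields and the block size are illustrative assumptions.

```python
from collections import defaultdict

def order_into_blocks(transactions, block_size=2):
    """The orderer never inspects transaction contents: it sorts by arrival time,
    partitions by channel ID, and groups each channel's transactions into blocks."""
    per_channel = defaultdict(list)
    for tx in sorted(transactions, key=lambda t: t["timestamp"]):
        per_channel[tx["channel_id"]].append(tx["id"])
    return {
        channel: [txs[i:i + block_size] for i in range(0, len(txs), block_size)]
        for channel, txs in per_channel.items()
    }
```

Each resulting block would then be delivered to every peer node on its channel for validation and commit.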
An event is emitted, to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as to notify whether the transaction was validated or invalidated. Referring now toFIG.2, illustrated is an example blockchain network200for data management utilizing generative cryptograms, in accordance with embodiments of the present disclosure. In some embodiments, the blockchain network comprises a cryptogram generation engine202. Cryptogram generation engine202is a computer program that can generate a cryptogram for a transaction post processing of said transaction. While cryptogram generation engine202is shown operational on peer1104, cryptogram generation engine202can be operational on one or more blockchain nodes102(e.g., peer2106, peer3108, and peer4110) within the blockchain network. Further, one or more cryptogram generation engines202can be located as a separate node within the blockchain network and within a specific channel of the blockchain network. The cryptogram can be generated after the transaction has been committed to the ledger. A cryptogram can contain data related to a file handle for new data on the blockchain and/or data related to the file scheme of the blockchain network. A cryptogram can be a storage pointer for the updated data within the blockchain. In some embodiments, cryptogram generation engine202can generate a cryptogram which captures the verification of a processed block, removing the need for additional verification by other nodes. For example, in a bi-lateral transaction within a channel, two parties can undergo a transaction. Once the transaction has occurred and committed to the ledger, a cryptogram can be generated with a nonce (i.e., hash), a timestamp for the transaction, a transaction root, and the storage address scheme of the data updated through the transaction within the database. 
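The post-commit cryptogram described above, containing a nonce, a timestamp, a transaction root, and the storage address scheme of the updated data, can be sketched as follows. The field names, the JSON serialization, and the SHA-256 seal are illustrative assumptions about one possible encoding.

```python
import hashlib
import json
import secrets
import time

def generate_cryptogram(transaction_root, storage_address):
    """Generate a cryptogram after a transaction is committed: a nonce, a timestamp,
    the transaction root, and the storage address of the updated data, sealed by a hash."""
    body = {
        "nonce": secrets.token_hex(16),
        "timestamp": time.time(),
        "transaction_root": transaction_root,
        "storage_address": storage_address,
    }
    body["cryptogram_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify_cryptogram(cryptogram):
    """Recompute the seal; any change to the cryptogram's contents is detectable."""
    body = {k: v for k, v in cryptogram.items() if k != "cryptogram_hash"}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return cryptogram["cryptogram_hash"] == expected
```

Because the seal covers the storage address, the cryptogram can serve as a tamper-evident storage pointer for the data updated by the transaction.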
In some embodiments, cryptogram generation engine 202 can generate a cryptogram that can act as an access control list in bi-lateral and multi-lateral transactions. For example, in a transaction between three members within a private channel consisting of six members, only the parties within the transaction will have the key to view the data generated by the transaction. This is because the hash of the cryptogram will prevent the parties without the key from being able to decipher the data's file address or handle within the cryptogram. Further, in some embodiments, the cryptogram can show verification by the endorser node within the channel. For example, if a new transaction is proposed in the channel or within the network, the nodes within the channel will not be required to verify the transaction, as the cryptogram signed by the proposing parties can act as verification of the proposed transaction. In some embodiments, cryptogram generation engine 202 can generate a signed cryptogram. The signed cryptogram can remove duplicity of data by removing the requirement of verification by other nodes within the channel or network. Further, signed cryptograms can provide agreement and provenance within a channel for bi-lateral and multi-lateral transaction processing. In some embodiments, cryptogram generation engine 202 can disseminate a generated cryptogram to nodes involved in the transaction. For example, after a transaction is processed, the transaction is committed to the ledger. A cryptogram can be generated by cryptogram generation engine 202 for the transaction, with the data related to the transaction. The cryptogram can be sent to the nodes involved in the transaction and can be sent to all the nodes in the channel or network as well. It is noted that, as embodied in FIG. 2 and throughout this specification, the novelty presented, as implemented, is multiplicative. 
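The access-control property described above, where only parties holding the transaction key can recover the data's file handle from the cryptogram, can be illustrated with a deliberately simple keyed cipher. This is a toy XOR keystream, NOT production cryptography, and every name in it is an invented illustration; a real system would use an authenticated encryption scheme.

```python
import hashlib

def _keystream(key, length):
    """Derive a pseudo-random keystream from the shared key (toy construction)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def seal_handle(file_handle, key):
    """XOR the file handle with the key-derived stream: only parties to the
    transaction, who hold `key`, can recover the handle from the cryptogram."""
    return bytes(a ^ b for a, b in zip(file_handle, _keystream(key, len(file_handle))))

unseal_handle = seal_handle  # XOR with the same keystream inverts the sealing
```

A channel member outside the transaction, lacking the key, derives a different keystream and recovers only garbage, which is the access-control effect the cryptogram provides.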
Referring now toFIG.3, illustrated is a flowchart of an example method300of blockchain data management through generative cryptograms. In some embodiments, the method300may be performed by a processor, node, and/or peer node in a blockchain network (such as the blockchain network200ofFIG.2). In some embodiments, the method300proceeds to operation302, where the processor identifies that a transaction has been initiated, via a peer (e.g., peer1104, peer2106, peer3108, peer4110) within blockchain architecture100. In some embodiments, the method300proceeds to operation304, where the processor processes the exchange, via a peer within blockchain architecture100. In some embodiments, the method300proceeds to operation306, where the processor generates a cryptogram for the transaction via cryptogram generation engine202, where the cryptogram captures verification of the transaction at the node. In some embodiments, the method300proceeds to operation308, where the processor commits the transaction to the ledger of the blockchain network. In some embodiments, discussed below, there are one or more operations of the method300not depicted for the sake of brevity. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. 
This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. 
The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. 
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring to FIG. 4A, a cloud computing environment 410 is depicted. As shown, cloud computing environment 410 includes one or more cloud computing nodes 400 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 400A, desktop computer 400B, laptop computer 400C, and/or automobile computer system 400N may communicate. Nodes 400 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 410 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 400A-N shown in FIG. 4A are intended to be illustrative only and that computing nodes 400 and cloud computing environment 410 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring to FIG. 4B, a set of functional abstraction layers provided by cloud computing environment 410 (FIG. 4A) is shown. 
It should be understood in advance that the components, layers, and functions shown inFIG.4Bare intended to be illustrative only and embodiments of the disclosure are not limited thereto. As depicted below, the following layers and corresponding functions are provided. Hardware and software layer415includes hardware and software components. Examples of hardware components include: mainframes402; RISC (Reduced Instruction Set Computer) architecture based servers404; servers406; blade servers408; storage devices411; and networks and networking components412. In some embodiments, software components include network application server software414and database software416. Virtualization layer420provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers422; virtual storage424; virtual networks426, including virtual private networks; virtual applications and operating systems428; and virtual clients430. In one example, management layer440may provide the functions described below. Resource provisioning442provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing444provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal446provides access to the cloud computing environment for consumers and system administrators. Service level management448provides cloud computing resource allocation and management such that required service levels are met. 
Service Level Agreement (SLA) planning and fulfillment450provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer460provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation462; software development and lifecycle management464; virtual classroom education delivery466; data analytics processing468; transaction processing470; and blockchain data management through generative cryptogram472. Referring to FIG.5, illustrated is a high-level block diagram of an example computer system501that may be used in implementing one or more of the methods, tools, and modules, and any related functions, described herein (e.g., using one or more processor circuits or computer processors of the computer), in accordance with embodiments of the present disclosure. In some embodiments, the major components of the computer system501may comprise one or more CPUs502, a memory subsystem504, a terminal interface512, a storage interface516, an I/O (Input/Output) device interface514, and a network interface518, all of which may be communicatively coupled, directly or indirectly, for inter-component communication via a memory bus503, an I/O bus508, and an I/O bus interface unit510. The computer system501may contain one or more general-purpose programmable central processing units (CPUs)502A,502B,502C, and502D, herein generically referred to as the CPU502. In some embodiments, the computer system501may contain multiple processors typical of a relatively large system; however, in other embodiments the computer system501may alternatively be a single CPU system. Each CPU502may execute instructions stored in the memory subsystem504and may include one or more levels of on-board cache.
System memory504may include computer system readable media in the form of volatile memory, such as random access memory (RAM)522or cache memory524. Computer system501may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system526can be provided for reading from and writing to a non-removable, non-volatile magnetic media, such as a “hard drive.” Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), or an optical disk drive for reading from or writing to a removable, non-volatile optical disc such as a CD-ROM, DVD-ROM or other optical media can be provided. In addition, memory504can include flash memory, e.g., a flash memory stick drive or a flash drive. Memory devices can be connected to memory bus503by one or more data media interfaces. The memory504may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments. One or more programs/utilities528, each having at least one set of program modules530may be stored in memory504. The programs/utilities528may include a hypervisor (also referred to as a virtual machine monitor), one or more operating systems, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Programs528and/or program modules530generally perform the functions or methodologies of various embodiments. 
Although the memory bus503is shown inFIG.5as a single bus structure providing a direct communication path among the CPUs502, the memory subsystem504, and the I/O bus interface510, the memory bus503may, in some embodiments, include multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the I/O bus interface510and the I/O bus508are shown as single respective units, the computer system501may, in some embodiments, contain multiple I/O bus interface units510, multiple I/O buses508, or both. Further, while multiple I/O interface units are shown, which separate the I/O bus508from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices may be connected directly to one or more system I/O buses. In some embodiments, the computer system501may be a multi-user mainframe computer system, a single-user system, or a server computer or similar device that has little or no direct user interface, but receives requests from other computer systems (clients). Further, in some embodiments, the computer system501may be implemented as a desktop computer, portable computer, laptop or notebook computer, tablet computer, pocket computer, telephone, smartphone, network switches or routers, or any other appropriate type of electronic device. It is noted thatFIG.5is intended to depict the representative major components of an exemplary computer system501. In some embodiments, however, individual components may have greater or lesser complexity than as represented inFIG.5, components other than or in addition to those shown inFIG.5may be present, and the number, type, and configuration of such components may vary. 
As discussed in more detail herein, it is contemplated that some or all of the operations of some of the embodiments of methods described herein may be performed in alternative orders or may not be performed at all; furthermore, multiple operations may occur at the same time or as an internal part of a larger process. The present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. 
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. Although the present disclosure has been described in terms of specific embodiments, it is anticipated that alterations and modifications thereof will become apparent to those skilled in the art. Therefore, it is intended that the following claims be interpreted as covering all such alterations and modifications as fall within the true spirit and scope of the disclosure.
11943361 | DETAILED DESCRIPTION The following detailed description is made with reference to the accompanying drawings and is provided to assist in a comprehensive understanding of various example embodiments of the present disclosure. The following description includes various details to assist in that understanding, but these are to be regarded merely as examples and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents. The words and phrases used in the following description are merely used to enable a clear and consistent understanding of the present disclosure. In addition, descriptions of well-known structures, functions, and configurations may have been omitted for clarity and conciseness. Those of ordinary skill in the art will recognize that various changes and modifications of the examples described herein can be made without departing from the spirit and scope of the present disclosure. FIG.1Aillustrates structural components implementing a communication system100at time t0. As shown in the figure, communication system100includes: a gateway device102, a client device104, a smart media device106, a mobile device108, a secure Wi-Fi network110, a service provider server120, an Internet124, a mobile service provider122, and an external server126. Client device104is configured to communicate with gateway device102via secure communication channel112. Smart media device106is configured to communicate with gateway device102via secure communication channel114. Mobile device108is configured to communicate with gateway device102via secure communication channel116. Gateway device102is configured to communicate with service provider server120via physical media/wiring118. Service provider server120is configured to communicate with Internet124via secure communication channel105. Mobile service provider122is configured to communicate with Internet124via secure communication channel109. 
Lastly, external server126is configured to communicate with Internet124via secure communication channel107. Gateway device102, also referred to as a Wi-Fi APD, residential gateway, or RG, is an electronic device that is to be located so as to establish a local area network (LAN) at a user premises. The user premises may include a residential dwelling, office, or any other business space of a user. The terms home, office, and premises may be used synonymously herein. Gateway device102may be any device or system that is operable to allow data to flow from one discrete network to another, which as will be described in greater detail below, will be from a wireless local area network (WLAN) to an external network, e.g., the Internet, which is shown as Internet124. Gateway device102may perform such functions as web acceleration and HTTP compression, flow control, encryption, redundancy switchovers, traffic restriction policy enforcement, data compression, TCP performance enhancements (e.g., TCP performance enhancing proxies such as TCP spoofing), quality of service functions (e.g., classification, prioritization, differentiation, random early detection (RED), TCP/UDP flow control), bandwidth usage policing, dynamic load balancing, and routing. As will be described in greater detail below, gateway device102establishes, or is part of WLAN110, using Wi-Fi for example, such that client device104, smart media device106, and mobile device108are able to communicate wirelessly with gateway device102. The term Wi-Fi as used herein may be considered to refer to any of Wi-Fi 4, 5, 6, 6E, or any variation thereof. Further, it should be noted that gateway device102is able to communicate with service provider server120via physical media/wiring118, which may optionally be a wireless communication system, such as 4G, or 5G and further is able to connect to Internet124via service provider server120. 
Service provider server120is configured to connect gateway device102to external server126by way of secure communication channel107, Internet124, and secure communication channel105. Service provider server120is also configured to connect gateway device102to mobile service provider122via secure communication channel109, Internet124, and secure communication channel105. Gateway device102serves as a gateway or access point to Internet124for one or more electronic devices, referred to generally herein as client device104, smart media device106, and mobile device108that wirelessly communicate with gateway device102via, e.g., Wi-Fi. Client device104, smart media device106, and mobile device108can be desktop computers, laptop computers, electronic tablet devices, smart phones, appliances, or any other so called internet of things (IoT) equipped devices that are equipped to communicate information via Wi-Fi network110. Within Wi-Fi network110, electronic devices are often referred to as being stations in Wi-Fi network110. In IEEE 802.11 (Wi-Fi) terminology, a station (abbreviated as STA) is a device that has the capability to use the 802.11 protocol. For example, a station may be a laptop, a desktop PC, PDA, APD, or Wi-Fi phone. An STA may be fixed, mobile or portable. Generally, in wireless networking terminology, a station, wireless client, and node are often used interchangeably, with no strict distinction existing between these terms. A station may also be referred to as a transmitter or receiver based on its transmission characteristics. IEEE 802.11-2012 defines station as: a logical entity that is a singly addressable instance of a medium access control (MAC) and physical layer (PHY) interface to the wireless medium (WM). A wireless access point (WAP), or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a Wi-Fi network.
A service set ID (SSID) is an identification (in IEEE 802.11) that is broadcast by access points in beacon packets to announce the presence of a network access point for the SSID. SSIDs are customizable IDs that can be zero to 32 bytes, and can be in a natural language, such as English. In Wi-Fi network110, gateway device102is an access point for Wi-Fi network110. External server126may store all secure information that was created during the initial setup of secure Wi-Fi network110. The secure information includes the encrypted SSO password and the keys that were configured for all smart media devices inside secure Wi-Fi network110to use in order to access gateway device102to perform various tasks like monitoring network health, creating a new user account, configuring access level for certain devices in the network, etc. The scenario illustrated inFIG.1Arepresents a secure communication network that has already been established where the encrypted SSO password and keys are stored remotely on external server126. The encrypted SSO password stored on external server126is mainly used by all client devices in Wi-Fi network110to access and control gateway device102. The encrypted keys stored on external server126are used by client devices in Wi-Fi network110to decrypt the encrypted SSO password in order to gain access to gateway device102. The functionalities of both the encrypted SSO password and the keys will be explained in further detail below. A user of client device104may create an account that is managed by external server126. Client device104may create the account for managing gateway device102. This method of account creation would require the user to remember both the username and password every time they access the account by way of client device104. Some conventional systems provide a password-hidden login and recovery mechanism such that the user only needs to remember the username to log into the account by way of client device104.
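The SSID length constraint noted above (zero to 32 bytes) can be checked before an access point broadcasts the identifier. The following is a minimal sketch, not part of the disclosure, assuming a UTF-8 encoding of a natural-language SSID; the function name is illustrative:

```python
def is_valid_ssid(ssid: str) -> bool:
    """Check the IEEE 802.11 SSID length constraint: 0 to 32 bytes."""
    return len(ssid.encode("utf-8")) <= 32

assert is_valid_ssid("HomeNetwork")   # natural-language SSID, 11 bytes
assert not is_valid_ssid("x" * 33)    # 33 bytes exceeds the limit
```

Note that the limit applies to the encoded byte length, so a multi-byte (e.g., non-ASCII) SSID may hold fewer than 32 characters.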
Client device104may be used to manage the features provided by gateway device102. Further, a Single Sign On password (SSO password) methodology may be used for managing user accounts across various other cloud system components. Hence it is important to make sure that the SSO password is synchronized across these various other cloud system components for a seamless user experience with client device104. In some cases, for the above methodology to work, client device104may utilize several unique, secure techniques to support a SSO password, wherein the user would be required to enter both username and password. With this process, the user would be burdened with remembering and entering both the username and the SSO password each time to access their user account as stored in external server126. This burden may be lightened with the use of a hidden SSO password. The use of a hidden SSO password first involves client device104automatically generating a SSO password upon account creation. This auto-generated SSO password is stored in the secure keychain memory portion of client device104. This same keychain-stored, auto-generated SSO password is then used for account sign in when the user logs into the account by way of client device104. A client device may be authenticated by sending a one-time passcode to the email that is associated with the user of client device104. The auto-generated SSO password is encrypted using a network device specific random key. This encrypted network device specific random key is unique for each network device. The encrypted SSO password is then stored in external server126during an onboarding process performed by client device104, such as when gateway device102is onboarded. The same SSO password is also shared to various network devices, such as smart media device106, during the onboarding process of gateway device102. The user of client device104may view the SSO password via client device104after successful authentication. 
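The hidden-SSO-password flow described above can be sketched as follows, with a plain dictionary standing in for the secure keychain memory portion of the client device; all names are illustrative, and real keychain APIs are platform specific:

```python
import secrets

# Hypothetical stand-in for the client device's secure keychain memory.
keychain = {}

def create_account(username: str) -> None:
    """On account creation, auto-generate a high-entropy SSO password and
    store it in the keychain so the user never has to see or remember it."""
    keychain[username] = secrets.token_urlsafe(24)

def sign_in(username: str) -> str:
    """Account sign-in reuses the keychain-stored, auto-generated password."""
    return keychain[username]

create_account("alice@example.com")
assert sign_in("alice@example.com") == keychain["alice@example.com"]
```

The one-time-passcode step that authenticates the user's email is elided here; only the generate-store-reuse cycle that hides the password from the user is shown.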
External server126may validate a request by smart media device106to access the health and the network status of gateway device102based on the SSO password and user name as provided by smart media device106. Further, client device104may encrypt the SSO password with an encryption key. In some cases, the encryption key may be generated using unique identifiers from both gateway device102and client device104. For example, the encryption key may be derived using a serial number of gateway device102, a phone number of client device104, and a time stamp. Client device104may then store the encrypted SSO password in external server126. In the event that client device104loses the SSO password, e.g., instructions on client device104that enable client device104to interact with gateway device102are uninstalled and reinstalled, client device104may request the encrypted SSO password from external server126. Then, client device104may obtain the encryption key from gateway device102to decrypt the encrypted SSO password in order to obtain the SSO password for use with gateway device102. However, there are two scenarios when the network device specific random key is not accessible and thus the SSO password is not recoverable by client device104. The first scenario is when the instructions on client device104are uninstalled and then reinstalled after factory resetting gateway device102. The second scenario is when the instructions on client device104are installed on another client device after factory resetting gateway device102. Factory resetting gateway device102requires the user of client device104to re-onboard gateway device102and re-register gateway device102with the corresponding user account in external server126. In this situation, client device104will reset the password to a new password.
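The key derivation and recovery round trip described above can be sketched as follows. The inputs (gateway serial number, client phone number, time stamp) are those named in the text; SHA-256 and the XOR keystream cipher are illustrative stand-ins, since the disclosure does not specify the actual derivation or cipher:

```python
import hashlib

def derive_key(serial: str, phone: str, timestamp: str) -> bytes:
    """Derive an encryption key from the gateway serial number, the client
    phone number, and a time stamp. SHA-256 is an illustrative choice."""
    return hashlib.sha256(f"{serial}|{phone}|{timestamp}".encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (XOR against a hash-derived keystream); a real
    deployment would use an authenticated cipher such as AES-GCM."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = derive_key("GW-1234", "+15551230000", "2024-01-01T00:00:00Z")
encrypted = xor_cipher(b"auto-generated-sso-password", key)  # stored on the external server
assert xor_cipher(encrypted, key) == b"auto-generated-sso-password"
```

Because the key is derived deterministically from device identifiers, any client that can read those identifiers from the gateway can re-derive the key and decrypt the stored password, which is exactly the recovery path the text describes.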
Resetting the SSO password to a new password will be helpful only in synchronizing client device104when re-registering (programmatically) client device104with external server126during onboarding. However, smart media device106will not have the new password in this situation. Hence the services provided by smart media device106will stop working due to the failure of authenticating smart media device106with external server126. In order for smart media device106to reconnect with external server126, the user of client device104will need to manually change the SSO password used by smart media device106. Unfortunately, in many cases, the user may not realize that client device104had changed the SSO password, as the entire password generation and authentication process is hidden from the user. Therefore, services provided by smart media device106would no longer work for reasons unknown to the user. This would lead to a very frustrated user likely calling customer support to rectify the poor service provided by gateway device102. The problematic situation described above will be further discussed with reference toFIGS.1B-D. FIG.1Billustrates the secure electronic communication network ofFIG.1Aat time t1. As shown inFIG.1B, at time t1, gateway device102has recently been upgraded with new firmware and all existing configurations, including the secure information, were wiped out during the process. This causes secure Wi-Fi network110to go down where all devices inside Wi-Fi network110can no longer communicate to one another and are also unable to communicate to anything outside of Wi-Fi network110including: service provider server120, Internet124, external server126, and mobile service provider122. Since gateway device102lost all the secure information after the firmware upgrade, it cannot connect to external server126to retrieve the secure information. In order to reestablish all connections, gateway device102has to be manually reconfigured and onboarded again. 
This process will be further discussed with reference toFIG.1C. FIG.1Cillustrates the secure electronic communication network ofFIG.1Aat time t2. As shown inFIG.1C, at time t2, gateway device102has recently finished onboarding to service provider server120and is connected to it via secure communication channel130. Gateway device102is now able to connect to Internet124via service provider server120. Gateway device102is also able to connect to external server126via service provider server120and Internet124. Additionally, gateway device102is also able to connect to mobile service provider122via service provider server120and Internet124. Gateway device102also created an open Wi-Fi SSID and started broadcasting internally for all devices in Wi-Fi network110to start onboarding. Note that this open Wi-Fi SSID is not the same as the original Wi-Fi SSID, so all devices in Wi-Fi network110have to go through the process of onboarding again. This process will be further discussed with reference toFIG.1D. FIG.1Dillustrates the secure electronic communication network ofFIG.1Aat time t3. As shown inFIG.1D, at time t3, client device104just finished onboarding to Wi-Fi network110using the new Wi-Fi SSID that gateway device102broadcasted for all clients to onboard to the network. Client device104is now able to connect to Internet124via gateway device102and service provider server120. Client device104is also able to connect to external server126via gateway device102, service provider server120, and Internet124. However, as client device104onboards to Wi-Fi network110, it automatically generates a new set of keys and an SSO password. This new set of keys and SSO password will not be the same as the previous set of keys and SSO password, which were stored on external server126prior to gateway device102being reset. This new SSO password has now replaced the original SSO password stored on external server126.
Similar to client device104, smart media device106onboards to Wi-Fi network110and is now able to connect to Internet124via gateway device102. However, when the user tries to use smart media device106to control gateway device102, it does not work. This is due to smart media device106failing to authenticate itself with gateway device102using the original SSO password. The scenario presented inFIGS.1B-Dpoints out a limitation of storing an SSO password on an external server for a fast onboarding of clients in a Wi-Fi network where the gateway device plays a central role. If anything happens to the gateway device that causes it to lose all configurations and secure information and the main client device for re-onboarding the gateway device reinstalls the instructions for onboarding the gateway device, then the whole process of onboarding using the SSO password stored on the external server no longer works since not all clients will be able to start their services after the gateway device is rebooted and reconfigured. What is needed is a system and method for using a client device to restore the original SSO password stored on an external server to a gateway device if anything happens to the gateway device that causes it to wipe out the original SSO password. A system and method in accordance with the present disclosure is provided for restoring the SSO password stored on an external server to a gateway device once the gateway device has successfully recovered from any major issues that caused it to completely wipe out the original configuration, including the original SSO password. This will help all client devices in the network to recover seamlessly and continue to function and provide services normally.
In accordance with the present disclosure, a system and method is provided to use a client device to restore the original SSO password stored on an external server associated with a gateway device in cases where the gateway device has wiped out all of its original configuration information, including all secure information for the Wi-Fi network. The client device must connect to the gateway device locally to provide instructions for the gateway device to perform the onboarding process. Aspects of the present disclosure propose a solution to restore the SSO password for the two scenarios discussed above in which the network device specific random key is not accessible and thus the SSO password is not recoverable by client device104. In accordance with aspects of the present disclosure, a client device uses the following procedures for recovering the original SSO password: first, the client device stores the encrypted SSO password in an external server during the initial account creation process; second, if the client device needs to obtain the encrypted SSO password from the external server, e.g.
the client device reinstalls the instructions for onboarding the gateway device, then the external server will authenticate the client device by providing a one-time passcode (OTP) to the client device; third, the client device resets the password of the user account on the external server to a temporary password; fourth, the client device proceeds with a re-onboarding process by wirelessly connecting to the network device, e.g., a gateway device; fifth, the client device retrieves the original encrypted SSO password from the external server; sixth, the client device wirelessly retrieves a device specific random key stored in the network device to be re-onboarded; seventh, the client device decrypts the encrypted SSO password using the device specific random key and resets the temporary password of the user account back to the original value of the SSO password; eighth, the client device re-registers the network device to the external server by using the original SSO password; and ninth, the client device re-registers with any other external servers that may be used by the smart media device by using the original SSO password. At this point, all components of the network, e.g., the client device and the smart media device, are using the same SSO password to interact with the gateway device. Therefore, the user will be able to use all services provided by the smart media device seamlessly. Further, it should be noted that the nine procedures discussed above are performed by the client device without the user's knowledge, wherein the client device only needs the OTP in order to authenticate the user's email identification. An example system and method for using a client device to restore the SSO password stored on an external server associated with a gateway device in the case where the gateway device has wiped out its entire original configuration, including all secure information for the Wi-Fi network, will now be described in greater detail with reference toFIGS.2-4.
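Before turning to the figures, the nine procedures above can be summarized in code. The following Python sketch is illustrative only: the server and gateway objects, their method names, and the XOR decryption step are hypothetical in-memory stand-ins for the network interactions described above, not an actual implementation.

```python
def xor_decrypt(blob: bytes, key: bytes) -> bytes:
    # Placeholder symmetric decryption; the disclosure does not name a cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

class StubExternalServer:
    """In-memory stand-in for the external server (hypothetical API)."""
    def __init__(self, encrypted_sso: bytes):
        self.encrypted_sso = encrypted_sso  # step 1: stored at account creation
        self.account_password = None
        self.registered_gateways = {}
        self._otp = "123456"  # fixed here for illustration; real OTPs are random

    def request_otp(self) -> str:          # step 2: server issues an OTP
        return self._otp

    def reset_account_password(self, otp: str, password: str) -> None:
        if otp != self._otp:
            raise PermissionError("OTP mismatch")
        self.account_password = password

    def get_encrypted_sso(self) -> bytes:  # step 5
        return self.encrypted_sso

    def register_gateway(self, device_id: str, sso: str) -> None:  # step 8
        self.registered_gateways[device_id] = sso

class StubGateway:
    """In-memory stand-in for the network (gateway) device."""
    def __init__(self, device_id: str, key: bytes):
        self.device_id, self._key, self.password = device_id, key, None

    def onboard(self, password: str) -> None:  # step 4
        self.password = password

    def get_device_key(self) -> bytes:         # step 6
        return self._key

def recover_sso_password(server, gateway, media_servers, temp_password: str) -> str:
    """Client-side orchestration of the nine recovery procedures."""
    otp = server.request_otp()                         # step 2
    server.reset_account_password(otp, temp_password)  # step 3: temporary password
    gateway.onboard(temp_password)                     # step 4: re-onboarding
    encrypted_sso = server.get_encrypted_sso()         # step 5
    key = gateway.get_device_key()                     # step 6
    sso = xor_decrypt(encrypted_sso, key).decode()     # step 7: decrypt and ...
    server.reset_account_password(otp, sso)            # ... restore the original
    server.register_gateway(gateway.device_id, sso)    # step 8
    for media_server in media_servers:                 # step 9: other servers
        media_server.register_gateway(gateway.device_id, sso)
    return sso
```

Consistent with the description above, the whole routine runs without user involvement beyond the OTP used to authenticate the user's email identification.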
FIG.2illustrates an example method for seamlessly recovering the SSO password stored in a Wi-Fi communication network, in accordance with aspects of the current disclosure. As shown inFIG.2, method200starts (S202), and the network is in an initial state where the user is onboarding the gateway device (S204). This is a standard process where the gateway device establishes connection to the service provider. Returning toFIG.2, once the gateway device is onboarded (S204), it establishes connection to the service provider and, in turn, establishes Internet service for the Wi-Fi network (S206). For example, as shown inFIG.1A, once gateway device102is onboarded, it connects to service provider server120and also establishes Internet services to Wi-Fi network110. Returning toFIG.2, once the Internet service is established (S206), the user generates an SSO password (S208). For example, returning toFIG.1A, the user of client device104uses a combination of unique keys from gateway device102to generate an SSO password. Returning toFIG.2, after the user generates an SSO password (S208), the user then uses this new SSO password to register the gateway device to the external server (S210). For example, returning toFIG.1A, the user registers gateway device102to external server126via service provider server120and Internet124. Returning toFIG.2, once the gateway device is registered using the SSO password (S210), the SSO password is also encrypted using an encryption key and then stored on an external server to be used by all devices inside the Wi-Fi network (S212). For example, returning toFIG.1A, after the user has finished registering gateway device102, the user of client device104then encrypts the SSO password using an encryption key. This encryption key may be derived using a serial number of gateway device102, a phone number of client device104, and a time stamp. The encrypted SSO password and the encryption key are then stored on external server126.
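The paragraph above says only that the encryption key may be derived from the gateway serial number, the client phone number, and a time stamp; no algorithm is specified. The sketch below therefore uses SHA-256 for the derivation and a simple XOR keystream purely for illustration; a production implementation would use a standard key-derivation function and an authenticated cipher such as AES-GCM.

```python
import hashlib

def derive_key(gateway_serial: str, phone_number: str, timestamp: str) -> bytes:
    # Illustrative derivation from the three inputs named in the text (S212).
    material = "|".join((gateway_serial, phone_number, timestamp)).encode()
    return hashlib.sha256(material).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Symmetric placeholder: the same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Encrypt the SSO password before storing it on the external server;
# the serial number, phone number, and time stamp shown are made up.
key = derive_key("SER-0042", "555-0100", "2021-07-01T12:00:00Z")
encrypted_sso = xor_crypt("MyS3cretSSO".encode(), key)
# Applying xor_crypt again with the same key recovers the SSO password.
```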
Returning toFIG.2, once the SSO password and encryption key have been stored on the external server (S212), the user then shares the SSO password with all devices in the network (S214). This SSO password will be used by all devices in the network to connect to the external server to enable access to the gateway device for controlling and monitoring the network. For example, referring toFIG.1A, after the SSO password has been shared to smart media device106, when a user uses smart media device106to request a network health check, smart media device106then connects to external server126by way of gateway device102, service provider server120, and Internet124. Smart media device106uses the shared SSO password to authenticate with external server126. After the authentication completes, smart media device106can access a network health check from gateway device102. Returning toFIG.2, after sharing the SSO password with all devices in the network (S214), all devices in the network perform normally. However, after some time, the gateway device gets rebooted (S216). This could be the result of the gateway device recently updating its firmware and losing its entire original configuration including the secure information for the Wi-Fi network. This will be further described with additional references toFIGS.3A-F. FIG.3Aillustrates an electronic communication network300at time t4, in accordance with aspects of the present disclosure. As shown in the figure, communication network300includes: a gateway device302, a client device304, smart media device106, mobile device108, a secure Wi-Fi network310, service provider server120, Internet124, mobile service provider122, and an external server316. Client device304is configured to communicate with gateway device302via secure communication channel330. Smart media device106is configured to communicate with gateway device302via secure communication channel114. 
Mobile device108is configured to communicate with gateway device302via secure communication channel116. Gateway device302is configured to communicate with service provider server120via physical media/wiring118. Lastly, external server316is configured to communicate with Internet124via secure communication channel107. Gateway device302, also referred to as a Wi-Fi APD, residential gateway, or RG, is an electronic device that is to be located so as to establish a local area network (LAN) at a user premises. The user premises may include a residential dwelling, office, or any other business space of a user. The terms home, office, and premises may be used synonymously herein. Gateway device302may be any device or system that is operable to allow data to flow from one discrete network to another, which as will be described in greater detail below, will be from a wireless local area network (WLAN) to an external network, e.g., the Internet, which is shown as Internet124. Gateway device302may perform such functions as web acceleration and HTTP compression, flow control, encryption, redundancy switchovers, traffic restriction policy enforcement, data compression, TCP performance enhancements (e.g., TCP performance enhancing proxies such as TCP spoofing), quality of service functions (e.g., classification, prioritization, differentiation, random early detection (RED), TCP/UDP flow control), bandwidth usage policing, dynamic load balancing, and routing. As will be described in greater detail below, gateway device302establishes, or is part of, WLAN310, using Wi-Fi for example, such that client device304, smart media device106, and mobile device108are able to communicate wirelessly with gateway device302. For purposes of discussion, suppose that some time prior to time t4, client device304has onboarded gateway device302and registered with external server316.
This may be performed in a manner similar to that discussed above with reference toFIG.1Ain which an SSO password is encrypted with an encryption key that was generated using unique identifiers from both gateway device302and client device304. Client device304then stores the encrypted SSO password in external server316. Still further, during this initial onboarding process, client device304registers gateway device302with external server316. During registration, the identity of gateway device302is associated with an account of the user of client device304. Further contact information, such as an email address or phone number of the user of client device304, and the encrypted SSO password are additionally associated with the account of the user of client device304. As shown inFIG.3A, at time t4, some time after the initial onboarding of gateway device302, Wi-Fi network310has lost all connectivity due to gateway device302rebooting after a firmware upgrade which caused all configurations, including secure information for Wi-Fi network310, to be wiped. This results in all client devices in Wi-Fi network310losing all connectivity. Client device304, smart media device106, and mobile device108are no longer connected to gateway device302, as indicated by dashed lines112,114, and116, respectively; and gateway device302is disconnected from service provider server120, as indicated by dashed line118. In order to re-establish Wi-Fi network310, the user has to manually reconfigure gateway device302to connect to service provider server120. FIG.4illustrates an exploded view of client device304, gateway device302, and external server316ofFIG.3A. As shown inFIG.4, gateway device302includes: a controller402; a memory404, which has stored therein an onboarding program406; at least one radio, a sample of which is illustrated as a radio410; and an interface circuit408. In this example, controller402, memory404, radio410, and interface circuit408are illustrated as individual devices.
However, in some embodiments, at least two of controller402, memory404, radio410and interface circuit408may be combined as a unitary device. Whether as individual devices or as combined devices, controller402, memory404, radio410, and interface circuit408may be implemented as any combination of an apparatus, a system and an integrated circuit. Further, in some embodiments, at least one of controller402, memory404, and interface circuit408may be implemented as a computer having a non-transitory computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such non-transitory computer-readable recording medium refers to any computer program product, apparatus or device, such as a magnetic disk, optical disk, solid-state storage device, memory, programmable logic devices (PLDs), DRAM, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired computer-readable program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Disk or disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc. Combinations of the above are also included within the scope of computer-readable media. For information transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer may properly view the connection as a computer-readable medium. Thus, any such connection may be properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. 
Example tangible computer-readable media may be coupled to a processor such that the processor may read information from and write information to the tangible computer-readable media. In the alternative, the tangible computer-readable media may be integral to the processor. The processor and the tangible computer-readable media may reside in an integrated circuit (IC), an application specific integrated circuit (ASIC), or large-scale integrated circuit (LSI), system LSI, super LSI, or ultra LSI components that perform a part or all of the functions described herein. In the alternative, the processor and the tangible computer-readable media may reside as discrete components. Example tangible computer-readable media may also be coupled to systems, non-limiting examples of which include a computer system/server, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Such a computer system/server may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. 
Further, such a computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media, including memory storage devices. Components of an example computer system/server may include, but are not limited to, one or more processors or processing units, a system memory, and a bus that couples various system components including the system memory to the processor. The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. A program/utility, having a set (at least one) of program modules, may be stored in the memory by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. The program modules generally carry out the functions and/or methodologies of various embodiments of the application as described herein.
Controller402may be implemented as a hardware processor such as a microprocessor, a multi-core processor, a single core processor, a field programmable gate array (FPGA), a microcontroller, an application specific integrated circuit (ASIC), a digital signal processor (DSP), or other similar processing device capable of executing any type of instructions, algorithms, or software for controlling the operation and functions of gateway device302in accordance with the embodiments described in the present disclosure. Memory404can store various programming, user content, and data, including onboarding program406. As will be described in greater detail below, onboarding program406includes instructions that, when executed by controller402, enable gateway device302to be onboarded, or brought into a state of usability as a network device, by client device304. Additionally, as will be described in greater detail below, onboarding program406also includes instructions that, when executed by controller402, enable gateway device302to change the temporary password to the SSO password retrieved from external server316. Interface circuit408can include one or more connectors, such as RF connectors, or Ethernet connectors, and/or wireless communication circuitry, such as 5G circuitry and one or more antennas. Interface circuit408receives content from external server316(as shown inFIG.3A) by known methods, non-limiting examples of which include terrestrial antenna, satellite dish, wired cable, DSL, optical fibers, or 5G as discussed above. Through interface circuit408, gateway device302receives an input signal, including data and/or audio/video content, from external server316and can send data to external server316. Radio410(and preferably two or more radios) may also be referred to as a wireless communication circuit, such as a Wi-Fi WLAN interface radio transceiver, and is operable to communicate with client device304and with external server316.
Radio410includes one or more antennas and communicates wirelessly via one or more of the 2.4 GHz band, the 5 GHz band, the 6 GHz band, and the 60 GHz band, or at the appropriate band and bandwidth to implement the Wi-Fi 4, 5, 6, or 6E protocols. Gateway device302can also be equipped with a radio to implement a Bluetooth interface radio transceiver and antenna, which communicates wirelessly in the ISM band, from 2.400 to 2.485 GHz. As an alternative, at least one of the radios can be a radio meeting a Radio Frequency for Consumer Electronics (RF4CE) protocol, Zigbee protocol, and/or IEEE802.15.4 protocol, which also communicates in the ISM band. External server316includes a controller430, a memory432, which has stored therein an onboarding program434, and an interface circuit436. Controller430may be implemented as a hardware processor such as a microprocessor, a multi-core processor, a single core processor, a field programmable gate array (FPGA), a microcontroller, an application specific integrated circuit (ASIC), a digital signal processor (DSP), or other similar processing device capable of executing any type of instructions, algorithms, or software for controlling the operation and functions of external server316in accordance with the embodiments described in the present disclosure. Memory432can store various programming, user content, and data, including onboarding program434. Memory432includes a data structure, a non-limiting example of which includes a look-up table that includes an entry for an account of the user of client device304. The data structure additionally includes identifying information of gateway device302that associates gateway device302with the user of client device304. This data structure additionally includes an entry for the account of the user of client device304for an encrypted SSO password for use by client device304.
As will be described in more detail below, this data structure additionally includes space for entry of a temporary encrypted SSO password for temporary use by client device304. As will be described in greater detail below, onboarding program434includes instructions that, when executed by controller430, enable client device304to initiate onboarding of gateway device302, and transmit the OTP and encrypted SSO password to client device304as requested by client device304during the onboarding of gateway device302. Client device304includes: a controller412; a memory414, which has stored therein an onboarding program416; at least one radio, a sample of which is illustrated as a radio420; an interface circuit418; a user interface circuit422; a display424; a microphone426; and a speaker428. In this example, controller412, memory414, radio420, interface circuit418, user interface circuit422, display424, and speaker428are illustrated as individual devices. However, in some embodiments, at least two of controller412, memory414, radio420, interface circuit418, user interface circuit422, display424, and speaker428may be combined as a unitary device. Further, in some embodiments, at least one of controller412and memory414may be implemented as a computer having tangible computer-readable media for carrying or having computer-executable instructions or data structures stored thereon.
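The look-up-table entry held in memory432, as described above, can be sketched as a simple record. The field and function names below are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class AccountRecord:
    """One per-user entry in the external server's look-up table (illustrative)."""
    email: str                     # contact information captured at registration
    phone: str                     # used for OTP delivery by call or text
    gateway_id: str                # associates the gateway with the account
    encrypted_sso: bytes           # original encrypted SSO password
    temporary_sso: Optional[bytes] = None  # space for a temporary password

# A minimal registry keyed by account identifier.
accounts: Dict[str, AccountRecord] = {}

def register_account(record: AccountRecord) -> None:
    accounts[record.email] = record

def mark_temporary(email: str, temp: bytes) -> None:
    # The client flags its current password as temporary while the
    # original encrypted SSO password is retained alongside it.
    accounts[email].temporary_sso = temp
```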
As will be described in greater detail below, controller412is configured to execute instructions stored in memory414to cause client device304to: transmit an OTP request to external server316; obtain the OTP from external server316; transmit the OTP to external server316to authenticate client device304; transmit an encrypted SSO password request to external server316; onboard gateway device302using a temporary password; receive the encrypted SSO password from external server316; obtain the key from gateway device302; decrypt the encrypted SSO password using the key to obtain the SSO password; and change the temporary password of gateway device302to the SSO password. In some embodiments, as will be described in greater detail below, controller412is configured to execute instructions stored in memory414to additionally cause client device304to instruct gateway device302to perform a factory reset. Controller412may be implemented as a hardware processor such as a microprocessor, a multi-core processor, a single core processor, a field programmable gate array (FPGA), a microcontroller, an application specific integrated circuit (ASIC), a digital signal processor (DSP), or other similar processing device capable of executing any type of instructions, algorithms, or software for controlling the operation and functions of client device304in accordance with the embodiments described in the present disclosure. Memory414can store various programming, user content, and data, including onboarding program416. As will be described in greater detail below, onboarding program416includes instructions that, when executed by controller412, enable client device304to initiate onboarding onto gateway device302. Interface circuit418can include one or more connectors, such as RF connectors, or Ethernet connectors, and/or wireless communication circuitry, such as 5G circuitry and one or more antennas.
Interface circuit418further enables controller412to decode communication signals received by radio420from gateway device302and to encode communication signals to be transmitted by radio420to gateway device302. User interface circuit422may be any device or system that is operable to enable a user to access and control controller412to manually operate or configure client device304. User interface circuit422may include one or more layers, including a human-machine interface (HMI) with physical input hardware such as keyboards, mice, and game pads, and output hardware such as computer monitors, speakers, and printers. Additional UI layers in user interface circuit422may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), and auditory UI (sound). Radio420may include a Wi-Fi WLAN interface radio transceiver that is operable to communicate with gateway device302, as shown inFIGS.3A-F, and may also include a cellular transceiver operable to communicate with mobile service provider122through a cellular network (not shown). Radio420includes one or more antennas and communicates wirelessly via one or more of the 2.4 GHz band, the 5 GHz band, the 6 GHz band, and the 60 GHz band, or at the appropriate band and bandwidth to implement the Wi-Fi 4, 5, 6, or 6E protocols. Client device304can also be equipped with a radio to implement a Bluetooth interface radio transceiver and antenna, which communicates wirelessly in the ISM band, from 2.400 to 2.485 GHz. As an alternative, at least one of the radios can be a radio meeting a RF4CE protocol, Zigbee protocol, and/or IEEE802.15.4 protocol, which also communicates in the ISM band.
Insofar as gateway device302provides connection to service provider server120, such as a multiple systems operator (MSO), gateway device302can be equipped with connectors to connect with a television or display device, and can also include programming to execute an electronic program guide and/or other suitable graphical user interface (GUI), and can with such configuration be referred to as a so-called set top box. Such a set top box can be included in the system shown inFIGS.3A-Fas gateway device302or in addition thereto. Moreover, inclusion of one or more of far-field microphones (e.g., for voice command and/or presence recognition, and/or telephone communication), cameras (e.g., for gesture and/or presence recognition, and/or video telephone communication), and speakers, and associated programming, can enable the gateway device to be a smart media device. Returning toFIG.2, after the gateway device has been rebooted (S216), it is determined whether or not instructions on the client device have also been reinstalled (S218). For example, as shown inFIG.4, a user of client device304may use user interface circuit422to instruct controller412to reinstall onboarding program416. In such a case, controller412will determine that instructions have been reinstalled. If it is determined that the instructions on the client device have not been reinstalled (N at S218), then it is also determined whether the instructions have been installed on a second client device in the network (S220). For example, other client devices may include a controller that is configured to execute instructions stored on a memory in a manner similar to client device304. As such, other client devices may install onboarding program416so as to be able to perform functions similar to client device304. If these instructions are installed on another client device, then gateway device302will be informed when the other client device associates with gateway device302.
In such instances, gateway device302may inform client device304that another client device has installed onboarding program416. If it is determined that the instructions are not installed on any other device (N at S220), then method200stops (S234). In this situation, since the instructions have not been reinstalled on client device304and have not been installed on any other client device, it does not matter that gateway device302has been rebooted. Specifically, client device304will be able to re-onboard gateway device302and provide gateway device302with the previously created SSO password. However, if it is determined that the instructions have either been reinstalled on the client device (Y at S218) or the instructions have been installed on another client device (Y at S220), then the gateway device has to re-onboard (S222). This will be further discussed with reference toFIG.3B. FIG.3Billustrates the electronic communication network inFIG.3Aat time t5. As shown in the figure, at time t5, gateway device302has been re-onboarded and is now able to connect to service provider server120. Gateway device302is communicating with service provider server120through a secure communication link320. Additionally, gateway device302is also able to communicate with Internet124via service provider server120. Furthermore, gateway device302also connects to external server316by way of Internet124and service provider server120, as well as to mobile service provider122by way of Internet124and service provider server120. Client device304is also able to connect to service provider server120via gateway device302. Additionally, client device304is able to connect to Internet124and external server316via gateway device302and service provider server120. However, smart media device106and mobile device108are still not able to connect to gateway device302since they only know the original Wi-Fi SSID initially configured prior to the firmware upgrade of gateway device302.
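The branching at S218 and S220 reduces to a simple predicate: re-onboarding is required only if the onboarding instructions were reinstalled on the client device or newly installed on a second client device. The function and parameter names below are illustrative.

```python
def re_onboarding_required(reinstalled_on_client: bool,
                           instructions_on_other_client: bool) -> bool:
    """Decision logic of S218/S220: when neither condition holds, the
    method simply stops (S234), because the client device still holds
    the previously created SSO password."""
    return reinstalled_on_client or instructions_on_other_client
```

When this predicate is true, the flow proceeds to re-onboard the gateway device (S222); otherwise no recovery is needed.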
Returning toFIG.2, after the gateway device has re-onboarded (S222), the client device resets the SSO password used to register with the external server as a temporary SSO password (S224). This will be further discussed with reference toFIG.3C. FIG.3Cillustrates the electronic communication network ofFIG.3Aat time t6. As shown inFIG.3C, after gateway device302has re-onboarded, at time t6, client device304communicates with external server316via uplink communication channel332(via gateway device302, service provider server120, and Internet124) to notify external server316that the SSO password that it used previously when re-onboarding gateway device302is a temporary password. Client device304also notifies external server316to retain the original SSO password it has initially set. Returning toFIG.2, after the client device has successfully reset the SSO password as a temporary password on the external server (S224), the client device retrieves the encrypted SSO password from the external server (S226). This will be further discussed with reference toFIG.3D. FIG.3Dillustrates the electronic communication network ofFIG.3Aat time t7. As shown in the figure, at time t7, after client device304has successfully re-onboarded gateway device302, notified external server316that the SSO password it used to register gateway device302is only a temporary SSO password, and requested external server316to retain the original encrypted SSO password, client device304then retrieves the encrypted SSO password from external server316via downlink communication channel334(via Internet124, service provider server120, and gateway device302). However, in order for client device304to retrieve the encrypted SSO password from external server316, client device304must identify itself to external server316. As shown inFIG.3D, the following authentication exchanges between client device304and external server316are over uplink communication channel332and downlink communication channel334. 
To start the authentication process, client device304sends an OTP request to external server316. Once it receives the OTP request via uplink communication channel332, external server316sends back to client device304an OTP verification via downlink communication channel334so that client device304may authenticate itself. OTP verification may be provided to client device304in a manner indicated in memory432as shown inFIG.4. For example, when gateway device302is initially registered and associated with client device304, a user of client device304may provide contact information to external server316. This contact information may then be used by controller430when executing instructions in onboarding program434to provide OTP verification via downlink communication channel334. Non-limiting mechanisms to provide OTP verification via downlink communication channel334include: email, wherein an email address of the user of client device304, which may have been provided by the user when initially registering gateway device302, is used to send an email having an OTP for the user to access; or a phone call or text message, wherein a phone number of client device304, which may have been provided by the user when initially registering gateway device302, is used to send an automated voice message or text message having an OTP for the user to access. After client device304has received the OTP verification reply from external server316, client device304then submits the OTP verification to external server316to authenticate itself. After client device304has been successfully verified by external server316, client device304sends a request for the encrypted SSO password stored in external server316. External server316, in turn, sends the encrypted SSO password to client device304via downlink communication channel334.
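The exchange just described can be sketched from the server side as follows. The class and method names are hypothetical; out-of-band delivery by email, voice call, or text message is represented by simply returning the passcode so the example stays self-contained.

```python
import hmac
import secrets

class OtpService:
    """Illustrative server-side handling of the OTP verification exchange."""

    def __init__(self):
        self._pending = {}  # account identifier -> outstanding passcode

    def issue_otp(self, account_id: str) -> str:
        # Generate a six-digit passcode; in the described system it would be
        # delivered out of band using the contact info stored at registration.
        otp = f"{secrets.randbelow(10**6):06d}"
        self._pending[account_id] = otp
        return otp

    def verify_otp(self, account_id: str, submitted: str) -> bool:
        # Single use: the passcode is consumed whether or not it matches,
        # and compared in constant time.
        expected = self._pending.pop(account_id, None)
        return expected is not None and hmac.compare_digest(expected, submitted)
```

Only after `verify_otp` succeeds would the server release the encrypted SSO password to the client.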
Returning toFIG.2, after the client device has retrieved the encrypted SSO password from the external server (S226), the client device also retrieves the key from the gateway device (S228). This will be further discussed with reference toFIG.3E. FIG.3Eillustrates the electronic communication network ofFIG.3Aat time t8. As shown inFIG.3E, at time t8, client device304communicates with gateway device302via communication channel330and sends an instruction350to gateway device302to retrieve the key for decrypting the encrypted password received previously from external server316. Returning toFIG.2, after the client device has obtained the key from the gateway device (S228), the client device uses the key to decrypt the encrypted SSO password (S230). For example, returning toFIG.3E, after client device304has obtained the key from gateway device302, client device304uses the key to decrypt the encrypted SSO password previously provided by external server316. Returning toFIG.2, after the client device has decrypted the encrypted SSO password (S230), the client device instructs the gateway device to replace the temporary password with the decrypted SSO password (S232). For example, returning toFIG.3E, after client device304successfully decrypts the encrypted SSO password received from external server316using the key provided by gateway device302, client device304instructs gateway device302to replace the temporary password it had previously created for onboarding with the decrypted SSO password. Returning toFIG.2, after the gateway device has replaced the temporary password with the decrypted SSO password (S232), method200ends (S234). This will be further discussed with reference toFIG.3F. FIG.3Fillustrates the electronic communication network ofFIG.3Aat time t9. 
As shown in FIG. 3F, at time t9, after gateway device 302 has replaced the temporary password with the original decrypted SSO password, a user can use the instructions installed on any of the client devices in the network to access and control gateway device 302. Client device 304 is now connected to gateway device 302 via secure communication link 112; smart media device 106 is connected to gateway device 302 via secure communication link 114; and mobile device 108 is connected to gateway device 302 via secure communication link 116. In a conventional system where the network is configured to use an SSO password for all devices in the network to access and control the gateway device, if the gateway device goes down for some reason and loses its configuration, including all secure information, and at the same time the user reinstalls the instructions to access the gateway device on a client device or installs the instructions on another client device, then the user will not be able to access and control the gateway device. In order to resync the SSO password to all devices in the network, the user must manually reconfigure and re-onboard all devices in the network, creating a new SSO password. The process to recover all devices in the network is extensive and cumbersome and also voids the original SSO password, since it is no longer applicable with the new configurations. In accordance with the present disclosure, a system and method are provided for seamlessly recovering the original SSO password stored on an external server and restoring it in the gateway device by way of a client device. In short, after the client device is able to authenticate itself with the external server, it can retrieve the encrypted SSO password from the external server; and along with a unique key provided by the gateway device, the client device can decrypt the encrypted SSO password.
The client device can then re-onboard the gateway device and instruct the gateway device to replace the temporary password it used to re-onboard with the decrypted SSO password. This process not only provides a faster way to recover the gateway device to its original configuration with the original SSO password, but it also allows all clients in the network to reconnect to the gateway device using the same SSO password, hence recovering the network seamlessly. The operations disclosed herein may constitute algorithms that can be effected by software, applications (apps, or mobile apps), or computer programs. The software, applications, and computer programs can be stored on a non-transitory computer-readable medium for causing a computer, such as the one or more processors, to execute the operations described herein and shown in the drawing figures. The foregoing description of various preferred embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The example embodiments, as described above, were chosen and described in order to best explain the principles of the present disclosure and its practical application, to thereby enable others skilled in the art to best utilize the present disclosure in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the present disclosure be defined by the claims appended hereto.
11943362 | BEST MODE Hereinafter, a configuration and operation of a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure will be described in detail with reference to the accompanying drawings, and a method, in the system, for providing personal information using a one-time private key based on a blockchain of proof of use will be described. FIG. 1 is a diagram illustrating a configuration of a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure. Referring to FIG. 1, a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure includes a user terminal 100, a blockchain alternative authentication server 200, a user-identification institution server 300, a service provider server 400, and a blockchain network 500. The user terminal 100, the blockchain alternative authentication server 200, the user-identification institution server 300, the service provider server 400, and the blockchain network 500 are connected over a wired/wireless data communication network 10 in a wired/wireless manner to perform data communication with each other. The wired/wireless data communication network 10 may be a data communication network in which at least one or more of the following networks are combined: a wide area network (WAN) including a Wi-Fi network; a mobile communication network, such as 3G (third generation), 4G, 5G, etc.; a WiBro network; etc. Preferably, the user terminal 100 is a mobile terminal, called a mobile phone, a smartphone, etc., having terminal identification information, but is not limited thereto. The user terminal 100 may be a desktop computer, a laptop computer, or the like having the terminal identification information.
The terminal identification information may be a phone number, an electronic serial number (ESN), the International Mobile Equipment Identity (IMEI), an Internet Protocol (IP) address, a MAC address, or the like. Preferably, the terminal identification information is unique information that does not change. The user terminal 100 provides personal information, user-authentication information, and terminal identification information only once at the beginning in order to become a member of a blockchain alternative authentication service according to the present disclosure. The user terminal 100 receives issued alternative authentication keys, including a public key, a private key, etc., and stores and keeps the same. The user terminal 100 accesses the service provider server 400 that provides any service and provides the service provider server 400 with the stored and kept public key, which is anonymous, for registration as a member to receive the service, for login, and for requesting the service. The alternative authentication keys, such as a public key, a private key, etc., are hash random numbers and may be provided as QR codes. In addition, the user terminal 100 receives and displays information for requesting approval and confirmation, and receives a user's response thereto and provides the same to the corresponding configuration. In addition, examples of the user terminal 100 may include a mobile terminal that provides an alternative authentication key for making a service use request, and a manager terminal of a service-providing host that is an offline external service provider or of a service-providing node that is an internal service provider.
That is, in an offline manner, the mobile terminal provides the manager terminal with a public key that is called and displayed after user authentication and terminal authentication of the user, and the manager terminal uses the received public key through the alternative authentication server 200 to verify the reliability of the public key and to perform a use process and operation, such as a request for providing required personal information, and so on. A configuration and operation of the user terminal 100 will be described later in detail with reference to FIG. 2. The user-identification institution server 300 performs general user identification when user-identification request information is received from the blockchain alternative authentication server 200, and provides a user-identification result value (Duplication Information (DI)) according to a result of the user identification. The service provider server 400 may be either a service-providing node server 510 or a service-providing host server 410, wherein the service-providing node server 510 is a server of a service-providing node that is an internal service provider participating as a node in the blockchain network 500, and the service-providing host server 410 is a server of a service-providing host that is an external service provider not participating as a node in the blockchain network 500. The service provider server 400 is a server installed in a government agency, an educational institution, a medical institution, a telecommunications company, a financial company, a transportation company, an asset management company, a credit information company, a portal company, an SNS, a game company, a shopping mall, a delivery company, a ticketing company, an electronic voting company, etc. The service provider server 400 receives personal information that is required by the service provider, through the blockchain network 500, by using an alternative authentication service that the alternative authentication server 200 provides.
In the case in which the service provider server 400 is the service-providing node server 510, the term server is used, but a terminal device, such as a mobile terminal, etc., may be used. As the service provider server 400, the service-providing node server 510 receives and stores various types of personal information from users once at the beginning, and provides an appropriate service to a user and a service provider on the basis of the stored personal information. The service-providing host server 410 receives the personal information stored in the service-providing node server 510 and provides an appropriate service to a service provider. As described above, the personal information that the service provider server 400 uses may be classified into three types: personal unique identification information, personal alternative identification information, and sensitive personal information. The personal unique identification information may be a name, a birth date, a sex, biometric information, a nationality, a photo, etc. The personal alternative identification information may be an address, a phone number, an email address, identification information (ID), a card number, a bank account number, location information, a cookie ID, a terminal ID, a MAC address, an IP address, an IMEI, an advertisement identifier, etc. The sensitive personal information may be a user's medical records, pharmaceutical records, academic records, information on assets under the user's name, military records, credit status information, ticketing records (air, ships, and trains), entry and exit records, licenses and qualifications, patents (application, registration, and maintenance), family relations, criminal records, labor union and association joining status, religion, political orientation, sexual orientation, etc.
According to the present disclosure, when the user terminal 100 provides a public key among the alternative authentication keys as service access information for using a service, the service provider server 400 makes a request to the user terminal 100 for personal information items (or referred to as “fields”) required to access the service, and receives, from the blockchain alternative authentication server 200 accordingly, a first one-time private key and an access (or reply) address of the service-providing node server 510 from which personal information of the user of the user terminal 100 is provided. When the first one-time private key and the node server access address are received, the service provider server 400 transmits personal information provision request information including the first one-time private key to the service-providing node server 510 of the node server access information, receives personal information, which includes a personal information field required to provide the service, of the user of the user terminal 100 from the service-providing node server 510, and provides the service accordingly. As the service provider server 400, the service-providing node server 510 records a personal information transaction statement including changed personal information when the personal information, including the sensitive personal information of the user, makes a change, for example, deletion, addition, modification, etc., as the service is provided. After the service-providing node server 510 adds an electronic signature thereof, the service-providing node server 510 generates a detailed statement of use of the personal information transaction statement and transmits the same to the alternative authentication server, and the alternative authentication server 200 performs proof of use on the detailed statement.
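The personal information provision request described above carries the first one-time private key, the service provider information, and the required fields. The disclosure names the contents but not a concrete wire format; the JSON layout and function name below are assumptions, shown only to make the message structure concrete.

```python
import json

# Hypothetical layout of the personal information provision request that
# service provider server 400 sends to the node server access address.
def build_provision_request(first_otk: str, provider_info: dict,
                            required_fields: list[str]) -> str:
    return json.dumps({
        "one_time_private_key": first_otk,   # first key of the issued pair
        "service_provider": provider_info,   # identifies the requester
        "required_fields": required_fields,  # only the fields the service needs
    })
```

Listing the required fields explicitly lets the receiving node server disclose only what the service actually needs, which is the data-minimization point of the field-based request.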
The personal information transaction statement for the personal information changed as described above may be distributed and stored in multiple other service-providing node servers 510, including the above-described service-providing node server 510. The blockchain network 500 is composed of multiple nodes, that is, a plurality of the service-providing node servers 510, and is managed by the blockchain alternative authentication server 200 so that a personal information transaction statement and a user public key are distributed and stored in multiple service-providing node servers 510 selected by the blockchain alternative authentication server 200. In addition, all the service-providing node servers 510 update an existing block, which is stored, with a new block for storage each time the new block is received from the alternative authentication server. A service-providing node server 510-1 of a service-providing node, which is one of the service providers, detects a personal information transaction statement matched to a user public key when personal information, including sensitive personal information of a user who uses a user terminal 100, is changed during use of the personal information for providing a unique service. The service-providing node server 510-1 receives a one-time private key, performs pair authentication, decrypts the personal information transaction statement, and records the changed personal information. The service-providing node server 510-1 encrypts the changed personal information with a public key of the user terminal 100 and stores a personal information transaction statement encrypted with an electronic signature of the service-providing node server 510-1.
In the above description, a service-providing node server 510-2 that provides the personal information to the service-providing node server 510-1 receives a second one-time private key and a personal information request statement including service provider identification information from the blockchain alternative authentication server 200. The service-providing node server 510-2 receives service provider information and personal information provision request information including a first one-time private key from the service-providing node server 510-1, and compares the personal information request statement and the personal information provision request information to check the service provider information and a required personal information field. Next, while the personal information transaction statement that is stored is detected using the user public key, the service-providing node server 510-2 performs pair authentication on the first one-time private key and the second one-time private key and decrypts the personal information transaction statement encrypted with the public key of the user terminal 100 by using the second one-time private key. The service-providing node server 510-2 extracts the personal information corresponding to a required personal information field required by the service-providing node server 510-1 from the personal information transaction statement and generates a personal information submission. The personal information submission is encrypted using an encryption key included in the service provider identification information and is provided to the service-providing node server 510-1 of the reply address. The blockchain alternative authentication server 200 generates alternative authentication keys, such as a public key, a private key, etc., unique to the user of the user terminal 100, and provides the alternative authentication keys to the user terminal 100.
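The pair authentication and field extraction performed at node server 510-2 can be sketched as follows. The disclosure does not fix how the two one-time keys are paired; here they are assumed to be derived from one shared seed, with a hash of both keys serving as the "pair authentication information" — all names and the derivation are assumptions for illustration.

```python
import hashlib
import secrets

# Hypothetical pairing scheme: both one-time keys come from one seed,
# and the pair tag lets a holder of both keys prove they belong together.
def issue_one_time_pair() -> tuple[str, str, str]:
    seed = secrets.token_hex(16)
    first = hashlib.sha256(b"first" + seed.encode()).hexdigest()
    second = hashlib.sha256(b"second" + seed.encode()).hexdigest()
    pair_tag = hashlib.sha256((first + second).encode()).hexdigest()
    return first, second, pair_tag

def pair_authenticate(first: str, second: str, pair_tag: str) -> bool:
    # Node server 510-2 checks that the first key (from 510-1) and the
    # second key (from the authentication server) form the issued pair.
    return hashlib.sha256((first + second).encode()).hexdigest() == pair_tag

def extract_required_fields(statement: dict, required: list[str]) -> dict:
    # Only the fields the requesting service provider needs are disclosed.
    return {field: statement[field] for field in required if field in statement}
```

Splitting the pair between the requester and the storing node means neither party can unlock the transaction statement alone, which is the access-control property the two-key design aims at.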
The blockchain alternative authentication server 200 generates a personal information transaction statement including the personal information received from the user terminal 100 and performs encryption with the public key. The encrypted personal information transaction statement and the public key are distributed and stored in the service-providing node servers 510 participating in the blockchain network 500. When the blockchain alternative authentication server 200 receives a user public key, service provider information for receiving a service, a required personal information field, and one-time private key issue request information from any user terminal 100, the blockchain alternative authentication server 200 generates a first one-time private key and a second one-time private key in a pair, provides the first one-time private key to the service provider server 400, and provides the second one-time private key to the service-providing node server 510 to which the personal information is to be provided. In addition, the blockchain alternative authentication server 200 performs proof of use (POU) on a detailed statement of use, which is a record of details of use of the personal information transaction statement stored in a distributed manner in the blockchain network 500, to generate one or more new blocks of a predetermined file size for a fact confirmation certificate including an electronic signature of the blockchain alternative authentication server 200, and forms an update chain with the existing blocks stored in all the service-providing node servers 510. A detailed configuration of the blockchain alternative authentication server 200 will be described later with reference to FIG. 3. FIG. 2 is a diagram illustrating a configuration of a user terminal of a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure, wherein the user terminal is a mobile terminal.
Referring to FIG. 2, the user terminal 100 includes a storage unit 110, a display unit 120, an input unit 130, a wireless communication unit 140, a biometric recognition information detection unit 150, a camera 160, and a terminal control unit 170. The storage unit 110 includes: a program area storing a control program for controlling the overall operation of the user terminal 100 according to the present disclosure; a temporary area temporarily storing data generated during the execution of the control program; and a data area semi-permanently storing data required or generated during the execution of the control program. The data area may store the user's personal information, alternative authentication keys such as a public key and a private key, terminal identification information, etc. According to an embodiment, the data area may store biometric information (Fast Identity Online (FIDO)) and a personal identification number (PIN). The display unit 120 displays various types of information, including operation state information of the user terminal 100, in one or more forms among text, graphics, still images, videos, etc. The input unit 130 includes at least one of the following: a key input device, such as a keyboard, a keypad, etc., for inputting multiple functions and letters; a button device including a power button, a volume button, a special function button, etc.; and a touch pad integrated with a screen of the display unit 120 and outputting position information that corresponds to a position that the user touches on the screen. The input unit 130 enables the user to input various commands and information.
The wireless communication unit 140 may include: a long-distance wireless communication unit 141 that is connected to the wired/wireless data communication network 10 in a wireless manner and performs data communication with other servers and devices connected to the wired/wireless data communication network 10; and a short-distance wireless communication unit 142 that is directly connected to other user terminals 100 or other devices a short distance away and performs data communication. The long-distance wireless communication unit 141 may include at least one among a first long-distance wireless communication unit (not shown) capable of accessing a Wi-Fi network, and a second long-distance wireless communication unit (an LTE wireless communication unit and a CDMA wireless communication unit, which are not shown) capable of accessing a mobile communication network. The short-distance wireless communication unit 142 may include any one or more among a radio-frequency identification (RFID) unit, a Bluetooth wireless communication unit, and a Near Field Communication (NFC) short-distance wireless communication unit. The biometric recognition information detection unit 150 detects biometric information of the user of the user terminal 100 and outputs the same to the terminal control unit 170. The biometric recognition information detection unit 150 includes at least one among the following: a fingerprint detection unit 151 detecting a fingerprint from a user's finger and outputting fingerprint information; an iris detection unit 152 detecting an iris from a user's eye and outputting iris information; and a voice feature detection unit 153 detecting a voice feature from a user's voice and outputting voice feature information.
The biometric recognition information detection unit 150 may further include: a face recognition detection unit (not shown) detecting a feature from an acquired facial image and outputting facial feature information; and an action recognition detection unit (not shown) detecting an action feature according to a user's action (for example, a walk, a signature, an input pattern, a gesture, etc.) and outputting action feature information. The camera 160 photographs an object within the angle of view and outputs image data to the terminal control unit 170. According to the present disclosure, the camera 160 may photograph a QR code including a public key and output a result to the terminal control unit 170. The terminal control unit 170 includes a personal information acquisition unit 171, a user-authentication information acquisition unit 172, a service registration unit 173, and a service processing unit 174, and controls the overall operation of the user terminal 100 according to the present disclosure. The personal information acquisition unit 171 causes a personal information input user interface means to be displayed on the display unit 120 so that the personal information described above is input, and acquires personal information through the displayed personal information input user interface means and the input unit 130. The user-authentication information acquisition unit 172 includes: a personal identification information acquisition unit 181 causing a user-authentication information input user interface means to be displayed on the display unit 120, and receiving a personal identification number through the user-authentication information input user interface means displayed on the display unit 120 and the input unit 130; a biometric information acquisition unit 182 acquiring biometric information through the biometric recognition information detection unit 150; and a terminal identification information acquisition unit 183 acquiring terminal identification information from the storage unit 110.
The service registration unit 173 includes a service registration request unit 184 and an alternative authentication key reception unit 185. The service registration request unit 184 is configured to: access the blockchain alternative authentication server 200; make a blockchain alternative authentication service registration (use) request to the blockchain alternative authentication server 200; and acquire, when user-identification personal information and user-authentication information request information are received from the blockchain alternative authentication server 200 in response to the service registration request, personal information for user identification and user-authentication information through the personal information acquisition unit 171 and the user-authentication information acquisition unit 172, and transmit the same to the blockchain alternative authentication server 200. After the service registration request, the alternative authentication key reception unit 185 receives a public key and a private key, which are alternative authentication keys, from the blockchain alternative authentication server 200 and stores the keys in the storage unit 110. The alternative authentication keys may be kept in an external storage device, as printouts, etc., depending on the user's method. The service processing unit 174 includes a service request unit 186, a one-time private key request unit 187, and a user approval unit 188. The service request unit 186 is configured to: access the service provider server 400 and request a service from the service provider server 400; acquire user-authentication information through the user-authentication information acquisition unit 172 when personal information input request information for providing the service is received from the service provider server 400; and load a public key, a private key, and terminal identification information from the storage unit 110 and output the same.
The one-time private key request unit 187 transmits, to the blockchain alternative authentication server 200, one-time private key issue request information including the alternative authentication keys, such as the user-authentication information, the public key, the private key, etc., and the terminal identification information. The user approval unit 188 causes a notification to the service provider server 400, an approval inquiry, normal confirmation information, etc. to be displayed on the display unit 120 in response to the one-time private key issue request, and receives a user's response thereto through the input unit 130 and provides the same to the service provider server 400. FIG. 3 is a diagram illustrating a configuration of a blockchain alternative authentication server of a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure. Referring to FIG. 3, the blockchain alternative authentication server 200 includes a server storage unit 210, a communication unit 220, and a server control unit 230. The server storage unit 210 includes: a program area storing a control program for controlling the overall operation of the blockchain alternative authentication server 200; a temporary area temporarily storing data generated during the execution of the control program; and a data area semi-permanently storing data required during the execution of the control program and data generated during the same. The data area stores node identification information, including service provider identification information, terminal identification information, etc., for the service-providing node servers in the blockchain network 500, and service provider information including access (address) information, etc. The data area also stores node transmission log information according to the present disclosure.
The communication unit 220 accesses the wired/wireless data communication network 10 and enables data communication to be performed with the user terminal 100, the user-identification institution server 300, the service provider server 400, and the service-providing node servers 510 in the blockchain network 500 that are connected to the wired/wireless data communication network 10. The server control unit 230 includes a server service registration unit 240 and a server service processing unit 250, and controls the overall operation of the blockchain alternative authentication server 200 according to the present disclosure. The server service registration unit 240 includes a server personal information acquisition unit 241, a user-identification unit 242, a blockchain network node selection unit 243, a user-authentication information acquisition unit 244, an alternative authentication key generation unit 245, and a distribution storage unit 246. The server service registration unit 240 generates alternative authentication keys, such as a public key, a private key, etc., for any user terminal 100 and provides the same to the user terminal 100. The server service registration unit 240 stores a personal information transaction statement, including the personal information of the user of the user terminal 100, and a user public key in a distributed manner in multiple service-providing node servers 510 in the blockchain network 500 for registration for a blockchain alternative authentication service. More specifically, the server personal information acquisition unit 241 acquires user-identification personal information and personal information for service registration from the user terminal 100 through the communication unit 220 and outputs the same.
When the user-identification personal information is acquired through the server personal information acquisition unit 241, the user-identification unit 242 accesses the user-identification institution server 300, which is external, transmits user-identification request information including the user-identification personal information to request user identification, receives a user-identification result value (DI) in response thereto, and outputs and stores the same. The blockchain network node selection unit 243 selects multiple service-providing node servers 510 in which the personal information received from the server personal information acquisition unit 241 is to be stored in a distributed manner, and outputs node identification information of the selected service-providing node servers 510. The blockchain network node selection unit 243 may select service-providing node servers 510 of which the number ranges from at least two to 50% of the total number of service-providing node servers 510. This is to ensure the stability of the personal information by storing the personal information in multiple service-providing node servers 510, and to minimize exposure of the personal information. The user-authentication information acquisition unit 244 makes a request to the user terminal 100 for user-authentication information, and acquires, in response thereto, user-authentication information including a personal identification number (PIN) and biometric information, and terminal identification information from the user terminal 100, and stores the same in the server storage unit 210. The alternative authentication key generation unit 245 generates alternative authentication keys, such as a private key, a public key, etc., by applying the user-identification result value (DI) and the user-authentication information, and transmits the generated alternative authentication keys, such as the private key, the public key, etc., to the user terminal 100 through the communication unit 220.
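The selection rule of the blockchain network node selection unit 243 — at least two nodes, at most 50% of all service-providing node servers — can be sketched as below. Uniform random sampling and the function name are assumptions; the disclosure specifies only the bounds.

```python
import random

# Sketch of the node-selection rule: pick between two and 50% of all
# service-providing node servers for distributed storage.
def select_storage_nodes(node_ids: list[str], rng: random.Random) -> list[str]:
    upper = max(2, len(node_ids) // 2)  # 50% cap, but never fewer than two
    count = rng.randint(2, upper)
    return rng.sample(node_ids, count)  # sample without replacement
```

Bounding the count from below gives redundancy (the statement survives a node failure), while the 50% cap limits how many nodes ever hold a copy, matching the stated goal of minimizing exposure of the personal information.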
The alternative authentication key generation unit 245 may generate the alternative authentication keys by further applying a token variable value to the user-identification result value and the user-authentication information. The alternative authentication key generation unit 245 may convert the alternative authentication keys, such as the private key, the public key, etc., into the form of QR codes and may provide the QR codes to the user terminal 100. The alternative authentication keys are hash random number values. After the alternative authentication keys are provided, the distribution storage unit 246 acquires personal information for a service through the server personal information acquisition unit 241, generates a personal information transaction statement including the personal information, encrypts the personal information transaction statement with the public key, and transmits the encrypted personal information transaction statement and the public key to the service-providing node servers 510 of the node identification information output from the blockchain network node selection unit 243 so that the encrypted personal information transaction statement and the public key are stored, wherein the service-providing node servers 510 number from at least two to 50% of all the service-providing nodes. After the personal information transaction statement is stored in a distributed manner, the distribution storage unit 246 deletes the generated alternative authentication keys, such as the public key, the private key, etc., and the personal information transaction statement, generates node transmission log information corresponding to the distributed storage in the service-providing node servers 510, and stores the node transmission log information in the server storage unit 210.
The node transmission log information may include the terminal identification information of the user terminal100, personal information transaction statement tag information, distributed-storage time information, the transmitted node identification information of the service-providing node servers510, etc. The server service processing unit250includes a one-time private key generation unit251, a proof-of-use unit252, and an authentication unit253, and performs the overall processing for the blockchain alternative authentication service of the present disclosure. Specifically, the one-time private key generation unit251is configured to: receive alternative authentication keys, such as a public key, a private key, etc., from the user terminal100; perform user authentication and terminal authentication for the user through the authentication unit253when one-time private key issue request information including service provider information and required personal information field information is received; select, when authentication succeeds, any one node among the multiple service-providing node servers510storing the personal information transaction statement of the user terminal100and the public key; and generate a first one-time private key and a second one-time private key in a pair. The one-time private key generation unit251transmits the generated first one-time private key to the service provider server400corresponding to the service provider information included in the one-time private key issue request information directly or via the user terminal100. The one-time private key generation unit251generates a personal information request statement including the second one-time private key, the service provider information, the required personal information field information, and one-time private key pair authentication information, and transmits the personal information request statement to the selected service-providing node server510.
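One way to realize a first/second one-time private key pair together with pair authentication information is to bind both keys to a shared random seed and publish a hash over the pair. This is a sketch under that assumption; the names `generate_one_time_pair` and `verify_pair` and the binding scheme are not from the source.

```python
import hashlib
import secrets

def generate_one_time_pair():
    """Generate a first/second one-time private key pair plus
    pair-authentication info binding the two keys together."""
    seed = secrets.token_hex(16)
    first = hashlib.sha256(f"first|{seed}".encode()).hexdigest()
    second = hashlib.sha256(f"second|{seed}".encode()).hexdigest()
    # The pair-authentication value lets a node server later confirm
    # that the two one-time keys were issued as a matching pair.
    pair_auth = hashlib.sha256(f"{first}|{second}".encode()).hexdigest()
    return first, second, pair_auth

def verify_pair(first, second, pair_auth):
    """Check whether two one-time private keys form the issued pair."""
    return hashlib.sha256(f"{first}|{second}".encode()).hexdigest() == pair_auth
```

The service provider presents the first key and the node server holds the second; only when both match the pair-authentication value does decryption proceed.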
The proof-of-use unit252generates one or more new blocks of a predetermined file size for a fact confirmation certificate including a detailed statement of use, which is a record of details of use of the personal information transaction statement stored in a distributed manner in the service-providing node servers510in the blockchain network500at the time of registration for the blockchain alternative authentication service, performs proof of use on the generated blocks, and forms a chain between blocks. A detailed operation of the proof-of-use unit252will be described later in detail with reference toFIG.8. The authentication unit253compares both the user-authentication information and the terminal identification information that are included in the one-time private key issue request information received from the one-time private key generation unit251when the request for user authentication for the user is made as described above, with the user-authentication information and the terminal identification information that are previously registered and stored at the time of service registration, thereby performing user authentication and terminal authentication for the user on the basis of whether the pieces of information match. FIG.4is a flowchart illustrating a blockchain alternative authentication service registration method of a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure. Referring toFIG.4, the user accesses the blockchain alternative authentication server200through the user terminal100in order to use a blockchain alternative authentication service according to the present disclosure, requests registration for the blockchain alternative authentication service, and agrees to the terms, etc.
Then, the user terminal100transmits blockchain alternative authentication service registration request information to the blockchain alternative authentication server200at step S111. When a blockchain alternative authentication service registration request is made from any user terminal100, the blockchain alternative authentication server200makes a request to the user terminal100for user-identification personal information at step S113, and receives the user-identification personal information from the user terminal100at step S115. When the user-identification personal information is received, the blockchain alternative authentication server200accesses the user-identification institution server300and transmits user-identification request information including the user-identification personal information to request user identification at step S117, and receives, in response thereto, a user-identification result value (DI) from the user-identification institution server300and stores the same in the server storage unit210at step S119. When the user-identification result value is received, the blockchain alternative authentication server200makes a request to the user terminal100for user-authentication information at step S121. Then, the user terminal100acquires user-authentication information including a personal identification number (PIN) and biometric information (FIDO), and terminal identification information through the user-authentication information acquisition unit172at step S123, and transmits the user-authentication information including the PIN and the FIDO, and the terminal identification information, such as a terminal ID, etc., to the blockchain alternative authentication server200at step S125. 
After receiving the user-authentication information and the terminal identification information, the blockchain alternative authentication server200stores the user-identification result value (DI) and the user-authentication information in the server storage unit210at step S127, and generates alternative authentication keys, such as a private key, a public key, etc. at steps S129and S131. As described above, the blockchain alternative authentication server200may generate the alternative authentication keys by adding any token variable value to the user-identification result value and the user-authentication information. The generated alternative authentication keys, such as the public key, the private key, etc., are provided to the user terminal100at step S133. After receiving the alternative authentication keys, such as the public key, the private key, etc., the user terminal100stores the received public key and the received private key in the storage unit110at step S135. The blockchain alternative authentication server200that has generated the alternative authentication keys, such as the public key, the private key, etc., and has transmitted the same to the user terminal100provides a personal information input user interface means to the user terminal100to receive personal information from the user, or acquires personal information by extracting it from previously stored information at step S136. After the personal information is acquired, the blockchain alternative authentication server200selects multiple service-providing node servers510among service-providing node servers510constituting the blockchain network500, and stores node identification information of the selected service-providing node servers510at step S137. 
When the service-providing node servers510are selected, the blockchain alternative authentication server200generates a personal information transaction statement including the personal information and encrypts the generated personal information transaction statement with the public key at step S139. The personal information may be acquired after selection of the service-providing node servers510. When the personal information transaction statement is encrypted, the blockchain alternative authentication server200transmits the encrypted personal information transaction statement and a user public key to the selected service-providing node servers510in the blockchain network500at step S141. Then, the service-providing node servers510in the blockchain network500store the personal information transaction statement and the user public key at step S145. After the personal information transaction statement and the user public key are stored in a distributed manner, the blockchain alternative authentication server200deletes the generated public key and the generated personal information transaction statement, generates node transmission log information, and stores the same in the server storage unit210at step S143. After distributed storage of the personal information transaction statement and the user public key, the blockchain alternative authentication server200performs proof of use on the basis of a detailed statement of use for the user public key and the personal information transaction statement that are stored in a distributed manner, so that new blocks and existing blocks are chained at step S500. The proof of use will be described later in detail with reference toFIG.8. 
FIG.5is a flowchart illustrating a method for providing personal information, in a system for providing personal information using a one-time private key based on a blockchain of proof of use according to an embodiment of the present disclosure, which illustrates a method for providing personal information in a case in which the service provider server400is not the service-providing node server510in the blockchain network500but the service-providing host server410that is an external server. Referring toFIG.5, the user terminal100that wants to receive a service through the service-providing host server410accesses the service-providing host server410at step S211, and makes a service use request to the service-providing host server410at step S213. The service use request may be membership registration, login, etc., or may be online and offline-use requests for a particular service. The service use request may be made by provision of the public key stored in the user terminal100, and the public key may be provided in the form of a QR code. When any one user terminal100makes a service use request, the service-providing host server410transmits, to the user terminal100, personal information input request information including required personal information field information for personal information items (fields) required to provide the service and service provider information, at step S215. The service provider information may include service provider identification information, an encryption key, and a reply address. The user terminal100that receives the request for personal information acquires user-authentication information including a PIN, FIDO, etc. 
and terminal identification information through the input unit130, the biometric recognition information detection unit150, and the storage unit110, and transmits user-authentication request information including the user-authentication information and the terminal identification information to the blockchain alternative authentication server200at step S218. The blockchain alternative authentication server200that receives the user-authentication information and the terminal identification information performs user authentication and terminal authentication for the user at step S219, determines whether authentication succeeds at step S220, and gives notification of authentication failure at step S221or notification of authentication success at step S222. After notification of authentication success, the user terminal100receives a user public key and a private key directly through an input interface or loads a user public key and a private key at step S223, and transmits one-time private key issue request information including the user-authentication information, the public key, the private key, SP information, and the required personal information field information to the blockchain alternative authentication server200at step S224. After user authentication and terminal authentication for the user succeed, the blockchain alternative authentication server200selects any service-providing node server510among the service-providing node servers510storing the public key that is the same as the received user public key and detects a node server access address of the selected service-providing node server510at step S225, and generates a one-time private key pair and one-time private key pair authentication information at step S227. The one-time private key pair may include the first one-time private key and the second one-time private key in a pair.
The one-time private key may be provided as a QR code, and is represented as a one-time private key, such as a first one-time private key, a second one-time private key, etc., in the drawings. The blockchain alternative authentication server200generates a personal information request statement including the user public key, the node server access address, the required personal information field information, the service provider information, a timestamp, and the one-time private key pair authentication information at step S229, issues the node server access address and the first one-time private key to the service-providing host server410at step S231, and transmits personal information issue request information including the personal information request statement and the second one-time private key to the service-providing node server510at step S233. The node server access address and the first one-time private key transmitted by the blockchain alternative authentication server200may be provided to the service-providing host server410through the user terminal100, or may be directly provided to the service-providing host server410. The service-providing host server410that receives the first one-time private key identifies the node server access address at step S235, and transmits personal information provision request information including the service provider information and the first one-time private key to the service-providing node server510at step S237.
When the personal information issue request information is received from the blockchain alternative authentication server200and the personal information provision request information is received from the service-providing host server410, the service-providing node server510compares the information of the personal information request statement of the personal information issue request information received from the blockchain alternative authentication server200with the pieces of information of the personal information provision request information so as to perform verification at step S239. After verification succeeds, the service-providing node server510detects the personal information transaction statement matched to the user public key and decrypts the personal information transaction statement of the user terminal100by using the second one-time private key at step S241. The service-providing node server510extracts, from the personal information transaction statement, personal information fields corresponding to the required personal information field information, generates a personal information submission including the personal information of the personal information items corresponding to the required personal information field information, and encrypts the personal information submission with the received encryption key of the service provider at step S243. When the encrypted personal information submission is generated, the service-providing node server510issues the personal information submission to the received reply address of the service-providing host server410at step S245, and generates a detailed statement of use by recording details of use of the public key and of the personal information transaction statement and stores the same at step S247.
The service-providing host server410that receives the encrypted personal information submission decrypts the personal information submission and provides the user terminal100with personal information field confirmation request information for requesting confirmation of the personal information for each required personal information field at step S251. When the personal information field confirmation request information is received, the user terminal100displays it. When the user confirms that the fields are normal at step S253, the user terminal100gives the service-providing host server410notification that the fields are normal at step S255. When there is an abnormality in the fields, the user terminal100gives the service-providing host server410notification of a personal information field mismatch error at step S257. After the personal information field confirmation request information is provided, the service-providing host server410monitors whether notification that the personal information fields are normal or notification of a personal information field mismatch error is received from the user terminal100at step S259. When notification of a personal information field mismatch error is given, the service-providing host server410discards the personal information and ends the process at step S263. When notification that the personal information fields are normal is given, the service-providing host server410requests agreement to store the acquired personal information and the acquired user public key for cases such as login after service membership registration or provision of a service, and stores the personal information and the user public key under the agreement. The service-providing host server410stores the personal information and provides the service at step S261.
The blockchain alternative authentication server200may perform proof of use and a blockchain routine according to the generating of the one-time private key pair and the providing of the first one-time private key to the service-providing host server410and the providing of the second one-time private key to the service-providing node server510at step S500. The proof of use and the blockchain routine will be described later in detail with reference toFIG.8. FIG.6is a flowchart illustrating a method of verifying a one-time private key, in a method for providing personal information, in a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure, which is a flowchart illustrating S239and S241ofFIG.5in more detail. Referring toFIG.6, when the personal information request statement is received from the blockchain alternative authentication server200and the personal information provision request information is received from the service-providing host server410, the service-providing node server510compares the service provider identification information included in the personal information provision request information and the service provider identification information of the personal information request statement at step S311, performs authentication on the service provider identification information according to determination of whether the pieces of information are matched at step S313, and transmits a mismatch non-use error message to the blockchain alternative authentication server200and the service-providing host server410for the case of the mismatch at step S315. 
When authentication on the service provider identification information succeeds, the service-providing node server510detects the encrypted personal information transaction statement corresponding to the user public key received from the blockchain alternative authentication server200and temporarily stores the encrypted personal information transaction statement at step S317. After temporarily storing the personal information transaction statement, the service-providing node server510identifies whether the first one-time private key included in the personal information provision request information and the second one-time private key included in the personal information request statement are in a pair within a preset time at step S319, and performs authentication on the one-time private key by determining whether they are in a pair at step S321. When the one-time private keys are matched, the service-providing node server510performs decryption on the personal information transaction statement that is detected and temporarily stored, with the second one-time private key within a predetermined time at step S323. The service-providing node server510examines whether decryption is completed within the predetermined time after decryption starts at steps S325and S327. When the first one-time private key and the second one-time private key are not in a pair, the service-providing node server510gives notification of mismatch non-use error information to the blockchain alternative authentication server200and the service-providing host server410at step S322. In addition, when decryption with the second one-time private key is not completed within the predetermined time, the service-providing node server510provides time-out non-use error information to the blockchain alternative authentication server200and the service-providing host server410at step S329. 
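The node-server checks of FIG.6 — matching service provider identification information, confirming the one-time keys are a pair within a preset time, and requiring decryption within a predetermined time — can be sketched as below. `decrypt_fn` and `pair_check` are assumed helper callables, and the error strings mirror the "mismatch non-use" and "time-out non-use" errors; none of this is the disclosed implementation.

```python
import time

def verify_and_decrypt(request, statement_cipher, decrypt_fn, pair_check,
                       pair_timeout=30.0, decrypt_timeout=30.0):
    """Sketch of steps S311-S329: verify, then decrypt under timeouts.

    `request` carries the fields compared by the node server; the
    timeout values are illustrative parameters, not from the source.
    """
    # S311/S313/S315: service provider identification must match.
    if request["sp_id_from_host"] != request["sp_id_from_statement"]:
        return "mismatch-non-use-error"
    # S319/S321/S322: keys must be a pair, checked within a preset time.
    expired = time.time() - request["issued_at"] > pair_timeout
    if expired or not pair_check(request["first_key"], request["second_key"]):
        return "mismatch-non-use-error"
    # S323-S329: decrypt with the second key within a predetermined time.
    start = time.time()
    plain = decrypt_fn(statement_cipher, request["second_key"])
    if time.time() - start > decrypt_timeout:
        return "time-out-non-use-error"
    return plain
```

Both error paths notify the authentication server and the requesting server in the source flow; here they are reduced to return values for brevity.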
With reference toFIGS.5and6, the process of requesting and acquiring personal information performed by the service provider server400that is the service-providing host server410, which is a server not belonging to the blockchain network500, has been described. In addition, similarly toFIGS.5and6, the service-providing node server510, which is an internal service provider server400belonging to the blockchain network500, receives personal information through any one of the other service-providing node servers510-2excluding itself by performing the process of requesting and acquiring the personal information. FIG.7is a flowchart illustrating a method for providing personal information and a method for updating personal information, in a system for providing personal information using a one-time private key based on a blockchain of proof of use according to another embodiment of the present disclosure, which illustrates a case in which the service provider server400, here a service-providing node server510-1belonging to the blockchain network500, acquires personal information through another service-providing node server510-2belonging to the blockchain network500. Referring toFIG.7, the service-providing node server510-1acquires personal information for the required personal information fields of the user terminal100through steps S411to S417in the same manner as those inFIG.5. However, the operation performed by the service-providing host server410inFIG.5is performed by the service-providing node server510-2. The service-providing node server510-1monitors whether, as a service is provided, the personal information of the user of the user terminal100(that is, the subject of the personal information), including the sensitive personal information, is changed, for example, by addition, deletion, modification, etc., at step S419.
Herein, the service-providing node server510-1has required personal information, and receives the personal information issued from any one node selected among the service-providing node servers510-2providing personal information, as a personal information submission document including service provider identification information, an encryption key, and a reply address of the selected service-providing node server510-2. For example, the service-providing node server510-1may be a server of a hospital. Medical and medical prescription records, which belong to one type of sensitive personal information of the user, may be added, for example, when medical treatments or prescription drugs based on medical history are changed or added, and thus the sensitive personal information may be changed. As in the above example, when the personal information including the sensitive personal information is changed, the service-providing node server510-1transmits change-details-informing and agreement inquiry information to the user terminal100in order to inform the user terminal100of the change details and obtain agreement on updating the information at step S423. The user terminal100displays the change-details-informing and agreement inquiry information and monitors whether agreement is obtained from the user at step S425. In the case of disagreement, the user terminal100transmits a disagreement signal to the service-providing node server510-1at step S427. In the case of agreement, the user terminal100transmits an agreement signal to the service provider server400at step S429. When the agreement signal is received from the user terminal100, the service-providing node server510-1generates a personal information submission document in which a user public key and the personal information to be changed are recorded, adds an electronic signature, and performs encryption with the encryption key of the service-providing node server510-2at step S433.
When the personal information submission document is encrypted, the first service-providing node server510-1transmits one-time private key issue request information to the user terminal100at step S435. The private key issue request information may include personal information change information, which is information on the personal information to be changed, and the service provider information of the first service-providing node server510-1. The user terminal100acquires alternative authentication keys, such as a public key, a private key, etc. at step S437, and transmits the one-time private key issue request information including the alternative authentication keys to the blockchain alternative authentication server200at step S439. The one-time private key issue request information further includes the personal information change information and the service provider information of the first service-providing node server510-1. The blockchain alternative authentication server200that receives the one-time private key issue request information generates a first one-time private key and a second one-time private key in a pair, and generates a personal information request statement including the user public key, the personal information change information, the service provider information of the first service-providing node server510-1, one-time private key pair authentication information, etc. at step S441. After generating the personal information request statement, the blockchain alternative authentication server200transmits the first one-time private key directly to the first service-providing node server510-1or transmits the first one-time private key to the first service-providing node server510-1via the user terminal100at step S443, and transmits the second one-time private key and the personal information request statement to the second service-providing node server510-2at step S445. 
The first service-providing node server510-1that receives the first one-time private key transmits personal information change request information to the reply address of the second service-providing node server510-2at step S447. The personal information change request information includes the first one-time private key, the personal information submission document, the service provider information of the first service-providing node server510-1, etc. The second service-providing node server510-2verifies the personal information change request information at step S449as described inFIG.6, and decrypts the personal information transaction statement with the second one-time private key at step S451. When the personal information transaction statement is decrypted, the second service-providing node server510-2applies the details of change of the personal information submission document to the existing content of the personal information transaction statement, and performs encryption and storage at step S453. When the details of change are applied to the personal information transaction statement, the second service-providing node server510-2notifies the first service-providing node server510-1and the user terminal100that the change of the personal information in the personal information transaction statement is completed, at steps S455and S457. FIG.8is a flowchart illustrating a proof-of-use scheme of a blockchain alternative authentication server, in a method for providing personal information, in a system for providing personal information using a one-time private key based on a blockchain of proof of use according to the present disclosure.
Referring toFIG.8, because the personal information transaction statement is used, the service-providing node servers510of the blockchain network500generate a detailed statement of use including the fact that the personal information submission is generated and transmitted, process details, change details, and an electronic signature at step S512, and transmit proof-of-use request information including the detailed statement of use and node (service provider) identification information (ID) to the blockchain alternative authentication server200at step S513. The blockchain alternative authentication server200acquires detailed statements of use from multiple nodes and classifies the same according to a predetermined time, the purpose of use, the node ID, etc. at step S515, and identifies the electronic signature of the node ID at step S517. After the electronic signature is identified, the blockchain alternative authentication server200performs abnormal-transaction detection on the contents of the detailed statement of use and classifies the risk level thereof at step S519. For example, if it is detected that a user provides a public key for an offline service at the same time in physically different locations and personal information is provided to the service provider, this is an indicator of suspected illegal use of the personal information. If the personal information is sensitive personal information, the risk degree may be classified at a high point among the section-based levels. The blockchain alternative authentication server200monitors detection of an abnormal transaction at step S521.
When an abnormal transaction is detected at step S523, abnormal-transaction detection notification information for notification of detection of the abnormal transaction is transmitted to the service-providing node server510and the user terminal100, or to the first service-providing node server510-1, the second service-providing node server510-2, and the user terminal100at steps S525and S527. Conversely, when no abnormal transaction is detected, the blockchain alternative authentication server200increases a trust index for the detailed statement of use (for example, the trust index is increased by 1 each time one examination is performed and no abnormal transaction is detected), and applies a trust index evaluation item, which includes the trust index and a fact confirmation verification value, to the evaluation method at step S529. The trust index may be a value that is continuously accumulated when no abnormal transaction is detected each time proof of use is performed. For example, when a user makes personal information provision requests 100 times to receive services of multiple service providers and no abnormal transaction is detected, it is determined that the personal information transaction statement of the user is kept secure with the latest content. Therefore, the reliability of the public key of the user and the personal information is high, and a high trust index may be applied to a detailed statement of use therefor. Conversely, when a user has never made a personal information provision request after blockchain alternative authentication service registration, or when the proportion of abnormal transactions to the number of uses is at or above a predetermined value during abnormal-transaction detection, there is a risk and reliability is low. A low trust index may be applied to a detailed statement of use therefor.
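The trust-index accumulation described above (increase by 1 per clean examination, low reliability when the abnormal-transaction proportion reaches a threshold or the statement has never been used) can be sketched as follows. The reset-to-zero behavior on an abnormal transaction and the 10% threshold are illustrative assumptions; the source only gives the increment rule and the proportion criterion.

```python
def update_trust_index(index, abnormal):
    """Trust index accumulation: +1 per examination in which no
    abnormal transaction is detected. Resetting to 0 on an abnormal
    transaction is a hypothetical policy choice."""
    return index + 1 if not abnormal else 0

def risk_flag(abnormal_count, total_uses, threshold=0.1):
    """Flag low reliability when the proportion of abnormal
    transactions reaches a predetermined value (`threshold` is an
    assumed parameter), or when the statement has never been used."""
    if total_uses == 0:
        # Never used since registration: reliability cannot be shown.
        return True
    return abnormal_count / total_uses >= threshold
```

Under this sketch, 100 clean provision requests accumulate a trust index of 100, matching the example in the text.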
When the fact confirmation verification value is calculated, the blockchain alternative authentication server 200 generates a fact confirmation certificate including the fact confirmation verification value and an electronic signature of the blockchain alternative authentication server 200, completes the proof-of-use processing at step S531, generates new blocks on a per-predetermined-file-size basis at step S533, applies a chain code hash random number thereto at step S535, and transmits the blocks to all the service-providing node servers 510 in the blockchain network 500 for sharing at step S537. Each service-providing node server 510 chain-updates its existing blocks with the chain code hash random number of the new blocks and stores the result at step S539. For example, the personal information may be classified into three types: personal unique identification information, such as a name, a birth date, and a sex; personal alternative identification information, such as an email address, a card number, and a phone number; and sensitive personal information, such as medical records, academic records, a profile, etc. These types of personal information are subject to changes such as addition, deletion, and modification. For the pieces of personal information used in various services, the reliability and the value of those pieces are measured by different evaluation criteria. The evaluation criteria vary with country-specific policy, culture, standards, and service providers, and are affected by changing environments or standards. Therefore, it is also evaluated whether varied, updated, and appropriate criteria are applied in an evaluation item or an evaluation method for the reliability of the personal information of the user, and the methodological integrity of the reliability calculation is verified through evaluation of the process of measuring the reliability of the fact about use of the personal information and of the detailed statement of use.
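The block generation and chain update at steps S533 to S539 can be sketched as follows. This is an illustrative assumption, not the patent's implementation: SHA-256 and a random nonce stand in for the "chain code hash random number", and the field names are invented for this example.

```python
import hashlib
import json
import secrets

# Minimal sketch of block generation and chaining (steps S533-S539):
# each new block mixes the previous block's hash with a random nonce
# (a stand-in for the chain code hash random number in the text).

def make_block(prev_hash, statements):
    nonce = secrets.token_hex(16)   # stand-in for the chain code hash random number
    payload = json.dumps({"prev": prev_hash, "nonce": nonce,
                          "statements": statements}, sort_keys=True)
    return {"prev": prev_hash, "nonce": nonce, "statements": statements,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    """Each block must reference the hash of its predecessor."""
    for prev, blk in zip(chain, chain[1:]):
        if blk["prev"] != prev["hash"]:
            return False
    return True
```

A node receiving new blocks can run the same linkage check before chain-updating and storing them.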
If medical records, which belong to sensitive personal information to which the individual's agreement applies according to the relevant regulations, are updated without the individual's agreement, the medical records exist even though they should not. The personal information itself may therefore have high reliability, because the latest details of use have been recorded; however, there is a possibility that the individual's rights have been violated or that an error has occurred in an agreement confirmation procedure or program, and a future-response system needs to operate to delete the updated personal information, so the fact confirmation verification value may be low. The fact confirmation certificate covers a node ID and a detailed statement of use that are generated and transmitted by nodes for use of the personal information. The fact confirmation certificate is generated through the process in which the alternative authentication server measures a trust index and verifies fact confirmation, and indicates clear authentication of confirmation of the following: the fact of the transaction of the personal information, the personal information itself, and the reliability of the individual user, the personal-information user, etc. In the meantime, the present disclosure is not limited to the above-described exemplary embodiments, and it will be understood by those skilled in the art that various improvements, modifications, substitutions, and additions may be made without departing from the scope of the present disclosure. It is noted that if embodiments realized by such improvements, modifications, substitutions, and additions are within the scope of the following appended claims, the technical ideas thereof are also within the scope of the present disclosure.
DESCRIPTION OF THE REFERENCE NUMERALS IN THE DRAWINGS
10: wired/wireless data communication network
100: user terminal
110: storage unit
120: display unit
130: input unit
140: wireless communication unit
141: long-distance wireless communication unit
142: short-distance wireless communication unit
150: biometric information acquisition unit
151: fingerprint detection unit
152: iris detection unit
153: voice feature detection unit
160: camera
170: terminal control unit
171: personal information acquisition unit
172: user-authentication information acquisition unit
173: service registration unit
174: service processing unit
181: personal identification information acquisition unit
182: biometric information acquisition unit
183: terminal identification information acquisition unit
184: service registration request unit
185: alternative authentication key reception unit
186: service request unit
187: one-time private key request unit
188: user approval unit
200: blockchain alternative authentication server
210: server storage unit
220: communication unit
230: server control unit
240: server service registration unit
241: server personal information acquisition unit
242: user-identification unit
243: blockchain network node selection unit
244: user-authentication information acquisition unit
245: alternative authentication key generation unit
246: distribution storage unit
250: server service processing unit
251: one-time private key (OTQ) generation unit
252: proof-of-use unit
253: authentication unit
300: user-identification institution server
400: service provider server
410: service-providing host server
500: blockchain network
510: node (= service-providing node server)
DETAILED DESCRIPTION Prior to discussing embodiments of the invention, some terms can be described in further detail. An "application" may be a computer program that is used for a specific purpose. "Authentication" may include a process for verifying an identity of something (e.g., a user). One form of authentication can be biometric authentication. A "biometric" may be any human characteristic that is unique to an individual. For example, a biometric may be a person's fingerprint, voice sample, face, DNA, retina, etc. A "biometrics interface" may be an interface across which biometrics information is captured. Biometrics interfaces include a thumb print scanner, an iris or retina scanner, a camera, a microphone, a breathalyzer, etc. Biometrics interfaces may be present on user devices, such as mobile devices, or present at an access terminal. A "biometric reader" may include a device for capturing data from an individual's biometric. Examples of biometric readers may include fingerprint readers, front-facing cameras, microphones, and iris scanners. A "biometric sample" may include data obtained by a biometric reader. The data may be either an analog or digital representation of the user's biometric, generated prior to determining distinct features needed for matching. For example, a biometric sample of a user's face may be image data. In another example, a biometric sample of a user's voice may be audio data. A "biometric template" or "biometric sample template" may include a file containing distinct characteristics extracted from a biometric sample that may be used during a biometric authentication process. For example, a biometric template may be a binary mathematical file representing the unique features of an individual's fingerprint, eye, hand or voice needed for performing accurate authentication of the individual. A "computing device" may be any suitable device that can receive and process data.
Examples of computing devices may include access devices, transport computers, processing network computers, or authorization computers. The term “cryptographic key” may refer to something used in encryption or decryption. As an example, a cryptographic key could refer to a product of two large prime numbers. A cryptographic key may serve as an input in a cryptographic process, such as RSA or AES, and may be used to encrypt plaintext and produce a ciphertext output, or decrypt ciphertext and produce a plaintext output. The term “homomorphic encryption” may refer to any suitable technique for encrypting data that allows for computation on the resulting ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. It should be noted that computing devices can perform difficult computations on homomorphically-encrypted data without ever having access to the unencrypted data. An “issuer” may typically refer to a business entity (e.g., a bank) that maintains an account for a user. An issuer may also issue payment credentials stored on a user device, such as a cellular telephone, smart card, tablet, or laptop to the consumer. A “memory” may be any suitable device or devices that can store electronic data. A suitable memory may comprise a non-transitory computer readable medium that stores instructions that can be executed by a processor to implement a desired method. Examples of memories may comprise one or more memory chips, disk drives, etc. Such memories may operate using any suitable electrical, optical, and/or magnetic mode of operation. A “key” may refer to a piece of information that is used in a cryptographic algorithm to transform input data into another representation. 
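As a concrete illustration of computing on ciphertexts, unpadded ("textbook") RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The sketch below uses insecurely small toy keys purely for demonstration; it is not a scheme used by the described embodiments, which would rely on production homomorphic-encryption schemes.

```python
# Textbook RSA satisfies E(a) * E(b) mod n = E(a * b): a party holding only
# ciphertexts can compute an encrypted product without seeing a or b.
# Toy, insecure key sizes; for illustration of the homomorphic property only.

p, q = 1009, 1013                  # toy primes (never use sizes like this)
n = p * q
phi = (p - 1) * (q - 1)
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent (modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 6, 7
product_cipher = (enc(a) * enc(b)) % n   # multiply the ciphertexts only
assert dec(product_cipher) == a * b      # decrypts to a * b = 42
```

Additively homomorphic schemes (e.g., Paillier) and fully homomorphic schemes extend this idea to sums and to arbitrary computations, respectively.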
A cryptographic algorithm can be an encryption algorithm that transforms original data into an alternate representation, or a decryption algorithm that transforms encrypted information back to the original data. Examples of cryptographic algorithms may include triple data encryption standard (TDES), data encryption standard (DES), advanced encryption standard (AES), etc. A "private key" may include any encryption key that may be protected and secure. For example, the private key may be securely stored at an entity that generates a public/private key pair and may be used to decrypt any information that has been encrypted with the associated public key of the public/private key pair. A "processor" may refer to any suitable data computation device or devices. A processor may comprise one or more microprocessors working together to accomplish a desired function. The processor may include a CPU comprising at least one high-speed data processor adequate to execute program components for executing user and/or system-generated requests. The CPU may be a microprocessor such as AMD's Athlon, Duron and/or Opteron; IBM and/or Motorola's PowerPC; IBM's and Sony's Cell processor; Intel's Celeron, Itanium, Pentium, Xeon, and/or XScale; and/or the like processor(s). A "public key" may include any encryption key that may be shared openly and publicly. The public key may be designed to be shared and may be configured such that any information encrypted with the public key may only be decrypted using a private key associated with the public key (i.e., a public/private key pair). A "public/private key pair" may include a pair of linked cryptographic keys generated by an entity. The public key may be used for public functions such as encrypting a message to send to the entity or for verifying a digital signature which was supposedly made by the entity.
The private key, on the other hand may be used for private functions such as decrypting a received message or applying a digital signature. The public key will usually be authorized by a body known as a certification authority (i.e., certificate authority) which stores the public key in a database and distributes it to any other entity which requests it. The private key will typically be kept in a secure storage medium and will usually only be known to the entity. However, the cryptographic systems described herein may feature key recovery mechanisms for recovering lost keys and avoiding data loss. A “resource provider” may be an entity that can provide a resource such as a good, service, data, etc. to a requesting entity. Examples of resource providers may include merchants, governmental entities that can provide access to data, data warehouses, entities that can provide access to restricted locations (e.g., train station operators), etc. In some embodiments, resource providers may be associated with one or more physical locations (e.g., supermarkets, malls, stores, etc.) and online platforms (e.g., e-commerce websites, online companies, etc.). In some embodiments, resource providers may make physical items (e.g., goods, products, etc.) available to the user. In other embodiments, resource providers may make digital resources (e.g., electronic documents, electronic files, etc.) available to the user. In other embodiments, resource providers may manage access to certain services or data (e.g., a digital wallet provider). A “server computer” may include a powerful computer or cluster of computers. For example, the server computer can be a large mainframe, a minicomputer cluster, or a group of servers functioning as a unit. In one example, the server computer may be a database server coupled to a Web server. 
The server computer may be coupled to a database and may include any hardware, software, other logic, or combination of the preceding for servicing the requests from one or more client computers. The server computer may comprise one or more computational apparatuses and may use any of a variety of computing structures, arrangements, and compilations for servicing the requests from one or more client computers. A “user” may include an individual. In some embodiments, a user may be associated with one or more personal accounts and/or user devices. A “user device” may be any suitable device that is operated by a user. Suitable user devices can communicate with external entities such as portable devices and remote server computers. Examples of user devices include mobile phones, laptop computers, desktop computers, server computers, vehicles such as automobiles, household appliances, wearable devices such as smart watches and fitness bands, etc. FIG.1shows a system100comprising a number of components according to an embodiment of the invention. The system100comprises at least a user device102, an enrollment provider server104, and a match server106. The components of the system100may communicate directly or using some network108. In some embodiments, the system may include one or more access device110. In some embodiments, the access device110and the match server106may be the same entity or may be operated by the same entity. The enrollment provider server104may be an example of a first server computer and the match server106may be an example of a second server computer. The enrollment provider server computer104is typically distinct from and is spatially or logically separate of the match server106. As depicted, the system may include a user device102. The user device102may be any electronic device capable of communicating with an enrollment provider server104and/or an access device110. 
In some embodiments, the user device102may be a mobile device (e.g., a smart phone). In some embodiments, biometric information (e.g., an image of) for a user may be captured using a camera of the user device102and transmitted to an enrollment provider server104for processing. In some embodiments, at least a portion of the functionality described herein may be executed via a mobile application installed upon the user device102. The user device102may be configured to obtain a biometric sample from the user, which may then be used to enroll the user in the described system. In some embodiments, the user device102may obtain the biometric sample from the user and generate a biometric template112from that biometric sample. The biometric template112may then be encrypted and transmitted to the enrollment provider server104. For example, in some embodiments, the biometric template may be encrypted using an encryption key specific to the user device102. In another example, the biometric template may be encrypted using a public encryption key (of a public/private key pair) associated with the enrollment provider server104. In some embodiments, the user device102may also provide account information114to the enrollment provider server104. For example, the user may be asked to select, or provide, at least one primary account number (PAN) to be linked to the functionality described herein. In this example, the PAN may be provided to the enrollment provider server104. It should be noted that in some embodiments, account information may be provided to the enrollment provider server104through a separate channel (i.e., by a device other than the user device102). As depicted, the system may include an enrollment provider server104(i.e., a first server computer). The enrollment provider server104may be any computing device capable of performing at least a portion of the functionality described herein. 
In some embodiments, the enrollment provider server104may receive biometric information from the user device102and may process that biometric information in relation to one or more accounts. The enrollment provider may create and distribute, in a suitable manner, an enrollment provider application (e.g., a mobile application to be installed upon, and executed from, user device102). The enrollment provider server104may typically be a system associated with an issuer or entity (e.g., a bank) that has a business relationship with a match server106or other entity. The enrollment provider server104may be configured to encrypt the biometric template112received from the user device102using a public key associated with the enrollment provider server104. In some embodiments, the enrollment provider server104may first decrypt the biometric template112before re-encrypting the biometric template112received from the user device102using a public key associated with the enrollment provider server104. For example, if the biometric template112has been encrypted by the user device102using an encryption key specific to the user device102, then the enrollment provider server104may decrypt the biometric template112using a decryption key specific to the user device102and may re-encrypt the biometric template112using a public key associated with the enrollment provider server104. The enrollment provider server104may transmit the encrypted biometric template116to a match server106. In some embodiments, the biometric template112may be deleted or otherwise removed from the memory of the enrollment provider server104once the encrypted biometric template116has been sent to the match server106. The enrollment provider server104may be further configured to receive an encrypted comparison between two biometric templates and determine a likelihood of a match. In some embodiments, this may involve first decrypting an encrypted comparison data file generated by the match server106. 
Once decrypted, the enrollment provider server104may process the received comparison data file using any suitable biometric authentication techniques. In some embodiments, the enrollment provider server104may respond to the match server106with an indication of the likelihood that the biometric templates match. In some embodiments, the likelihood that the biometric templates match may be represented as a percentage. As depicted, the system may include match server106(i.e., a second server). The match server106may be capable of receiving data, performing computations, transmitting data, etc. In some embodiments, the match server106may be configured to receive and process a request from access device110. The request received from the access device110may include a biometric template118generated by the access device110for a user that wishes to complete a transaction. The match server106may be configured to encrypt the biometric template118using a public key associated with the enrollment provider server104and to compare the encrypted biometric template to the encrypted biometric template116received from the enrollment provider server104. When encrypting the biometric template118, the match server106may use encryption techniques substantially similar to those used by the enrollment provider server104to encrypt the biometric template116. Once the match server106has encrypted biometric template118, the match server106may compare the encrypted biometric template118to the encrypted biometric template116. It should be noted that the two templates need not be decrypted to be compared if both biometric templates have been encrypted using homomorphic encryption techniques. Indeed, the match server106may not even be capable of decrypting either biometric template. Once the match server106has generated a comparison of the encrypted data, the match server106may transmit that comparison to the enrollment provider server104. 
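The flow above can be sketched with a toy additively homomorphic cryptosystem (Paillier): the match server combines the two encrypted templates without holding the private key, and only the key holder can decrypt the resulting comparison. The key sizes are insecurely small, and the function names, template values, and difference-based comparison are all assumptions for illustration; a real deployment would use a production homomorphic-encryption library.

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic), pure Python.
# Sketches the text's flow: the match server computes an encrypted
# comparison of two encrypted templates WITHOUT the private key; only
# the enrollment provider (key holder) can decrypt the result.

def keygen(p=10007, q=10009):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    n2 = n * n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def enc(pub, m):
    n, g = pub
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:          # r must be a unit mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(priv, c):
    lam, mu, n = priv
    x = (pow(c, lam, n * n) - 1) // n
    return (x * mu) % n

def homomorphic_diff(pub, c_a, c_b):
    """Ciphertext of (a - b) mod n, computed without decrypting anything."""
    n, _ = pub
    n2 = n * n
    return (c_a * pow(c_b, n - 1, n2)) % n2

# Match-server side: combine the enrolled and fresh encrypted templates.
pub, priv = keygen()
enrolled = [enc(pub, v) for v in (12, 200, 33)]   # encrypted template (enrolled)
fresh    = [enc(pub, v) for v in (12, 198, 40)]   # encrypted template (fresh)
comparison = [homomorphic_diff(pub, a, b) for a, b in zip(enrolled, fresh)]

# Enrollment-provider side: decrypt the comparison, recover signed differences.
n = pub[0]
diffs = [d if d <= n // 2 else d - n for d in (dec(priv, c) for c in comparison)]
assert diffs == [0, 2, -7]
```

Raising a ciphertext to the power n - 1 encrypts the negated plaintext, so multiplying by it forms an element-wise difference entirely on ciphertexts; the match server never learns the template values.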
The enrollment provider server 104 may respond to the match server 106 with an indication as to the likelihood that the two biometric templates are a match. In some embodiments, the match server 106 may then determine whether the likelihood value is greater than some predetermined threshold value and, based on that determination, may provide the access device 110 with an indication as to whether to approve or decline the transaction. The network 108 may be any suitable communication network or combination of networks. Suitable communications networks may include any one or a combination of the following: a direct interconnection; the Internet; a Local Area Network (LAN); a Metropolitan Area Network (MAN); an Operating Missions as Nodes on the Internet (OMNI); a secured custom connection; a Wide Area Network (WAN); a wireless network (e.g., employing protocols such as, but not limited to, a Wireless Application Protocol (WAP), I-mode, and/or the like); and/or the like. Messages between the computers, networks, and devices may be transmitted using secure communications protocols such as, but not limited to, File Transfer Protocol (FTP); HyperText Transfer Protocol (HTTP); Secure Hypertext Transfer Protocol (HTTPS); Secure Socket Layer (SSL); ISO (e.g., ISO 8583); and/or the like. An access device 110 may be configured to manage access to a particular resource. Upon receiving a request from a user to access that resource, the access device 110 may be configured to obtain a biometric sample from that user. The access device 110 may then generate a second biometric template 118 (e.g., an authentication template) using a process substantially similar to the process used by the user device 102 to generate the biometric template 112. The biometric template 118 may then be transmitted to the match server 106 for authentication. In some embodiments, the access device 110 may receive a response from the match server 106 that includes an indication of whether the transaction has been authenticated.
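The likelihood-and-threshold decision described above can be sketched in a few lines. The mapping from per-feature differences to a percentage and the 90% threshold are illustrative assumptions, not values from the described embodiments.

```python
# Hypothetical sketch of the final authorization step: the enrollment
# provider returns a match likelihood, and the match server approves the
# transaction only above a predetermined threshold.

def likelihood_from_diffs(diffs, scale=255.0):
    """Map per-feature template differences to a 0-100 similarity percentage."""
    if not diffs:
        return 0.0
    avg = sum(abs(d) for d in diffs) / len(diffs)
    return max(0.0, 100.0 * (1.0 - avg / scale))

def decide(match_likelihood_pct, threshold_pct=90.0):
    return "approve" if match_likelihood_pct > threshold_pct else "decline"
```

Identical templates score 100% and are approved; templates that differ substantially fall below the threshold and the access device is told to decline.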
The access device 110 may then complete the transaction in a conventional manner using the account information provided via the user device 102. For simplicity of illustration, a certain number of components are shown in FIG. 1. It is understood, however, that embodiments of the invention may include more than one of each component. In addition, some embodiments of the invention may include fewer than or greater than all of the components shown in FIG. 1. In addition, the components in FIG. 1 may communicate via any suitable communication medium (including the Internet), using any suitable communications protocol. FIG. 2 depicts an illustrative example of a system or architecture 200 in which techniques for enabling biometric authentication without exposing the authorizing entity to sensitive information may be implemented. In architecture 200, one or more consumers and/or users may utilize a user device 102. In some examples, the user device 102 may be in communication with an enrollment provider server 104 and/or an access device via a network 108, or via other network connections. The access device may, in turn, be in communication with a match server 106. User device 102, enrollment provider server 104, network 108, and match server 106 may be examples of the respective components depicted in FIG. 1. The user device 102 may be any type of computing device such as, but not limited to, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a server computer, a thin-client device, a tablet PC, etc. The user device 102 may include a memory 202 and one or more processors 204 capable of processing user input. The user device 102 may also include one or more input sensors, such as camera devices 206, for receiving user input. As is known in the art, there are a variety of input sensors capable of detecting user input, such as accelerometers, cameras, microphones, etc.
The user input obtained by the input sensors may be from a variety of data input types, including, but not limited to, audio data, visual data, or biometric data. In some embodiments, camera devices206may include a number of different types of camera devices, one or more of which may be a range camera device (e.g., a depth sensor) capable of generating a range image, and another of which may be a camera configured to capture image information. Accordingly, biometric information obtained via a camera device may include image information and/or depth information (e.g., a range map of a face). Embodiments of the application on the user device102may be stored and executed from its memory202. The memory202may store program instructions that are loadable and executable on the processor(s)204, as well as data generated during the execution of these programs. Depending on the configuration and type of user device102, the memory202may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The user device102may also include additional storage, such as either removable storage or non-removable storage including, but not limited to, magnetic storage, optical disks, and/or tape storage. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the computing devices. In some implementations, the memory202may include multiple different types of memory, such as static random access memory (SRAM), dynamic random access memory (DRAM) or ROM. Turning to the contents of the memory202in more detail, the memory202may include an operating system and one or more application programs or services for implementing the features disclosed herein including at least a module for generating a biometric template from a biometric sample (biometric template generation module208). 
The memory202may also include instructions that cause the user device102to encrypt any generated biometric template. In some embodiments, the biometric template may be encrypted using an encryption key specific to the user device102. In some embodiments, the biometric template may be encrypted using a public encryption key associated with the enrollment provider server104. In some embodiments, the biometric template generation module208may comprise code that, when executed in conjunction with the processors204, causes the user device102to obtain a biometric sample from a user and generate a biometric template from that biometric sample. In some embodiments, a biometric template may be a binary mathematical file representing the unique features of an individual's fingerprint, eye, hand or voice needed for performing accurate authentication of the individual. A biometric template may be generated in a number of suitable manners. For example, the biometric template may store an indication of a relationship between various biometric features for a user which are derived from the biometric sample. By way of illustrative example, a biometric template may store an indication of a user's eye location with respect to that user's nose. It should be noted that whereas a full biometric sample may require a large amount of memory to store, a biometric template derived from a biometric sample that stores an indication of relationships between features found in the sample may require significantly less memory for storage. The memory202and any additional storage, both removable and non-removable, are examples of non-transitory computer-readable storage media. For example, computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. 
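The eye-to-nose example above can be sketched as a template of scale-normalized landmark distances, which is far smaller than the image it is derived from. The landmark names and the normalization by inter-ocular distance are assumptions made for this example, not the system's actual template format.

```python
import math

# Illustrative sketch: a biometric template that stores only relationships
# between facial landmarks (scale-normalized distances), not the image.

def make_template(landmarks):
    """landmarks: dict name -> (x, y) pixel coordinates from a face image."""
    def dist(a, b):
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        return math.hypot(x2 - x1, y2 - y1)
    iod = dist("left_eye", "right_eye")       # inter-ocular distance as the scale
    return {
        "eye_to_nose": dist("left_eye", "nose") / iod,
        "nose_to_mouth": dist("nose", "mouth") / iod,
    }
```

Because every distance is divided by the inter-ocular distance, the same face captured at a different resolution produces the same template values, and the template occupies a few numbers rather than a full image.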
As used herein, modules may refer to programming modules executed by computing systems (e.g., processors) that are part of the user device 102 or the enrollment provider server 104. The user device 102 may also contain communications connections that allow the user device 102 to communicate with a stored database, another computing device or server, user terminals, and/or other devices on the network 108. The user device 102 may also include input/output (I/O) device(s) and/or ports, such as for enabling connection with a keyboard, a mouse, a pen, a voice input device, a touch input device, a display, speakers, a printer, etc. In some examples, the network 108 may include any one or a combination of many different types of networks, such as cable networks, the Internet, wireless networks, cellular networks, and other private and/or public networks. It is noted that the described techniques may apply in other client/server arrangements (e.g., set-top boxes, etc.), as well as in non-client/server arrangements (e.g., locally stored applications, peer-to-peer systems, etc.). The enrollment provider server 104 and/or match server 106 may be any type of computing device such as, but not limited to, a mobile phone, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a server computer, a thin-client device, a tablet PC, etc. Additionally, it should be noted that in some embodiments, one or both of the depicted computing devices may be executed by one or more virtual machines implemented in a hosted computing environment. The hosted computing environment may include one or more rapidly provisioned and released computing resources, which computing resources may include computing, networking, and/or storage devices. A hosted computing environment may also be referred to as a cloud-computing environment. In one illustrative configuration, the enrollment provider server 104 may include at least one memory 210 and one or more processing units (or processors) 212.
The processor(s)212may be implemented as appropriate in hardware, computer-executable instructions, firmware or combinations thereof. Computer-executable instruction or firmware implementations of the processor(s)212may include computer-executable or machine executable instructions written in any suitable programming language to perform the various functions described. Turning to the contents of the memory210in more detail, the memory210may include a template encryption module214that, when used in conjunction with the processor(s)212, is configured to encrypt biometric templates received from one or more user devices102using homomorphic encryption techniques. The template encryption module214may be configured to use a public encryption key associated with the enrollment provider server104to encrypt the biometric template. In some embodiments, the template encryption module214may decrypt a biometric template received from a user device102prior to re-encrypting the biometric template. In some embodiments, the template encryption module214may utilize one or more homomorphic cryptosystems available in open source libraries such as the HElib library, the FHEW library, and/or the TFHE library. The memory210may include a homomorphic verification module216that, when used in conjunction with the processor(s)212, is configured to decrypt an encrypted match result received from a match server and determine an extent to which the match is successful. In some embodiments, the homomorphic verification module216may receive a data file from a match server106that represents a comparison or similarity between two encrypted biometric templates. Because the two biometric templates have been encrypted using the public key associated with the enrollment provider server104using homomorphic encryption techniques, the received data file is also encrypted and is decryptable using a private key associated with the enrollment provider server. 
The homomorphic verification module216may be configured to decrypt the received data file to determine an extent to which the two biometric templates match. In some embodiments, the data file may include an indication as to how similar or different the two biometric templates are. The homomorphic verification module216may generate a value that represents a likelihood that the users associated with the two biometric templates are the same user. This result, which may be represented as a numeric value (e.g., a percentage), may be provided back to the match server106. In some embodiments, the result may be provided to a user device102associated with the data file. Additionally, the memory210may include encryption key data218, which stores a public and private encryption key associated with the enrollment provider server104as well as encryption keys associated with a number of user devices102. The memory may also include account data220, which may store information for one or more users and/or user devices102as well as payment/authentication information for the respective users and/or user devices102. Encryption key data218and/or account data220may be stored in one or more databases. The match server106may be any suitable type of computing device that interacts with an access device to authenticate a user in a transaction. The match server106may include a memory222and one or more processors224capable of processing user input. Embodiments of the application on the match server106may be stored and executed from its memory222. The memory222may store program instructions that are loadable and executable on the processor(s)224, as well as data generated during the execution of these programs. 
The memory222may include an operating system and one or more application programs or services for implementing the features disclosed herein including at least a module for encrypting a biometric template (template encryption module226) and/or a module for performing homomorphic comparison on encrypted data (template comparison module228). The template encryption module226may be substantially similar to the template encryption module214described above. It should be noted that in some embodiments, a biometric template may be encrypted by an access device before being transmitted to the match server. The template comparison module228may be configured to compare two biometric templates that have been encrypted using a public key associated with the enrollment provider server104. It should be noted that the match server106may not have access to the private key associated with the enrollment provider server104, hence the template comparison module228may not be capable of decrypting the biometric templates. However, since the biometric templates have been encrypted using homomorphic encryption techniques, the template comparison module228is able to process the encrypted biometric templates as it would unencrypted biometric templates to produce a data file that represents the differences or similarities between the two biometric templates. The data file produced in this manner is itself encrypted and the match server is also unable to decrypt the data file. Instead, the match server106may be configured to transmit the generated data file to the enrollment provider server104, which will in turn decrypt the data file (e.g., via the homomorphic verification module216) and return an indication as to the likelihood of a match between the two biometric templates. FIG.3shows a flow diagram of an enrollment method according to an embodiment of the invention. 
The process300, or at least portions thereof, may be performed by an example user device102, enrollment provider server104, and match server106as depicted inFIG.1andFIG.2and described above. In an embodiment of the invention, a user may enroll on an enrollment provider mobile application on the user device102, such as a user smartphone. The enrollment may include enrolling one or more payment instruments, such as credit cards, and obtaining, using the user device102, a biometric sample, such as a facial image. It may also include a form of authentication, demonstrating to the enrollment provider server104that the user who is enrolling is a legitimate owner of the payment instruments. In some embodiments, this may be done by the user inputting a code or password, thus logging the user into an account maintained by the enrollment provider server104. In some embodiments, the enrollment data may include a biometric template (encrypted or unencrypted) as well as an indication of an account to be linked to embodiments of the disclosure. Upon receiving the enrollment data, the process may involve storing the enrollment data in association with both the account information and the user device from which the enrollment data was received. In some embodiments, the enrollment data may replace existing enrollment data. For example, a user may wish to use a new biometric sample and/or associate the existing biometric template to a different account. At step S302, the user device102may receive a biometric sample from the user of the user device102. In some embodiments, the user may be prompted to input biometric data using a biometric reader, resulting in the collection of the biometric sample. In some embodiments, the biometric sample may be collected in response to a user having requested enrollment into a system that enables biometric access to a resource. In some embodiments, the user may be required to authenticate that the user is who he or she claims to be at step S302. 
For example, the user may be required to log into an account maintained by an enrollment provider server104. The account login may be performed via a mobile application installed upon, and executed from, the user device102. At step S304, the user device102may process the biometric sample into a first biometric template. In some embodiments, this may involve identifying various biometric features within the obtained biometric sample and identifying relationships between one or more of those features. An indication of those relationships may then be compiled into a biometric template. For example, the biometric template may include an indication as to a relative distance between various facial features of the user. In this example, the biometric template may store an indication of the distance between the user's mouth and the user's nose with respect to the distance between the user's nose and the user's forehead. At step S306, the enrollment provider application on the user device102may encrypt the first biometric template in a form that protects its confidentiality and integrity, as well as proves its origin. For example, this may be done using authenticated encryption with derived symmetric keys, where the enrollment provider server104may have a master key that has been previously used to derive a user-specific key or keys from the user or account data (such as a PAN). In some embodiments, the user device102may, in response to requesting enrollment of a user, be provided with an encryption key to use in encrypting the biometric template. In some embodiments, the encryption key may be a device-specific encryption key which is associated with that user device102. In some embodiments, the user device102may be provided with a public key (of a public-private key pair) associated with the enrollment provider server104. 
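A template built from relative distances of the kind described at step S304 might look like the following sketch. The landmark names, coordinates, and quantization factor are hypothetical illustrations, not details from the description:

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D landmark coordinates.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def make_template(landmarks):
    # landmarks: feature name -> (x, y) pixel coordinates (hypothetical).
    # Encode the mouth-to-nose distance relative to the nose-to-forehead
    # distance, as in the example above, and quantise to an integer so the
    # value can later be encrypted and compared numerically.
    ratio = dist(landmarks["mouth"], landmarks["nose"]) / dist(
        landmarks["nose"], landmarks["forehead"]
    )
    return int(round(ratio * 1000))

template = make_template(
    {"mouth": (50, 80), "nose": (50, 60), "forehead": (50, 20)}
)
assert template == 500  # mouth-nose distance is half of nose-forehead
```

Because the value is a ratio rather than an absolute distance, the same face photographed at different distances from the camera would yield a similar template value.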
In some embodiments, a shared secret key may be created for the user device102and enrollment provider server104using a combination of public-private key pairs (e.g., via a Diffie-Hellman key exchange). The biometric template may then be encrypted using the provided encryption key. At step S308, the user device102may transmit a message including the encrypted biometric template and user identification data to the enrollment provider server104. The user identification data may identify the user to the enrollment provider server104. In some embodiments, the user identification data may be a password, a token, or a primary account number (PAN). The user identification data may be encrypted in the same way as the encrypted biometric template. In some embodiments, the encrypted biometric template and the user identification data may be encrypted in different ways. At step S310, after receiving the message, the enrollment provider server104may decrypt the encrypted biometric template and user identification data. The enrollment provider server104may validate the integrity and origin of the message. The process may further involve storing the enrollment data in association with both the account information and the user device from which the enrollment data was received. In some embodiments, the biometric template may be encrypted using a public key associated with the enrollment provider server104. In at least some of those embodiments, the encrypted biometric template may be stored as it was received. At step S312, the enrollment provider server104may generate a biometric identifier, also referred to as a handle (sometimes referred to herein as CH), corresponding to the user. The biometric identifier may be used by external parties. The biometric identifier may be generated such that it does not reveal anything about the user or have links back to the identity of the user or their account (PAN). 
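The shared-secret option mentioned above can be sketched with a toy finite-field Diffie-Hellman exchange. The group parameters below are illustrative and far too small for real use (a deployment would use a vetted group or an elliptic-curve variant):

```python
import secrets

P = 4294967291  # a 32-bit prime modulus, demonstration-sized only
G = 5           # generator (illustrative choice)

def keypair():
    # Private exponent plus the corresponding public value G^priv mod P.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

device_priv, device_pub = keypair()  # user device102's key pair
server_priv, server_pub = keypair()  # enrollment provider server104's key pair

# Each side raises the other's public value to its own private exponent;
# both arrive at the same shared secret without it ever being transmitted.
shared_on_device = pow(server_pub, device_priv, P)
shared_on_server = pow(device_pub, server_priv, P)
assert shared_on_device == shared_on_server
```

The resulting shared value could then seed a symmetric key for encrypting the biometric template in transit, per the paragraph above.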
In some embodiments, the biometric identifier may be a random number or string of characters. In some embodiments, the biometric identifier may be stored in an enrollment provider server database in relation to the user. At step S314, the enrollment provider server104may encrypt the decrypted first biometric template (sometimes referred to herein as TE) previously received from the user with an enrollment provider public key (referred to herein as Pb), wherein the encryption may be written as Pb{TE} for a first encrypted biometric template. If the public-key cryptographic system is elliptic-curve based, then ElGamal encryption may be used, as the first encrypted biometric template will be subject to homomorphic operations and as such cannot use a mixed encryption scheme where a payload is encrypted with a symmetric cipher and the symmetric key is in turn encrypted with a public key. At step S316, after encrypting the decrypted first biometric template to form the first encrypted biometric template, the enrollment provider server104may transmit the first encrypted biometric template and the biometric identifier to the match server106. The biometric identifier may be used by the match server106to reference a user account without being provided details about the user. The transmission from the enrollment provider server104to the match server106may be secure, that is, authenticated and encrypted, e.g., with mutually authenticated transport layer security (TLS). In some embodiments, the enrollment provider server104can delete the decrypted first biometric template as well as the first encrypted biometric template from its system (e.g., the enrollment provider server database), as they may no longer be required at the enrollment provider server104. In this way, no residual information about the first biometric template, even in encrypted form, remains at the enrollment provider server104. 
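The unlinkable handle CH generated at step S312 can be produced as plain randomness, so that nothing about the user or PAN can be derived from it. A minimal sketch (the 128-bit length is an assumed choice, not specified in the description):

```python
import secrets

def new_biometric_identifier() -> str:
    # 128 random bits rendered as 32 hex characters; the value carries no
    # information about, and has no link back to, the user or account.
    return secrets.token_hex(16)

ch = new_biometric_identifier()
assert len(ch) == 32
```

The enrollment provider would store CH alongside the user record, while external parties such as the match server106see only the opaque handle.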
At step S318, after receiving the first encrypted biometric template and the biometric identifier, the match server106may store an association from the biometric identifier to the first encrypted biometric template in a database. Notice that, since the match server106does not possess the enrollment provider private key associated with the enrollment provider public key, it cannot decrypt the first encrypted biometric template or recover the first biometric template in any way. Thus, the match server106securely stores the first encrypted biometric template, and neither the match server106nor an entity that hacks into the match server106is able to obtain the first biometric template since it is encrypted. In some embodiments, the user device102may not transmit the encrypted biometric template to the enrollment provider server104, but may transmit the user identification data. In such a case, the enrollment provider server104may verify the user through the user identification data. The enrollment provider server104may then generate a biometric identifier, and then transmit the biometric identifier to the user device102. The user device102may then encrypt the first biometric template with the enrollment provider public key, and then transmit the first encrypted biometric template as well as the biometric identifier to the match server106. FIG.4shows a flow diagram of an authentication method according to an embodiment of the invention. Similar to the process depicted inFIG.3, the process400, or at least portions thereof, may be performed by an example user device102, enrollment provider server104, and match server106as depicted inFIG.1andFIG.2and described above. Authentication may take place in a payment situation inside an application that may not be controlled by the enrollment provider server104, but, for example, by a resource provider, or while browsing and activating a JavaScript application from a resource provider page. 
When a user performs an authentication, they may do so on an application or browser-based JavaScript, such as a resource provider application located on the user device102. The resource provider application may have access to an enrollment provider public key and/or a match server public key. At step S402, the resource provider application or JavaScript may contact one of the enrollment provider server104or the match server106, to recover the biometric identifier from information the party may have about the user (e.g., a token or PAN). In some embodiments, the resource provider application on the user device102may transmit a biometric identifier request message to the enrollment provider server104. The biometric identifier request message may include user identification data, a request for the biometric identifier, and any other suitable information. The user identification data may be a token, a PAN, or any other suitable identifier. At step S404, the enrollment provider server104may transmit the biometric identifier associated with the user identification data to the user device102in response to the received request. In some embodiments, the biometric identifier may be encrypted before being transmitted to the user device102. For example, the biometric identifier may be encrypted using an encryption key for which the user device102has access to a decryption key. Once the biometric identifier has been recovered by the user device102, at step S406, the user may be prompted to submit a biometric sample to the user device102, e.g., take a self-photo (e.g., a selfie) using a camera on the user device102. At step S408, the user device102may process the biometric sample into a second biometric template (referred to herein as TA). The second biometric template may be generated using techniques substantially similar to those used to generate the first biometric template. 
In some embodiments, the application or program used to generate the second biometric template may be the same application or program used to generate the first biometric template. At step S410, in some embodiments, the resource provider application or JavaScript may encrypt the second biometric template with the enrollment provider public key, resulting in a second encrypted biometric template, Pb{TA}. It should be noted that in some embodiments, the match server106may encrypt the second biometric template with the enrollment provider public key, resulting in a second encrypted biometric template Pb{TA}. The resource provider application may then encrypt the encrypted (or unencrypted) second biometric template, the biometric identifier, and a transaction identifier (referred to herein as TI) with a match server public key (referred to herein as PbMS), resulting in an encrypted tuple, PbMS{Pb{TA}, CH, TI}. The match server public key may be of a mixed form as described above. At step S412, the resource provider application may transmit the encrypted tuple to the match server106. The encrypted tuple may be transmitted to the match server106in a form that protects its integrity and confidentiality. At step S414, after the match server106receives the encrypted tuple from the resource provider application, the match server106may then decrypt the encrypted tuple with a match server private key corresponding to the match server public key, resulting in the second encrypted biometric template, the biometric identifier, and the transaction identifier. At step S416, the match server106may use the biometric identifier to look up the first encrypted biometric template, stored at step S318. This may involve querying a database of encrypted biometric templates stored in association with biometric identifiers. 
At step S418, the match server106may perform a homomorphic comparison process between the first encrypted biometric template and the second encrypted biometric template, resulting in an encrypted data file (i.e., an encrypted match result), wherein the encrypted data file is in an enrollment provider encryption domain. In other words, the resulting data file may already be encrypted with the public key associated with the enrollment provider server104when it is generated. Homomorphic comparison may be a form of encrypted data processing that allows computation on encrypted data, generating an encrypted result which, when decrypted, matches the result of the computations as if they had been performed on unencrypted data. In other words, the two templates that are being compared must be in the same encryption domain, in this case the enrollment provider encryption domain, in order to perform homomorphic matching, wherein the result of the matching must also be in the same encryption domain. In some embodiments, this may be represented as Pb{m}:=HE_match(Pb{TE}, Pb{TA}). It should be noted that although the match server106is able to perform the homomorphic comparison, the match server is not able to interpret the results of that comparison because it lacks access to the enrollment provider server's private key. At step S420, the match server106may transmit the encrypted match result, the biometric identifier, and the transaction identifier to the enrollment provider server104using a secure channel. At step S422, the enrollment provider server104may decrypt the encrypted match result with an enrollment provider private key corresponding to the enrollment provider public key, resulting in a match result. The match result indicates a likelihood as to whether the first biometric template and the second biometric template match. The match result may be in any suitable form. 
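The operation Pb{m}:=HE_match(Pb{TE}, Pb{TA}) can be mimicked with an additively homomorphic toy scheme (Paillier, with tiny illustrative parameters): the match server multiplies Pb{TE} by the modular inverse of Pb{TA} to obtain an encryption of the difference TE - TA, without ever decrypting either template. All names and parameters here are assumptions for the sketch, not the patented scheme:

```python
import math
import secrets

# Tiny, insecure Paillier parameters, for illustration only.
P, Q = 61, 53
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)

def enc(m):
    # Encrypt with the enrollment provider's public key (here just N).
    while True:
        r = secrets.randbelow(N - 1) + 1
        if math.gcd(r, N) == 1:
            break
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def dec(c):
    # Only the enrollment provider holds LAM and MU (the private key).
    return (((pow(c, LAM, N2) - 1) // N) * MU) % N

def he_match(c_te, c_ta):
    # Match server side: encrypted difference Pb{TE - TA}, computed using
    # public values only (assumes TE >= TA; wraparound is ignored here).
    return (c_te * pow(c_ta, -1, N2)) % N2

te, ta = 500, 497  # quantised template values (hypothetical)
encrypted_result = he_match(enc(te), enc(ta))
# The match server cannot read encrypted_result; the key holder can.
assert dec(encrypted_result) == 3
```

A small decrypted difference would then be mapped to a high match likelihood by the homomorphic verification module216at step S422.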
For example, in some embodiments, the match result may be a value between zero and one hundred, wherein a value of zero represents that the templates do not match, and wherein a value of one hundred represents that the templates completely match. In this example, the value may be represented as a percentage value. In other embodiments, the match result may be either “yes match” or “no match.” In further embodiments, after obtaining the match result data file, the enrollment provider server104may transmit a notification regarding the match result to the user device102. The notification may include the match result as well as information regarding the match result and/or the transaction identifier. For example, the notification may be “the biometric for transaction #521matches stored biometric.” In other embodiments, the enrollment provider server104may transmit the match result, the biometric identifier, and the transaction identifier to the resource provider application and/or the match server106. In some embodiments, the match result may be used to authenticate a transaction corresponding to the transaction identifier. In some embodiments, a transaction may be authenticated upon determining that the match result value is greater than some predetermined acceptable risk threshold value. In some embodiments, an acceptable risk threshold value may vary based on the access device from which the request has been received or the type of transaction to be authenticated. For example, some access devices (or entities that operate those access devices) may be willing to take on a greater level of risk than other access devices. It should be noted that a higher acceptable risk threshold value will result in increased security at the cost of having a greater number of false declinations. FIG.5depicts a flow chart depicting example interactions that may take place between an enrollment provider server and a match server in accordance with at least some embodiments. 
In some embodiments, the enrollment provider server104may receive a request for enrollment from a user device102. In some embodiments, the enrollment provider server104may respond to the request for enrollment by providing an encryption key (e.g., a public encryption key associated with the enrollment provider server104). Once the user device102has received the encryption key, it may prompt a user to provide a biometric sample via one or more input sensors of the user device102. For example, the user device may prompt the user to take a picture of his or her face using a camera device installed in the user device102. The user device102may generate a biometric template from the received biometric sample. In some embodiments, the user device may also prompt the user for a password or other authentication means that may be used to verify the authenticity of the user. Additionally, the user device102may prompt the user to provide an indication of one or more accounts (e.g., payment accounts) to be enrolled into the system described herein. The user device may transmit each of the biometric template and indication of an account to the enrollment provider server at502. In some embodiments, the enrollment provider server may assign a biometric identifier to be associated with the biometric template and user device102. The enrollment provider server may transmit the biometric identifier to the user device102(e.g., within a confirmation that the biometric template has been received). At504, the enrollment provider server104may generate a homomorphically encrypted biometric template from the biometric template that it received from the user device at502. To do this, the enrollment provider server104may encrypt the received biometric template using its public key. The encrypted biometric template may then be sent to the match server106. 
It should be noted that although the interactions depicted inFIG.5illustrate an embodiment in which the enrollment provider server encrypts the biometric template, the biometric template may be encrypted by the user device102in at least some embodiments. In at least some of those embodiments, the user device may also transmit the encrypted biometric template directly to the match server106(e.g., via a mobile application installed upon the user device102). At506, the match server106may receive the encrypted biometric template and the biometric identifier from the enrollment provider server104. The match server106may store the encrypted biometric template in association with the biometric identifier within a database or other storage means. At this point, interactions between the various components of the system may cease (with respect to this particular transaction) until the operator of the user device102wishes to complete a transaction using the system. When the operator of the user device102is ready to conduct a transaction using the described system, the user device may provide a biometric sample (or biometric template generated from a biometric sample) to an access device110along with the biometric identifier. In the event that the access device110receives a biometric sample (e.g., in the case that the biometric sample was collected by a camera of the access device), the access device110may generate a biometric template from that biometric sample, which it may forward to the match server106. At508, the match server106may receive the biometric template and the biometric identifier from an access device110. The access device may be any computing device that manages access to a resource, including a website that sells goods and/or services (e.g., an online retailer). In some embodiments, the match server106may be an operator of a website. At510, the match server106may generate a homomorphically-encrypted biometric template. 
To do this, the match server106may use the public key associated with the enrollment provider server104to encrypt the biometric template received from the access device110in substantially the same manner as the encrypted biometric template was generated at504, with the only difference in the process being which underlying biometric template is being encrypted. At512, the match server106may retrieve the encrypted biometric template received at506(e.g., based on the provided biometric identifier). Once retrieved, the match server106may perform a comparison between the encrypted biometric template received at506and the encrypted biometric template generated at510. The match server106may generate a match result data file which represents a similarity or difference between the two biometric templates. Because each of the biometric templates has been encrypted using homomorphic encryption techniques, the resulting data file will be inherently encrypted. Hence, the match server106will not be able to interpret the match result data file even though it generated that data file. Accordingly, in order to retrieve the result of the match result, the match server106may transmit the match result data file to the enrollment provider server104. The match result data file may be provided with the biometric identifier as well as a transaction identifier. At514, the enrollment provider server104may receive the match result data file from the match server106. In some embodiments, the enrollment provider server may also receive the biometric identifier as well as a transaction identifier that can be used to identify the transaction/user associated with the match result. Upon receiving the match result, the enrollment provider server104may decrypt the match result data file using its private key. 
At516, the enrollment provider server104may interpret the decrypted match result data file to determine a likelihood that the two biometric templates were generated from biometric samples taken from the same user. In some embodiments, the decrypted data file may represent a difference or similarity between the two biometric templates. For example, the biometric templates may include an indication of relationships between various biometric features of a user. In this example, the match result data file may include an indication as to how much those relationships differ between the two biometric templates. In some embodiments, the match likelihood may be expressed as a numeric value. The enrollment provider server104may provide an indication of the match likelihood value to the match server106and/or the user device102. At518, the match server106may receive the match likelihood value and determine, based on the received match likelihood value, whether to approve or decline the transaction. In some embodiments, the match server106may maintain a predetermined acceptable risk threshold value which represents a numeric value over which the biometric templates should be considered to have been generated from the same user. For example, the match server106may maintain an acceptable risk threshold value of 98%, in which match likelihood values greater than or equal to 98% will be considered authenticated. At520, the match server106may approve or decline the transaction based on whether or not the match likelihood value is above or below the acceptable risk threshold value. In some embodiments, the match server106may convey the match likelihood value to the access device110, which may determine whether to approve or decline the transaction. In some embodiments, the match server106may provide the access device110with an indication as to whether the authentication of the user is, or is not, successful. 
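The approve/decline decision at steps 518 and 520 reduces to a threshold comparison; the 98% figure is the example value from the passage above, and the function name is an illustrative assumption:

```python
ACCEPTABLE_RISK_THRESHOLD = 98.0  # percent; example value from the text

def authenticate(match_likelihood: float) -> bool:
    # Likelihood values at or above the threshold are treated as coming
    # from the same user. Raising the threshold increases security at the
    # cost of more false declinations, as noted earlier.
    return match_likelihood >= ACCEPTABLE_RISK_THRESHOLD

assert authenticate(98.5) is True
assert authenticate(97.9) is False
```

In a deployment, the threshold would vary per access device or transaction type, as the description notes, rather than being a single constant.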
FIG.6depicts a flow diagram illustrating an example process for determining a match likelihood value for user authentication in accordance with at least some embodiments. Process600may be performed by an example enrollment provider server104as depicted inFIG.1. Process600may begin at602, when the enrollment provider server receives authentication data from a user device. In some embodiments, the authentication data may include a biometric template as well as account information to be linked to the biometric template at the enrollment provider server. At604, process600may involve determining a biometric identifier to be associated with the received authentication data. In some embodiments, the biometric identifier may be generated as a string of random characters. In some embodiments, the biometric identifier may be assigned as a primary key designated to uniquely identify table records within a database table in which at least a portion of the authentication data is stored. In some embodiments, the process600may involve encrypting a biometric template received from the user device102(e.g., within the authentication data) and, in some cases, storing that encrypted biometric template in a database in relation to the biometric identifier. At606, process600may involve transmitting the biometric identifier to a user device and/or a match server. In some embodiments, the biometric identifier may be transmitted to a match server along with a first encrypted biometric template. In at least some of these embodiments, the biometric identifier may also be transmitted to the user device from which the authentication data was received. In some embodiments, the biometric identifier may be transmitted to the user device along with a public encryption key associated with the system. In at least some of these embodiments, the user device may generate and subsequently encrypt a biometric template using the provided encryption key. 
The user device may then transmit the encrypted biometric template directly to the match server along with the biometric identifier. In each of the scenarios presented above, the match server may then store the biometric identifier in relation to the encrypted biometric template. At608, process600may involve receiving an encrypted match value data file. In some embodiments, the match server computer subsequently receives a second encrypted biometric template and the biometric identifier from the user device, and generates an encrypted match value data file by comparing the first encrypted biometric template and the second encrypted biometric template. The second encrypted biometric template may be encrypted using the same public key as the first encrypted biometric template. The match value data file may include a delta or difference or similarity in data between the first encrypted biometric template and the second encrypted biometric template. It should be noted that generating an encrypted match value data file should not involve decrypting the data from either the first encrypted biometric template or the second encrypted biometric template. At610, process600may involve decrypting the received match value data file. To do this, the system may use a private key corresponding to the public key used to encrypt both the first encrypted biometric template and the second encrypted biometric template. One skilled in the art would recognize that a number of decryption techniques are available for use at this step. The particular decryption technique used will be dependent upon the type of encryption technique used. At612, process600may involve determining a match likelihood value. In some embodiments, this may involve interpreting the decrypted match result data file to determine a likelihood that the two biometric templates were generated from biometric samples taken from the same user. 
In some embodiments, the decrypted match value data file may represent a difference or similarity between the two biometric templates. For example, the biometric templates may include an indication of relationships between various biometric features of a user. In this example, the match result data file may include an indication as to how much those relationships differ between the two biometric templates. In some embodiments, the match likelihood may be expressed as a numeric value. In some embodiments, the system may provide an indication of the match likelihood value to the match server and/or the user device. Embodiments of the disclosure provide for a number of advantages over conventional systems. For example, the system described enables entities to utilize biometric authentication in their applications without exposing those entities to sensitive information. In embodiments of the system, a developer is able to incorporate biometric authentication (e.g., facial recognition) of a user into their application without being given access to that user's decrypted biometric information. Hence, the developer, which may be an untrusted party, is not then able to redistribute a user's biometric information or use it for nefarious purposes. At the same time, by enabling third parties (e.g., the match server) to perform biometric template comparisons, the system can significantly reduce its own workload, resulting in substantial increases in available processing capacity. In addition, the methods and systems are secure and scalable. Since the biometric template data is encrypted in the match server, it is secure from data breaches, as the encrypted template data is useless on its own. Further, each match server may be operated by a different entity, such as a different merchant, bank, or organization. Each entity may hold its own users' data and perform the cryptographic matching process.
This not only partitions the data according to the appropriate entity, but as noted above, distributes the computational requirements associated with the matching processes that are performed. However, the enrollment server can be the only computer in the system that ever has possession of a biometric template in unencrypted form. As such, only one server computer needs to be made highly secure, while multiple other match servers may exist and may have less security than the enrollment server. As such, embodiments of the invention are very scalable. It should be understood that any of the embodiments of the present invention can be implemented in the form of control logic using hardware (e.g., an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor includes a single-core processor, a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and/or a combination of hardware and software. Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C, C++, C#, Objective-C, Swift, or a scripting language such as Perl or Python, using, for example, conventional or object-oriented techniques.
The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission; suitable media include a random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices. Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer product (e.g., a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user. The above description is illustrative and is not restrictive. Many variations of the invention will become apparent to those skilled in the art upon review of the disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the pending claims along with their full scope or equivalents. One or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the invention.
As used herein, the use of “a,” “an,” or “the” is intended to mean “at least one,” unless specifically indicated to the contrary.
DETAILED DESCRIPTION Various embodiments of a privacy-enabled biometric system are configured to enable encrypted authentication procedures in conjunction with biometric information. The handling of biometric information includes capture of unencrypted biometrics that are used to generate encrypted forms (e.g., encrypted feature vectors via a generation neural network). The system uses the encrypted forms for subsequent processing, and in various embodiments discards any unencrypted version of the biometric data—thus providing a fully private authentication system. For example, the system can provide for scanning of multiple encrypted biometrics (e.g., one to many prediction) to determine authentication (e.g., based on matches or closeness). Further embodiments can provide for search and matching across multiple types of encrypted biometric information (e.g., based on respective neural networks configured to process certain biometric information), improving accuracy of validation over many conventional approaches, while improving the security over the same approaches. According to one aspect, a private authentication system can invoke multi-phase authentication methodologies. In a first phase of enrollment, users' unencrypted biometric information is converted to encrypted form. According to various embodiments, the user's unencrypted biometric data is input into neural networks configured to process the respective biometric input (e.g., voice, face, image, health data, retinal scan, fingerprint scan, etc.). In various embodiments, the generation neural networks are configured to generate one way encryptions of the biometric data. The output(s) of the neural network(s) (or, for example, intermediate values created by the generation neural networks) can be distance measurable encryptions of the biometric information, which are stored for later comparison. For a given user, the generated encrypted values can now be used for subsequent authentication.
For example, the system can compare a newly created encrypted feature vector to the encrypted feature vectors stored on the system. If the distance between the encrypted values is within a threshold, the user is deemed authenticated or, more generally, a valid match results. In a second phase of operation, the enrollment process uses the generated encrypted biometrics (e.g., distance measurable encrypted feature vectors) to train a second neural network (e.g., a deep neural network or fully connected neural network—described in greater detail below). The second neural network accepts as input encrypted feature vectors (e.g., distance measurable feature vectors, Euclidean measurable feature vectors, homomorphic encrypted feature vectors, etc.) and label inputs during training. Once trained, the second neural network (e.g., encrypted classification network) accepts encrypted feature vectors and returns identification labels (or, for example, an unknown result). According to various embodiments, the phases of operation are complementary and can be used sequentially, alternatively, or simultaneously, among other options. For example, the first phase can be used to prime the second phase for operation, and can do so repeatedly. Thus, a first enrollment may use the first phase to generate encrypted feature vectors for training a first DNN of the second phase. Once ready, the first DNN can be used for subsequent authentication. In another example, the system can accept new users or enroll additional authentication information, which triggers the first phase again to generate encrypted feature vectors. This can occur while the first DNN continues to execute its authentication functions. A second DNN can be trained on the new authentication information, and may also be trained on the old authentication information of the first DNN. For example, the system can use the first DNN to handle older users, and the second DNN to handle newer users.
In another example, the system can switch over to the second DNN trained on the collective body of authentication information (e.g., encrypted feature vectors). Various embodiments use different transition protocols between and amongst the first and second phases of authentication. For example, the system can invoke multiple threads, one for each authentication type (e.g., fast or deep learning), and may further invoke multiple threads within each authentication type. Thus, in some embodiments, a distance metric store can be used in an initial enrollment phase to permit quick establishment of user authentication credentials so that a more sophisticated authentication approach can be trained in the background (e.g., a DNN can be trained on encrypted feature vectors (e.g., Euclidean measurable feature vectors, distance measurable feature vectors, homomorphic encrypted feature vectors, etc.) and identification labels, so that upon input of an encrypted feature vector the DNN can return an identification label (or unknown result, where applicable)). The authentication system can also be configured to integrate liveness testing protocols to ensure that biometric information is being validly submitted (e.g., and not spoofed). According to some embodiments, the system is also configured to provide one to many search and/or matching on encrypted biometrics in polynomial time. According to one embodiment, the system takes input biometrics and transforms the input biometrics into feature vectors (e.g., a list of floating point numbers (e.g., 64, 128, 256, or within a range of at least 64 and 10240, although some embodiments can use more feature vectors)). According to various embodiments, the number of floating point numbers in each list depends on the machine learning model being employed to process input encrypted biometric information.
For example, the known FACENET model by GOOGLE generates a feature vector list of 128 floating point numbers, but other embodiments use models with different feature vectors and, for example, lists of floating point numbers. According to various embodiments, the biometrics processing model (e.g., a deep learning convolution network (e.g., for images and/or faces)) is configured such that each feature vector is Euclidean measurable when output. In one example, the input (e.g., the biometric) to the model can be encrypted using a neural network to output a homomorphic encrypted value. In another example, the inventors have created a first neural network for processing plain or unencrypted voice input. The voice neural network is used to accept unencrypted voice input and to generate embeddings or feature vectors that are encrypted and Euclidean measurable for use in training another neural network. In various embodiments, the first voice neural network generates encrypted embeddings that are used to train a second neural network, which, once trained, can generate predictions on further voice input (e.g., match or unknown). In one example, the second neural network (e.g., a deep neural network—DNN) is trained to process unclassified voice inputs for authentication (e.g., predicting a match). In some embodiments, the feature vectors generated for voice can be a list of 64 floating point numbers, but similar ranges of floating point numbers to the FACENET implementations (discussed in greater detail below) can also be used (e.g., 32 floating point numbers up to 10240 floating point numbers, among other options). According to one aspect, by executing on embedding or feature vectors that are encrypted and Euclidean measurable, the system produces and operates in a privacy preserving manner. These encryptions (e.g., one way homomorphic encryptions) can be used in encrypted operations (e.g., addition, multiplication, comparison, etc.)
without knowing the underlying plaintext value. Thus, the original or input biometric can simply be discarded, and does not represent a point of failure for security thereafter. In further aspects, implementing one way encryptions eliminates the need for encryption keys that can likewise be compromised. This is a failing of many conventional systems. According to various aspects, the privacy enabled with encrypted biometrics can be further augmented with liveness detection to prevent faked or spoofed biometric credentials from being used. According to some embodiments, the system can analyze an assurance factor derived from randomly selected instances (e.g., selected by the system) of a biometric input, to determine that input biometric information matches the set of randomly selected instances of the biometric input. The assurance factor and respective execution can be referred to as a “liveness” test. According to various embodiments, the authentication system can validate the input of biometric information for identity and provide assurance the biometric information was not faked via liveness testing. Examples of the methods, devices, and systems discussed herein are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The methods and systems are capable of implementation in other embodiments and of being practiced or of being carried out in various ways. Examples of specific implementations are provided herein for illustrative purposes only and are not intended to be limiting. In particular, acts, components, elements and features discussed in connection with any one or more examples are not intended to be excluded from a similar role in any other examples. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
Any references to examples, embodiments, components, elements or acts of the systems and methods herein referred to in the singular may also embrace embodiments including a plurality, and any references in plural to any embodiment, component, element or act herein may also embrace embodiments including only a singularity. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements. The use herein of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. FIG. 7 is a block diagram of an example privacy-enabled biometric system 704 with liveness validation. According to some embodiments, the system can be installed on a mobile device or called from a mobile device (e.g., on a remote server or cloud based resource) to return an authenticated or not signal. In various embodiments, system 704 can execute any of the following processes. For example, system 704 can enroll users (e.g., via process 100), identify enrolled users (e.g., process 200) and/or include multiple enrollment phases (e.g., distance metric evaluation and fully encrypted input/evaluation), and search for matches to users (e.g., process 250). In various embodiments, system 704 includes multiple pairs of neural networks, where each pair includes a processing/generating neural network for accepting an unencrypted biometric input (e.g., images or voice, etc.) and processing to generate an encrypted embedding or feature vector.
Each pair can include a classification neural network that can be trained on the generated encrypted feature vectors to classify the encrypted information with labels, and that is further used to predict a match to the trained labels or an unknown class based on subsequent input of encrypted feature vectors to the trained network. In other embodiments, the system can be configured with a trained classification neural network and receive from another processing component, system, or entity, encrypted feature vectors to use for prediction with the trained classification network. According to various embodiments, system 704 can accept, create or receive original biometric information (e.g., input 702). The input 702 can include images of people, images of faces, thumbprint scans, voice recordings, sensor data, etc. Further, the voice inputs can be requested by the system, and correspond to a set of randomly selected biometric instances (including, for example, randomly selected words) as part of liveness validation. According to various embodiments, the inputs can be processed for identity matching and in conjunction the inputs can be analyzed to determine matching to the randomly selected biometric instances for liveness verification. As discussed above, the system 704 can also be architected to provide a prediction on input of an encrypted feature vector, and another system or component can accept unencrypted biometrics and/or generate encrypted feature vectors, and communicate the same for processing. According to one embodiment, the system can include a biometric processing component 708.
A biometric processing component (e.g., 708) can be configured to crop received images, sample voice biometrics, eliminate noise from microphone captures, etc., to focus the biometric information on distinguishable features (e.g., automatically crop the image around the face, eliminate background noise for a voice sample, normalize health data received, generate samples of received health data, etc.). Various forms of pre-processing can be executed on the received biometrics, and the pre-processing can be executed to limit the biometric information to important features or to improve identification by eliminating noise, reducing an analyzed area, etc. In some embodiments, the pre-processing (e.g., via 708) is not executed or not available. In other embodiments, only biometrics that meet quality standards are passed on for further processing. Processed biometrics can be used to generate additional training data, for example, to enroll a new user, and/or train a classification component/network to perform predictions. According to one embodiment, the system 704 can include a training generation component 710, configured to generate new biometrics for use in training to identify a user. For example, the training generation component 710 can be configured to create new images of the user's face or voice having different lighting, different capture angles, etc., different samples, filtered noise, introduced noise, etc., in order to build a larger training set of biometrics. In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. In another example, the system and/or training generation component 710 is configured to build twenty-five additional images from a picture of a user's face. Other numbers of training images, or voice samples, etc., can be used.
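The training generation component's behavior might be sketched as below, assuming a flat list of pixel intensities as the biometric sample. The brightness and noise perturbations, the parameter values, and all names are illustrative assumptions; a real implementation would also vary capture angle, cropping, and so on.

```python
import random

def generate_training_samples(image, count=25, seed=0):
    """Toy stand-in for a training generation component: produce `count`
    perturbed copies of one biometric sample (here a flat list of pixel
    intensities in [0, 1]) by varying brightness and injecting small
    Gaussian noise, clamping each pixel back into range."""
    rng = random.Random(seed)
    samples = []
    for _ in range(count):
        brightness = rng.uniform(0.8, 1.2)  # simulate lighting variation
        samples.append([min(1.0, max(0.0, p * brightness + rng.gauss(0, 0.02)))
                        for p in image])
    return samples

augmented = generate_training_samples([0.2, 0.5, 0.7, 0.9])
```

The default of twenty-five generated samples mirrors the example in the text; the training threshold would make this count configurable.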
In further examples, additional voice samples can be generated from an initial set of biometric inputs to create a larger set of training samples for training a voice network (e.g., via 710). According to one embodiment, the system is configured to generate encrypted feature vectors from the biometric input (e.g., process images from input and/or generated training images, process voice inputs and/or voice samples and/or generated training voice data, among other options). In various embodiments, the system 704 can include an embedding component 712 configured to generate encrypted embeddings or encrypted feature vectors (e.g., image feature vectors, voice feature vectors, health data feature vectors, etc.). According to one embodiment, component 712 executes a convolution neural network (“CNN”) to process image inputs (and, for example, facial images), where the CNN includes a layer which generates distance (e.g., Euclidean) measurable output. The embedding component 712 can include multiple neural networks each tailored to specific biometric inputs, and configured to generate encrypted feature vectors (e.g., for captured images, for voice inputs, for health measurements or monitoring, etc.) that are distance measurable. According to various embodiments, the system can be configured to require biometric inputs of various types, and pass the type of input to respective neural networks for processing to capture respective encrypted feature vectors, among other options. In various embodiments, one or more processing neural networks is instantiated as part of the embedding component 712, and the respective neural networks process unencrypted biometric inputs to generate encrypted feature vectors. In one example, the processing neural network is a convolutional neural network constructed to create encrypted embeddings from unencrypted biometric input.
In one example, encrypted feature vectors can be extracted from a neural network at the layers preceding a softmax layer (including, for example, the n−1 layer). As discussed herein, various neural networks can be used to define embeddings or feature vectors, with each tailored to an analyzed biometric (e.g., voice, image, health data, etc.), where an output of or with the model is Euclidean measurable. Some examples of these neural networks include a model having a softmax layer. Other embodiments use a model that does not include a softmax layer to generate Euclidean measurable feature vectors. Various embodiments of the system and/or embedding component are configured to generate and capture encrypted feature vectors for the processed biometrics in the layer or layers preceding the softmax layer. Optional processing of the generated encrypted biometrics can include filter operations prior to passing the encrypted biometrics to classifier neural networks (e.g., a DNN). For example, the generated encrypted feature vectors can be evaluated for distance to determine that they meet a validation threshold. In various embodiments, the validation threshold is used by the system to filter noisy or encrypted values that are too far apart. According to one aspect, filtering of the encrypted feature vectors improves the subsequent training and prediction accuracy of the classification networks. In essence, if a set of encrypted embeddings for a user is too far apart (e.g., distances between the encrypted values are above the validation threshold), the system can reject the enrollment attempt, request new biometric measurements, generate additional training biometrics, etc. Each set of encrypted values can be evaluated against the validation threshold and values with too great a distance can be rejected and/or trigger requests for additional/new biometric submission.
In one example, the validation threshold is set so that no distance between comparisons (e.g., of face image vectors) is greater than 0.85. In another example, the threshold can be set such that no distance between comparisons is greater than 1.0. Stated broadly, various embodiments of the system are configured to ensure that a set of enrollment vectors is of sufficient quality for use with the classification DNN, and in further embodiments configured to reject enrollment vectors that are bad (e.g., too dissimilar). According to some embodiments, the system can be configured to handle noisy enrollment conditions. For example, validation thresholds can be tailored to accept distance measures having an average distance greater than 0.85 but less than 1, where the minimum distance between compared vectors in an enrollment set is less than 0.06. Different thresholds can be implemented in different embodiments, and can vary within 10%, 15% and/or 20% of the examples provided. According to some embodiments, the system 704 can include a classifier component 714. The classifier component can include one or more deep neural networks trained on encrypted feature vector and label inputs for respective users and their biometric inputs. The trained neural network can then be used during prediction operations to return a match to a person (e.g., from among a group of labels and people (one to many matching) or from a singular person (one to one matching)) or to return a match to an unknown class. During training of the classifier component 714, the feature vectors from the embedding component 712 or system 704 are used by the classifier component 714 to bind a user to a classification (i.e., mapping biometrics to a matchable/searchable identity). According to one embodiment, a deep learning neural network (e.g., enrollment and prediction network) is executed as a fully connected neural network (“FCNN”) trained on enrollment data.
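Returning to the enrollment filtering step, the example thresholds above (no pairwise distance over 0.85; noisy sets tolerated when the average stays below 1.0 and the minimum pairwise distance is under 0.06) can be sketched as follows. The function name and overall structure are illustrative.

```python
import math
from itertools import combinations

def validate_enrollment(vectors, max_threshold=0.85,
                        noisy_avg_limit=1.0, noisy_min_limit=0.06):
    """Accept an enrollment set when pairwise Euclidean distances are tight
    enough for classifier training; otherwise fall back to the noisy-
    enrollment condition, and reject if neither test passes."""
    dists = [math.dist(a, b) for a, b in combinations(vectors, 2)]
    if max(dists) <= max_threshold:
        return True
    avg = sum(dists) / len(dists)
    return max_threshold < avg < noisy_avg_limit and min(dists) < noisy_min_limit

tight = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]   # similar enrollment vectors
loose = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]]   # too dissimilar; rejected
ok = validate_enrollment(tight)
bad = validate_enrollment(loose)
```

A rejection would then trigger a request for new biometric measurements or generation of additional training biometrics, as described above.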
In one example, the FCNN generates an output identifying a person or indicating an UNKNOWN individual (e.g., at 706). Other examples can implement different neural networks for classification and return a match or unknown class accordingly. In some examples, the classifier is a neural network but does not require a fully connected neural network. According to various embodiments, a deep learning neural network (e.g., which can be an FCNN) must differentiate between known persons and the UNKNOWN. In some examples, the deep learning neural network can include a sigmoid function in the last layer that outputs the probability of class matching based on newly input biometrics or that outputs values showing failure to match. Other examples achieve matching based on executing a hinge loss function to establish a match to a label/person or an unknown class. In further embodiments, the system 704 and/or classifier component 714 are configured to generate a probability to establish when a sufficiently close match is found. In some implementations, an unknown person is determined based on negative return values (e.g., the model is tuned to return negative values for no match found). In other embodiments, multiple matches can be developed by the classifier component 714 and voting can also be used to increase accuracy in matching. Various implementations of the system (e.g., 704) have the capacity to use this approach for more than one set of input. In various embodiments, the approach itself is biometric agnostic. Various embodiments employ encrypted feature vectors that are distance measurable (e.g., Euclidean, homomorphic, one-way encrypted, etc.), generation of which is handled using the first neural network or a respective first network tailored to a particular biometric. In some embodiments, the system can invoke multiple threads or processes to handle volumes of distance comparisons.
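The thresholded classification output described above (a match label when the network's probability is sufficiently high, otherwise the UNKNOWN class) can be sketched as follows. The 0.9 threshold and all names are illustrative assumptions; the probabilities would come from the classification network's final layer (e.g., a sigmoid per class).

```python
def classify(probabilities, labels, threshold=0.9):
    """Return the matched identity label when the top class probability
    clears the threshold, otherwise the UNKNOWN class."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return labels[best] if probabilities[best] >= threshold else "UNKNOWN"

labels = ["alice", "bob", "carol"]
hit = classify([0.02, 0.95, 0.03], labels)   # confident match
miss = classify([0.40, 0.35, 0.25], labels)  # no class is confident enough
```

Voting across multiple such classifiers, as mentioned above, would aggregate the per-network results before the final decision.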
For example, the system can invoke multiple threads to accommodate an increase in user base and/or volume of authentication requests. According to various aspects, the distance measure authentication is executed in a brute force manner. In such settings, as the user population grows, so does the complexity or work required to resolve the analysis in a brute force fashion (e.g., checking all possibilities until a match is found). Various embodiments are configured to handle this burden by invoking multiple threads, and each thread can be used to check a smaller segment of authentication information to determine a match. In some examples, different neural networks are instantiated to process different types of biometrics. Using that approach, the vector generating neural network may be swapped for, or used in conjunction with, other neural networks, where each is capable of creating a distance measurable encrypted feature vector based on the respective biometric. Similarly, the system may enroll on two or more biometric types (e.g., use two or more vector generating networks) and predict on the feature vectors generated for both types of biometrics using both neural networks for processing respective biometric types, which can also be done simultaneously. In one embodiment, feature vectors from each type of biometric can likewise be processed in respective deep learning networks configured to predict matches based on the feature vector inputs (or return unknown). The co-generated results (e.g., one from each biometric type) may be used to identify a user using a voting scheme and may perform better by executing multiple predictions simultaneously. For each biometric type used, the system can execute multi-phase authentication approaches with a first generation network and distance measures in a first phase, and a network trained on encrypted feature vectors in a second phase.
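The multi-threaded brute force search described above, in which each thread checks a smaller segment of the authentication information, might be sketched as follows. The in-memory distance store, the sharding scheme, and all names are illustrative assumptions.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def best_match(probe, store, workers=4):
    """One-to-many brute force search: partition the distance store across
    worker threads, let each thread scan its shard for the closest enrolled
    vector, then reduce the per-shard results to a global best."""
    items = list(store.items())
    shards = [items[i::workers] for i in range(workers)]

    def scan(shard):
        # Each thread scans only its shard of the distance store.
        return min(((math.dist(probe, vec), name) for name, vec in shard),
                   default=(float("inf"), None))

    with ThreadPoolExecutor(max_workers=workers) as pool:
        candidates = [c for c in pool.map(scan, shards) if c[1] is not None]
    distance, name = min(candidates)
    return name, distance

store = {"alice": [0.1, 0.2], "bob": [0.8, 0.9], "carol": [0.5, 0.5]}
label, dist = best_match([0.82, 0.88], store)
```

Adding worker threads shrinks each shard, which is how the approach absorbs growth in the user population.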
At various times each of the phases may be in use—for example, an enrolled user can be authenticated with the trained network (e.g., second phase), while a newly enrolling user is enrolled and/or authenticated via the generation network and distance measure phase. In some embodiments, the system can be configured to validate an unknown determination. It is realized that accurately determining that an input to the authentication system is an unknown is an unsolved problem in this space. Various embodiments leverage the deep learning construction (including, for example, the classification network) described herein to enable identification/return of an unknown result. In some embodiments, the DNN can return a probability of match that is below a threshold probability. If the result is below the threshold, the system is configured to return an unknown result. Further embodiments leverage the distance store to improve the accuracy of the determination of the unknown result. In one example, upon a below threshold determination output from the DNN, the system can validate the below threshold determination by performing distance comparison(s) on the authentication vectors and the vectors in the distance store for the most likely match (e.g., greatest probability of match under the threshold). According to another aspect, generating accurate (e.g., greater than 90% accuracy in example executions described below) identification is only a part of a complete authentication system. In various embodiments, identification is coupled with liveness testing to ensure that biometric inputs are not, for example, being recorded and replayed for verification. For example, the system 704 can include a liveness component 718. The liveness component can be configured to generate a random set of biometric instances that the system requests a user submit. The random set of biometric instances can serve multiple purposes.
For example, the biometric instances provide a biometric input that can be used for identification, and can also be used for liveness (e.g., validate matching to the randomly selected instances). If both tests are valid, the system can provide an authentication indication or provide access to, or execution of, a requested function. Further embodiments can require multiple types of biometric input for identification, and couple identification with liveness validation. In yet other embodiments, liveness testing can span multiple biometric inputs as well. According to one embodiment, the liveness component718is configured to generate a random set of words that provide a threshold period of voice data from a user requesting authentication. In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, or nine seconds, or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. According to further embodiments, the system (e.g.,704) can be configured to incorporate new identification classes responsive to receiving new biometric information. In one embodiment, the system704includes a retraining component configured to monitor a number of new biometrics (e.g., per user/identification class or by total number of new biometrics) and automatically trigger a re-enrollment with the new feature vectors derived from the new biometric information (e.g., produced by712). In other embodiments, the system can be configured to trigger re-enrollment on new feature vectors based on time or a time period elapsing.
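Selecting random words until the estimated speaking time reaches the threshold period could be sketched as below. The per-word duration estimate (`SECONDS_PER_WORD`) is a hypothetical stand-in for however a real system estimates speaking time; the vocabulary is supplied by the caller.

```python
import random

# Hypothetical per-word speaking-time estimate (seconds); a real system
# would use a more refined model of word length and speaking rate.
SECONDS_PER_WORD = 0.4

def random_word_challenge(vocabulary, min_seconds=5.0, rng=None):
    """Pick random words until the estimated speaking time covers the
    threshold period of voice data (e.g., five seconds)."""
    rng = rng or random.Random()
    pool = list(vocabulary)
    words, estimate = [], 0.0
    while estimate < min_seconds:
        words.append(rng.choice(pool))
        estimate += SECONDS_PER_WORD
    return words
```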
The system704and/or retraining component716can be configured to store feature vectors as they are processed, and retain those feature vectors for retraining (including, for example, feature vectors that are unknown, to retrain an unknown class in some examples). Various embodiments of the system are configured to incrementally retrain the classification model (e.g., classifier component714and/or a DNN) on system assigned numbers of newly received biometrics. Once a system set number of incremental re-trainings has occurred, the system is further configured to complete a full retrain of the model. According to various aspects, the incremental retrain execution avoids the conventional approach of fully retraining a neural network to recognize new classes and generate new identifications and/or to incorporate new feature vectors as they are input. Incremental re-training of an existing model to include a new identification without requiring a full retraining provides significant execution efficiency benefits over conventional approaches. According to various embodiments, the variables for incremental retraining and full retraining can be set on the system via an administrative function. Some defaults include incremental retrain every 3, 4, 5, 6, etc., identifications, and full retrain every 3, 4, 5, 6, 7, 8, 9, 10, etc., incremental retrains. Additionally, this requirement may be met by using calendar time, such as retraining once a year. These operations can be performed on offline (e.g., locked) copies of the model, and once complete, the offline copy can be made live. Additionally, the system704and/or retraining component716is configured to update the existing classification model with new users/identification classes. According to various embodiments, the system builds a classification model for an initial number of users, which can be based on an expected initial enrollment. The model is generated with empty or unallocated spaces to accommodate new users.
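The incremental/full retraining triggers can be sketched as a simple counter over new identifications, using the illustrative defaults above (incremental every N identifications, full every M incremental retrains). Class and method names are hypothetical.

```python
class RetrainScheduler:
    """Track new enrollments and decide between incremental and full retrains.

    Defaults follow the illustrative administrative settings described
    above: incremental retrain every 5 new identifications, full retrain
    every 10 incremental retrains.
    """
    def __init__(self, incremental_every=5, full_every=10):
        self.incremental_every = incremental_every
        self.full_every = full_every
        self.new_ids = 0
        self.incrementals = 0

    def record_enrollment(self):
        """Register one new identification; return the action to take."""
        self.new_ids += 1
        if self.new_ids % self.incremental_every:
            return "none"
        self.incrementals += 1
        if self.incrementals % self.full_every == 0:
            return "full"
        return "incremental"
```

A calendar-time trigger, as the paragraph notes, could be layered on top of these counters.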
For example, a fifty user base is generated as a one hundred user model. This over allocation in the model enables incremental training to be executed to incorporate, for example, new classes without requiring a full retrain of the classification model. When a new user is added, the system and/or retraining component716is configured to incrementally retrain the classification model—ultimately saving significant computation time over conventional retraining executions. Once the over allocation is exhausted (e.g., 100 total identification classes), a full retrain with an additional over allocation can be made (e.g., fully retrain the 100 classes to a model with 150 classes). In other embodiments, an incremental retrain process can be executed to add additional unallocated slots. Even with the reduced retraining time, the system can be configured to operate with multiple copies of the classification model. One copy may be live and used for authentication or identification. A second copy may be an updated version that is taken offline (e.g., locked from access) to accomplish retraining while permitting identification operations to continue with a live model. Once retraining is accomplished, the updated model can be made live and the other model locked and updated as well. Multiple instances of both live and locked models can be used to increase concurrency. According to some embodiments, the system700can receive feature vectors instead of original biometrics, and processing of original biometrics can occur on different systems—in these cases system700may not include, for example,708,710,712, and instead receives feature vectors from other systems, components, or processes. Example Liveness Execution And Considerations According to one aspect, in establishing identity and authentication, an authentication system is configured to determine if the source presenting the features is, in fact, a live source. In conventional password systems, there is no check for liveness.
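The over allocation of identification classes described above (e.g., fifty users in a one hundred class model) can be tracked with simple bookkeeping. The growth factor and names here are assumptions for illustration, not the patented mechanism.

```python
class ClassSlotAllocator:
    """Track over-allocated output classes in a classification model.

    A model built for 50 enrolled users with 100 output classes leaves 50
    empty slots; filling a slot needs only an incremental retrain, while
    exhausting all slots forces a full retrain with a larger allocation.
    """
    def __init__(self, enrolled, allocated, growth=1.5):
        assert allocated >= enrolled
        self.enrolled = enrolled
        self.allocated = allocated
        self.growth = growth  # hypothetical over-allocation growth factor

    def add_user(self):
        """Return the retrain action required to add one new user."""
        if self.enrolled < self.allocated:
            self.enrolled += 1
            return "incremental"
        # Slots exhausted: grow the allocation and fully retrain.
        self.allocated = int(self.allocated * self.growth)
        self.enrolled += 1
        return "full"
```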
A typical example of a conventional approach includes a browser where the user fills in the fields for username and password, or saved information is pre-filled in a form on behalf of the user. The browser is not a live feature; rather, the entry of the password is pulled from the browser's form history and essentially replayed. This is an example of replay, and according to another aspect, many similar challenges exist where biometric input could be copied and replayed. The inventors have realized that biometrics have the potential to increase security and convenience simultaneously. However, there are many issues associated with such implementation, including, for example, liveness. Some conventional approaches have attempted to introduce biometrics—applying the browser example above, an approach can replace authentication information with an image of a person's face or a video of the face. In conventional systems that do not employ liveness checks, the systems may be compromised by using a stored image of the face or a stored video and replaying it for authentication. The inventors have realized that use of biometrics (e.g., face, voice, fingerprint, etc.) includes the consequence of the biometric potentially being offered in non-live forms, thus allowing a replayed biometric to be offered as a plausible input to the system. Without liveness, the plausible input will likely be accepted. The inventors have further realized that determining whether a biometric is live is an increasingly difficult problem. Some approaches for resolving the liveness problem are examined—treated broadly as two classes of liveness approaches (e.g., liveness may be subdivided into active liveness and passive liveness problem domains). Active liveness requires the user to do something to prove the biometric is not a replica. Passive liveness makes no such requirement of the user, and the system alone must prove the biometric is not a replica.
Various embodiments and examples are directed to active liveness validation (e.g., random words supplied by a user); however, further examples can be applied in a passive context (e.g., system triggered video capture during input of biometric information, ambient sound validation, etc.). Table X (FIG.8A-B) illustrates example implementations that may be employed, and includes analysis of potential issues for various interactions of the example approaches. In some embodiments, various ones of the examples in Table X can be combined to reduce inefficiencies (e.g., potential vulnerabilities) in the implementation. Although some issues are present in the various comparative embodiments, the implementation can be used, for example, where the potential for the identified replay attacks can be minimized or reduced. According to one embodiment, randomly requested biometric instances in conjunction with identity validation on the same random biometric instances provide a high level of assurance of both identity and liveness. In one example (Row8), the random biometric instances include a set of random words selected for liveness validation in conjunction with voice based identification. According to one embodiment, an authentication system assesses liveness by asking the user to read a few random words. This can be done in various embodiments via execution of process900,FIG.9. According to various embodiments, process900can begin at902with a request to a user to supply a set of random biometric instances. Process900continues with concurrent (or, for example, simultaneous) authentication functions—identity and liveness at904. For example, an authentication system can concurrently or simultaneously process the received voice signal through two algorithms (e.g., a liveness algorithm and an identity algorithm (e.g., by executing904of process900)), returning a result in less than one second.
The first algorithm (e.g., liveness) performs a speech to text function to compare the pronounced text to the requested text (e.g., random words) to verify that the words were read correctly, and the second algorithm uses a prediction function (e.g., a prediction application programming interface (API)) to perform a one-to-many (1:N) identification on a private voice biometric to ensure that the input correctly identifies the expected person. At908, for example, process900can return an authentication value for identified and live inputs906YES. If either check fails906NO, process900can return an invalid indicator at910. Further embodiments implement multiple biometric factor identification with liveness to improve security and convenience. In one example, a first factor, face (e.g., image capture), is used to establish identity. In another example, a second factor, voice (e.g., via a random set of words), is used to confirm identity and establish authentication, with the further benefit of confirming (or not) that the source presenting the biometric input is live. Various embodiments of private biometric systems are configured to execute liveness. The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then performs a speech to text process, comparing the pronounced text to the requested text. The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication. In conjunction with liveness, the system takes the random text voice input and performs an identity assertion on the same input to ensure the voice that spoke the random words matches the user's identity. For example, the input audio is now used for both liveness and identity.
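At its simplest, the liveness algorithm's comparison of pronounced text to requested text reduces to word-level matching over the speech-to-text output. The in-order comparison and the 80% threshold below are illustrative assumptions; a real pipeline would also handle recognition errors and word alignment.

```python
def liveness_check(requested, recognized, threshold=0.8):
    """Compare recognized words against the requested random words.

    Returns True when the fraction of correctly read words (compared
    in order) meets the threshold; a stand-in for the full
    speech-to-text comparison described above.
    """
    matches = sum(1 for r, s in zip(requested, recognized) if r == s)
    return matches / len(requested) >= threshold
```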
FIG.10is an example process flow1000for executing identification and liveness validation. Process1000can be executed by an authentication system (e.g.,704,FIG.7or304,FIG.3). According to one embodiment, process1000begins with generation of a set of random biometric instances (e.g., a set of random words) and triggering a request for the set of random words at1002. In various embodiments, process1000continues under multiple threads of operation. At1004, a first biometric type can be used for a first identification of a user in a first thread (e.g., based on images captured of a user during input of the random words). Identification of the first biometric input (e.g., facial identification) can proceed as discussed herein (e.g., process unencrypted biometric input with a first neural network to output encrypted feature vectors, predict a match on the encrypted feature vectors with a DNN, and return an identification or unknown, and/or use a first phase for distance evaluation), and as described in, for example, process200and/or process250below. At1005, an identity corresponding to the first biometric or an unknown class is returned. At1006, a second biometric type can be used for a second identification of a user in a second thread. For example, the second identification can be based upon a voice biometric. According to one embodiment, processing of a voice biometric can continue at1008with capture of at least a threshold amount of the biometric (e.g., 5 seconds of voice). In some examples, the amount of voice data used for identification can be reduced at1010with biometric pre-processing. In one embodiment, voice data can be reduced with execution of pulse code modulation. Various approaches for processing voice data can be applied, including pulse code modulation, amplitude modulation, etc., to convert input voice to a common format for processing.
Some example functions that can be applied (e.g., as part of1010) include Librosa (e.g., to eliminate background sound, normalize amplitude, etc.); pydub (e.g., to convert between mp3 and .wav formats); Librosa (e.g., for a phase shift function); Scipy (e.g., to increase low frequency); Librosa (e.g., for pulse code modulation); and/or soundfile (e.g., for read and write sound file operations). In various embodiments, processed voice data is converted to the frequency domain via a Fourier transform (e.g., fast Fourier transform, discrete Fourier transform, etc.), which can be provided by the numpy or scipy libraries. Once in the frequency domain, the two dimensional frequency array can be used to generate encrypted feature vectors. In some embodiments, voice data is input to a pre-trained neural network to generate encrypted voice feature vectors at1012. In one example, the frequency arrays are used as input to a pre-trained convolutional neural network ("CNN") which outputs encrypted voice feature vectors. In other embodiments, different pre-trained neural networks can be used to output encrypted voice feature vectors from unencrypted voice input. As discussed throughout, the function of the pre-trained neural network is to output distance measurable encrypted feature vectors upon voice data input. Once encrypted feature vectors are generated at1012, the unencrypted voice data can be deleted. Some embodiments receive encrypted feature vectors for processing rather than generate them from unencrypted voice directly; in such embodiments there is no unencrypted voice to delete. In one example, a CNN is constructed with the goal of creating embeddings and not for its conventional purpose of classifying inputs. In a further example, the CNN can employ a triplet loss function (including, for example, a hard triplet loss function), which enables the CNN to converge more quickly and accurately during training than some other implementations.
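A minimal stand-in for the preprocessing-plus-Fourier step (amplitude normalization, framing, and conversion to a two dimensional frequency array with numpy) might look like the following. The frame size and the choice of magnitude spectra are assumptions; a production pipeline would use the Librosa/scipy functions listed above.

```python
import numpy as np

def preprocess_voice(signal, frame_size=256):
    """Normalize amplitude and convert framed mono audio to a 2-D
    frequency array: each frame becomes one row of FFT magnitudes."""
    signal = np.asarray(signal, dtype=np.float64)
    peak = np.max(np.abs(signal))
    if peak > 0:
        signal = signal / peak                     # amplitude normalization
    n_frames = len(signal) // frame_size
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    return np.abs(np.fft.rfft(frames, axis=1))     # 2-D frequency array
```

The resulting two dimensional array is the kind of frequency input the paragraph describes feeding to the pre-trained CNN.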
In further examples, the CNN is trained on hundreds or thousands of voice inputs. Once trained, the CNN is configured for creation of embeddings (e.g., encrypted feature vectors). In one example, the CNN accepts a two dimensional array of frequencies as an input and provides floating point numbers (e.g., 32, 64, 128, 256, 1024, etc. floating point numbers) as output. In some executions of process1000, the initial voice capture and processing (e.g., request for random words—1002-1012) can be executed on a user device (e.g., a mobile phone) and the resulting encrypted voice feature vector can be communicated to a remote service via an authentication API hosted and executed on cloud resources. In some other executions, the initial processing and prediction operations can be executed on the user device as well. Various execution architectures can be provided, including fully local authentication, fully remote authentication, and hybridizations of both options. In one embodiment, process1000continues with communication of the voice feature vectors to a cloud service (e.g., authentication API) at1014. The voice feature vectors can then be processed by a fully connected neural network ("FCNN") for predicting a match to a trained label at1016. As discussed, the input to the FCNN is an embedding generated by a first pre-trained neural network (e.g., an embedding comprising 32, 64, 128, 256, 1024, etc. floating point numbers). Prior to execution of process1000, the FCNN is trained with a threshold number of people for identification (e.g.,500,750,1000,1250,1500, etc.). The initial training can be referred to as "priming" the FCNN. The priming function is executed to improve accuracy of prediction operations performed by the FCNN. At1018, the FCNN returns a result matching a label or an unknown class—i.e., it matches to an identity from among a group of candidates or does not match to a known identity. The result is communicated for evaluation of each thread's result at1022.
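The FCNN prediction with an unknown fallback can be sketched as a plain numpy forward pass. Layer sizes, weights, and the probability threshold below are purely illustrative; a real deployment would use a primed, trained network rather than hand-set weights.

```python
import numpy as np

def fcnn_predict(embedding, w1, b1, w2, b2, threshold=0.5):
    """Forward pass of a minimal fully connected classifier over an embedding.

    One hidden ReLU layer plus a softmax output; returns the winning label
    index, or -1 ("unknown") when the top probability is under threshold.
    """
    h = np.maximum(0.0, embedding @ w1 + b1)        # hidden layer, ReLU
    logits = h @ w2 + b2
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()                          # softmax probabilities
    best = int(np.argmax(probs))
    return best if probs[best] >= threshold else -1
```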
According to various embodiments, the third thread of operation is executed to determine that the input biometrics used for identification are live (i.e., not spoofed, recorded, or replayed). For example, at1020the voice input is processed to determine if the input words match the set of random words requested. In one embodiment, a speech recognition function is executed to determine the words input, and matching is executed against the randomly requested words to determine an accuracy of the match. If any unencrypted voice input remains in memory, the unencrypted voice data can be deleted as part of1020. In various embodiments, processing of the third thread can be executed locally on a device requesting authorization, on a remote server, on a cloud resource, or any combination thereof. If remote processing is executed, a recording of the voice input can be communicated to a server or cloud resource as part of1020, and the accuracy of the match (e.g., input to random words) determined remotely. Any unencrypted voice data can be deleted once encrypted feature vectors are generated and/or once matching accuracy is determined. In further embodiments, the results from each thread are joined to yield an authorization or invalidation. At1024, the first thread returns an identity or unknown for the first biometric, the second thread returns an identity or unknown for the second biometric, and the third thread returns an accuracy of match between the random set of biometric instances and the input biometric instances. At1024, process1000provides a positive authentication indication where the first thread identity matches the second thread identity and one of the biometric inputs is determined to be live (e.g., above a threshold accuracy (e.g., 33% or greater, among other options)). If not positive, process1000can be re-executed (e.g., a threshold number of times) or a denial can be communicated.
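Joining the three threads' results as described at1024can be sketched as follows; the 33% liveness threshold follows the example above, and the labels are hypothetical.

```python
def join_threads(face_id, voice_id, liveness_score, min_liveness=0.33):
    """Join the three authentication threads' results.

    Positive only when both biometric identifications agree on a known
    identity and the liveness accuracy clears the threshold (e.g., 33%).
    """
    if face_id == "unknown" or voice_id == "unknown":
        return False
    return face_id == voice_id and liveness_score >= min_liveness
```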
According to various embodiments, process1000can include concurrent, branched, and/or simultaneous execution of the authentication threads to return a positive authentication or a denial. In further embodiments, process1000can be reduced to a single biometric type such that one identification thread and one liveness thread are executed to return a positive authentication or a denial. In further embodiments, the various steps described can be executed together or in a different order, and may invoke other processes (e.g., to generate encrypted feature vectors to process for prediction) as part of determining identity and liveness of biometric input. In yet other embodiments, additional biometric types can be tested to confirm identity, with at least one liveness test on one of the biometric inputs to provide assurance that submitted biometrics are not replayed or spoofed. In further examples, multiple biometric types can be used for identity and multiple biometric types can be used for liveness validation. Example Authentication System with Liveness In some embodiments, an authentication system interacts with any application or system needing authentication services (e.g., a Private Biometrics Web Service). According to one embodiment, the system uses private voice biometrics to identify individuals in a datastore (and provides one to many (1:N) identification) using any language in one second. Various neural networks measure the signals inside of a voice sample with high accuracy and thus allow private biometrics to replace the "username" (or other authentication schemes) and become the primary authentication vehicle. In some examples, the system employs face (e.g., images of the user's face) as the first biometric type and voice as the second biometric type, providing for at least two factor authentication ("2FA"). In various implementations, the system employs voice for identity and liveness, as the voice biometric can be captured along with the capture of a face biometric.
Similar biometric pairings can be executed to provide a first biometric identification and a second biometric identification for confirmation, coupled with a liveness validation. In some embodiments, an individual wishing to authenticate is asked to read a few words while looking into a camera, and the system is configured to collect the face biometric and voice biometric while the user is speaking. According to various examples, the same audio that created the voice biometric is used (along with the text the user was requested to read) to check liveness and to ensure the identity of the user's voice matches the face. Such authentication can be configured to augment security in a wide range of environments. For example, private biometrics (e.g., voice, face, health measurements, etc.) can be used for common identity applications (e.g., "who is on the phone?") and single factor authentication (1FA) by call centers, phone, watch, and TV apps, physical security devices (e.g., door locks), and other situations where a camera is unavailable. Additionally, where additional biometrics can be captured, 2FA or better can provide greater assurance of identity with the liveness validation. Broadly stated, various aspects implement similar approaches for privacy-preserving encryption of processed biometrics (including, for example, face and voice biometrics). Generally stated, after collecting an unencrypted biometric (e.g., voice biometric), the system creates a private biometric (e.g., encrypted feature vectors) and then discards the original unencrypted biometric template. As discussed herein, these private biometrics enable an authentication system and/or process to identify a person (i.e., authenticate a person) while still guaranteeing individual privacy and fundamental human rights by only operating on biometric data in the encrypted space.
To transform the unencrypted voice biometric into a private biometric, various embodiments are configured to pre-process the voice signal and reduce the voice data to a smaller form (e.g., without any loss). The Nyquist sampling rate for this example is two times the highest frequency of the signal. In various implementations, the system is configured to sample the resulting data and use this sample as input to a Fourier transform. In one example, the resulting frequencies are used as input to a pre-trained voice neural network capable of returning a set of embeddings (e.g., encrypted voice feature vectors). These embeddings, for example, sixty-four floating point numbers, provide the system with private biometrics which then serve as input to a second neural network for classification. Private Biometric Implementation Various embodiments are discussed below for enrolling users with private biometrics and prediction on the same. Various embodiments discuss some considerations and examples for implementation of private biometrics. These examples and embodiments can be used with liveness verification of the respective private biometrics as discussed above. FIG.1is an example process flow100for enrolling in a privacy-enabled biometric system (e.g.,FIG.3,304described in greater detail below, orFIG.7,704above). Process100begins with acquisition of unencrypted biometric data at102. The unencrypted biometric data (e.g., plaintext, reference biometric, etc.) can be directly captured on a user device, received from an acquisition device, or communicated from stored biometric information. In one example, a user takes a photo of themselves on their mobile device for enrollment. Pre-processing steps can be executed on the biometric information at104. For example, given a photo of a user, pre-processing can include cropping the image to significant portions (e.g., around the face or facial features).
Various examples exist of photo processing options that can take a reference image and identify facial areas automatically. In another example, the end user can be provided a user interface that displays a reference area, and the user is instructed to position their face from an existing image into the designated area. Alternatively, when the user takes a photo, the identified area can direct the user to focus on their face so that it appears within the highlighted area. In other options, the system can analyze other types of images to identify areas of interest (e.g., iris scans, hand images, fingerprints, etc.) and crop images accordingly. In yet other options, samples of voice recordings can be used to select data of the highest quality (e.g., lowest background noise), or can be processed to eliminate interference from the acquired biometric (e.g., filter out background noise). Having a given biometric, the process100continues with generation of additional training biometrics at106. For example, a number of additional images can be generated from an acquired facial image. In one example, an additional twenty-five images are created to form a training set of images. In some examples, as few as three images can be used, but with the tradeoff of reduced accuracy. In other examples, as many as forty training images may be created. The training set is used to provide for variation of the initial biometric information, and the specific number of additional training points can be tailored to a desired accuracy (see, e.g., Tables I-VIII below, which provide example implementations and test results). Other embodiments can omit generation of additional training biometrics. Various ranges of training set production can be used in different embodiments (e.g., any set of images from two to one thousand). For an image set, the training group can include images of different lighting, capture angle, positioning, etc.
For audio based biometrics, different background noises can be introduced, different words can be used, and different samples from the same vocal biometric can be used in the training set, among other options. Various embodiments of the system are configured to handle multiple different biometric inputs, including even health profiles that are based at least in part on health readings from health sensors (e.g., heart rate, blood pressure, EEG signals, body mass scans, genome, etc.), and can, in some examples, include behavioral biometric capture/processing. According to various embodiments, biometric information includes Initial Biometric Values (IBV), a set of plaintext values (pictures, voice, SSN, driver's license number, etc.) that together define a person. At108, feature vectors are generated from the initial biometric information (e.g., one or more plaintext values that identify an individual). Feature vectors are generated based on all available biometric information, which can include a set of training biometrics generated from the initial unencrypted biometric information received on an individual or individuals. According to one embodiment, the IBV is used in enrollment, for example in process100. The set of IBVs is processed into a set of initial biometric vectors (e.g., encrypted feature vectors) which are used downstream in a subsequent neural network. In one implementation, users are directed to a website to input multiple data points for biometric information (e.g., multiple pictures including facial images), which can occur in conjunction with personally identifiable information ("PII"). The system and/or execution of process100can include tying the PII to encryptions of the biometric as discussed below. In one embodiment, a convolutional deep neural network is executed to process the unencrypted biometric information and transform it into feature vector(s) which have the property of being one-way encrypted cipher text.
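Generating a training set of varied images from a single enrollment biometric might be sketched with simple numpy transforms. The specific variations below (brightness scaling, mirroring, additive noise) are illustrative stand-ins for the lighting, capture angle, and positioning variation described above, not the system's actual augmentation pipeline.

```python
import numpy as np

def augment_biometric(image, count=25, rng=None):
    """Generate `count` varied training images from one enrollment image."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(count):
        sample = image.astype(np.float64)              # fresh copy per sample
        sample *= rng.uniform(0.7, 1.3)                # lighting variation
        if rng.random() < 0.5:
            sample = sample[:, ::-1]                   # mirrored capture angle
        sample += rng.normal(0.0, 2.0, sample.shape)   # sensor noise
        out.append(np.clip(sample, 0, 255))
    return out
```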
The neural network is applied (108) to compute a one-way homomorphic encryption of the biometric—resulting in feature vectors (e.g., at110). These outputs can be computed from an original biometric using the neural network, but the values are one way in that the neural network cannot then be used to regenerate the original biometrics from the outputs. Various embodiments employ networks that take a plaintext input and return Euclidean measurable output. One such implementation is FaceNet, which takes in any image of a face and returns 128 floating point numbers as the feature vector. The choice of neural network is fairly open ended, where various implementations are configured to return a distance or Euclidean measurable feature vector that maps to the input. This feature vector is nearly impossible to use to recreate the original input biometric and is therefore considered a one-way encryption. Various embodiments are configured to accept the feature vector(s) produced by a first neural network and use them as input to a new neural network (e.g., a second classifying neural network). According to one example, the new neural network has additional properties. This neural network is specially configured to enable incremental training (e.g., on new users and/or new feature vectors) and configured to distinguish between a known person and an unknown person. In one example, a fully connected neural network with 2 hidden layers and a "hinge" loss function is used to process input feature vectors and return a known person identifier (e.g., person label or class) or indicate that the processed biometric feature vectors are not mapped to a known person. For example, the hinge loss function outputs one or more negative values if the feature vector is unknown. In other examples, the output of the second neural network is an array of values, wherein the values and their positions in the array determine a match to a person or identification label.
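The hinge-style unknown handling—an output array in which no positive value maps to no known person—can be sketched as below. The exact output convention (sign of scores, position-to-label mapping) is an assumption for illustration.

```python
import numpy as np

def classify_or_unknown(scores):
    """Map a classifier's output array to a label index or 'unknown'.

    Mirrors the hinge-style behavior described above: when no score is
    positive, the feature vectors map to no known person.
    """
    scores = np.asarray(scores, dtype=np.float64)
    if np.all(scores <= 0):
        return "unknown"
    return int(np.argmax(scores))
```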
Various embodiments use different machine learning models for capturing feature vectors in the first network. According to various embodiments, the feature vector capture is accomplished via a pre-trained neural network (including, for example, a convolutional neural network) where the output is distance measurable (e.g., Euclidean measurable). In some examples, this can include models having a softmax layer as part of the model, and capture of feature vectors can occur preceding such layers. Feature vectors can be extracted from the pre-trained neural network by capturing results from the layers that are Euclidean measurable. In some examples, the softmax layer or categorical distribution layer is the final layer of the model, and feature vectors can be extracted from the n−1 layer (e.g., the immediately preceding layer). In other examples, the feature vectors can be extracted from the model in layers preceding the last layer. Some implementations may offer the feature vector as the last layer. In some embodiments, an optional step can be executed as part of process100(not shown). The optional step can be executed as a branch or fork in process100so that authentication of a user can immediately follow enrollment of a new user or of authentication information. In one example, a first phase of enrollment can be executed to generate encrypted feature vectors. The system can use the generated encrypted feature vectors directly for subsequent authentication. For example, distance measures can be applied to determine a distance between enrolled encrypted feature vectors and a newly generated encrypted feature vector. Where the distance is within a threshold, the user can be authenticated or an authentication signal returned. In various embodiments, this optional authentication approach can be used while a classification network is being trained on encrypted feature vectors in the following steps.
The resulting feature vectors are bound to a specific user classification at 112. For example, deep learning is executed at 112 on the feature vectors based on a fully connected neural network (e.g., a second neural network, an example classifier network). The execution is run against all the biometric data (i.e., feature vectors from the initial biometric and training biometric data) to create the classification information. According to one example, a fully connected neural network having two hidden layers is employed for classification of the biometric data. In another example, a fully connected network with no hidden layers can be used for the classification. However, the use of the fully connected network with two hidden layers generated better accuracy in classification in some example executions (see, e.g., Tables I-VIII described in greater detail below). According to one embodiment, process 100 can be executed to receive an original biometric (e.g., at 102), generate feature vectors (e.g., at 110), and apply an FCNN classifier to return a label for identification at 112 (e.g., output #people). In further embodiments, step 112 can also include filtering operations executed on the encrypted feature vectors before binding the vectors to a label via training the second network. For example, encrypted feature vectors can be analyzed to determine if they are within a certain distance of each other. Where the generated feature vectors are too far apart, they can be rejected for enrollment (i.e., not used to train the classifier network). In other examples, the system is configured to request additional biometric samples and re-evaluate the distance threshold until it is satisfied. In still other examples, the system rejects the encrypted biometrics and requests new submissions to enroll. Process 100 continues with discarding any unencrypted biometric data at 114.
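The enrollment filtering operation described above can be sketched directly: encrypted feature vectors that lie too far apart are rejected rather than used to train the classifier. The threshold value here is an assumption for illustration:

```python
import numpy as np

# Sketch of the pre-enrollment filter: check that all pairs of encrypted
# feature vectors fall within a distance threshold of each other. If not,
# the samples are rejected (and new submissions can be requested).
def enrollment_ok(vectors, max_pairwise_dist=1.5):
    vs = np.asarray(vectors, dtype=float)
    for i in range(len(vs)):
        for j in range(i + 1, len(vs)):
            if np.linalg.norm(vs[i] - vs[j]) > max_pairwise_dist:
                return False       # too inconsistent to enroll
    return True
```

A tight cluster of enrollment vectors passes; a widely scattered set fails and would trigger a request for additional samples, per the text.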
In one example, an application on the user's phone is configured to enable enrollment of captured biometric information and configured to delete the original biometric information once processed (e.g., at 114). In other embodiments, a server system can process received biometric information and delete the original biometric information once processed. According to some aspects, requiring that original biometric information exist only for a short period during processing or enrollment significantly improves the security of the system over conventional approaches. For example, systems that persistently store or employ original biometric data become a source of vulnerability. Unlike a password that can be reset, a compromised biometric remains compromised, virtually forever. Returning to process 100, at 116 the resulting cipher text (e.g., feature vectors) biometric is stored. In one example, the encrypted biometric can be stored locally on a user device. In other examples, the generated encrypted biometric can be stored on a server, in the cloud, a dedicated data store, or any combination thereof. In one example, the biometrics and classification are stored for use in subsequent matching or searching. For instance, new biometric information can be processed to determine if the new biometric information matches any classifications. The match (depending on a probability threshold) can then be used for authentication or validation. In cases where a single match is executed, the neural network model employed at 112 can be optimized for one-to-one matching. For example, the neural network can be trained on the individual expected to use a mobile phone (assuming no other authorized individuals for the device). In some examples, the neural network model can include training allocation to accommodate incremental training of the model on acquired feature vectors over time.
Various embodiments, discussed in greater detail below, incorporate incremental training operations for the neural network to permit additional people to be added and to incorporate newly acquired feature vectors. In other embodiments, an optimized neural network model (e.g., FCNN) can be used for a primary user of a device, for example, stored locally, and remote authentication can use a data store and one-to-many models (e.g., if the first model returns unknown). Other embodiments may provide the one-to-many models locally as well. In some instances, the authentication scenario (e.g., primary user or not) can be used by the system to dynamically select a neural network model for matching, and thereby provide additional options for processing efficiency. FIG. 2A illustrates an example process 200 for authentication with secured biometric data. Process 200 begins with acquisition of multiple unencrypted biometrics for analysis at 202. In one example, the privacy-enabled biometric system is configured to require at least three biometric identifiers (e.g., as plaintext data, reference biometric, or similar identifiers). If, for example, an authentication session is initiated, the process can be executed so that it only continues to the subsequent steps if a sufficient number of biometric samples are taken, given, and/or acquired. The number of required biometric samples can vary, and may be as few as one. Similar to process 100, the acquired biometrics can be pre-processed at 204 (e.g., images cropped to facial features, voice sampled, iris scans cropped to relevant portions, etc.). Once pre-processing is executed, the biometric information is transformed into a one-way homomorphic encryption of the biometric information to acquire the feature vectors for the biometrics under analysis (e.g., at 206).
Similar to process 100, the feature vectors can be acquired using any pre-trained neural network that outputs distance measurable encrypted feature vectors (e.g., Euclidean measurable feature vectors, homomorphic encrypted feature vectors, among other options). In one example, this includes a pre-trained neural network that incorporates a softmax layer. However, other examples do not require the pre-trained neural network to include a softmax layer, only that it output Euclidean measurable feature vectors. In one example, the feature vectors can be obtained in the layer preceding the softmax layer as part of step 206. In various embodiments, authentication can be executed based on comparing distances between enrolled encrypted biometrics and subsequently created encrypted biometrics. In further embodiments, this is executed as a first phase of authentication. Once a classifying network is trained on the encrypted biometrics, a second phase of authentication can be used, and authentication determinations made via 208. According to some embodiments, the phases of authentication can be executed together and even simultaneously. In one example, an enrolled user will be authenticated using the classifier network (e.g., second phase), and a new user will be authenticated by comparing distances between encrypted biometrics (e.g., first phase). As discussed, the new user will eventually be authenticated using a classifier network trained on the new user's encrypted biometric information, once the classifier network is ready. At 208, a prediction (e.g., via a deep learning neural network) is executed to determine if there is a match for the person associated with the analyzed biometrics.
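The two-phase arrangement above can be sketched as a single dispatch function: while no trained classifier is available for a user, distance comparison (phase one) is used; once the classifier is ready, its prediction (phase two) is used. The function names and threshold are hypothetical:

```python
import numpy as np

# Sketch of two-phase authentication. `classifier`, when present, stands
# in for the trained second network; by convention here a negative label
# means UNKNOWN, matching the hinge-loss behavior described in the text.
def two_phase_auth(candidate, enrolled_vectors, classifier=None,
                   dist_threshold=1.0):
    if classifier is not None:             # phase 2: trained classifier
        return classifier(candidate) >= 0
    dists = [np.linalg.norm(np.asarray(v) - candidate)
             for v in enrolled_vectors]    # phase 1: distance comparison
    return bool(dists) and min(dists) <= dist_threshold
```

Because both phases consume the same encrypted feature vectors, a system can run them together, as the text notes, with newly enrolled users covered by phase one until the classifier is retrained.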
As discussed above with respect to process 100, the prediction can be executed as a fully connected neural network having two hidden layers (during enrollment the neural network is configured to identify input feature vectors as individuals or unknown, and unknown individuals can be added via incremental training or full retraining of the model). In other examples, a fully connected neural network having no hidden layers can be used. Examples of neural networks are described in greater detail below (e.g., FIG. 4 illustrates an example neural network 400). Other embodiments of the neural network can be used in process 200. According to some embodiments, the neural network operates as a classifier during enrollment to map feature vectors to identifications, and operates as a predictor to identify a known person or an unknown. In some embodiments, different neural networks can be tailored to different types of biometrics, with facial images processed by one while voice biometrics are processed by another. According to some embodiments, step 208 is described agnostic to submitter security. In other words, process 200 relies on front end application configuration to ensure submitted biometrics are captured from the person trying to authenticate. As process 200 is agnostic to submitter security, the process can be executed in local and remote settings in the same manner. However, according to some implementations the execution relies on the native application or additional functionality in an application to ensure an acquired biometric represents the user to be authenticated or matched. FIG. 2B illustrates an example process flow 250 showing additional details for a one-to-many matching execution (also referred to as prediction). According to one embodiment, process 250 begins with acquisition of feature vectors (e.g., step 206 of FIG. 2A or 110 of FIG. 1). At 254, the acquired feature vectors are matched against existing classifications via a deep learning neural network.
In one example, the deep learning neural network has been trained during enrollment on a set of individuals. The acquired feature vectors will be processed by the trained deep learning network to predict if the input is a match to a known individual, or does not match and returns unknown. In one example, the deep learning network is a fully connected neural network ("FCNN"). In other embodiments, different network models are used for the second neural network. According to one embodiment, the FCNN outputs an array of values. These values, based on their position and the value itself, determine the label or unknown. According to one embodiment, returned from a one-to-many case is a series of probabilities associated with the match. Assuming five people in the trained data, an output layer showing probability of match by person of [0.1, 0.9, 0.3, 0.2, 0.1] yields a match on Person 2, based on a threshold set for the classifier (e.g., >0.5). In another run, an output layer of [0.1, 0.6, 0.3, 0.8, 0.1] yields a match on Person 2 and Person 4 (e.g., using the same threshold). However, where two results exceed the match threshold, the process and/or system is configured to select the maximum value and yield a (probabilistic) match on Person 4. In another example, an output layer of [0.1, 0.2, 0.3, 0.2, 0.1] shows no match to a known person, hence an UNKNOWN person, as no values exceed the threshold. Interestingly, this may result in adding the person into the list of authorized people (e.g., via enrollment discussed above), or it may result in the person being denied access or privileges on an application. According to various embodiments, process 250 is executed to determine if the person is known or not. The functions that result can be dictated by the application that requests identification of an analyzed biometric. For an UNKNOWN person, i.e.
a person never trained to the deep learning enrollment and prediction neural network, an output layer for an UNKNOWN person looks like [−0.7, −1.7, −6.0, −4.3]. In this case, the hinge loss function has guaranteed that the vector output is all negative. This is the case of an UNKNOWN person. In various embodiments, the deep learning neural network must have the capability to determine if a person is UNKNOWN. Other solutions that appear viable, for example, support vector machine ("SVM") solutions, break when considering the UNKNOWN case. In one example, the issue is scalability. An SVM implementation cannot scale in the many-to-many matching space, becoming increasingly unworkable until the model simply cannot be used to return a match in any time deemed functional (e.g., 100-person matching cannot return a result in less than 20 minutes). According to various embodiments, the deep learning neural network (e.g., an enrollment and prediction neural network) is configured to train and predict in polynomial time. Step 256 can be executed to vote on matching. According to one embodiment, multiple images or biometrics are processed to identify a match. In an example where three images are processed, the FCNN is configured to generate an identification on each and use each match as a vote for an individual's identification. Once a majority is reached (e.g., at least two votes for person A), the system returns identification of person A as output. In other instances, for example, where there is a possibility that an unknown person may result, voting can be used to facilitate determination of the match or no match. In one example, each result that exceeds the threshold probability can count as one vote, and the final tally of votes (e.g., often 4 out of 5) is used to establish the match. In some implementations, an unknown class may be trained in the model; in the examples above a sixth number would appear with a probability of matching the unknown model.
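The decision rule and the voting step described above can be sketched together. The threshold value of 0.5 follows the worked examples; the function names are hypothetical:

```python
from collections import Counter

# Sketch of the output-layer decision rule: values over the threshold
# propose a match, the maximum wins when several exceed it, and an
# all-sub-threshold (e.g. all-negative hinge-loss) output yields UNKNOWN.
def decide(output_layer, threshold=0.5):
    over = [(v, i) for i, v in enumerate(output_layer) if v > threshold]
    if not over:
        return "UNKNOWN"
    return "Person %d" % (max(over)[1] + 1)   # 1-based, as in the text

# Sketch of step 256: each processed sample casts one vote; a majority
# determines the identity, otherwise the result is UNKNOWN.
def vote(predictions):
    tally = Counter(p for p in predictions if p != "UNKNOWN")
    if not tally:
        return "UNKNOWN"
    label, count = tally.most_common(1)[0]
    return label if count > len(predictions) // 2 else "UNKNOWN"

assert decide([0.1, 0.9, 0.3, 0.2, 0.1]) == "Person 2"
assert decide([0.1, 0.6, 0.3, 0.8, 0.1]) == "Person 4"
assert decide([-0.7, -1.7, -6.0, -4.3]) == "UNKNOWN"
assert vote(["Person 2", "Person 2", "Person 4"]) == "Person 2"
```

The asserts reproduce the worked output layers from the text, including the all-negative UNKNOWN case and the two-of-three majority vote.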
In other embodiments, the unknown class is not used, and matching is made or not against known persons. Where a sufficient match does not result, the submitted biometric information is unknown. Responsive to matching on newly acquired biometric information, process 250 can include an optional step 258 for retraining of the classification model. In one example, a threshold is set such that step 258 tests if a threshold match has been exceeded, and if yes, the deep learning neural network (e.g., classifier and prediction network) is retrained to include the new feature vectors being analyzed. According to some embodiments, retraining to include newer feature vectors accommodates biometrics that change over time (e.g., weight loss, weight gain, aging or other events that alter biometric information, haircuts, among other options). FIG. 3 is a block diagram of an example privacy-enabled biometric system 304. According to some embodiments, the system can be installed on a mobile device or called from a mobile device (e.g., on a remote server or cloud-based resource) to return an authenticated or not-authenticated signal. In various embodiments, system 304 can execute any of the preceding processes. For example, system 304 can enroll users (e.g., via process 100), identify enrolled users (e.g., process 200), and search for matches to users (e.g., process 250). According to various embodiments, system 304 can accept, create or receive original biometric information (e.g., input 302). The input 302 can include images of people, images of faces, thumbprint scans, voice recordings, sensor data, etc. A biometric processing component (e.g., 308) can be configured to crop received images, sample voice biometrics, etc., to focus the biometric information on distinguishable features (e.g., automatically crop an image around the face). Various forms of pre-processing can be executed on the received biometrics, designed to limit the biometric information to important features.
In some embodiments, the pre-processing (e.g., via 308) is not executed or available. In other embodiments, only biometrics that meet quality standards are passed on for further processing. Processed biometrics can be used to generate additional training data, for example, to enroll a new user. A training generation component 310 can be configured to generate new biometrics for a user. For example, the training generation component can be configured to create new images of the user's face having different lighting, different capture angles, etc., in order to build a training set of biometrics. In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. In another example, the system and/or training generation component 310 is configured to build twenty-five additional images from a picture of a user's face. Other numbers of training images, or voice samples, etc., can be used. The system is configured to generate feature vectors from the biometrics (e.g., process images from the input and generated training images). In some examples, the system 304 can include a feature vector component 312 configured to generate the feature vectors. According to one embodiment, component 312 executes a convolutional neural network ("CNN"), where the CNN includes a layer which generates Euclidean measurable output. The feature vector component 312 is configured to extract the feature vectors from the layers preceding the softmax layer (including, for example, the n−1 layer). As discussed above, various neural networks can be used to define feature vectors tailored to an analyzed biometric (e.g., voice, image, health data, etc.), where an output of or within the model is Euclidean measurable. Some examples of these neural networks include models having a softmax layer. Other embodiments use a model that does not include a softmax layer to generate Euclidean measurable vectors.
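The training generation component described above can be sketched with simple image transforms. The specific transforms here (brightness scaling and mirroring) are illustrative assumptions; a real component would also vary capture angle, and the sample count (e.g., twenty-five) is a system parameter:

```python
import numpy as np

# Sketch of training-set generation: build extra biometric samples from
# one captured image by varying lighting and orientation, so that the
# classifier can be trained on many variants of a single enrollment image.
def generate_training_images(image, count=25, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(count):
        img = image * rng.uniform(0.7, 1.3)   # simulated lighting change
        if rng.random() < 0.5:
            img = img[:, ::-1]                # mirrored orientation
        out.append(np.clip(img, 0.0, 1.0))    # keep valid pixel range
    return out
```

With 10 enrollment images and 25 generated variants each, training would proceed on 250 images, matching the arithmetic given in the examples below.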
Various embodiments of the system and/or feature vector component are configured to generate and capture feature vectors for the processed biometrics in the layer or layers preceding the softmax layer. According to another embodiment, the feature vectors from the feature vector component 312 or system 304 are used by the classifier component 314 to bind a user to a classification (i.e., mapping biometrics to a matchable/searchable identity). According to one embodiment, the deep learning neural network (e.g., enrollment and prediction network) is executed as an FCNN trained on enrollment data. In one example, the FCNN generates an output identifying a person or indicating an UNKNOWN individual (e.g., at 306). Other examples use neural networks that are not fully connected. According to various embodiments, the deep learning neural network (e.g., which can be an FCNN) must differentiate between known persons and the UNKNOWN. In some examples, this can be implemented as a sigmoid function in the last layer that outputs a probability of class matching based on newly input biometrics, or shows failure to match. Other examples achieve matching based on a hinge loss function. In further embodiments, the system 304 and/or classifier component 314 are configured to generate a probability to establish when a sufficiently close match is found. In some implementations, an unknown person is determined based on negative return values. In other embodiments, multiple matches can be developed and voting can also be used to increase accuracy in matching. Various implementations of the system have the capacity to use this approach for more than one set of input. The approach itself is biometric agnostic. Various embodiments employ feature vectors that are distance measurable and/or Euclidean measurable, which are generated using the first neural network. In some instances, different neural networks are configured to process different types of biometrics.
Using that approach, the encrypted feature vector generating neural network may be swapped for, or used in conjunction with, a different neural network, where each is capable of creating a distance and/or Euclidean measurable feature vector based on the respective biometric. Similarly, the system may enroll two or more biometric types (e.g., use two or more vector generating networks) and predict on the feature vectors generated for both (or more) types of biometrics, using the neural networks to process the respective biometric types simultaneously. In one embodiment, feature vectors from each type of biometric can likewise be processed in respective deep learning networks configured to predict matches based on feature vector inputs or return unknown. The simultaneous results (e.g., one from each biometric type) may be used to identify a person using a voting scheme, or the system may perform better by firing both predictions simultaneously. According to further embodiments, the system can be configured to incorporate new identification classes responsive to receiving new biometric information. In one embodiment, the system 304 includes a retraining component 316 configured to monitor a number of new biometrics (e.g., per user/identification class or by total number of new biometrics) and automatically trigger a re-enrollment with the new feature vectors derived from the new biometric information (e.g., produced by 312). In other embodiments, the system can be configured to trigger re-enrollment on new feature vectors based on time or a time period elapsing. The system 304 and/or retraining component 316 can be configured to store feature vectors as they are processed, and retain those feature vectors for retraining (including, for example, feature vectors that are unknown, to retrain an unknown class in some examples). Various embodiments of the system are configured to incrementally retrain the model on system-assigned numbers of newly received biometrics.
Further, once a system-set number of incremental retrainings has occurred, the system is further configured to complete a full retrain of the model. The variables for incremental retraining and full retraining can be set on the system via an administrative function. Some defaults include an incremental retrain every 3, 4, 5, or 6 identifications, and a full retrain every 3, 4, 5, 6, 7, 8, 9, or 10 incremental retrains. Additionally, this requirement may be met by using calendar time, such as retraining once a year. These operations can be performed on offline (e.g., locked) copies of the model, and once complete the offline copy can be made live. Additionally, the system 304 and/or retraining component 316 is configured to update the existing classification model with new users/identification classes. According to various embodiments, the system builds a classification model for an initial number of users, which can be based on an expected initial enrollment. The model is generated with empty or unallocated spaces to accommodate new users. For example, a fifty-user base is generated as a one hundred-user model. This over-allocation in the model enables incremental training to be executed on the classification model. When a new user is added, the system and/or retraining component 316 is configured to incrementally retrain the classification model, ultimately saving significant computation time over conventional retraining executions. Once the over-allocation is exhausted (e.g., 100 total identification classes), a full retrain with an additional over-allocation can be made (e.g., fully retrain the 100 classes to a model with 150 classes). In other embodiments, an incremental retrain process can be executed to add additional unallocated slots. Even with the reduced retraining time, the system can be configured to operate with multiple copies of the classification model. One copy may be live and used for authentication or identification.
A second copy may be an update version that is taken offline (e.g., locked from access) to accomplish retraining while permitting identification operations to continue with a live model. Once retraining is accomplished, the updated model can be made live and the other model locked and updated as well. Multiple instances of both live and locked models can be used to increase concurrency. According to some embodiments, the system 300 can receive encrypted feature vectors instead of original biometrics, and processing of original biometrics can occur on different systems; in these cases system 300 may not include, for example, 308, 310, and 312, and instead receives feature vectors from other systems, components or processes. FIGS. 4A-D illustrate example embodiments of a classifier network. The embodiments show a fully connected neural network for classifying feature vectors for training and for prediction. Other embodiments implement different neural networks, including, for example, neural networks that are not fully connected. Each of the networks accepts distance and/or Euclidean measurable feature vectors and returns a label or unknown result for prediction, or binds the feature vectors to a label during training. FIGS. 5A-D illustrate examples of processing that can be performed on input biometrics (e.g., facial images) using a neural network. Encrypted feature vectors can be extracted from such neural networks and used by a classifier (e.g., FIGS. 4A-D) during training or prediction operations. According to various embodiments, the system implements a first pre-trained neural network for generating distance and/or Euclidean measurable feature vectors that are used as inputs for a second classification neural network. In other embodiments, other neural networks are used to process biometrics in the first instance.
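The over-allocation and live/locked-copy schemes described above can be sketched together. The class below is hypothetical: the "model" is a stand-in callable, and the over-allocation and growth factors (2.0 and 1.5, following the 50-to-100 and 100-to-150 examples in the text) are configurable parameters:

```python
import threading

# Sketch of two ideas from the text: (1) the classifier is sized with
# more identity slots than initially enrolled users, so new users need
# only incremental retraining until slots run out; (2) a live copy
# serves predictions while a locked copy retrains, then they swap.
class ClassifierStore:
    def __init__(self, enrolled, overallocation=2.0):
        self._lock = threading.Lock()
        self.capacity = int(len(enrolled) * overallocation)
        self.labels = list(enrolled)
        self._live = lambda x: "UNKNOWN"       # stand-in live model

    def add_user(self, label):
        if len(self.labels) < self.capacity:
            self.labels.append(label)
            return "incremental retrain"       # free slot: cheap update
        self.capacity = int(self.capacity * 1.5)  # e.g. 100 -> 150
        self.labels.append(label)
        return "full retrain"                  # slots exhausted

    def predict(self, x):
        with self._lock:
            model = self._live                 # always read the live copy
        return model(x)

    def swap_in(self, retrained):
        with self._lock:
            self._live = retrained             # retrained copy goes live

store = ClassifierStore(["user%d" % i for i in range(50)])
assert store.capacity == 100                   # 50 users, 100-class model
assert store.add_user("user50") == "incremental retrain"
store.swap_in(lambda x: "user0")               # retraining finished
assert store.predict("probe") == "user0"
```

Identification requests keep hitting the live copy throughout; only the brief swap is serialized, which is the concurrency benefit the text describes.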
In still other examples, multiple neural networks can be used to generate Euclidean measurable feature vectors from unencrypted biometric inputs, and each may feed the feature vectors to a respective classifier. In some examples, each generator neural network can be tailored to a respective classifier neural network, where each pair (or multiples of each) is configured to process a biometric data type (e.g., facial images, iris images, voice, health data, etc.).

IMPLEMENTATION EXAMPLES

The following example instantiations are provided to illustrate various aspects of privacy-enabled biometric systems and processes. The examples are provided to illustrate various implementation details and provide illustration of execution options as well as efficiency metrics. Any of the details discussed in the examples can be used in conjunction with various embodiments. It is realized that conventional biometric solutions have security vulnerabilities and efficiency/scalability issues. Apple, Samsung, Google and MasterCard have each launched biometric security solutions that share at least three technical limitations. These solutions (1) are unable to search biometrics in polynomial time; (2) do not one-way encrypt the reference biometric; and (3) require significant computing resources for confidentiality and matching. Modern biometric security solutions are unable to scale (e.g., Apple Face ID™ authenticates only one user) as they are unable to search biometrics in polynomial time. In fact, the current "exhaustive search" technique requires significant computing resources to perform a linear scan of an entire biometric datastore to successfully one-to-one record match each reference biometric and each new input record; this is a result of inherent variations in the biometric instances of a single individual.
Similarly, conventional solutions are unable to one-way encrypt the reference biometric because exhaustive search (as described above) requires a decryption key and a decryption to plaintext in the application layer for every attempted match. This limitation results in an unacceptable risk in privacy (anyone can view a biometric) and authentication (anyone can use the stolen biometric). And, once compromised, a biometric, unlike a password, cannot be reset. Finally, modern solutions require the biometric to return to plaintext in order to match, since the encrypted form is not Euclidean measurable. It is possible to choose to make a biometric two-way encrypted and return it to plaintext, but this requires extensive key management and, since a two-way encrypted biometric is not Euclidean measurable, it also returns the solution to linear scan limitations. Various embodiments of the privacy-enabled biometric system and/or methods provide enhancements over conventional implementations (e.g., in security, scalability, and/or management functions). Various embodiments enable scalability (e.g., via "encrypted search") and fully encrypt the reference biometric (e.g., "encrypted match"). The system is configured to provide an "identity" that is no longer tied independently to each application, and further enables a single, global "Identity Trust Store" that can service any identity request for any application. Various operations are enabled by various embodiments. For example:

Encrypted Match: using the techniques described herein, a deep neural network ("DNN") is used to process a reference biometric to compute a one-way, homomorphic encryption of the biometric's feature vector before transmitting or storing any data.
This allows for computations and comparisons on cipher texts without decryption, and ensures that only the distance and/or Euclidean measurable, homomorphic encrypted biometric is available to execute subsequent matches in the encrypted space. The plaintext data can then be discarded and the resultant homomorphic encryption is then transmitted and stored in a datastore.

Encrypted Search: using the techniques described herein, encrypted search is done in polynomial time according to various embodiments. This allows for comparisons of biometrics and achieves values for comparison that indicate the "closeness" of two biometrics to one another in the encrypted space (e.g., a biometric to a reference biometric), while at the same time providing for the highest level of privacy. Various examples detail implementation of one-to-many identification using, for example, the n−1 layer of a deep neural network. The various techniques are biometric agnostic, allowing the same approach irrespective of the biometric or the biometric type. Each biometric (face, voice, iris, etc.) can be processed with a different, fully trained neural network to create the biometric feature vector. According to some aspects, an issue with current biometric schemes is that they require a mechanism for: (1) acquiring the biometric, (2) plaintext biometric match, (3) encrypting the biometric, (4) performing a Euclidean measurable match, and (5) searching using the second neural network prediction call. To execute steps 1 through 5 for every biometric is time consuming, error prone and frequently nearly impossible to do before the biometric becomes deprecated.
One goal of various embodiments is to develop schemes, techniques and technologies that allow the system to work with biometrics in a privacy-protected and polynomial-time based way that is also biometric agnostic. Various embodiments employ machine learning to solve issues with (2)-(5). According to various embodiments, it is assumed there is little or no control over devices such as cameras or sensors that acquire the biometrics to be analyzed (which thus arrive as plaintext). According to various embodiments, if that data is encrypted immediately and the system only processes the biometric information as cipher text, the system provides the maximum practical level of privacy. According to another aspect, a one-way encryption of the biometric, meaning that given cipher text there is no mechanism to get to the original plaintext, reduces or eliminates the complexity of key management of various conventional approaches. Many one-way encryption algorithms exist, such as MD5 and SHA-512; however, these algorithms are not homomorphic because they are not Euclidean measurable. Various embodiments discussed herein enable a general purpose solution that produces biometric cipher text that is Euclidean measurable using a neural network. Applying a classifying algorithm to the resulting feature vectors enables one-to-many identification. In various examples, this maximizes privacy and runs in between O(1) and O(log n) time. As discussed above, some capture devices can encrypt the biometric via a one-way encryption and provide feature vectors directly to the system. This enables some embodiments to forgo biometric processing components, training generation components, and feature vector generation components, or alternatively to not use these elements for already-encrypted feature vectors.

Example Execution and Accuracy

In some executions, the system is evaluated on different numbers of images per person to establish ranges of operating parameters and thresholds.
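The point made above, that conventional one-way hashes such as SHA-512 are not distance measurable, can be illustrated concretely: two nearly identical inputs hash to digests whose bit-level distance carries no information about the inputs' similarity, so no "closeness" comparison is possible in the hashed space. The input strings below are arbitrary placeholders:

```python
import hashlib

# Bit-level Hamming distance between the SHA-512 digests of two inputs.
# For a distance-measurable encoding, near-identical inputs would yield
# a small distance; a cryptographic hash deliberately destroys this.
def bit_distance(a: bytes, b: bytes) -> int:
    ha = hashlib.sha512(a).digest()
    hb = hashlib.sha512(b).digest()
    return sum(bin(x ^ y).count("1") for x, y in zip(ha, hb))

# Near-identical inputs (one character apart), yet the 512-bit digests
# differ in a large, similarity-unrelated number of bit positions.
d = bit_distance(b"face-sample-001", b"face-sample-002")
assert 0 < d <= 512
```

By contrast, the neural-network-produced feature vectors described herein are Euclidean measurable: small input changes produce nearby vectors, which is what makes encrypted matching and searching possible.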
For example, in the experimental execution the num-epochs parameter establishes the number of iterations, which can be varied on the system (e.g., between embodiments, between examples, and between executions, among other options). The LFW dataset is taken from the known Labeled Faces in the Wild data set. "11 people" is a custom set of images, and faces94 is taken from the known faces94 source. For these examples, the epochs are the number of new images that are morphed from the original images. So if the epochs are 25, and there are 10 enrollment images, then training uses 250 images. The morphing of the images changed the lighting, angles, and the like to increase the accuracy in training.

TABLE I
(fully connected neural network model with 2 hidden layers + output sigmoid layer)
Input => [100, 50] => num_people (train for 100 people given 50 individuals to identify). Other embodiments improve over these accuracies for the UNKNOWN.

Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set | Accuracy In UNKNOWN PERSON Set
LFW dataset | 70% | 30% | 11 people | 1304 | 257 | min_images_per_person = 10, num-epochs = 25 | 98.90% | 86.40%
LFW dataset | 70% | 30% | 11 people | 2226 | 257 | min_images_per_person = 3, num-epochs = 25 | 93.90% | 87.20%
11 people | 70% | 30% | Copy 2 people from LFW | 774 | | min_images_per_person = 2, num-epochs = 25 | 100.00% | 50.00%
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25 | 99.10% | 79.40%

TABLE II
(0 hidden layers & output linear with decision f(x); decision at .5 value)
Improves accuracy for the UNKNOWN case, but other implementations achieve higher accuracy.

Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set | Accuracy In UNKNOWN PERSON Set
LFW dataset | 70% | 30% | 11 people | 1304 | 257 | min_images_per_person = 10, num-epochs = 25 | 98.80% | 91.10%
LFW dataset | 70% | 30% | 11 people | 2226 | 257 | min_images_per_person = 3, num-epochs = 25 | 96.60% | 97.70%
11 people | 70% | 30% | Copy 2 people from LFW | 774 | | min_images_per_person = 2, num-epochs = 25 | 98.70% | 50.00%
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 0.5 | 99.10% | 82.10%
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 1.0 | 98.30% | 95.70%

TABLE III
FCNN with 1 hidden layer (500 nodes) + output linear with decision

Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set | Accuracy In UNKNOWN PERSON Set
LFW dataset | 70% | 30% | 11 people | 1304 | 257 | min_images_per_person = 10, num-epochs = 25 | 99.30% | 92.20%
LFW dataset | 70% | 30% | 11 people | 2226 | 257 | min_images_per_person = 3, num-epochs = 25 | 97.50% | 97.70%
11 people | 70% | 30% | Copy 2 people from LFW | 774 | | min_images_per_person = 2, num-epochs = 25 | |
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 0.5 | 99.20% | 92.60%
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 1.0 | |

TABLE IV
FCNN 2 Hidden Layers (500, 2*num_people) + output linear, decisions f(x)

Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set | Accuracy In UNKNOWN PERSON Set
LFW dataset | 70% | 30% | 11 people | 1304 | 257 | min_images_per_person = 10, num-epochs = 25 | 98.30% | 97.70%
LFW dataset | 70% | 30% | 11 people | 2226 | 257 | min_images_per_person = 3, num-epochs = 25, Cut-off = 0 | 98.50% | 98.10%
11 people | 70% | 30% | Copy 2 people from LFW | 774 | | min_images_per_person = 2, num-epochs = 25 | |
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 0 | 98.60% | 93.80%

In various embodiments, the neural network model is generated initially to accommodate incremental additions of new individuals to identify (e.g., 2*num_people is an example of a model initially trained for 100 people given an initial 50 individuals of biometric information). The multiple, or training room provided, can be tailored to the specific implementation. For example, where additions to the identifiable users are anticipated to be small, additional incremental training options can include any number within a range of 1% to 200%.
In other embodiments, larger percentages can be implemented as well.

TABLE V
FCNN: 2 Hidden Layers (500, 2*num_people) + output linear, decisions f(x), and voting; the model is trained on 2* the number of class identifiers for incremental training.

Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set | Accuracy In UNKNOWN PERSON Set = 11 people | Accuracy In UNKNOWN PERSON Set = faces94
LFW dataset | 70% | 30% | 11 people | 1304 | 257 | min_images_per_person = 10, num-epochs = 25 | 98.20% (vote: 100.00%) | 98.80% (vote: 100.00%) | 88.40% (vote: 90.80%)
LFW dataset | 70% | 30% | 11 people | 2226 | 257 | min_images_per_person = 3, num-epochs = 25, Cut-off = 0 | 98.10% (vote: 98.60%) | 98.40% (vote: 100.00%) | 93.60% (vote: 95.40%)
11 people | 70% | 30% | Copy 2 people from LFW | 774 | | min_images_per_person = 2, num-epochs = 25 | | |
faces94 dataset | 70% | 30% | 11 people | 918 | 257 | min_images_per_person = 2, num-epochs = 25, Cut-off = 0 | | |

According to one embodiment, the system can be implemented as a REST compliant API that can be integrated and/or called by various programs, applications, systems, system components, etc., and can be requested locally or remotely. In one example, the privacy-enabled biometric API includes the following specifications:

Preparing data: this function takes the images and labels and saves them into the local directory.

    def add_training_data(list_of_images, list_of_label):
        # @params list_of_images: the list of images
        # @params list_of_label: the list of corresponding labels

Training model: each label (person/individual) can include at least 2 images. In some examples, if the person does not have the minimum, that person will be ignored.

    def train():

Prediction:

    def predict(list_of_images):
        # @params list_of_images: the list of images of the same person
        # @return label: a person name or "UNKNOWN_PERSON"

Further embodiments can be configured to handle new people (e.g., labels or classes in the model) in multiple ways.
In one example, the current model can be retrained every time a certain number (e.g., a threshold number) of new people are introduced. In this example, the benefit is improved accuracy: the system can guarantee a level of accuracy even with new people. There exists a trade-off in that full retraining is a slow, computationally heavy process. This can be mitigated with live and offline copies of the model, so the retraining occurs offline and the newly retrained model is swapped in for the live version. In one example, training executed in over 20 minutes; with more data, the training time increases. According to another example, the model is initialized with slots for new people. The expanded model is configured to support incremental training (e.g., the network structure is not changed when adding new people). In this example, the time to add new people is significantly reduced (even over other embodiments of the privacy-enabled biometric system). It is realized that there may be some reduction in accuracy with incremental training, and as more and more people are added the model can trend towards overfitting on the new people, i.e., become less accurate with old people. However, various implementations have been tested to operate at the same accuracy even under incremental retraining. Yet another embodiment implements both incremental retraining and full retraining at a threshold level (e.g., build the initial model with a multiple of the people as needed, e.g., 2 times: 100 labels for an initial 50 people, 50 labels for an initial 25 people, etc.). Once the number of people reaches (or approaches) the upper bound, the system can be configured to execute a full retrain on the model, while building in additional slots for new users. In one example, given 100 labels in the model with 50 initial people (50 unallocated), when 50 new people have been added the system will execute a full retrain for 150 labels and now 100 actual people.
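The threshold-based capacity policy in this example can be sketched as follows. The class and method names are illustrative, and the window size mirrors the 50-person example above:

```python
class ModelCapacityPolicy:
    """Track enrolled people against allocated classifier label slots.

    The classifier is built with spare label slots; new people are added
    by incremental training until the slots run out, at which point a
    full retrain rebuilds the network with a larger allocation.
    """

    def __init__(self, initial_people: int, window: int):
        self.people = initial_people
        self.window = window                  # spare slots added per rebuild
        self.slots = initial_people + window  # e.g. 50 people -> 100 labels

    def add_person(self) -> str:
        """Enroll one person; report which kind of training is needed."""
        self.people += 1
        if self.people >= self.slots:
            # Capacity reached: full retrain with a fresh block of slots,
            # e.g. 100 people -> 150 labels, as in the example above.
            self.slots = self.people + self.window
            return "full_retrain"
        return "incremental_retrain"

policy = ModelCapacityPolicy(initial_people=50, window=50)
events = [policy.add_person() for _ in range(50)]
print(events.count("incremental_retrain"))  # 49
print(events[-1])                           # full_retrain at person 100
print(policy.slots)                         # 150
```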
This provides for 50 additional users and incremental retraining before a full retrain is executed. Stated generally, the system in various embodiments is configured to retrain the whole network from the beginning for every N people. For example, with training data for 100 people: step 1, train the network with N=1000 slots, assigning 100 people and reserving 900 for incremental training; train incrementally with new people until reaching 1000 people; and upon reaching 1000 people, execute a full retrain. Full retrain: train the network with 2N=2000 slots, now reserving 1000 slots for incremental training; train incrementally with new people until reaching 2000 people; and repeat the full retrain with open allocations when the limit is reached.

An example implementation of the API includes the following code:

    drop database if exists trueid;
    create database trueid;
    grant all on trueid.* to trueid@'localhost' identified by 'trueid';
    drop table if exists feature;
    drop table if exists image;
    drop table if exists PII;
    drop table if exists subject;
    CREATE TABLE subject (
        id INT PRIMARY KEY AUTO_INCREMENT,
        when_created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE PII (
        id INT PRIMARY KEY AUTO_INCREMENT,
        subject_id INT,
        tag VARCHAR(254),
        value VARCHAR(254)
    );
    CREATE TABLE image (
        id INT PRIMARY KEY AUTO_INCREMENT,
        subject_id INT,
        image_name VARCHAR(254),
        is_train boolean,
        when_created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE feature (
        id INT PRIMARY KEY AUTO_INCREMENT,
        image_id INT NOT NULL,
        feature_order INT NOT NULL,
        feature_value DECIMAL(32,24) NOT NULL
    );
    ALTER TABLE image ADD CONSTRAINT fk_subject_id FOREIGN KEY (subject_id) REFERENCES subject(id);
    ALTER TABLE PII ADD CONSTRAINT fk_subject_id_pii FOREIGN KEY (subject_id) REFERENCES subject(id);
    ALTER TABLE feature ADD CONSTRAINT fk_image_id FOREIGN KEY (image_id) REFERENCES image(id);
    CREATE INDEX piisubjectid ON PII(subject_id);
    CREATE INDEX imagesubjectid ON image(subject_id);
    CREATE INDEX imagesubjectidimage ON image(subject_id, image_name);
    CREATE INDEX featureimage_id ON feature(image_id);

API Execution Example:
Push the known LFW feature embeddings to the biometric feature database.
Simulate the incremental training process:

    num_seed = 50    # build the model network; the first num_seed people are trained fully
    num_window = 50  # for every num_window people: build the model network, and people are trained fully
    num_step = 1     # train incrementally for every num_step new people
    num_eval = 10    # evaluate the model every num_eval people

Build the model network with #class=100. Train from the beginning (#epochs=100) with the first 50 people. The remaining 50 classes are reserved for incremental training.
i) Incremental training for the 51st person: train the previous model with all 51 people (#epochs=20).
ii) Incremental training for the 52nd person: train the previous model with all 52 people (#epochs=20).
iii) Continue . . .
(Self or automatic monitoring can be executed by various embodiments to ensure accuracy over time; alert flags can be produced if deviation or excessive inaccuracy is detected. Alternatively or in conjunction, full retraining can be executed responsive to excess inaccuracy, and the fully retrained model evaluated to determine whether accuracy issues are resolved; if so, the full retrain threshold can be automatically adjusted.)
Evaluate the accuracy of the previous model (e.g., at every 10 steps), optionally recording the training time for every step.
Achieve incremental training for the maximum allocation (e.g., the 100th person). Fully train the previous model with all 100 people (e.g., #epochs=20).
Build the model network with #class=150. Train from the beginning (e.g., #epochs=100) with the first 100 people. The remaining 50 classes are reserved for incremental training.
i) Incremental training for the 101st person: train the previous model with all 101 people (#epochs=20).
ii) Continue . . .
Build the model network with #class=200. Train from the beginning (e.g., #epochs=100) with the first 150 people.
The remaining 50 classes are reserved for incremental training.
i) Incremental training for the 151st person: train the previous model with all 151 people (#epochs=20).
ii) Continue . . .

Refactor Problem: According to various embodiments, it is realized that incremental training can trigger concurrency problems, e.g., a multi-thread problem with the same model; thus the system can be configured to avoid incremental retraining at the same time for two different people (data can be lost if retraining occurs concurrently). In one example, the system implements a lock or a semaphore to resolve this. In another example, multiple models can run simultaneously, and reconciliation can be executed between the models in stages. In further examples, the system can monitor models to ensure only one retrain is executed across multiple live models, and in yet others use locks on the models to ensure singular updates via incremental retraining. Reconciliation can be executed after an update between models. In further examples, the system can cache feature vectors for subsequent access during the reconciliation. According to some embodiments, the system design resolves a data pipeline problem: in some examples, the data pipeline supports running only one time due to queue and thread characteristics. Other embodiments avoid this issue by extracting the embeddings. In examples that do not include that functionality, the system can still run multiple times without issue by saving the embedding to file and loading the embedding from file. This approach can be used, for example, where the extracted embedding is unavailable via other approaches. Various embodiments can employ different options for operating with embeddings. When providing a value to a TensorFlow graph, there are several mechanisms: feed_dict (a speed trade-off for easier access); and a queue (faster via multi-threads, but can only run one time, as the queue is ended after it is looped).
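The lock-based safeguard described above can be sketched with a standard-library mutex. The model structure and enrollment function here are illustrative stand-ins, not the reference implementation:

```python
import threading

_train_lock = threading.Lock()

def train_incrementally(model, new_person, features):
    """Serialize incremental retraining so two enrollments cannot
    update the same model concurrently (which could lose data)."""
    with _train_lock:
        # Only one thread at a time reaches this critical section.
        model.setdefault("labels", []).append(new_person)
        model.setdefault("features", {})[new_person] = features
    return model

model = {}
threads = [
    threading.Thread(target=train_incrementally,
                     args=(model, f"person-{i}", [0.0, 1.0]))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(model["labels"]))  # 8 -- no enrollments lost
```

A semaphore would serve equally well where a bounded number of concurrent retrains is acceptable; the single lock shown here models the strictest policy mentioned above.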
Tables VI and VII (below) show execution timing during operation and accuracy percentages for the respective example.

TABLE VI
# | STEP | ACTION | INFO | TIME | ACCURACY
2 | 50 | Retrieving feature embedding | | 100.939024 |
3 | 50 | Training Deep Learning classifier | | 54.34578061 |
4 | 51 | Retrieving feature embedding | | 104.8042319 |
5 | 51 | Training incrementally Deep Learning classifier | | 9.755134106 |
6 | 52 | Retrieving feature embedding | | 105.692045 |
7 | 52 | Training incrementally Deep Learning classifier | | 9.367767096 |
8 | 53 | Retrieving feature embedding | | 95.68940234 |
9 | 53 | Training incrementally Deep Learning classifier | | 9.38846755 |
10 | 54 | Retrieving feature embedding | | 108.8445647 |
11 | 54 | Training incrementally Deep Learning classifier | | 9.668224573 |
12 | 55 | Retrieving feature embedding | | 108.7391896 |
13 | 55 | Training incrementally Deep Learning classifier | | 10.2577827 |
14 | 56 | Retrieving feature embedding | | 107.1305535 |
15 | 56 | Training incrementally Deep Learning classifier | | 9.660038471 |
16 | 57 | Retrieving feature embedding | | 111.1128619 |
17 | 57 | Training incrementally Deep Learning classifier | | 9.824867487 |
18 | 58 | Retrieving feature embedding | | 109.780278 |
19 | 58 | Training incrementally Deep Learning classifier | | 10.25701618 |
20 | 59 | Retrieving feature embedding | | 114.9919829 |
21 | 59 | Training incrementally Deep Learning classifier | | 9.752382278 |
22 | 60 | Retrieving feature embedding | | 114.3731036 |
23 | 60 | Training incrementally Deep Learning classifier | | 10.15184236 |
24 | 60 | Accuracy | #test_images = 533 | | 0.988743
25 | 60 | Vote Accuracy | #test_images = 533 | | 1
26 | 61 | Retrieving feature embedding | | 118.237993 |
27 | 61 | Training incrementally Deep Learning classifier | | 10.0895071 |
28 | 62 | Retrieving feature embedding | | 120.2519257 |
29 | 62 | Training incrementally Deep Learning classifier | | 10.69825125 |
30 | 63 | Retrieving feature embedding | | 119.3803787 |
31 | 63 | Training incrementally Deep Learning classifier | | 10.66580486 |
32 | 64 | Retrieving feature embedding | | 138.031605 |
33 | 64 | Training incrementally Deep Learning classifier | | 12.32183456 |
34 | 65 | Retrieving feature embedding | | 133.2701755 |
35 | 65 | Training incrementally Deep Learning classifier | | 12.35964537 |
36 | 66 | Retrieving feature embedding | | 136.8798289 |
37 | 66 | Training incrementally Deep Learning classifier | | 12.07544327 |
38 | 67 | Retrieving feature embedding | | 140.3868775 |
39 | 67 | Training incrementally Deep Learning classifier | | 12.54206896 |
40 | 68 | Retrieving feature embedding | | 140.855052 |
41 | 68 | Training incrementally Deep Learning classifier | | 12.59552693 |
42 | 69 | Retrieving feature embedding | | 140.2500689 |
43 | 69 | Training incrementally Deep Learning classifier | | 12.55604577 |
44 | 70 | Retrieving feature embedding | | 144.5612676 |
45 | 70 | Training incrementally Deep Learning classifier | | 12.95398426 |
46 | 70 | Accuracy | #test_images = 673 | | 0.9925706
47 | 70 | Vote Accuracy | #test_images = 673 | | 1
48 | 71 | Retrieving feature embedding | | 145.2458987 |
49 | 71 | Training incrementally Deep Learning classifier | | 13.09419131 |

TABLE VII
# | STEP | ACTION | INFO | TIME | ACCURACY
67 | 80 | Training incrementally Deep Learning classifier | | 14.24880123 |
68 | 80 | Accuracy | #test_images = 724 | | 0.9903315
69 | 80 | Vote Accuracy | #test_images = 724 | | 1
70 | 81 | Retrieving feature embedding | | 153.8295755 |
71 | 81 | Training incrementally Deep Learning classifier | | 14.72389603 |
72 | 82 | Retrieving feature embedding | | 157.9210677 |
73 | 82 | Training incrementally Deep Learning classifier | | 14.57672453 |
74 | 83 | Retrieving feature embedding | | 164.8383744 |
75 | 83 | Training incrementally Deep Learning classifier | | 21.83570766 |
76 | 84 | Retrieving feature embedding | | 161.2950387 |
77 | 84 | Training incrementally Deep Learning classifier | | 14.25801277 |
78 | 85 | Retrieving feature embedding | | 155.9785285 |
79 | 85 | Training incrementally Deep Learning classifier | | 14.45170879 |
80 | 86 | Retrieving feature embedding | | 160.9079704 |
81 | 86 | Training incrementally Deep Learning classifier | | 14.81818509 |
82 | 87 | Retrieving feature embedding | | 164.5734673 |
83 | 87 | Training incrementally Deep Learning classifier | | 18.26664591 |
84 | 88 | Retrieving feature embedding | | 169.8400548 |
85 | 88 | Training incrementally Deep Learning classifier | | 15.75074983 |
86 | 89 | Retrieving feature embedding | | 169.2413263 |
87 | 89 | Training incrementally Deep Learning classifier | | 15.93148685 |
88 | 90 | Retrieving feature embedding | | 172.5191889 |
89 | 90 | Training incrementally Deep Learning classifier | | 15.88449383 |
90 | 90 | Accuracy | #test_images = 822 | | 0.986618
91 | 90 | Vote Accuracy | #test_images = 822 | | 0.9963504
92 | 91 | Retrieving feature embedding | | 170.162873 |
93 | 91 | Training incrementally Deep Learning classifier | | 15.72525668 |
94 | 92 | Retrieving feature embedding | | 174.9947026 |
95 | 92 | Training incrementally Deep Learning classifier | | 15.791049 |
96 | 93 | Retrieving feature embedding | | 175.3449857 |
97 | 93 | Training incrementally Deep Learning classifier | | 15.8756597 |
98 | 94 | Retrieving feature embedding | | 177.0825081 |
99 | 94 | Training incrementally Deep Learning classifier | | 15.72812366 |
100 | 95 | Retrieving feature embedding | | 178.8846812 |
101 | 95 | Training incrementally Deep Learning classifier | | 16.04615927 |
102 | 96 | Retrieving feature embedding | | 171.2114341 |
103 | 96 | Training incrementally Deep Learning classifier | | 16.32442522 |
104 | 97 | Retrieving feature embedding | | 177.8708515 |
105 | 97 | Training incrementally Deep Learning classifier | | 15.90093112 |
106 | 98 | Retrieving feature embedding | | 177.5916936 |
107 | 98 | Training incrementally Deep Learning classifier | | 16.57834721 |
108 | 99 | Retrieving feature embedding | | 185.1854212 |
109 | 99 | Training incrementally Deep Learning classifier | | 16.64935994 |
110 | 100 | Retrieving feature embedding | | 179.5375969 |
111 | 100 | Training incrementally Deep Learning classifier | | 17.24395561 |
112 | 100 | Accuracy | #test_images = 875 | | 0.9897143
113 | 100 | Vote Accuracy | #test_images = 875 | | 1
114 | 100 | Retrieving feature embedding | | 184.8017459 |

TABLE VIII shows summary information for additional executions.

TABLE VIII
Dataset | Training Set | Test Set | UNKNOWN PERSON Set | #people In Training Set | #images In Test Set | #images In UNKNOWN PERSON Set | Parameters | Accuracy In Test Set
LFW dataset | 70% | 30% | 11 people | 158 | 1304 | 257 | min_images_per_person = 10, num-epochs = 25, Cut-off = 0 | 98.70% (vote: 100.00%)
LFW dataset | 70% | 30% | 11 people | 901 | 2226 | 257 | min_images_per_person = 3, num-epochs = 25, Cut-off = 0 | 93.80% (vote: 95.42%)

According to one embodiment, the system can be described broadly to include any one or more, or any combination, of the following elements and associated functions:
Preprocessing: the system takes in an unprocessed biometric, which can include cropping and aligning, and either continues processing or returns that the biometric cannot be processed.
Neural network 1: pre-trained. Takes in unencrypted biometrics and returns biometric feature vectors that are one-way encrypted and distance and/or Euclidean measurable. Regardless of the biometric type being processed, NN1 generates Euclidean measurable encrypted feature vectors.
Distance evaluation of NN1 output for a phase of authentication and/or to filter the output of NN1: as discussed above, a first phase of authentication can use encrypted feature vectors to determine a distance and authenticate or not based on being within a threshold distance. Similarly, during enrollment the generated feature vectors can be evaluated to ensure they are within a threshold distance, and otherwise require new biometric samples.
Neural network 2: not pre-trained. A deep learning neural network that does classification. Includes incremental training; takes a set of (label, feature vector) pairs as input and returns nothing during training, the trained network being used for matching or prediction on newly input biometric information. Does prediction, which takes a feature vector as input and returns an array of values; these values, based on their position and the value itself, determine the label or unknown.
Voting functions can be executed with neural network 2, e.g., during prediction.
The system may have more than one neural network 1 for different biometrics; each would generate Euclidean measurable encrypted feature vectors based on unencrypted input.
The system may have multiple neural network 2 instances, one for each biometric type.
Modifications and variations of the discussed embodiments will be apparent to those of ordinary skill in the art, and all such modifications and variations are included within the scope of the appended claims. An illustrative implementation of a computer system 800 that may be used in connection with any of the embodiments of the disclosure provided herein is shown in FIG. 8.
The computer system800may include one or more processors810and one or more articles of manufacture that comprise non-transitory computer-readable storage media (e.g., memory820and one or more non-volatile storage media830). The processor810may control writing data to and reading data from the memory820and the non-volatile storage device830in any suitable manner. To perform any of the functionality described herein, the processor810may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory820), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor810. The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein. Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. Also, data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form. 
For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements. Also, various inventive concepts may be embodied as one or more processes, of which examples (e.g., the processes described with reference to FIGS. 1, 2A-2B, 9, 10, etc.) have been provided. The acts performed as part of each process may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. All definitions, as defined and used herein, should be understood to control over dictionary definitions, and/or ordinary meanings of the defined terms. As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc. The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. 
Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing", "involving", and variations thereof, is meant to encompass the items listed thereafter and additional items. Having described several embodiments of the techniques described herein in detail, various modifications, and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The techniques are limited only as defined by the following claims and the equivalents thereto.
11943365

DETAILED DESCRIPTION

For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language may be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is intended. Any alterations and further modifications to the described devices, instruments, methods, and any further application of the principles of the present disclosure are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations may not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts. FIG. 1 is an illustration of an example system100associated with a secure cross-device authentication system, according to various aspects of the present disclosure. The system100includes one or more user devices102, a security infrastructure110, and a network service provider116capable of communicating with each other over a network120. In some aspects, a user device102and the network service provider116may communicate with one another for purposes of obtaining and/or providing network services. The network services may include any service provided over a network (e.g., Internet) such as, for example, electronic mail services, social media services, messaging services, virtual private network (VPN) services, data storage and protection services, financial services, e-commerce services, or a combination thereof.
In some aspects, the user device102and the security infrastructure110may communicate with one another for purposes of obtaining and/or providing cyber security services. The cyber security services may include, for example, an authentication service during which the security infrastructure110enables secure authentication of the user device102with the network service provider116. In some aspects, the security infrastructure110and the network service provider116may communicate with each other for purposes of authenticating the user device102with the network service provider116. A user device102, from among the one or more user devices102, may include and/or be associated with a security application104, a biometric unit106, and a trusted platform module (TPM) device108communicatively coupled to an associated processor (e.g., processor620) and/or memory (e.g., memory630). In some aspects, the associated processor and/or memory may be local to the user device102. In some aspects, the associated processor and/or memory may be located remotely with respect to the user device102. The user device102may be a physical computing device capable of hosting the security application104and of connecting to the network120. The user device102may be, for example, a laptop, a mobile phone, a tablet computer, a desktop computer, a smart device, a router, or the like. In some aspects, the user device102may include, for example, Internet-of-Things (IoT) devices such as VSP smart home appliances, smart home security systems, autonomous vehicles, smart health monitors, smart factory equipment, wireless inventory trackers, biometric cyber security scanners, or the like. The user device102may include and/or may be associated with a communication interface to communicate (e.g., receive and/or transmit) data. The biometric unit106may enable identification, authentication, and/or access control. 
In some aspects, the biometric unit106may include a biometric sensor for sensing and/or capturing biometric information associated with a user. Such biometric information may include, for example, fingerprint, palm print, finger shape, palm shape, voice, retina, iris, face image, sound, dynamic signature, blood vessel pattern, keystroke, or a combination thereof. The biometric unit106may utilize the associated processor to correlate the captured biometric information with user information, and to store a correlation of the biometric information with the user information in the associated memory. Further, the biometric unit106may enable comparison of received biometric information with stored biometric information to verify and/or authenticate that the received biometric information is associated with the user information (e.g., belongs to the user). The TPM device108may include a dedicated controller utilizing integrated cryptographic keys (e.g., master keys) and/or cryptographic algorithms to operate as a secure crypto processor. The TPM device108may carry out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. In some aspects, the TPM device108may refrain from communicating the cryptographic keys (e.g., master keys, etc.) and/or the cryptographic algorithms externally (e.g., external to the TPM device108). The security infrastructure110may include the processing unit112and the database114. The processing unit112may include a logical component configured to perform complex operations to evaluate various factors associated with providing the cyber security services. The database114may store various pieces of information associated with providing the cyber security services, including security algorithms, encrypted content, and/or encryption/decryption key information. 
The security infrastructure110may include or be associated with a communication interface (e.g., communication interface670) to communicate (e.g., transmit and/or receive) data. The security infrastructure110may configure and provide the security application104for installation to enable the user device102to communicate with an application programming interface (API) (not shown) included in the security infrastructure110and/or for obtaining the cyber security services. As discussed below in further detail with respect toFIG.2, the security application104may be configured to enable utilization of the biometric unit106and/or the TPM device108by (an operating system of) the user device102to enable secure authentication of the user device102or another user device102with the network service provider116. Further, the security application104and/or the security infrastructure110may utilize one or more encryption and decryption algorithms to encrypt and decrypt the data. The encryption algorithms and decryption algorithms may employ standards such as, for example, data encryption standards (DES), advanced encryption standards (AES), Rivest-Shamir-Adleman (RSA) encryption standard, Open PGP standards, file encryption standards, disk encryption standards, email encryption standards, etc. Some examples of encryption algorithms include a triple data encryption standard (DES) algorithm, a Rivest-Shamir-Adleman (RSA) encryption algorithm, advanced encryption standards (AES) algorithms, Twofish encryption algorithms, Blowfish encryption algorithms, IDEA encryption algorithms, MD5 encryption algorithms, HMAC encryption algorithms, etc. The network service provider116may own and operate an infrastructure associated with providing the network services. To access the network services, the network service provider116may enable the user device102to set up an authentication system.
Upon communication of credentials by the user device102, the network service provider116may authenticate the credentials and provide the user device102with access to the network services when the credentials are successfully authenticated. The network120may be a wired or wireless network. In some aspects, the network120may include one or more of, for example, a phone line, a local-area network (LAN), a wide-area network (WAN), a metropolitan-area network (MAN), a home-area network (HAN), Internet, Intranet, Extranet, and Internetwork. In some aspects, the network120may include a digital telecommunication network that permits several nodes to share and access resources. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. Multiple user devices may be associated with an account registered with a network service provider to receive network services. The network services may include a service provided over a network (e.g., Internet) such as, for example, electronic mail services, social media services, messaging services, virtual private network (VPN) services, data storage and protection services, or a combination thereof. To gain access to the network services, an authentication system may be set up with the network service provider. Traditionally, the authentication system may include a single-factor authentication system or a multi-factor authentication system. In the single-factor authentication system, the user device may communicate a first factor such as, for example, a username and/or a password for authentication by the network service provider. Upon successful authentication of the first factor, the network service provider may provide the user device with the network services.
In multi-factor authentication, upon successful authentication of the first factor, the user device may determine and communicate a second factor (e.g., pin, token, alphanumeric string, or a combination thereof) for further authentication by the network service provider. The second factor may be different and/or independent from the first factor. Based at least in part on successful authentication of the second factor, the network service provider may provide the user device with access to the network services. Security associated with the multi-factor authentication system may be enhanced by associating verification of biometric information with determining of the first factor and the second factor. In an example, during authentication, a user device may receive biometric information and verify that the received biometric information belongs to an authorized user. Based at least in part on successful verification of the received biometric information, the user device may determine and communicate the first factor for authentication. Similarly, based at least in part on successful authentication of the first factor, the user device may again receive biometric information and verify that the received biometric information belongs to the authorized user. Based at least in part on successful verification of the again received biometric information, the user device may determine and communicate the second factor for authentication. Based at least in part on successful authentication of the second factor, the network service provider may provide the user device with access to the network services. In some cases, a given user device, from among the multiple user devices, associated with the registered account may be unable to receive and/or verify biometric information during authentication. In an example, the given user device may not be equipped with a biometric unit. 
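The single-factor and multi-factor sequence described above can be illustrated with a provider-side sketch. The account data, factor values, and function names are hypothetical: the point is only that the second factor is checked after the first factor authenticates, and access is granted only when both succeed.

```python
import hashlib
import hmac

# Toy provider-side records; all values are illustrative only.
ACCOUNTS = {
    "alice": {
        "password_hash": hashlib.sha256(b"correct horse").hexdigest(),
        "second_factor": "482913",
    }
}

def authenticate_first_factor(username: str, password: str) -> bool:
    # First factor: username and password.
    acct = ACCOUNTS.get(username)
    return acct is not None and hmac.compare_digest(
        acct["password_hash"], hashlib.sha256(password.encode()).hexdigest())

def authenticate_second_factor(username: str, code: str) -> bool:
    # Second factor: a pin/token independent of the first factor.
    acct = ACCOUNTS.get(username)
    return acct is not None and hmac.compare_digest(acct["second_factor"], code)

def request_service(username: str, password: str, code: str) -> str:
    # Access is provided only upon successful authentication of both factors.
    if not authenticate_first_factor(username, password):
        return "first factor rejected"
    if not authenticate_second_factor(username, code):
        return "second factor rejected"
    return "service granted"

print(request_service("alice", "correct horse", "482913"))  # service granted
```

Using `hmac.compare_digest` for both comparisons avoids timing side channels; a production system would also hash passwords with a salted, slow KDF rather than plain SHA-256.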
In another example, a biometric unit included in the given user device may malfunction during authentication. In such cases, the given user device may be unable to receive the network services. This may result in the given user device and/or the network service provider inefficiently utilizing resources (e.g., management resources, network resources, financial resources, time resources, processing resources, memory resources, power consumption resources, battery life, or the like) to attempt to obtain the network services. Additionally, a separate user device (instead of the given user device) that is capable of receiving and/or verifying biometric information may have to be utilized to receive the network services, which may be inconvenient and/or may delay receipt of the network services. Various aspects of systems and techniques discussed in the present disclosure provide a secure cross-device authentication system. The secure cross-device authentication system may include a security infrastructure and/or respective security applications installed on multiple user devices associated with an account registered with a network service provider, which may provide network services to the multiple user devices. In some aspects, the security infrastructure may provide the multiple user devices with the respective security applications. A respective security application may be configured to enable utilization of a respective local biometric unit and/or a respective local TPM device to enable secure authentication of the local user device (e.g., the device on which the respective application is installed) with the network service provider to enable the local user device to receive the network services. 
Additionally, a respective security application may be configured to enable utilization of a local biometric unit and/or a TPM device to enable secure authentication of another user device with the network service provider to enable the other user device to receive the network services. In this way, even when the other user device is not equipped with a biometric unit or experiences a malfunctioning biometric unit during authentication, and the other user device is unable to receive and/or verify biometric information, the other user device may be securely authenticated with the network service provider. As a result, the respective security applications may enable efficient utilization of resources (e.g., management resources, network resources, financial resources, time resources, processing resources, memory resources, power consumption resources, battery life, or the like) by the user devices and/or the network service provider. Additionally, the respective security applications may avoid having to utilize a separate user device to receive the network services, thereby reducing an inconvenience associated with receiving the network services and/or avoiding a delay in receiving the network services. 
In some aspects, a processor executing the security application may determine unavailability of a first biometric unit associated with the first user device for verification of first biometric information; select, based at least in part on determining unavailability of the first biometric unit, a second biometric unit associated with a second user device for verification of second biometric information; receive, from the second user device based at least in part on a first verification of the second biometric information, a first factor associated with authentication of the first user device by a service provider; receive, from the second user device based at least in part on successful authentication of the first factor and on a second verification of the second biometric information, a second factor associated with authentication of the first user device by the service provider; and receive, from the service provider, a service based at least in part on successful authentication of the second factor. FIG.2is an illustration of an example flow200associated with a secure cross-device authentication system, according to various aspects of the present disclosure. The example flow200may include a first user device (e.g., user device102) and a second user device (e.g., user device102) obtaining cyber security services from a security infrastructure (e.g., security infrastructure110). The first user device may include a first security application (e.g., a first instance of security application104) and a first TPM device (e.g., TPM device108). In some aspects, the first user device may experience a malfunctioning first biometric unit (e.g., biometric unit106) during authentication. As a result, the first user device may be unable to receive and/or verify biometric information when the first user device is to receive network services from a network service provider (e.g., network service provider116). 
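The cross-device flow enumerated above can be sketched end to end as follows. Every component here (the peer device, the provider, the factor values) is a hypothetical stub: the first device determines that its own biometric unit is unavailable, selects the second device's unit instead, and receives each factor only after the second device verifies biometric information.

```python
class PeerDevice:
    """Hypothetical second user device with a working biometric unit."""
    def __init__(self, biometric_ok: bool = True):
        self.biometric_ok = biometric_ok

    def release_factor(self, factor):
        # The peer releases a factor only after verifying fresh biometric
        # information in real time (stubbed here by biometric_ok).
        return factor if self.biometric_ok else None

class Provider:
    """Hypothetical network service provider expecting two factors."""
    def check_first(self, f):  return f == ("alice", "password")
    def check_second(self, s): return s == "123456"
    def service(self):         return "email service"

class SecurityApplication:
    """Sketch of the enumerated steps; every collaborator is a stub."""
    def __init__(self, local_biometric_available: bool, peer: PeerDevice):
        self.local_biometric_available = local_biometric_available
        self.peer = peer

    def obtain_service(self, provider: Provider):
        # Step 1: determine unavailability of the first biometric unit.
        if self.local_biometric_available:
            return None  # local verification path not sketched here
        # Step 2: select the second device's biometric unit instead.
        # Step 3: receive the first factor after the peer's first verification.
        first = self.peer.release_factor(("alice", "password"))
        if first is None or not provider.check_first(first):
            return None
        # Step 4: receive the second factor after a second verification.
        second = self.peer.release_factor("123456")
        if second is None or not provider.check_second(second):
            return None
        # Step 5: receive the service from the provider.
        return provider.service()

app = SecurityApplication(local_biometric_available=False, peer=PeerDevice())
print(app.obtain_service(Provider()))  # email service
```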
The second user device may include a second security application (e.g., a second instance of security application104), a second biometric unit106(not shown), and a second TPM device108. As discussed below in further detail, the security infrastructure110may configure and provide the security applications to the first and second user devices for purposes of authenticating the first user device with the network service provider. Although only two user devices are shown inFIG.2, the present disclosure contemplates any number of user devices being included in the secure cross-device authentication system. The security applications (e.g., the first security application and the second security application) may respectively enable the user devices (e.g., the first user device and the second user device) to receive information to be processed by the security applications and/or by the security infrastructure110. Each security application may include a graphical interface to receive the information via a local input interface (e.g., touch screen, keyboard, mouse, pointer, camera, etc.) associated with each user device. The information may be received via text input or via a selection from among a plurality of options (e.g., pull down menu, etc.). In some aspects, each security application may be configured to activate and/or enable, at a time associated with receiving the information, the graphical interface to receive the information. In an example, the first security application may cause a screen (e.g., local screen) associated with the first user device to display, for example, a pop-up message to request entry of the information. Each security application may also enable transmission of at least a portion of the received information to the security infrastructure110. 
As shown by reference numeral205, the first security application may associate verification of biometric information with operation of the first TPM device and the second security application may associate verification of biometric information with operation of the second TPM device. With respect to the first user device, the first security application may determine availability of the first biometric unit and of the first TPM device. To determine availability of the first biometric unit and of the first TPM device, the first security application may request and receive, from an operating system being utilized by the first user device, information indicating that the first biometric unit and the first TPM device are associated with the operating system. In some aspects, the first security application may determine that the first user device is not equipped with the first biometric unit or that the first biometric unit has malfunctioned. In this case, the first security application may associate verification of biometric information, obtained from another device (e.g., second user device) associated with the registered account, with operation of the first TPM device. The first security application may associate verification of the biometric information with operation of the first TPM device such that a request for the first TPM device to, for example, sign data, encrypt data, and/or decrypt data is to indicate or be accompanied by a result of successful verification of biometric information. In some aspects, the first security application may associate verification of the biometric information with operation of the first TPM device such that the first security application is to transmit a request for the first TPM device to, for example, sign data, encrypt data, and/or decrypt data, based at least in part on real-time verification of biometric information by the other user device. 
In other words, the first security application is to transmit the request for the first TPM device to, for example, sign data, encrypt data, and/or decrypt data based at least in part on verification of biometric information by the other user device at a time associated with the first security application transmitting the request. With respect to the second user device, the second security application may determine availability of the second biometric unit and of the second TPM device. To determine availability of the second biometric unit and of the second TPM device, the second security application may request and receive, from an operating system being utilized by the second user device, information indicating that the second biometric unit and the second TPM device are associated with the operating system. Based at least in part on determining availability of the second biometric unit and of the second TPM device, the second security application may utilize the operating system to associate real-time verification of biometric information with operation of the second TPM device. For instance, the second security application may associate verification of biometric information with, for example, signing of data by the second TPM device, encrypting of data by the second TPM device, and/or decrypting of data by the second TPM device. The associating of verification of biometric information with operation of the second TPM device may be such that a request for the second TPM device to, for example, sign data, encrypt data, and/or decrypt data is to indicate or be accompanied by a result of successful verification of biometric information. 
In some aspects, the second security application may associate verification of the biometric information with operation of the second TPM device such that the second security application is to transmit a request for the second TPM device to, for example, sign data, encrypt data, and/or decrypt data, based at least in part on real-time verification of biometric information. In other words, the second security application is to transmit the request for the second TPM device to, for example, sign data, encrypt data, and/or decrypt data based at least in part on verification of biometric information at a time associated with the second security application transmitting the request. To associate verification of biometric information with operation of the second TPM device, the second security application may, for example, display a pop-up message on a screen associated with the second user device to request biometric information from an authorized user of the second user device. Further, the second security application may enable (e.g., cause) the operating system to activate the second biometric unit to sense the biometric information. The second security application may receive and store, in an associated memory, the biometric information that belongs to the authorized user as authentic biometric information. In some aspects, the authorized user associated with the second user device may be the authorized user associated with the first user device. In some aspects, the authorized user associated with the second user device may be different from the authorized user associated with the first user device. When the second security application is to transmit a request for the second TPM device to, for example, sign data, encrypt data, and/or decrypt data, the second security application may verify biometric information in real time.
In an example, to verify the biometric information, the second security application may enable (e.g., cause) the operating system to activate the second biometric unit to receive biometric information in real time (e.g., at a time associated with transmitting the request). Further, the second security application may compare the received biometric information with the authentic biometric information stored in the associated memory. When the received biometric information matches (e.g., is the same as) the stored authentic biometric information, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the request for the second TPM device to, for example, sign data, encrypt data, and/or decrypt data. In some aspects, the request may indicate or include a result of the received biometric information matching the authentic biometric information (e.g., successful authentication) to the second TPM device. Alternatively, when the received biometric information fails to match (e.g., is different from) the stored authentic biometric information, the second security application may determine that the received biometric information does not belong to the authorized user and may select to refrain from transmitting the request for the TPM device to, for example, sign data, encrypt data, and/or decrypt data. The first user device and the second user device may receive network services from the network service provider. The network services may include a service provided over a network (e.g., Internet) such as, for example, electronic mail services, social media services, messaging services, virtual private network (VPN) services, data storage and protection services, or a combination thereof. To gain access to the network services, the first user device and/or the second user device may register an account with the network service provider. 
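The real-time gate described above (transmit the TPM request when the fresh capture matches the stored authentic biometric information, refrain when it does not) can be sketched as follows; the `GatedSigner` name, the stubbed TPM callable, and all values are illustrative assumptions.

```python
import hashlib

class GatedSigner:
    """Sketch of gating TPM requests on real-time biometric verification."""

    def __init__(self, authentic_template: bytes):
        # Digest of the enrolled (authentic) biometric information.
        self._authentic = hashlib.sha256(authentic_template).digest()

    def request_signature(self, tpm_sign, identifier, data, fresh_capture: bytes):
        # Verify the fresh capture in real time, at the moment of the request.
        if hashlib.sha256(fresh_capture).digest() != self._authentic:
            return None  # refrain from transmitting the request
        # Transmit the request with the key identifier, the data, and the
        # result of successful verification.
        return tpm_sign(identifier, data, biometric_verified=True)

signer = GatedSigner(b"enrolled-scan")
stub = lambda ident, data, biometric_verified: (ident, data, biometric_verified)
print(signer.request_signature(stub, "key-1", b"payload", b"enrolled-scan"))
print(signer.request_signature(stub, "key-1", b"payload", b"wrong-finger"))  # None
```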
In some aspects, the first user device and the second user device may be associated with the account registered with the network service provider. Further, to receive the network services, the first user device and/or the second user device may set up an authentication system with the network service provider. The authentication system may include a multi-factor authentication system. When the first user device and/or the second user device is to receive the network services, the first user device and/or the second user device may utilize a web browser and/or a network service provider (NSP) application to determine and communicate a first factor such as, for example, a username and/or a password for authentication by the network service provider116. Based at least in part on successful authentication of the first factor, the first user device and/or the second user device may determine and communicate a second factor (e.g., pin, token, alphanumeric string, or a combination thereof) for further authentication by the network service provider. In some aspects, the second factor may be variable, varying based at least in part on a time reference (e.g., Unix time) and/or may be valid for a predetermined duration of time. In this case, to determine the second factor, the first user device and/or the second user device may utilize a security algorithm along with secret information (e.g., seed information, QR code, or a combination thereof) provided by the network service provider in association with the registered account. The second factor may be different and/or independent from the first factor. The first user device and/or the second user device may communicate the second factor to the network service provider for authentication within the predetermined duration of time. Based at least in part on successful authentication of the second factor, the network service provider may provide the first user device and/or the second user device with the network services.
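A second factor that varies with a time reference and is valid for a predetermined duration, derived from provider-supplied secret information, behaves like a time-based one-time password. A sketch in the style of RFC 6238 (the seed stands in for the secret information shared at registration; the step and digit count are illustrative parameters, not values from this document):

```python
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, unix_time=None, step: int = 30, digits: int = 6) -> str:
    """Illustrative time-based second factor (RFC 6238 style).

    The code varies with the time reference (Unix time) and is valid
    for one `step`-second window.
    """
    t = int((unix_time if unix_time is not None else time.time()) // step)
    msg = struct.pack(">Q", t)                    # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

seed = b"12345678901234567890"
print(totp(seed, unix_time=59))  # 287082 (RFC 4226/6238 test-vector seed)
```

The provider runs the same computation with the same seed; a submitted code authenticates only while the corresponding time window (and typically a small tolerance around it) remains valid.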
As shown by reference numeral210, the first user device may transmit first registration information to the security infrastructure110and the second user device may transmit second registration information to the security infrastructure110. With respect to the first user device, the first security application may determine the first registration information based at least in part on requesting and receiving entry of the first registration information and/or via requesting and receiving access to the account registered with the network service provider. The first registration information may include, for example, metadata associated with the account registered with the network service provider (e.g., account number) and/or data associated with the network services to be received from the network service provider. In some aspects, the metadata may include information regarding the network service provider such as, for example, communication information (e.g., domain information, IP address, or the like) associated with communicating with the network service provider, subscription information associated with the network services to be received, or the like. The first registration information may also include first identification information associated with identifying the first user device with the security infrastructure110. In an example, the first identification information may include a unique first installation identifier associated with installing the first security application on the first user device. The first security application may determine the first installation identifier based at least in part on information associated with the first user device installing a present (e.g., existing) instance of the first security application. In some aspects, the security infrastructure110may provide the first installation identifier to the first user device in association with the first user device installing the first security application.
In some aspects, the security infrastructure110may store the first installation identifier in the first security application. The first identification information may also include a first master public key associated with the first TPM device included in the first user device. In some aspects, the first security application may request the first TPM device to determine a first master key. In some aspects, the first master key may be associated with signing, encrypting, and/or decrypting of data by the first TPM device. The first master key may include an asymmetric master key pair including the first master public key and a first master private key. In some aspects, only the first TPM device may have access to the first master private key (e.g., the first TPM device may keep the first master key confidential). Based at least in part on determining the first master key, the first TPM device may return a unique first identifier associated with (e.g., that identifies) the first master private key to the first security application. In some aspects, the first master key and/or the first identifier may be specific to (e.g., may be utilized by) the first user device. Based at least in part on receiving the first registration information, the security infrastructure110may optionally confirm possession of the first master private key by the first user device. To do so, the security infrastructure110may conduct a first challenge-response procedure with the first user device. In an example, the security infrastructure110may determine first validation data to be utilized during the first challenge-response procedure. The first validation data may include, for example, an alphanumeric string, a one-time password, or a combination thereof. The alphanumeric string and/or the one-time password may include random and unbiased characters. 
The security infrastructure110may challenge the first user device to sign the first validation data by transmitting the first validation data to the first security application. The first security application may transmit a first signature request to the first TPM device to sign the first validation data. Prior to transmitting the first signature request, the first security application may request the second user device to verify biometric information in real-time. The first signature request may include the first identifier in association with the first validation data received from the security infrastructure110, and may indicate a result of successful verification of biometric information by the second user device. Based at least in part on the first signature request including the first identifier in association with the first validation data, the first security application may indicate to the first TPM device that the first master private key, associated with the first identifier, is to be utilized to sign the first validation data. In other words, based at least in part on transmitting the first identifier in association with the first validation data, the first security application may enable the first TPM device to utilize the first master private key, identified by the first identifier, to sign the first validation data. Based at least in part on receiving the first signature request, the first TPM device may sign the first validation data. In an example, the first TPM device may determine successful verification of biometric information. Further, the first TPM device may determine that the first validation data is to be signed using the first master private key based at least in part on the first validation data being received in association with the first identifier, as indicated by the first signature request. As a result, the first TPM device may utilize the first master private key to sign the first validation data. 
In some aspects, the first TPM device may utilize a hash function (e.g., SHA-1, MD5, etc.) to hash characters included in the first validation data and may encrypt the hashed characters with the first master private key. The first TPM device may provide the signed validation data to the first security application. In turn, the first security application may respond to the challenge by transmitting the signed first validation data to the security infrastructure110, which may utilize the first master public key to validate the signed first validation data. In an example, the security infrastructure110may utilize the association between the first master public key and the first master private key to validate the signed first validation data. For instance, the security infrastructure110may calculate a hash of the characters included in the first validation data. Further, the security infrastructure110may attempt to decrypt the signed first validation data with the first master public key to receive the hashed characters included in the signed first validation data. The security infrastructure110may compare the calculated hash with the hashed characters included in the signed first validation data. When the result of the comparison indicates that the calculated hash matches (e.g., is the same as) the hashed characters included in the signed first validation data, the security infrastructure110may determine that the first user device, to which the security infrastructure110had transmitted the first validation data, has signed the first validation data by utilizing the first master private key. In other words, the security infrastructure110may determine that the first user device has adequately responded to the challenge. In this case, the security infrastructure110may determine that the first user device is in possession of the first master private key.
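The challenge-response procedure above (hash the validation data, encrypt the hash with the private key, then validate with the public key by comparing hashes) can be sketched with deliberately insecure textbook-RSA parameters. The `ToyTPM`, its handle scheme, and all values are illustrative stand-ins for a real TPM; in this sketch every handle maps to the single toy key pair.

```python
import hashlib
import secrets

# Textbook-RSA stand-in for the master key (insecure toy parameters).
P, Q = 61, 53
N = P * Q       # public modulus
E = 17          # public exponent (master public key is (N, E))
D = 2753        # private exponent (master private key, kept TPM-internal)

class ToyTPM:
    """Keeps the private exponent internal; exposes only a key handle."""

    def __init__(self):
        self._keys = {}

    def create_master_key(self):
        identifier = secrets.token_hex(4)   # unique handle for the private key
        self._keys[identifier] = D          # all handles map to the toy pair
        return identifier, (N, E)           # handle + master public key

    def sign(self, identifier, data: bytes) -> int:
        # Hash the validation data, then "encrypt" the hash with the
        # private key identified by the handle.
        h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
        return pow(h, self._keys[identifier], N)

def validate(public_key, data: bytes, signature: int) -> bool:
    # Recompute the hash, "decrypt" the signature with the public key,
    # and compare the two hashes.
    n, e = public_key
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

# The infrastructure challenges the device with random validation data.
tpm = ToyTPM()
identifier, master_public_key = tpm.create_master_key()
validation_data = secrets.token_bytes(16)
signature = tpm.sign(identifier, validation_data)
print(validate(master_public_key, validation_data, signature))  # True
```

A successful validation shows the challenged device possesses the private key matching the registered public key, without the private key ever leaving the TPM; real deployments use full-size RSA or ECC keys with proper signature padding.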
The security infrastructure110may store, in a memory (e.g., database114), the received first registration information in correlation with the first user device. With respect to the second user device, the second security application may determine the second registration information based at least in part on requesting and receiving entry of the second registration information and/or via requesting and receiving access to the account registered with the network service provider. The second registration information may include, for example, metadata associated with the account registered with the network service provider (e.g., account number) and/or data associated with the network services to be received from the network service provider. In some aspects, the metadata may include information regarding the network service provider such as, for example, communication information (e.g., domain information, IP address, or the like) associated with communicating with the network service provider, subscription information associated with the network services to be received, or the like. The second registration information may also include second identification information associated with identifying the second user device with the security infrastructure110. In an example, the second identification information may include a unique second installation identifier associated with installing the second security application on the second user device. The second security application may determine the second installation identifier based at least in part on information associated with the second user device installing a present (e.g., existing) instance of the second security application. In some aspects, the security infrastructure110may provide the second installation identifier to the second user device in association with the second user device installing the second security application.
In some aspects, the security infrastructure110may store the second installation identifier in the second security application. Further, the second identification information may include a second master public key associated with the second TPM device included in the second user device. In some aspects, the second security application may request the second TPM device to determine a second master key. In some aspects, the second master key may be associated with signing, encrypting, and/or decrypting of data by the second TPM device. The second master key may include an asymmetric master key pair including the second master public key and a second master private key. In some aspects, only the second TPM device may have access to the second master private key (e.g., the second TPM device may keep the second master key confidential). Based at least in part on determining the second master key, the second TPM device may return a unique second identifier associated with (e.g., that identifies) the second master private key to the second security application. In some aspects, the second master key and/or the second identifier may be specific to (e.g., may be utilized by) the second user device. In some aspects, based at least in part on receiving the second registration information, the security infrastructure110may optionally confirm possession of the second master private key by the second user device. To do so, the security infrastructure110may conduct a second challenge-response procedure with the second user device. In an example, the security infrastructure110may determine second validation data to be utilized during the second challenge-response procedure. The second validation data may include, for example, an alphanumeric string, a one-time password, or a combination thereof. The alphanumeric string and/or the one-time password may include random and unbiased characters. 
The security infrastructure110may challenge the second user device to sign the second validation data by transmitting the second validation data to the second security application. The second security application may transmit a second signature request to the second TPM device to sign the second validation data. The second signature request may include the second identifier in association with the second validation data received from the security infrastructure110. Based at least in part on the second signature request including the second identifier in association with the second validation data, the second security application may indicate to the second TPM device that the second master private key, associated with the second identifier, is to be utilized to sign the second validation data. In other words, based at least in part on transmitting the second identifier in association with the second validation data, the second security application may enable the second TPM device to utilize the second master private key, identified by the second identifier, to sign the second validation data. In some aspects, operation of the second TPM device to sign, encrypt, and/or decrypt data may be associated with the verification of biometric information such that a request for the second TPM device to sign, encrypt, and/or decrypt data is to indicate successful verification of biometric information. In this case, the second signature request may indicate and/or include a result of the second security application verifying biometric information. In an example, the second security application may receive and verify biometric information in real time (e.g., while transmitting the second signature request), as discussed elsewhere herein.
When the received biometric information matches stored authentic biometric information associated with an authorized user, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the second signature request. Further, the second security application may configure the second signature request to indicate successful verification of the received biometric information. Based at least in part on receiving the second signature request, the second TPM device may sign the second validation data. In an example, the second TPM device may determine, from the indicated successful verification of the received biometric information, that the received biometric information matches the stored authentic biometric information. Further, the second TPM device may determine that the second validation data is to be signed using the second master private key based at least in part on the second validation data being received in association with the second identifier, as indicated by the second signature request. As a result, the second TPM device may utilize the second master private key to sign the second validation data. In some aspects, the second TPM device may utilize a hash function (e.g., SHA 1, MD5, etc.) to hash characters included in the second validation data and may encrypt the hashed characters with the second master private key. The second TPM device may provide the signed validation data to the second security application. In turn, the second security application may respond to the challenge by transmitting the signed second validation data to the security infrastructure110, which may utilize the second master public key to validate the signed second validation data. In an example, the security infrastructure110may utilize the association between the second master public key and the second master private key to validate the signed second validation data. 
For instance, the security infrastructure110may calculate a hash of the characters included in the second validation data. Further, the security infrastructure110may attempt to decrypt the signed second validation data with the second master public key to receive the hashed characters included in the signed second validation data. The security infrastructure110may compare the calculated hash with the hashed characters recovered from the signed second validation data. When the result of the comparison indicates that the calculated hash matches (e.g., is the same as) the hashed characters recovered from the signed second validation data, the security infrastructure110may determine that the second user device, to which the security infrastructure110had transmitted the second validation data, has signed the second validation data by utilizing the second master private key. In other words, the security infrastructure110may determine that the second user device has adequately responded to the challenge. In this case, the security infrastructure110may determine that the second user device is in possession of the second master private key. As shown by reference numeral215, the security infrastructure110may correlate and/or store, in the memory (e.g., database114), received registration information. In an example, the security infrastructure110may compare the received registration information with all previously stored registration information. In this case, the security infrastructure110may compare the second registration information with the first registration information. Based at least in part on a result of the comparison, the security infrastructure110may determine that metadata (e.g., account number with the network service provider) included in the second registration information matches (e.g., is the same as) metadata (e.g., account number with the network service provider) included in the first registration information.
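The challenge-response procedure described above can be sketched as follows. This is a minimal illustration using a toy textbook-RSA key pair in place of the second TPM device's master key pair; the tiny key size, the hash truncation modulo N, and the function names are illustrative assumptions, not the actual TPM interface.

```python
import hashlib
import secrets

# Toy textbook-RSA key pair standing in for the second TPM device's
# master key pair (real TPMs use far larger keys and padded signatures).
P, Q = 61, 53
N = P * Q                           # public modulus
E = 17                              # "second master public key" exponent
D = pow(E, -1, (P - 1) * (Q - 1))   # "second master private key" exponent

def challenge() -> str:
    """Security infrastructure side: random, unbiased validation data."""
    return secrets.token_hex(16)

def sign(validation_data: str) -> int:
    """TPM side: hash the validation data, then encrypt the hash with
    the master private key (only the TPM holds D)."""
    h = int.from_bytes(hashlib.sha256(validation_data.encode()).digest(), "big") % N
    return pow(h, D, N)

def verify(validation_data: str, signature: int) -> bool:
    """Infrastructure side: recompute the hash and compare it with the
    hash recovered by decrypting the signature with the public key."""
    h = int.from_bytes(hashlib.sha256(validation_data.encode()).digest(), "big") % N
    return pow(signature, E, N) == h

data = challenge()
sig = sign(data)
assert verify(data, sig)            # device proved possession of D
assert not verify(data + "x", sig)  # tampered data fails validation
```

A successful `verify` is the infrastructure's evidence that the responding device possesses the master private key, without the key itself ever leaving the TPM.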
As a result, the security infrastructure110may determine that the first user device and the second user device are both associated with the account registered with the network service provider. Further, the security infrastructure110may store correlation information indicating that the first user device and the second user device are both associated with the account registered with the network service provider. In a similar and/or analogous manner, when the security infrastructure110receives third registration information from a third user device (not shown) associated with the account registered with the network service provider, the security infrastructure110may store correlation information indicating that the first user device, the second user device, and the third user device are associated with the account registered with the network service provider, and so on. When the first user device is to receive a network service, the first security application may authenticate the first user device with the network service provider. To do so, the first security application may determine availability of the first biometric unit. In some aspects, as shown by reference numeral220, the first security application may determine that the first biometric unit is unavailable. In an example, the operating system being utilized by the first user device may indicate to the first security application that the first biometric unit is unavailable based at least in part on a malfunction associated with the first biometric unit. In this case, the first security application may determine that the first user device is to rely on another user device (e.g., the second user device or the third user device) associated with the registered account for authentication with the network service provider.
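The correlation step described above can be sketched as a simple in-memory registry. The class and field names here are hypothetical stand-ins for database114 and the registration metadata; the sketch assumes the account number is the correlating metadata, as in the example above.

```python
from collections import defaultdict

class Registry:
    """Hypothetical stand-in for database114: registration records
    correlated by the account number in each device's metadata."""

    def __init__(self):
        self._by_account = defaultdict(list)

    def register(self, device_id: str, metadata: dict) -> None:
        """Store registration info; devices sharing an account number
        become correlated automatically."""
        self._by_account[metadata["account_number"]].append(device_id)

    def correlated_devices(self, device_id: str) -> list:
        """All devices associated with the same registered account."""
        for devices in self._by_account.values():
            if device_id in devices:
                return list(devices)
        return []

reg = Registry()
reg.register("first_user_device", {"account_number": "ACCT-1"})
reg.register("second_user_device", {"account_number": "ACCT-1"})
reg.register("third_user_device", {"account_number": "ACCT-1"})
print(reg.correlated_devices("first_user_device"))
# ['first_user_device', 'second_user_device', 'third_user_device']
```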
In some aspects, when the first user device is not equipped with the first biometric unit, the first security application may similarly determine that the first user device is to rely on another user device (e.g., the second user device or the third user device) associated with the registered account for authentication with the network service provider. Based at least in part on such a determination, as shown by reference numeral225, the first security application may transmit an authentication request to the security infrastructure110. The authentication request may include a request for the security infrastructure110to determine and provide a list of other user devices associated with the registered account, the other user devices including currently available biometric units. Based at least in part on receiving the authentication request from the first security application, the security infrastructure110may utilize the stored correlation information to determine the list of user devices associated with the registered account, with which the first user device is also associated. As a result, the security infrastructure110may determine that any number of user devices including the second user device and the third user device is associated with the registered account, with which the first user device is also associated. Further, the security infrastructure110may determine the list to include the any number of user devices including the second user device and the third user device. In some aspects, prior to determining the list, the security infrastructure110may confirm current statuses (e.g., availability) of respective biometric units with the any number of user devices.
In an example, as shown by reference numeral230, the security infrastructure110may transmit a status message to the second user device requesting the second user device to provide a status associated with current availability of the second biometric unit and another status message to the third user device requesting the third user device to provide a status associated with current availability of the third biometric unit. In some aspects, the security infrastructure110may identify the first user device in the status messages and may receive responses that identify the first user device to allow the security infrastructure110to efficiently track status messages and responses associated with the first user device. In some aspects, the status messages may also identify a network service to be received by the first user device. Based at least in part on receiving the status message, the second user device may determine availability of the second biometric unit, as discussed elsewhere herein (e.g., block205). When the second security application determines that the second biometric unit is currently available, the second security application may indicate the current availability of the second biometric unit to the security infrastructure110. In this case, the security infrastructure110may determine the list to include the second user device. Alternatively, when the third security application determines that the third biometric unit is currently unavailable, the third security application may indicate the current unavailability of the third biometric unit to the security infrastructure110. In this case, the security infrastructure110may determine the list to exclude the third user device. Based at least in part on determining the list of user devices, as shown by reference numeral235, the security infrastructure110may transmit the list of user devices to the first security application. 
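A minimal sketch of the list-building step described above, assuming the status-message exchange with each correlated device is modeled as a per-device callback; the function and device names are illustrative.

```python
# Hypothetical availability poll: the infrastructure asks each correlated
# device for the status of its biometric unit and lists only those that
# respond as currently available. The requesting device is excluded,
# since it is the one lacking a working biometric unit.
def build_available_list(requester, correlated, poll_status):
    available = []
    for device in correlated:
        if device == requester:
            continue  # the first user device itself is not a candidate
        if poll_status(device):  # status message / response exchange
            available.append(device)
    return available

# Example: second biometric unit available, third unavailable.
statuses = {"second_user_device": True, "third_user_device": False}
result = build_available_list(
    "first_user_device",
    ["first_user_device", "second_user_device", "third_user_device"],
    lambda device: statuses.get(device, False),
)
print(result)  # ['second_user_device']
```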
Based at least in part on receiving the list of user devices, as shown by reference numeral240, the first security application may transmit a selection message to the security infrastructure110, the selection message indicating selection of a user device listed in the list of user devices to enable authentication of the first user device with the network service provider. In an example, the selection message may indicate selection of the second user device to enable authentication of the first user device with the network service provider. In some aspects, selection of the second user device may be based at least in part on availability of an authorized user to provide biometric information. In an example, the first user device may select the second user device based at least in part on the second user device being currently located near the first user device. In another example, the first user device may select the second user device based at least in part on an understanding that the authorized user is available to provide biometric information by utilizing the second biometric unit. Based at least in part on receiving the selection message, the security infrastructure110may determine that the second user device is to be utilized to enable authentication of the first user device with the network service provider. In this case, as shown by reference numeral245, the security infrastructure110may transmit an authentication message to indicate to the second user device that the second user device is to enable authentication of the first user device with the network service provider. The authentication message may include, for example, the first installation identifier associated with the first security application and/or the first master public key associated with the first TPM device.
As shown by reference numeral250, the first user device, the security infrastructure110, and the second user device may authenticate the first user device with the network service provider. Based at least in part on receiving the authentication message, the second security application may utilize the second biometric unit and the second TPM device to determine the first factor based at least in part on verifying biometric information. Based at least in part on successful verification of biometric information in real time (e.g., during encrypting the first factor), the second security application may encrypt the first factor based at least in part on utilizing the first master public key associated with the first TPM device. The second security application may transmit the encrypted first factor to the security infrastructure110, which may relay (e.g., transmit) the encrypted first factor to the first security application. In some aspects, the second security application may also transmit an indication that biometric information was verified in real time while determining the first factor. In some aspects, the second security application may also transmit the first installation identifier, associated with the first security application, in association with the encrypted first factor. The first security application may transmit a first decryption request to the first TPM device. The first decryption request may include the first unique identifier and/or the first master public key in association with the encrypted first factor to indicate to the first TPM device that the encrypted first factor is to be decrypted based at least in part on utilizing the first master key associated with the first unique identifier and/or the first master public key. The first decryption request may also indicate and/or include a result of the successful verification of biometric information in real time by the second security application. 
Based at least in part on receiving the first decryption request, the first TPM device may determine, from the included and/or indicated result of the successful verification, that the received biometric information matches the stored authentic biometric information. Further, the first TPM device may determine that the encrypted first factor is to be decrypted using the first master private key associated with the first master public key, as indicated by the first decryption request. As a result, the first TPM device may decrypt the encrypted first factor and provide the first factor to the first security application. The first security application may communicate the first factor to the network service provider for authentication. Based at least in part on successful authentication of the first factor, the network service provider may prompt the first user device to communicate the second factor within the predetermined duration of time. Based at least in part on receiving the prompt, the first user device may request the second user device (via the security infrastructure110) to determine and provide the second factor. In this case, the second security application may utilize the second biometric unit and the second TPM device to determine the second factor based at least in part on utilizing the secret information and the security algorithm, as discussed later on. Based at least in part on successful verification of the biometric information in real time (e.g., during encrypting the second factor), the second security application may encrypt the second factor based at least in part on utilizing the first master public key associated with the first TPM device. The second security application may transmit the encrypted second factor to the security infrastructure110, which may relay (e.g., transmit) the encrypted second factor to the first security application within the predetermined duration of time. 
In some aspects, the second security application may also transmit the first installation identifier, associated with the first security application, in association with the encrypted second factor. The first security application may transmit a second decryption request to the first TPM device. The second decryption request may include the first unique identifier and/or the first master public key in association with the encrypted second factor to indicate to the first TPM device that the encrypted second factor is to be decrypted based at least in part on utilizing the first master private key associated with the first unique identifier and/or the first master public key. The second decryption request may also indicate and/or include a result of the successful verification of biometric information by the second security application. Based at least in part on receiving the second decryption request, the first TPM device may determine, from the included and/or indicated result of the successful verification, that the received biometric information matches the stored authentic biometric information. Further, the first TPM device may determine that the encrypted second factor is to be decrypted using the first master private key associated with the first master public key, as indicated by the second decryption request. As a result, the first TPM device may decrypt the encrypted second factor and provide the second factor to the first security application. The first security application may communicate, within the predetermined duration of time, the second factor to the network service provider for authentication. Based at least in part on successful authentication of the second factor, the network service provider may provide the first user device with the network services. 
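The encrypt-relay-decrypt pattern used for both factors above can be sketched with toy textbook RSA (tiny primes, no padding), standing in for the first TPM device's master key pair; the function names are illustrative assumptions. The point of the construction is that the relaying security infrastructure handles only ciphertext and never sees a factor in plaintext.

```python
# Toy RSA key pair standing in for the first TPM device's master keys.
P, Q, E = 1009, 1013, 65537
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))   # held only by the first TPM device

def encrypt_for_first_device(factor: int) -> int:
    """Second security application: encrypt under the first master public key."""
    return pow(factor, E, N)

def relay(ciphertext: int) -> int:
    """Security infrastructure: forwards opaque ciphertext unchanged."""
    return ciphertext

def first_tpm_decrypt(ciphertext: int) -> int:
    """First TPM device: decrypt with the first master private key."""
    return pow(ciphertext, D, N)

# e.g. a one-time code, reduced to stay below the toy modulus
second_factor = 94287082 % N
received = first_tpm_decrypt(relay(encrypt_for_first_device(second_factor)))
assert received == second_factor
```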
In this way, even when the first user device is not equipped with a biometric unit or experiences a malfunctioning biometric unit, and the first user device is unable to receive and/or verify biometric information, the first user device may be securely authenticated with the network service provider. As a result, the first security application and the second security application may enable efficient utilization of resources (e.g., management resources, network resources, financial resources, time resources, processing resources, memory resources, power consumption resources, battery life, or the like) by the user devices and/or the network service provider. Additionally, the first security application and the second security application may avoid having to utilize a separate user device to receive the network services, thereby reducing an inconvenience associated with receiving the network services and/or avoiding a delay in receiving the network services. In some aspects, the security infrastructure110may enable the first user device and the second user device to be included in a mesh network based at least in part on determining that the first user device and the second user device are associated with the same account registered with the network service provider. As a result, the first user device and the second user device may establish a meshnet connection to communicate with each other without the security infrastructure110relaying messages between the first user device and the second user device. In some aspects, the data communicated by the first user device and the second user device via the meshnet connection may be encrypted. In some aspects, the network service provider may be owned or operated by, or included within, the security infrastructure110.
Determination of the first factor and the second factor by the second security application, the second biometric unit, and the second TPM device based at least in part on verifying biometric information will now be discussed. In some aspects, during registration of the account, the second security application may determine authentication information, which may include first factor authentication information and second factor authentication information. The first factor authentication information may include, for example, information associated with determining a first factor such as, for example, a username and/or a password associated with authenticating with the network service provider. In an example, the first factor authentication information may include predetermined information such as, for example, a hint, a question, and/or a string of alphanumeric characters to enable the second security application to determine the first factor in real time (e.g., during authentication with the network service provider). The second factor authentication information may include predetermined information such as, for example, secret information associated with determining the second factor (e.g., one-time password, one-time pin, one-time token, or the like). In some aspects, based at least in part on utilizing the secret information in association with, for example, a security algorithm, the second security application may determine the second factor in real time (e.g., during authentication with the network service provider). In an example, the security algorithm may include a one-time password algorithm, a time-based one-time password algorithm, or the like. The second security application may encrypt authentication information. In some aspects, the second security application may determine a first cryptographic key and may encrypt the first factor authentication information based at least in part on utilizing the first cryptographic key. 
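Where the passage above mentions a time-based one-time password algorithm, the standard TOTP construction (RFC 6238, built on HOTP per RFC 4226) is a plausible instance of the "security algorithm" that turns the stored secret information into a second factor; the sketch below assumes that choice and checks itself against published RFC test vectors.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current 30-second time window."""
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t, digits)

# RFC 6238 test vector: SHA-1, 8 digits, secret "12345678901234567890"
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

Because the code depends only on the shared secret and the current time window, the verifying network service provider can recompute it independently within the predetermined duration of time.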
Further, the second security application may determine a second cryptographic key and may encrypt the second factor authentication information (e.g., secret information) based at least in part on utilizing the second cryptographic key. In some aspects, the first cryptographic key and the second cryptographic key may include respective symmetric cryptographic keys. The second security application may store encryption information including, for example, encrypted first factor authentication information and encrypted second factor authentication information in a memory (e.g., memory630) associated with the second user device and/or in a memory (e.g., database114) associated with the security infrastructure110. The second security application may transmit to the second TPM device encryption requests to encrypt the cryptographic keys. In some aspects, the encryption requests may include a first encryption request to encrypt the first cryptographic key based at least in part on utilizing the second master key. When the second master key includes a symmetric second master key, the second TPM device may utilize the symmetric second master key to encrypt the cryptographic keys. When the second master key includes a second master public key and a second master private key, the second TPM device may utilize the second master public key to encrypt the cryptographic keys. In some aspects, the second TPM device may provide the encrypted cryptographic keys to the second security application. The first encryption request may include the second unique identifier in association with the first cryptographic key to indicate to the second TPM device that the first cryptographic key is to be encrypted based at least in part on utilizing the second master key that is associated with (e.g., identified by) the second unique identifier. The first encryption request may also include and/or indicate a result of the second security application verifying biometric information.
In an example, the second security application may receive and verify biometric information in real time (e.g., while transmitting the first encryption request), as discussed elsewhere herein. When the received biometric information matches the stored authentic biometric information, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the first encryption request. Further, the first encryption request may include and/or indicate the result of the successful verification of the received biometric information. Based at least in part on receiving the first encryption request, the second TPM device may determine, from the included and/or indicated result of the successful verification and/or authentication, that the received biometric information matches the stored authentic biometric information. Further, the second TPM device may determine that the first cryptographic key is to be encrypted using the second master key associated with the second unique identifier, as indicated by the first encryption request. As a result, the second TPM device may encrypt the first cryptographic key based at least in part on utilizing the second master key. In some aspects, the second TPM device may provide the encrypted first cryptographic key to the second security application. The second security application may transmit a second encryption request to encrypt the second cryptographic key based at least in part on utilizing the second master key. The second encryption request may include the second unique identifier in association with the second cryptographic key to indicate to the second TPM device that the second cryptographic key is to be encrypted based at least in part on utilizing the second master key that is associated with (e.g., identified by) the second unique identifier. 
The second encryption request may also include and/or indicate a result of the second security application verifying biometric information. In an example, the second security application may receive and verify biometric information in real time (e.g., while transmitting the second encryption request), as discussed elsewhere herein. When the received biometric information matches the stored authentic biometric information, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the second encryption request. Further, the second encryption request may include and/or indicate the result of the successful verification of the received biometric information. Based at least in part on receiving the second encryption request, the second TPM device may determine, from the included and/or indicated result of the successful verification, that the received biometric information matches the stored authentic biometric information. Further, the second TPM device may determine that the second cryptographic key is to be encrypted using the second master key associated with the second unique identifier, as indicated by the second encryption request. As a result, the second TPM device may encrypt the second cryptographic key based at least in part on utilizing the second master key. In some aspects, the second TPM device may provide the encrypted second cryptographic key to the second security application. When the first user device is to receive the network services from the network service provider, the second user device/second security application may enable authentication of the first user device with the network service provider. 
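The wrap/unwrap of the cryptographic keys under the second master key, described above, can be sketched as follows. This toy uses a hash-derived XOR keystream purely to illustrate the round trip; a real TPM would use authenticated encryption under a protected master key, and all names here are illustrative.

```python
import hashlib

def _stream(master_key: bytes, length: int) -> bytes:
    """Derive a keystream of the requested length from the master key.
    Toy construction only; not a substitute for real authenticated encryption."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(master_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(master_key: bytes, crypto_key: bytes) -> bytes:
    """Encrypt (wrap) a cryptographic key under the master key."""
    return bytes(a ^ b for a, b in zip(crypto_key, _stream(master_key, len(crypto_key))))

unwrap = wrap  # an XOR keystream is its own inverse

master = b"second-master-key"
first_crypto_key = b"first-cryptographic-key-32-bytes"
wrapped = wrap(master, first_crypto_key)
assert wrapped != first_crypto_key
assert unwrap(master, wrapped) == first_crypto_key
```

The wrapped keys can then be stored outside the TPM, since only a device holding the master key can unwrap them.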
Based at least in part on receiving the authentication message from the security infrastructure110, the second security application may retrieve the encrypted first cryptographic key and/or the encrypted second cryptographic key from the memory associated with the second user device. Further, the second security application may transmit to the second TPM device decryption requests to decrypt the encrypted cryptographic keys. In some aspects, the decryption requests may include a first decryption request to decrypt the encrypted first cryptographic key based at least in part on utilizing the second master key. The first decryption request may include the second unique identifier and/or the second master public key in association with the encrypted first cryptographic key to indicate to the second TPM device that the encrypted first cryptographic key is to be decrypted based at least in part on utilizing the second master private key that is associated with (e.g., identified by) the second unique identifier and/or the second master public key. In some aspects, the second TPM device may require the association of the second unique identifier with the encrypted first cryptographic key to indicate to the second TPM device that the encrypted first cryptographic key is to be decrypted based at least in part on utilizing the second master key. When the second master key includes the symmetric second master key, the second TPM device may utilize the symmetric second master key to decrypt the encrypted cryptographic keys. When the second master key includes the second master public key and the second master private key, the second TPM device may utilize the second master private key to decrypt the encrypted cryptographic keys. In some aspects, the second TPM device may provide the decrypted cryptographic keys to the second security application. The first decryption request may also include a result of the second security application verifying biometric information.
In an example, the second security application may receive and verify biometric information in real time (e.g., while transmitting the first decryption request), as discussed elsewhere herein. When the received biometric information matches the stored authentic biometric information, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the first decryption request. Further, the first decryption request may include and/or indicate the result of the successful verification of the received biometric information. Based at least in part on receiving the first decryption request, the second TPM device may determine, from the included and/or indicated result of the verification, that the received biometric information matches the stored authentic biometric information. Further, the second TPM device may determine that the encrypted first cryptographic key is to be decrypted using the second master key associated with the second unique identifier, as indicated by the first decryption request. As a result, the second TPM device may decrypt the encrypted first cryptographic key based at least in part on utilizing the second master key. The second security application may utilize the first cryptographic key to decrypt the first factor authentication information. The second security application may utilize the first factor authentication information to determine the first factor. Also, the second security application may encrypt the first factor based at least in part on utilizing the first master public key associated with the first TPM device. Further, the second security application may transmit the encrypted first factor to the first user device directly or via the security infrastructure110. In some aspects, the second security application may also transmit an indication that biometric information was verified in real time while determining the first factor.
The first security application may decrypt the encrypted first factor, as discussed elsewhere herein, and may communicate the first factor to the network service provider, which may authenticate the first factor. Based at least in part on successful authentication of the first factor, the network service provider may prompt the first user device for communication of the second factor. Based at least in part on successful authentication of the first factor, the first security application may request, directly or via the security infrastructure110, the second security application to determine and provide the second factor within the predetermined duration of time. In this case, the second security application may transmit a second decryption request to decrypt the encrypted second cryptographic key based at least in part on utilizing the second master key. The second decryption request may include the second unique identifier and/or the second master public key in association with the encrypted second cryptographic key to indicate to the second TPM device that the encrypted second cryptographic key is to be decrypted based at least in part on utilizing the second master key that is associated with (e.g., identified by) the second unique identifier and/or the second master public key. In some aspects, the second TPM device may require the association of the second unique identifier with the encrypted second cryptographic key to indicate to the second TPM device that the encrypted second cryptographic key is to be decrypted based at least in part on utilizing the second master key. The second decryption request may also include and/or indicate a result of the second security application verifying and authenticating biometric information. In an example, the second security application may receive and verify biometric information in real time (e.g., while transmitting the second decryption request), as discussed elsewhere herein.
When the received biometric information matches the stored authentic biometric information, the second security application may determine that the received biometric information belongs to the authorized user and may select to transmit the second decryption request. Further, the second decryption request may include and/or indicate the result of the successful verification of the received biometric information. Based at least in part on receiving the second decryption request, the second TPM device may determine, from the included and/or indicated result of the successful verification, that the received biometric information matches the stored authentic biometric information. Further, the second TPM device may determine that the encrypted second cryptographic key is to be decrypted using the second master key associated with the second unique identifier and/or the second master public key, as indicated by the second decryption request. As a result, the second TPM device may decrypt the encrypted second cryptographic key based at least in part on utilizing the second master key. In some aspects, the second TPM device may provide the decrypted second cryptographic key to the second security application. The second security application may utilize the second cryptographic key to decrypt the second factor authentication information. The second security application may utilize the second factor authentication information with the security algorithm to determine the second factor. Also, the second security application may encrypt the second factor based at least in part on utilizing the first master public key associated with the first TPM device. Further, the second security application may transmit the encrypted second factor to the first user device directly or via the security infrastructure110. In some aspects, the second security application may also transmit an indication that biometric information was verified in real time while determining the second factor. 
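The disclosure leaves the "security algorithm" applied to the second factor authentication information unspecified; one common way to derive a factor from predetermined secret information is an HMAC-based one-time password in the style of RFC 4226/6238, sketched here purely as an illustrative instantiation.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226 dynamic truncation)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def totp(secret: bytes, step: int = 30, at=None) -> str:
    """Time-based variant: the counter is the current time-step interval."""
    moment = time.time() if at is None else at
    return hotp(secret, int(moment // step))
```

Under such a scheme, the second factor authentication information would be the shared secret sealed under the TPM master key, and the security algorithm the truncated HMAC; both devices holding the secret can then determine the same short-lived second factor.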
The first security application may decrypt the encrypted second factor, as discussed elsewhere herein, and may communicate the second factor to the network service provider within the predetermined duration of time. The network service provider may authenticate the second factor. Based at least in part on successful authentication of the second factor, the network service provider may provide the first user device with the network services. By utilizing the techniques discussed herein, even when a user device is not equipped with a biometric unit or experiences a malfunctioning biometric unit during authentication, and the user device is unable to receive and/or verify biometric information, the user device may be securely authenticated with the network service provider. As a result, respective security applications may enable efficient utilization of resources (e.g., management resources, network resources, financial resources, time resources, processing resources, memory resources, power consumption resources, battery life, or the like) by the user devices and/or the network service provider. Additionally, the respective security applications may avoid having to utilize a separate user device to receive the network services, thereby reducing an inconvenience associated with receiving the network services and/or avoiding a delay in receiving the network services. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is an illustration of an example process300associated with a secure cross-device authentication system, according to various aspects of the present disclosure. In some aspects, the process300may be performed by a memory and/or a processor/controller (e.g., processor620) associated with a user device (e.g., user device102) executing a security application. 
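Before process300is walked through step by step, the end-to-end exchange described above — first factor authenticated, then the second factor supplied within the predetermined duration of time — can be condensed into a sketch. Every name here (`ServiceProvider`, `SecondUserDevice`, the factor strings) is hypothetical, and the TPM-backed encryption and real biometric verification discussed above are reduced to a boolean flag.

```python
import secrets
import time


class SecondUserDevice:
    """Determines factors on behalf of the first user device."""

    def __init__(self, first_factor: str, second_factor: str) -> None:
        self._factors = {"first": first_factor, "second": second_factor}

    def provide_factor(self, which: str, biometric_ok: bool) -> str:
        # A real device would verify biometrics and unwrap TPM-sealed
        # material here; the check is reduced to a flag for the sketch.
        if not biometric_ok:
            raise PermissionError("biometric verification failed")
        return self._factors[which]


class ServiceProvider:
    """Authenticates the first factor, then the second within a time window."""

    def __init__(self, first_factor: str, second_factor: str, window: float = 30.0):
        self._first = first_factor
        self._second = second_factor
        self._window = window
        self._deadline = None

    def authenticate_first(self, factor: str) -> bool:
        if secrets.compare_digest(factor, self._first):
            # Prompt for the second factor within the predetermined duration.
            self._deadline = time.monotonic() + self._window
            return True
        return False

    def authenticate_second(self, factor: str) -> bool:
        if self._deadline is None or time.monotonic() > self._deadline:
            return False  # predetermined duration of time has elapsed
        return secrets.compare_digest(factor, self._second)


# First user device flow: obtain each factor from the helper device and forward it.
helper = SecondUserDevice("factor-one", "factor-two")
provider = ServiceProvider("factor-one", "factor-two")
ok_first = provider.authenticate_first(helper.provide_factor("first", biometric_ok=True))
ok_second = provider.authenticate_second(helper.provide_factor("second", biometric_ok=True))
```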
As shown by reference numeral310, process300may include determining, by a first user device, unavailability of a first biometric unit associated with the first user device for verification of first biometric information. For instance, the first user device may utilize the associated processor/controller to determine unavailability of a first biometric unit associated with the first user device for verification of first biometric information, as discussed elsewhere herein. As shown by reference numeral320, process300may include selecting, by the first user device based at least in part on determining unavailability of the first biometric unit, a second biometric unit associated with a second user device for verification of second biometric information. For instance, the first user device may utilize the associated processor/controller to select, based at least in part on determining unavailability of the first biometric unit, a second biometric unit associated with a second user device for verification of second biometric information, as discussed elsewhere herein. As shown by reference numeral330, process300may include receiving, by the first user device from the second user device based at least in part on a first verification of the second biometric information, a first factor associated with authentication of the first user device by a service provider. For instance, the first user device may utilize an associated communication interface (e.g., communication interface670) with the associated processor/controller to receive, from the second user device based at least in part on a first verification of the second biometric information, a first factor associated with authentication of the first user device by a service provider, as discussed elsewhere herein. 
As shown by reference numeral340, process300may include receiving, by the first user device from the second user device based at least in part on successful authentication of the first factor and on a second verification of the second biometric information, a second factor associated with authentication of the first user device by the service provider. For instance, the first user device may utilize the associated communication interface and processor/controller to receive, from the second user device based at least in part on successful authentication of the first factor and on a second verification of the second biometric information, a second factor associated with authentication of the first user device by the service provider, as discussed elsewhere herein. As shown by reference numeral350, process300may include receiving, by the first user device from the service provider, a service based at least in part on successful authentication of the second factor. For instance, the first user device may utilize the associated communication interface and processor/controller to receive, from the service provider, a service based at least in part on successful authentication of the second factor, as discussed elsewhere herein. Process300may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process300, selecting the second biometric unit includes determining availability of the second biometric unit for verification of the second biometric information. In a second aspect, alone or in combination with the first aspect, in process300, receiving the first factor includes receiving the first factor in encrypted form, the first factor being encrypted based at least in part on utilizing a public key associated with a trusted module associated with the first device. 
In a third aspect, alone or in combination with the first through second aspects, in process300, receiving the first factor includes receiving a first indication indicating that the first verification of the second biometric information was successful. In a fourth aspect, alone or in combination with the first through third aspects, in process300, receiving the second factor includes receiving the second factor in encrypted form, the second factor being encrypted based at least in part on utilizing a public key associated with a trusted module associated with the first device. In a fifth aspect, alone or in combination with the first through fourth aspects, in process300, receiving the second factor includes receiving a second indication indicating that the second verification of the second biometric information was successful. In a sixth aspect, alone or in combination with the first through fifth aspects, process300may include transmitting the first factor to the service provider for authentication; and transmitting the second factor to the service provider for authentication based at least in part on successful authentication of the first factor. AlthoughFIG.3shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.3. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with regard toFIG.3. FIG.4is an illustration of an example process400associated with a secure cross-device authentication system, according to various aspects of the present disclosure. 
In some aspects, the process400may be performed by a memory and/or a processor/controller (e.g., processing unit112, processor620) associated with an infrastructure device (e.g., security infrastructure110) configuring a security application. As shown by reference numeral410, process400may include receiving, by an infrastructure device from a first user device, a request to provide a list of available user devices that are available to be utilized for authenticating the first user device with a service provider. For instance, the infrastructure device may utilize an associated communication interface (e.g., communication interface670) with the associated processor/controller to receive, from a first user device, a request to provide a list of available user devices that are available to be utilized for authenticating the first user device with a service provider, as discussed elsewhere herein. As shown by reference numeral420, process400may include receiving, by the infrastructure device from the first user device based at least in part on providing the list of available user devices, a selection message indicating a selection of a second user device, from among the available user devices, for authenticating the first user device with the service provider. For instance, the infrastructure device may utilize the associated communication interface and processor/controller to receive, from the first user device based at least in part on providing the list of available user devices, a selection message indicating a selection of a second user device, from among the available user devices, for authenticating the first user device with the service provider, as discussed elsewhere herein. 
As shown by reference numeral430, process400may include transmitting, by the infrastructure device to the second user device based at least in part on receiving the selection message, an authentication message indicating that the second user device is to authenticate the first user device with the service provider. For instance, the infrastructure device may utilize the communication interface and associated processor/controller to transmit, to the second user device based at least in part on receiving the selection message, an authentication message indicating that the second user device is to authenticate the first user device with the service provider, as discussed elsewhere herein. As shown by reference numeral440, process400may include receiving, by the infrastructure device from the second user device based at least in part on transmitting the authentication message, one or more encrypted authentication factors associated with authenticating the first user device with the service provider. For instance, the infrastructure device may utilize the associated communication interface and processor/controller to receive, from the second user device based at least in part on transmitting the authentication message, one or more encrypted authentication factors associated with authenticating the first user device with the service provider, as discussed elsewhere herein. As shown by reference numeral450, process400may include transmitting, by the infrastructure device to the first user device, the one or more encrypted factors associated with authenticating the first user device with the service provider. For instance, the infrastructure device may utilize the communication interface and associated processor/controller to transmit, to the first user device, the one or more encrypted factors associated with authenticating the first user device with the service provider, as discussed elsewhere herein. 
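The relay role of the infrastructure device in process400 — maintaining a list of available user devices, validating the selection message, and forwarding encrypted factors between the devices — can be sketched as below. The registry layout and method names are assumptions made for the illustration, not the disclosed implementation.

```python
class SecurityInfrastructure:
    """Relay between the first and second user devices (process 400 role)."""

    def __init__(self) -> None:
        # device id -> availability record; the record layout is an assumption.
        self._devices = {}

    def register(self, device_id: str, biometric: bool, associated=()) -> None:
        self._devices[device_id] = {
            "biometric": biometric,
            "associated": set(associated),
        }

    def available_devices(self, requester: str) -> list:
        """Available = has a currently working biometric unit and is
        associated with the requesting first user device."""
        return [
            device_id
            for device_id, record in self._devices.items()
            if device_id != requester
            and record["biometric"]
            and requester in record["associated"]
        ]

    def relay_authentication(self, requester: str, selected: str, get_factor) -> bytes:
        """Validate the selection message, then forward the encrypted factor
        produced by the selected second user device back to the requester."""
        if selected not in self.available_devices(requester):
            raise ValueError("selected device is not available for authentication")
        return get_factor(requester)
```

Note that the infrastructure only ever handles the factors in encrypted form; decryption requires the key held by the trusted device in the first user device.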
Process400may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process400may include determining the list of available user devices based at least in part on determining user devices with currently available biometric units. In a second aspect, alone or in combination with the first aspect, process400may include determining one or more user devices to be included in the list of available user devices based at least in part on determining user devices that are associated with the first user device. In a third aspect, alone or in combination with the first through second aspects, in process400, receiving the one or more encrypted authentication factors includes receiving a first encrypted authentication factor, and receiving a second encrypted authentication factor based at least in part on successful authentication of the first encrypted authentication factor by the service provider. In a fourth aspect, alone or in combination with the first through third aspects, in process400, receiving the one or more encrypted authentication factors includes receiving the one or more encrypted authentication factors based at least in part on verification of biometric information. In a fifth aspect, alone or in combination with the first through fourth aspects, in process400, receiving the one or more encrypted authentication factors includes receiving an indication that the encrypted authentication factor is encrypted based at least in part on real-time verification of biometric information. 
In a sixth aspect, alone or in combination with the first through fifth aspects, in process400, receiving the one or more encrypted authentication factors includes receiving an encrypted authentication factor that is encrypted based at least in part on utilizing an encryption key associated with a trusted device included in the first user device. AlthoughFIG.4shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.4. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with regard toFIG.4. FIG.5is an illustration of an example process500associated with a secure cross-device authentication system, according to various aspects of the present disclosure. In some aspects, the process500may be performed by a memory and/or a processor/controller (e.g., processor620) associated with a user device (e.g., second user device102) executing a security application. As shown by reference numeral510, process500may include receiving, by a second user device, an authentication message indicating that the second user device is to authenticate a first user device with a service provider that provides a service to the first user device. For instance, the second user device may utilize an associated communication interface (e.g., communication interface670) with the associated memory and processor to receive an authentication message indicating that the second user device is to authenticate a first user device with a service provider that provides a service to the first user device, as discussed elsewhere herein. 
As shown by reference numeral520, process500may include determining, by the second user device, one or more authentication factors associated with authenticating the first user device with the service provider. For instance, the second user device may utilize the associated memory and processor to determine one or more authentication factors associated with authenticating the first user device with the service provider, as discussed elsewhere herein. As shown by reference numeral530, process500may include encrypting, by the second user device, the one or more authentication factors based at least in part on utilizing an encryption key associated with a trusted device included in the first user device. For instance, the second user device may utilize the associated memory and processor to encrypt the one or more authentication factors based at least in part on utilizing an encryption key associated with a trusted device included in the first user device, as discussed elsewhere herein. As shown by reference numeral540, process500may include transmitting, by the second user device, one or more encrypted authentication factors to enable authentication of the first user device with the service provider. For instance, the second user device may utilize the associated memory and processor to transmit one or more encrypted authentication factors to enable authentication of the first user device with the service provider, as discussed elsewhere herein. Process500may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, in process500, determining the one or more authentication factors includes determining a first authentication factor, and determining a second authentication factor based at least in part on successful authentication of the first factor by the service provider. 
In a second aspect, alone or in combination with the first aspect, in process500, determining the one or more authentication factors includes determining a first authentication factor based at least in part on a predetermined string of alphanumeric characters. In a third aspect, alone or in combination with the first through second aspects, in process500, determining the one or more authentication factors includes determining a first authentication factor, and determining a second authentication factor based at least in part on predetermined secret information and a security algorithm. In a fourth aspect, alone or in combination with the first through third aspects, in process500, encrypting the one or more authentication factors includes verifying biometric information. In a fifth aspect, alone or in combination with the first through fourth aspects, in process500, transmitting the one or more encrypted authentication factors includes transmitting an indication indicating successful verification of biometric information. In a sixth aspect, alone or in combination with the first through fifth aspects, in process500, receiving the authentication message includes receiving the authentication message based at least in part on a determination that the second user device includes a biometric unit. AlthoughFIG.5shows example blocks of the process, in some aspects, the process may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of the process may be performed in parallel. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with regard toFIG.5. FIG.6is an illustration of example devices600, according to various aspects of the present disclosure. 
In some aspects, the example devices600may form part of or implement the systems, environments, infrastructures, components, or the like described elsewhere herein and may be used to perform the example processes described elsewhere herein. The example devices600may include a universal bus610communicatively coupling a processor620, a memory630, a storage component640, an input component650, an output component660, and a communication interface670. Bus610may include a component that permits communication among multiple components of a device600. Processor620may be implemented in hardware, firmware, and/or a combination of hardware and software. Processor620may take the form of a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some aspects, processor620may include one or more processors capable of being programmed to perform a function. Memory630may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by processor620. Storage component640may store information and/or software related to the operation and use of a device600. For example, storage component640may include a hard disk (e.g., a magnetic disk, an optical disk, and/or a magneto-optic disk), a solid state drive (SSD), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive. 
Input component650may include a component that permits a device600to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, input component650may include a component for determining location (e.g., a global positioning system (GPS) component) and/or a sensor (e.g., an accelerometer, a gyroscope, an actuator, another type of positional or environmental sensor, and/or the like). Output component660may include a component that provides output information from device600(via, for example, a display, a speaker, a haptic feedback component, an audio or visual indicator, and/or the like). Communication interface670may include a transceiver-like component (e.g., a transceiver, a separate receiver, a separate transmitter, and/or the like) that enables a device600to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface670may permit device600to receive information from another device and/or provide information to another device. For example, communication interface670may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, and/or the like. A device600may perform one or more processes described elsewhere herein. A device600may perform these processes based on processor620executing software instructions stored by a non-transitory computer-readable medium, such as memory630and/or storage component640. As used herein, the term “computer-readable medium” may refer to a non-transitory memory device. A memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices. 
Software instructions may be read into memory630and/or storage component640from another computer-readable medium or from another device via communication interface670. When executed, software instructions stored in memory630and/or storage component640may cause processor620to perform one or more processes described elsewhere herein. Additionally, or alternatively, hardware circuitry may be used in place of or in combination with software instructions to perform one or more processes described elsewhere herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The quantity and arrangement of components shown inFIG.6are provided as an example. In practice, a device600may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.6. Additionally, or alternatively, a set of components (e.g., one or more components) of a device600may perform one or more functions described as being performed by another set of components of a device600. As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with regard toFIG.6. Persons of ordinary skill in the art will appreciate that the aspects encompassed by the present disclosure are not limited to the particular exemplary aspects described herein. In that regard, although illustrative aspects have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the aspects without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure. The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the aspects to the precise form disclosed. 
Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. As used herein, a processor is implemented in hardware, firmware, or a combination of hardware and software. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, or not equal to the threshold, among other examples, or combinations thereof. It will be apparent that systems or methods described herein may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems or methods is not limiting of the aspects. Thus, the operation and behavior of the systems or methods were described herein without reference to specific software code—it being understood that software and hardware can be designed to implement the systems or methods based, at least in part, on the description herein. Even though particular combinations of features are recited in the claims or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. In fact, many of these features may be combined in ways not specifically recited in the claims or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. 
As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (for example, a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
11943366

The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

DETAILED DESCRIPTION

FIG.1is a block diagram illustrating one embodiment of a computing environment100for transferring enrollment in authentication services between client devices (e.g., client device130A and130B). In the embodiment shown, the computing environment100includes an authentication system110that provides authentication using client devices130, a service provider120that provides services in response to access requests authenticated using the client devices130, client devices130that are enrolled in authentication services of the authentication system110, and a network140that enables communication between the various components ofFIG.1. In some embodiments, a user of one or more of the client devices130is a member of an organization (e.g., an employee of a corporation) that contracts with the authentication system110to handle authentication on behalf of the members of the organization to access services of the service provider120, which can include internal services of the organization (e.g., an enterprise network or suite of applications) or third-party services used by the organization. Example third-party services include SALESFORCE, MICROSOFT OFFICE 365, SLACK, DOCUSIGN, ZOOM, or the like. In different embodiments, the computing environment100and its components may include different or additional elements than those illustrated inFIGS.1-2. Furthermore, the functionality may be distributed among the elements in a different manner than described. The components ofFIG.1are now described in more detail.
The authentication system110authenticates requests for access to services of the service provider120(i.e., access requests). In particular, the authentication system110uses a client device that is enrolled in authentication services with the authentication system110(e.g., the enrolled client device130A) in order to authenticate requests for services associated with the enrolled client device. The authentication system110may be an authentication platform providing various authentication services for accessing services of service providers, such as single sign-on capabilities, multi-factor authentication (MFA), identity proofing, application programming interface access management, or other authentication services. During enrollment of a client device130the client device130generates enrollment information including authentication credentials of the client device and provides some or all of the enrollment information to the authentication system110. Authentication credentials are data values used by an enrolled client device that enable the enrolled client device to prove its identity to the authentication system110. Authentication credentials can include authentication certificates (e.g., MFA certificates), authentication/encryption keys (e.g., shared secret keys or public/private key pairs) various hardware security tokens, software security tokens, other authentication credentials, or any combination thereof. Some authentication credentials used by a client device130may be accessible to external systems (i.e., public authentication credentials), such as a public key, while others may only be accessible to the client device130(i.e., private authentication credentials), such as a private key. In embodiments, the authentication credentials are associated with a client device during enrollment of the client device in authentication services of the authentication system110. 
For example, the client device130may generate an authentication certificate including a public/private encryption key pair and provide the public key to the authentication system110. Enrollment information can additionally include other information used by the authentication system110to authenticate client devices130, such as characteristics of the client device130(e.g., IP address, geographic location, version, etc.), information describing a user associated with the client device130(e.g., name, age, email, etc.), or various authentication factors (e.g., passwords, secret questions, etc.). The authentication system110further provides one or more processes to users associated with the client devices130to transfer authentication credentials from an enrolled client device130(e.g., the client device130A) to a new non-enrolled client device130(e.g., the client device130B). The one or more transference processes provided by the authentication system110are described in greater detail below with reference toFIGS.2-4. The service provider120provides services to computing devices or systems in response to successful authentication of requests for access to services (i.e., access requests) by the authentication system110. As described above with reference to the system environment100, in some embodiments the service provider120provides access to internal services of an organization of which a user of the client device130is a member. In the same or different embodiments, the service provider120provides access to services of one or more third-party service providers. The service provider120communicates with the authentication system110to authenticate access requests by users of the service provider120. In embodiments, the service provider120receives authentication responses from the authentication system110including information indicating whether or not an access request was successfully authenticated. 
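The device-side enrollment step described above can be sketched in Python. This is an illustrative sketch only, not the patent's implementation: it models the simplest credential type named above (a shared secret key) rather than an MFA certificate, and the device identifier, email address, and field names are hypothetical.

```python
import secrets

def generate_enrollment(device_id, user_email):
    """Device-side enrollment: generate a fresh shared-secret credential
    and assemble the enrollment information provided to the
    authentication system (credential plus device/user characteristics)."""
    secret = secrets.token_bytes(32)  # authentication credential, kept in secure storage
    enrollment_record = {
        "device_id": device_id,
        "secret": secret,                       # shared only with the authentication system
        "user": {"email": user_email},          # information describing the user
        "characteristics": {"app_version": "2.4.1"},  # example device characteristic
    }
    return secret, enrollment_record

device_secret, record = generate_enrollment("device-A", "user@example.com")
assert record["secret"] == device_secret
```

With an asymmetric credential (e.g., a public/private key pair of an MFA certificate), only the public key would appear in the enrollment record while the private key remains on the device.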
Based on the received authentication responses, the service provider120determines whether or not to provide access to requested services. In some embodiments, the authentication system110is a component of the service provider120. The client devices130(e.g., the client devices130A and130B) are computing devices that can be enrolled in authentication services of the authentication system110to authenticate access requests for services of the service provider120. For instance, a client device130can be a desktop computer, a laptop computer, a mobile device (e.g., a mobile phone, a tablet, etc.), or any other device suitable to execute the client application125. In embodiments, the client devices130generate enrollment information in order to enroll in authentication services of the authentication system110, including authentication credentials to use for authenticating through the authentication system110. The client devices130store the authentication credentials in secure storage. The term “secure storage,” as used herein, refers to storage of data that can only be accessed by authorized systems or processes. The client devices130further communicate with the authentication system110in order to transfer enrollment between client devices130(e.g., enrolling the client device130B in place of client device130A). In order to facilitate transfer of enrollment, the client devices130are configured to authorize the transfer on the enrolled device (i.e., the client device130enrolled at the start of the transfer) or the non-enrolled device (i.e., the client device130enrolled at the end of the transfer). Techniques for authorizing the enrollment transfer on the enrolled or non-enrolled client devices are described in greater detail below with reference toFIGS.2-4. 
Some or all of the processes on the client devices130relating to authentication through the authentication system110may be facilitated by software associated with the authentication system110on the client devices130(e.g., a client-side authentication application or process). Although only two client devices130are depicted inFIG.1, this is done in order to illustrate at least an enrolled client device (e.g., the client device130A) and a non-enrolled client device (e.g., the client device130B), and the computing environment100can include any number of enrolled or non-enrolled client devices130. In some embodiments, the client devices130securely store authentication credentials on a cryptographic microprocessor of the client devices130(e.g., as part of a keychain), such as a trusted platform module (TPM). In these embodiments, the cryptographic microprocessor may use a variety of cryptographic algorithms to encrypt the authentication credentials, such as the Rivest-Shamir-Adleman (RSA) algorithm, the secure hash algorithm1(SHA1), or Hash based Message Authentication Code (HMAC). As part of securely storing the authentication credentials in the cryptographic microprocessor, a client device130and the authentication system110may exchange encryption information (e.g., public keys of respective public/private key pairs) to enable the client device130to authenticate with the authentication system110using authentication credentials of the client device130. As an example, the client devices130may securely store private authentication credentials using a TPM in order to prevent other systems from accessing the private authentication credentials. The network140connects the authentication system110, the service provider120, and the client devices130. The network140may be any suitable communications network for data transmission. In an embodiment such as that illustrated inFIG.1, the network140uses standard communications technologies or protocols and can include the internet. 
In another embodiment, the entities use custom or dedicated data communications technologies. FIG.2is a block diagram illustrating one embodiment of the authentication system110. In the embodiment shown, the authentication system110includes an authentication module210that authenticates client devices130, an enrollment transfer module220that transfers enrollment from enrolled client devices130to non-enrolled client devices130, and an enrollment information store230that stores enrollment information for client devices130. The components ofFIG.2are now described in more detail. The authentication module210authenticates access requests (e.g., for services of the service provider120) using enrolled client devices130. In embodiments, the authentication module210enrolls client devices130in authentication services provided by the authentication module210. In particular, the authentication module210associates enrollment information with the enrolled client device130(e.g., stored in the enrollment information store230). The enrollment information associated with an enrolled client device130can include authentication credentials (e.g., a public encryption key) generated by the enrolled client device130and provided to the authentication system110. The authentication module210may enroll a client device130in authentication services in response to receiving an enrollment request from the client device130. After enrolling a client device130in authentication services, the authentication module210receives requests to access services that are associated with the enrolled client device130(e.g., provided by a user of the enrolled client device130on the enrolled client device130or on another computing device). 
The authentication module210uses the enrolled client device130to authenticate the access request, such as by requesting authentication information (e.g., an MFA factor) from the enrolled client device130(e.g., via an MFA push challenge), where the authentication information is generated using the authentication credentials of the enrolled client device130. In some embodiments, the authentication services provided by the authentication module210include MFA. In this case, the authentication module210receives authentication factors from enrolled client devices130generated using the authentication credentials. For instance, the authentication credentials of an enrolled client device130can include a public/private key pair (e.g., of an MFA certificate) used to sign a payload of authentication-related information to provide as an authentication factor to the authentication system110. The authentication module210may request various numbers of authentication factors from an enrolled client device130, such as an initial authentication factor, a secondary authentication factor, a tertiary authentication factor, etc. Furthermore, as part of the MFA authentication process, the authentication module210may solicit various types of authentication factors from a client device130, such as possession factors (e.g., software tokens), user knowledge factors (e.g., passwords, secret question answers, etc.), inherent factors (e.g., an identifier of an enrolled client device130or biometric data for a user of the client device130), location-based factors (e.g., a GPS coordinate of the enrolled client device130), or any other suitable MFA authentication factor. The enrollment transfer module220facilitates transfer of enrollment of an enrolled client device130(e.g., the client device130A) to a non-enrolled client device130(e.g., the client device130B). 
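A possession factor of this kind can be sketched using a shared-secret credential (one of the credential types named earlier). The payload fields and function names below are hypothetical, and a real deployment would typically sign with the private key of an MFA certificate rather than a MAC; this is a sketch of the challenge/response shape, not the patent's implementation.

```python
import hashlib
import hmac
import json
import secrets

def sign_factor(credential, payload):
    """Enrolled device: MAC a payload of authentication-related
    information with the enrolled credential to produce a factor."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(credential, body, hashlib.sha256).hexdigest()

def verify_factor(credential, payload, factor):
    """Authentication system: recompute the MAC over the same payload
    and compare in constant time."""
    return hmac.compare_digest(sign_factor(credential, payload), factor)

credential = secrets.token_bytes(32)  # established at enrollment
challenge = {"request_id": "req-123", "nonce": "n-456"}  # e.g., from an MFA push challenge
factor = sign_factor(credential, challenge)

assert verify_factor(credential, challenge, factor)
assert not verify_factor(secrets.token_bytes(32), challenge, factor)  # wrong credential fails
```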
In particular, the enrollment transfer module220facilitates updating or replacing enrollment information for the enrolled client device with enrollment information for the non-enrolled client device. In embodiments, the enrollment transfer module220receives a request to transfer enrollment of an enrolled client device130to a non-enrolled client device130(herein an “enrollment transfer request”). The enrollment transfer request is received from the enrolled client device130and authorized by the enrolled client device130using authentication credentials associated with enrollment information for the enrolled client device130(e.g., signed using a private key). For example, a user of the enrolled client device may interact with a user interface associated with the authentication system110on the non-enrolled client device130or the enrolled client device130to submit the enrollment transfer request. Responsive to the enrollment transfer request, the enrollment transfer module220communicates with the enrolled client device and the non-enrolled client device in order to replace enrollment information associated with the enrolled client device with enrollment information associated with the non-enrolled client device. In particular, the enrollment transfer module220receives enrollment information for the non-enrolled client device including authentication credentials generated by the non-enrolled client device130. The enrollment transfer module220associates the enrollment information for the non-enrolled client device130with the non-enrolled client device130in place of some or all of the enrollment information for the enrolled client device130(e.g., stored in the enrollment information store230). The enrollment information associated with the non-enrolled client device130includes one or more authentication credentials of the non-enrolled client device that replace one or more authentication credentials of the enrolled client device. 
For example, the enrollment transfer module220may replace a public key of the enrolled client device with a public key of the non-enrolled client device. The enrollment transfer module220may further notify the enrolled client device130or the non-enrolled client device130whether or not the enrollment was successfully transferred. After the enrollment transfer module220completes the transfer of enrollment to the previously non-enrolled client device130, the non-enrolled client device can be used for authenticating relevant access requests by the authentication system110(e.g., to access services of the service provider120). In some embodiments, the enrolled client device130and the non-enrolled client device130communicate via a personal area network (e.g., Bluetooth, ZigBee, etc.) in order to initiate the transfer of enrollment by the enrollment transfer module220. In this case, the non-enrolled client device130can request authorization of the transfer of enrollment from the enrolled client device130over the personal area network, or vice versa. For example, a user of both the enrolled client device and the non-enrolled client device can pair the enrolled and non-enrolled client devices via a personal area network and interact with the non-enrolled device to request authorization of the enrollment transfer request from the enrolled client device via the personal area network. Similarly, the user can interact with the enrolled client device to provide authorization of the transfer on the enrolled client device (e.g., via a user interface). After authorizing the enrollment transfer request from the non-enrolled client device130, the enrolled client device130provides authorization to the non-enrolled client device130that includes enrollment information for the enrolled client device130, such as an identifier of the enrolled client device130or a public key of the enrolled client device130associated with enrollment information at the authentication system110. 
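The credential replacement performed by the enrollment transfer module can be sketched as a simple mapping update. The store layout, class name, and method names below are hypothetical stand-ins for the enrollment information store, not the patent's implementation.

```python
class EnrollmentStore:
    """Toy enrollment information store: one enrolled device per user."""

    def __init__(self):
        self._records = {}

    def enroll(self, user, device_id, public_credential):
        """Associate enrollment information with the enrolled device."""
        self._records[user] = {"device_id": device_id, "credential": public_credential}

    def transfer(self, user, new_device_id, new_public_credential):
        """Replace the enrolled device's credential with the non-enrolled
        device's credential, completing the enrollment transfer."""
        if user not in self._records:
            raise KeyError("no enrolled device to transfer from")
        self._records[user] = {"device_id": new_device_id,
                               "credential": new_public_credential}

    def lookup(self, user):
        return self._records[user]

store = EnrollmentStore()
store.enroll("alice", "device-A", b"public-key-A")
store.transfer("alice", "device-B", b"public-key-B")
assert store.lookup("alice") == {"device_id": "device-B", "credential": b"public-key-B"}
```

After the `transfer` call, only the new device's credential remains associated with the user, so subsequent authentication requests are verified against the newly enrolled device.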
After the enrolled client device130provides authorization to the non-enrolled client device130, the non-enrolled client device130can use the enrollment information for the enrolled client device130to generate new enrollment information for the non-enrolled client device130. In particular, the non-enrolled client device130can generate one or more authentication credentials (e.g., an MFA certificate with a public/private key pair) for communicating with the authentication system110to replace authentication credentials used by the enrolled client device130. Some or all of the new enrollment information for the non-enrolled client device130is provided to the enrollment transfer module220to replace enrollment information associated with the enrolled client device130, such as a public key of the non-enrolled client device130. The enrolled client device130or the non-enrolled client device130may further provide information indicating the authorization of the enrollment transfer by the enrolled client device to the enrollment transfer module220in order to confirm authorization of the transfer. In other embodiments, the enrolled client device130, the non-enrolled client device130, or the enrollment transfer module220can perform processes similar to those described above to initiate the transfer process using other types of device-to-device communication protocols (e.g., near-field communication (NFC)) or other networks (e.g., a local area network, wide area network, etc.). Relevant embodiments are described in greater detail below with reference toFIG.3. In the same or different embodiments, the enrollment transfer module220provides the enrolled client device130with a transfer authorization token in response to receiving an enrollment transfer request from the enrolled client device130. For example, the enrollment transfer module220may provide the enrolled client device130with a password (e.g., a one-time password) or a QR code. 
In this case, the enrollment transfer module220obtains new enrollment information from the non-enrolled client device130in response to verifying the same transfer authorization token received from the non-enrolled client device130. As an example, the enrolled client device130may provide the transfer authorization token to the non-enrolled client device130, such as in response to receiving user authorization to do so (e.g., via an interaction with a user interface displayed by the enrolled client device130). As another example, the enrollment transfer module220may provide a password to the enrolled client device and a user associated with the enrolled and non-enrolled client devices may manually provide the password back to the enrollment transfer module220using the non-enrolled client device (e.g., via an interaction with a user interface displayed by the non-enrolled client device130). In some embodiments, various steps of the enrollment transfer process facilitated by the enrollment transfer module220include performing presence verification of a user associated with the enrolled client device130or the non-enrolled client device130. Presence verification can include various processes to solicit input from a user on a client device130to verify that the user is the one physically interacting with the client device130. This presence verification is performed by the enrollment transfer module220as a security measure to ensure an authorized user is requesting enrollment transfer (e.g., the user who owns or otherwise operates the enrolled client device130), and not some other unauthorized person or computer process. The enrollment transfer module220may request presence verification from the enrolled client device, the non-enrolled client device, or both, before initiating the enrollment transfer or during the enrollment transfer. 
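The transfer authorization token exchange described above can be sketched as a single-use token issued to the enrolled device and later redeemed from the non-enrolled device. The class and method names are hypothetical; a real system would also expire tokens after a time limit.

```python
import hmac
import secrets

class TransferTokenIssuer:
    """Issue one-time transfer authorization tokens and verify them
    when presented back (e.g., typed in on the non-enrolled device)."""

    def __init__(self):
        self._pending = {}  # user -> outstanding token

    def issue(self, user):
        """Provide a transfer authorization token to the enrolled device."""
        token = secrets.token_urlsafe(16)
        self._pending[user] = token
        return token

    def redeem(self, user, presented):
        """Verify a token presented from the non-enrolled device."""
        expected = self._pending.get(user)
        if expected is None or not hmac.compare_digest(expected, presented):
            return False
        del self._pending[user]  # single use: the token cannot be replayed
        return True

issuer = TransferTokenIssuer()
token = issuer.issue("alice")
assert issuer.redeem("alice", token)
assert not issuer.redeem("alice", token)  # second redemption is rejected
```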
For example, the enrolled client device may provide presence information to the enrollment transfer module220authorized using authentication credentials of the enrolled client device associated with enrollment information for the enrolled client device, such as by signing using a private key. In some embodiments, the enrollment transfer module220only requests presence verification, or any other form of authentication, from the user on the enrolled client device130in order to simplify the user experience in executing the enrollment transfer process. In order to perform presence verification, the enrollment transfer module220can request that a user associated with the enrolled and non-enrolled client devices130provide various presence information, such as a password, MFA authentication factors, biometric information, or other information verifying the presence of a user. In embodiments where the enrollment transfer module220requests biometric information, the biometric information can include various types of biometric information the client devices130are configured to receive, such as user fingerprint information, user facial imaging information, user iris or retina information, user voice information, or other biometric information. In some embodiments, the enrollment transfer module220verifies the received presence information, such as comparing the received presence information to information stored by the authentication system110(e.g., in the enrollment information store230). In the same or different embodiments, the client device130that receives the presence information verifies the received presence information (e.g., performs biometric authentication) and conveys the verification to the enrollment transfer module220. In some embodiments, the enrollment transfer module220provides an interface for customizing presence verification for certain client devices130(e.g., client devices associated with an organization). 
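Password-style presence verification against a value stored at enrollment might look like the following sketch. The salt, iteration count, and stored secret are illustrative assumptions; biometric checks would instead use device-specific APIs and convey only the verification result, as the paragraph above notes.

```python
import hashlib
import hmac

SALT = b"per-user-salt"   # illustrative; stored alongside the enrollment record
ITERATIONS = 100_000

def presence_digest(presence_info):
    """Derive a slow, salted digest from submitted presence information."""
    return hashlib.pbkdf2_hmac("sha256", presence_info, SALT, ITERATIONS)

stored_digest = presence_digest(b"correct horse")  # captured at enrollment

def verify_presence(presence_info):
    """Compare submitted presence information (e.g., a password)
    against the enrolled value in constant time."""
    return hmac.compare_digest(stored_digest, presence_digest(presence_info))

assert verify_presence(b"correct horse")
assert not verify_presence(b"wrong secret")
```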
For example, the interface may allow enabling or disabling presence verification, configuring which client devices provide presence verification during the enrollment transfer process, configuring the type or amount of presence information to collect, etc. In this case, the enrollment transfer module220may provide an interface to an administrator associated with certain client devices to customize the presence verification for enrollment transfer. TECHNIQUES FOR TRANSFERRING ENROLLMENT INFORMATION FIG.3is a sequence diagram illustrating an embodiment of interactions between the authentication system110, an enrolled client device, and a non-enrolled client device to transfer enrollment from the enrolled client device to the non-enrolled client device using a personal area network. In the embodiment shown inFIG.3, at the start of the sequence the client device130A is enrolled in authentication services with the authentication system110(i.e., the enrolled device) and the client device130B is not enrolled (i.e., the non-enrolled client device). The sequence of interactions depicted inFIG.3begins with the non-enrolled client device130B requesting310authorization of an enrollment transfer from the enrolled client device130A. In particular, the client device130B connects to the client device130A via a personal area network (e.g., Bluetooth) and requests authorization of the enrollment transfer using the personal area network. In response to the request for authorization, the client device130A provides320authorization of the enrollment transfer to the client device130B. In particular, the authorization includes enrollment information of the client device130A, such as an identifier of enrollment information associated with the client device130A on the authentication system110. In some cases, the client device130A verifies the presence of a user associated with the client devices130A and130B in order to authorize the enrollment transfer request (e.g., using biometric authentication). 
As an example, a user that owns the client device130A may initiate the enrollment process after acquiring the client device130B using the above described process. After receiving authorization from the client device130A, the client device130B generates330enrollment information for the client device130B using the enrollment information for the client device130A included in the authorization, where the new enrollment information includes one or more new authentication credentials for the client device130B. For instance, the client device130B may generate a new public/private key pair to be used for authentication through the authentication system110. The client device130B provides some of the new enrollment information for the client device130B to the client device130A, including information corresponding to an authentication credential of the one or more authentication credentials (e.g., a public key of an MFA certificate of the client device130B). After receiving the enrollment information for the client device130B, the client device130A provides350an enrollment transfer request to the authentication system110authorized using one or more authentication credentials of the client device130A (e.g., a private key of the client device130A), where the enrollment transfer request includes the enrollment information for the client device130B. The user of the client device130A may further authorize the enrollment transfer request by providing presence information to the authentication system110(e.g., via biometric authentication). Based on the authorized transfer request (e.g., verified by the authentication system110using an authentication credential associated with the client device130A), the authentication system110enrolls360the client device130B in authentication services to replace the client device130A. 
In particular, the authentication system110associates the authentication credential of the client device130B with enrollment information in place of an authentication credential of client device130A. For example, the enrollment transfer module220may update a public key of an MFA certificate of the client device130A with a public key of an MFA certificate of the client device130B (e.g., stored in the enrollment information store230). After enrolling the client device130B, the authentication system110provides370confirmation of the enrollment of the client device130B to the client device130A. Based on the confirmation from the authentication system110, the client device130A provides380confirmation that the transfer was successfully completed to the client device130B. After confirming the successful transfer, the client device130A removes390the enrollment information of the client device130A (e.g., removes authentication credentials stored via TPM), thus completing the transfer. FIG.4is a sequence diagram illustrating an embodiment of interactions between the authentication system110, an enrolled client device, and a non-enrolled client device to transfer enrollment from the enrolled client device to the non-enrolled client device using a transfer authorization token. In the embodiment shown inFIG.4, at the start of the sequence the client device130A is enrolled in authentication services with the authentication system110(i.e., the enrolled device) and the client device130B is not enrolled (i.e., the non-enrolled client device). The sequence of interactions depicted inFIG.4begins with the client device130A requesting405transfer of enrollment for the client device130A to the client device130B by the authentication system110. The client device130A may authorize the request for transfer of enrollment using authentication credentials associated with enrollment information of the client device130A, such as by signing using a private key. 
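The FIG.3 sequence can be condensed into a linear sketch (ignoring the personal-area-network transport, signing, and presence checks). The data shapes and variable names are hypothetical; this is an illustration of the ordering of the steps, not the patent's implementation.

```python
import secrets

# Starting state: device A is enrolled with the authentication system.
device_a_secret = secrets.token_bytes(32)
enrollment_store = {"alice": {"device_id": "A", "credential": device_a_secret}}

# Steps 310/320: device B requests authorization over the personal area
# network; device A answers with its enrollment identifier.
authorization = {"user": "alice", "enrolled_device_id": "A"}

# Step 330: device B generates new enrollment information, including a
# fresh credential, and shares the shareable part with device A.
device_b_secret = secrets.token_bytes(32)
new_enrollment = {"device_id": "B", "credential": device_b_secret}

# Steps 350/360: device A submits the transfer request, authorized with
# its own credential; the system replaces A's enrollment info with B's.
assert enrollment_store[authorization["user"]]["credential"] == device_a_secret
enrollment_store["alice"] = new_enrollment

# Steps 370/380 and cleanup: confirmations flow back, then device A
# removes its now-stale credential from secure storage.
device_a_secret = None
assert enrollment_store["alice"]["device_id"] == "B"
```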
In response to the request, the authentication system110provides410a transfer authorization token to the client device130A, such as a password or QR code. After the client device130A receives the authorization token, the authentication system110receives415the transfer authorization token from the client device130B. For example, the user of the client device130A may manually submit the authorization token to the authentication system110on the client device130B, or the client device130A may transmit the authorization token to the client device130B over a network. The authentication system110verifies420the transfer authorization token, such as comparing the authorization token to a local copy of the authorization token. After verifying420the transfer authorization token, the authentication system110requests425enrollment information from the non-enrolled client device130. Based on the request, the client device130B generates430new enrollment information for the client device130B, where the new enrollment information includes one or more authentication credentials for the client device130B. For instance, the client device130B may generate a new public/private key pair to be used for authentication through the authentication system110. The client device130B provides435some of the new enrollment information for the client device130B to the authentication system110, including information corresponding to an authentication credential of the one or more generated authentication credentials (e.g., a public key from an MFA certificate of the client device130B). Using the enrollment information for the client device130B, the authentication system110enrolls440the client device130B in authentication services to replace the client device130A, as described at step360ofFIG.3. After enrolling the client device130B, the authentication system110provides445confirmation of the enrollment of the client device130B to the client device130A. 
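The FIG.4 token-based sequence can likewise be condensed into a sketch. The token handling mirrors a one-time password, and the store layout and names are hypothetical assumptions, not the patent's implementation.

```python
import hmac
import secrets

# Starting state: device A is enrolled; no transfer tokens outstanding.
enrollment_store = {"alice": {"device_id": "A", "credential": b"public-key-A"}}
pending_tokens = {}

# Steps 405/410: device A requests a transfer and the system issues a
# one-time transfer authorization token to it.
token = secrets.token_urlsafe(16)
pending_tokens["alice"] = token

# Steps 415/420: the token is presented from device B (e.g., typed in
# by the user) and verified against the system's copy, consuming it.
presented = token
assert hmac.compare_digest(pending_tokens.pop("alice"), presented)

# Steps 425-435: the system requests enrollment information; device B
# generates a fresh credential and returns the shareable part.
device_b_credential = b"public-key-B"

# Step 440: device B is enrolled in place of device A.
enrollment_store["alice"] = {"device_id": "B", "credential": device_b_credential}
assert enrollment_store["alice"]["credential"] == b"public-key-B"
```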
After receiving confirmation of the successful transfer, the client device130A removes450the enrollment information of the client device130A (e.g., removes authentication credentials stored via TPM), thus completing the transfer. As such, through the interactions depicted inFIG.3or4, or performed by other processes described herein, the authentication system110provides an efficient end-user experience for transferring enrollment in authentication services from an enrolled client device to a non-enrolled client device. For example, using the sequence of interactions depicted inFIG.3, the interactions performed by a user may be limited to pairing the enrolled client device and the non-enrolled client device via a personal area network and authorizing the transfer on one of the devices (e.g., selecting an “authorize transfer” button on user interfaces of the devices or providing biometric information for biometric authentication). As another example, using the sequence of interactions depicted inFIG.4, the interactions performed by the user may be limited to requesting a transfer token on the enrolled device and inputting the transfer token on the non-enrolled device. Furthermore, the transfer processes described herein do not necessitate backing up the enrollment information for the enrolled device (e.g., on the authentication system110) in order to transfer the enrollment information to the non-enrolled device. In contrast, conventional systems would require the user to perform more numerous and more complicated steps, such as logging into an account of the user associated with the conventional system and manually adjusting user settings to unenroll the enrolled client device and enroll the non-enrolled client device. As another example, conventional systems might require a user to contact administrators of the conventional system and for the administrators to perform tasks in order to facilitate enrolling the non-enrolled client device. 
In other embodiments than those shown inFIG.3orFIG.4, some or all of the steps may be performed by other entities or components. In addition, some embodiments may perform the steps in parallel, perform the steps in different orders, or perform different steps. EXEMPLARY COMPUTER ARCHITECTURE FIG.5is a block diagram illustrating physical components of a computer500used as part or all of authentication system110, the service provider120, or the client devices130, in accordance with an embodiment. Illustrated are at least one processor502coupled to a chipset504. Also coupled to the chipset504are a memory506, a storage device508, a graphics adapter512, and a network adapter516. A display518is coupled to the graphics adapter512. In one embodiment, the functionality of the chipset504is provided by a memory controller hub520and an I/O controller hub522. In another embodiment, the memory506is coupled directly to the processor502instead of the chipset504. The storage device508is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory506holds instructions and data used by the processor502. The graphics adapter512displays images and other information on the display518. The network adapter516couples the computer500to a local or wide area network. As is known in the art, a computer500can have different and/or other components than those shown inFIG.5. In addition, the computer500can lack certain illustrated components. In one embodiment, a computer500, such as a host or smartphone, may lack a graphics adapter512, and/or display518, as well as a keyboard510or external pointing device514. Moreover, the storage device508can be local and/or remote from the computer500(such as embodied within a storage area network (SAN)). As is known in the art, the computer500is adapted to execute computer program modules for providing functionality described herein. 
As used herein, the term “module” refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device508, loaded into the memory506, and executed by the processor502. OTHER CONSIDERATIONS The present invention has been described in particular detail with respect to one possible embodiment. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components and variables, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Also, the particular division of functionality between the various system components described herein is merely for purposes of example, and is not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component. Some portions of the above description present the features of the present invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules or by functional names, without loss of generality. 
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems. The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present invention is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references to specific languages are provided for purposes of enablement and best mode of the present invention. The present invention is well suited to a wide variety of computer network systems over numerous topologies. Within this field, the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet. As used herein, any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Similarly, use of “a” or “an” preceding an element or component is done merely for convenience. This description should be understood to mean that one or more of the element or component is present unless it is obvious that it is meant otherwise. Where values are described as “approximate” or “substantially” (or their derivatives), such values should be construed as accurate +/−10% unless another meaning is apparent from the context.
For example, “approximately ten” should be understood to mean “in a range from nine to eleven.” As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.
DETAILED DESCRIPTION As mentioned, many cryptographic primitives have been developed and are used to enable secure communications and to secure data at rest. Cryptographic primitives can be used by (e.g., included in, implemented by, leveraged by, etc.) secure protocols, applications, utilities, frameworks, operating system libraries, and the like. For example, protocols for secure communications include, but are not limited to, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Media Access Control Security (MACsec). For example, frameworks that include (e.g., implement, support, etc.) secure communications include, but are not limited to, Data Over Cable Service Interface Specification (DOCSIS), Operational Data Provisioning (ODP), and the Data Plane Development Kit (DPDK). IPsec provides secure communications between two devices communicating over an Internet Protocol (IP) network. IPsec can be used to authenticate and encrypt data packets transmitted using the IP protocol. SSL and TLS are transport layer protocols for secure communications. MACsec, defined by the IEEE standard 802.1AE, provides point-to-point security on Ethernet links and can be used in combination with other security protocols, such as IPsec and SSL, to provide end-to-end network security. DOCSIS, which can be used for two-way communication of IP data packets between a computer network and cable television network, includes a security library that provides a framework for managing and provisioning security protocol operations. The security operations can be offloaded to hardware-based devices (e.g., a cryptographic processor). ODP is a framework for operational analytics for decision making in operative business processes and for data extraction and replication.
DPDK defines Application Programming Interfaces (APIs) that support different cryptographic primitives, including cipher, authentication, chained cipher/authentication and authenticated encryption with associated data (AEAD) symmetric and asymmetric cryptographic operations. At the start of a secure communications session (such as using IPsec, TLS, or SSL), communicating peer devices (e.g., applications, frameworks, or utilities therein) may determine (such as during a handshake or a call set-up process) the cipher suite to be used. As such, a security context can be set during (e.g., by, as a result of, etc.) the handshake process. A cipher is an algorithm (e.g., a set of steps) for performing a cryptographic primitive (e.g., function, etc.). Cryptographic primitives include encryption, decryption, hashing, or digital signing. Other uses of cryptographic primitives are possible and the disclosure is not limited to those mentioned herein. While not specifically detailed herein, a person skilled in the art recognizes that the disclosure herein also applies to data-at-rest security (such as encryption). For example, operating systems may use encryption to keep passwords secret, conceal some parts of the system, or to ensure that updates and patches are from trusted sources (such as the maker of the system being updated or patched); and as an example of securing data at rest, an entire drive (i.e., the data therein) may be encrypted and the correct credentials may be required to access (e.g., decrypt and read) the data. In the context of development (e.g., application development), the application developer may choose a cipher suite or a set of crypto algorithms to be used, such as for secure communications or the application developer may be an implementor of a particular cryptographic algorithm.
For example, the application may use or may be a protocol, a library, a framework, a utility, or the like, that may use or implement several cryptographic algorithms and the application developer may select one or more of the several cryptographic algorithms for use in the application. Cryptographic processing can consume significant computational resources including, but not limited to, compute time, clock time, memory, and power (battery or electricity). A server (e.g., an e-commerce server, an RTC server, etc.) may, at any point in time, have thousands of concurrently active secure communication sessions. As another example, a user device with limited resources (e.g., a mobile phone, a tablet, a wearable device, a personal computer, or the like) may participate in one or more simultaneous secure sessions. A specialized hardware processor (referred to herein as a cryptographic processor) can be used to reduce computational resource utilization of cryptographic primitives. Cryptographic primitives can be offloaded to the cryptographic processor. For example, the cryptographic processor may be used for random number generation (which can be used with respect to digital keys), hash processing (which can be used for message authentication), and/or stream and/or block encryption or decryption. A protocol, an application, a utility, a framework, an operating system library, or any other type of program that offloads execution (e.g., performance, etc.) of a cryptographic primitive to a cryptographic processor is referred to herein as a cryptographic-primitive requester. The cryptographic processor can support many different cryptographic primitives. Different cryptographic primitives may be associated with one or more instructions of an instruction set of the cryptographic processor. To offload processing of cryptographic primitives to the cryptographic processor, a developer of a cryptographic-primitive requester may have to know and use (e.g., configure, etc.) 
the respective appropriate instructions of the instruction set and the proper use (e.g., operand set up, etc.) of the appropriate instructions including framing (e.g., setting up, configuring, etc.) the contexts for the different instructions. As such, it can be onerous for an application developer to draft program instructions to offload cryptographic primitive processing of a cryptographic-primitive requester to the cryptographic processor. Additionally, experience gained by the developer in using the cryptographic processor with one set of cryptographic primitives may not be portable (e.g., re-usable) with other cryptographic primitives. As such, a generic cryptography wrapper as described herein can be used to simplify and streamline cryptographic-primitive requester development, therewith providing flexibility, simplification, error avoidance, and experience portability to other domains when using the cryptographic processor. The various cryptographic primitives (e.g., encryption and authentication primitives) of different domain protocols (e.g., IPsec, TLS, SSL, DOCSIS, etc.) that may be supported (e.g., implemented by, etc.) by the cryptographic processor can be used under (e.g., via, through, etc.) the generic cryptography wrapper. The generic cryptography wrapper can be unaware of (e.g., agnostic to, etc.) the application-level protocol of an application that issues requests to the cryptographic processor via the generic cryptography wrapper to perform cryptographic primitives. The generic cryptography wrapper can process and meet the requirements of the protocol-specific cryptography functionality with a generalized approach. To illustrate, and without limitations, an application that may be using a particular protocol (e.g., SSL, TLS, etc.) may issue requests to perform cryptographic primitives to the generic cryptography wrapper. The requests need not include details of the particular protocol.
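As an illustration of this protocol-agnostic request style, the following Python sketch models a generic wrapper whose dispatch depends only on a generic context (primitive name and key), never on the caller's protocol. The class name, the `submit` API, and the context keys are hypothetical, and software hashing stands in for the hardware offload:

```python
import hashlib
import hmac

# Toy model of a generic cryptography wrapper: dispatch is driven entirely
# by a generic context, and the wrapper never learns which protocol
# (SSL, TLS, IPsec, ...) the requester implements. All names here are
# illustrative assumptions, not the actual wrapper API.
class GenericCryptoWrapper:
    def submit(self, context: dict, input_data: bytes) -> bytes:
        primitive = context["primitive"]
        if primitive == "hmac-sha256":
            return hmac.new(context["key"], input_data, hashlib.sha256).digest()
        if primitive == "sha256":
            return hashlib.sha256(input_data).digest()
        raise ValueError(f"unsupported primitive: {primitive}")

# Requesters using different protocols issue identical, protocol-free
# requests; the wrapper needs no per-protocol configuration.
wrapper = GenericCryptoWrapper()
tls_mac = wrapper.submit({"primitive": "hmac-sha256", "key": b"k1"}, b"tls record")
ipsec_mac = wrapper.submit({"primitive": "hmac-sha256", "key": b"k2"}, b"esp payload")
```

Adding support for a new framework in this model means accepting new context keys, not teaching the wrapper a new protocol.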
That is, for example, the generic cryptography wrapper may not be configured for the particular protocol in order to perform the cryptographic primitives requested. As the generic cryptography wrapper can be agnostic to specific protocols, cryptographic acceleration of additional frameworks may be supported without change in the generic cryptography wrapper. Additional frameworks can be supported by providing wrapper-specific parameters inline (such as by configuring context and input parameters as described herein). To illustrate, and without limitations, a current iteration of a cryptography processor with which the generic cryptography wrapper is used may not support hardware security modules (HSMs) (such as for safeguarding and managing encryption and decryption digital keys). When a next iteration of the cryptography processor supports HSMs, the generic cryptography wrapper need not be modified. Rather, requests to the cryptography processor via the generic cryptography wrapper to use HSMs can be via request configuration of the generic cryptography wrapper. FIG.1is a diagram of an example of an environment100where a generic cryptography wrapper can be used. InFIG.1, the environment100can include multiple apparatuses and networks, such as an apparatus102, an apparatus104, and a network106. The apparatuses can be implemented by any configuration of one or more computers, such as a microcomputer, a mainframe computer, a supercomputer, a general-purpose computer, a special-purpose/dedicated computer, an integrated computer, a database computer, a remote server computer, a personal computer, a laptop computer, a tablet computer, a cell phone, a personal data assistant (PDA), a wearable computing device, or a computing service provided by a computing service provider (e.g., a web host or a cloud service provider).
In some implementations, the computing device can be implemented in the form of multiple groups of computers that are at different geographic locations and can communicate with one another, such as by way of a network. While certain operations can be shared by multiple computers, in some implementations, different computers can be assigned to different operations. In some implementations, the environment100can be implemented using general-purpose computers with a computer program that, when executed, performs any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, special-purpose computers/processors including specialized hardware can be utilized for carrying out any of the methods, algorithms, or instructions described herein. The apparatus102can include a processor108and a memory110. The processor108can be any type of device or devices capable of manipulating or processing data. The terms “signal,” “data,” and “information” are used interchangeably. The processor108can include any number of any combination of a central processor (e.g., a central processing unit or CPU), a graphics processor (e.g., a graphics processing unit or GPU), an intellectual property (IP) core, an application-specific integrated circuit (ASIC), a programmable logic array (e.g., a field-programmable gate array or FPGA), an optical processor, a programmable logic controller, a microcontroller, a microprocessor, a digital signal processor, or any other suitable circuit. The processor108can also be distributed across multiple machines (e.g., each machine or device having one or more processors) that can be coupled directly or connected via a network. The memory110can be any transitory or non-transitory device capable of storing instructions and data that can be accessed by the processor (e.g., via a bus).
The memory110herein can include any number of any combination of a random-access memory (RAM), a read-only memory (ROM), a firmware, an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a secure digital (SD) card, a memory stick, a compact flash (CF) card, or any suitable type of storage device. The memory110can also be distributed across multiple machines, such as a network-based memory or a cloud-based memory. The memory110can include data, an operating system, and one or more applications. The data can include any data for processing (e.g., an audio stream, a video stream, or a multimedia stream). An application can include instructions executable by the processor108to generate control signals for performing functions of the methods or processes disclosed herein. In some implementations, the apparatus102can further include a secondary storage device (e.g., an external storage device). The secondary storage device can provide additional memory when high processing needs exist. The secondary storage device can be any suitable non-transitory computer-readable medium, such as a ROM, an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a secure digital (SD) card, a memory stick, or a compact flash (CF) card. Further, the secondary storage device can be a component of the apparatus102or can be a shared device accessible by multiple apparatuses via a network. In some implementations, the application in the memory110can be stored in whole or in part in the secondary storage device and loaded into the memory110as needed for processing. The apparatus102can further include an input/output (I/O) device (i.e., I/O device112). The I/O device112can be any type of input device, such as a keyboard, a numerical keypad, a mouse, a trackball, a microphone, a touch-sensitive device (e.g., a touchscreen), a sensor, or a gesture-sensitive input device.
The I/O device112can be any output device capable of transmitting a visual, acoustic, or tactile signal to a user, such as a display, a touch-sensitive device (e.g., a touchscreen), a speaker, an earphone, a light-emitting diode (LED) indicator, or a vibration motor. For example, the I/O device112can be a display to display a rendering of graphics data, such as a liquid crystal display (LCD), a cathode-ray tube (CRT), an LED display, or an organic light-emitting diode (OLED) display. In some cases, an output device can also function as an input device, such as a touchscreen. The apparatus102can further include a communication device114to communicate with another apparatus via a network106. The network106can be any type of communications network in any combination, such as a wireless network or a wired network. The wireless network can include, for example, a Wi-Fi network, a Bluetooth network, an infrared network, a near-field communications (NFC) network, or a cellular data network. The wired network can include, for example, an Ethernet network. The network106can be a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or the Internet. The network106can include multiple server computers (or “servers” for simplicity). The servers can interconnect with each other. One or more of the servers can also connect to end-user apparatuses, such as the apparatus102and the apparatus104. The communication device114can include any number of any combination of devices for sending and receiving data, such as a transponder/transceiver device, a modem, a router, a gateway, a wired network adapter, a wireless network adapter, a Bluetooth adapter, an infrared adapter, an NFC adapter, or a cellular antenna. The apparatus102can include a cryptographic processor109and a generic cryptographic wrapper111.
The apparatus102can include one or more cryptographic-primitive requesters that offload performance of one or more cryptographic primitives to the cryptographic processor109. In an example, the generic cryptographic wrapper111and the cryptographic processor109can be integrated into one unit. In an example, the generic cryptographic wrapper111can be a library or the like that provides an Application Programming Interface (API) that a cryptographic-primitive requester can use to offload the execution of a cryptographic primitive to the cryptographic processor109. The API can be an instruction of the instruction set of the cryptographic processor109. The cryptographic-primitive requester can use the generic cryptographic wrapper111to configure context data and input data to be used by the cryptographic processor109to complete (e.g., execute, carry out, perform, etc.) cryptographic primitives indicated by the configuration. The cryptographic processor109can include microcode (for example, coded in an assembly language of the cryptographic processor109) to parse (e.g., read, interpret, etc.) the context data and input data as described herein to perform the cryptographic primitives. The output of a cryptographic primitive (e.g., encrypted data, a MAC value, etc.) can be provided to the cryptographic-primitive requester. Several techniques can be available for providing the output of the cryptographic primitive to the cryptographic-primitive requester. For example, the cryptographic-primitive requester can use a poll mode. In the poll mode, the cryptographic-primitive requester can continually poll a completion memory location (i.e., where the cryptographic processor writes a status indicating completion of the cryptographic primitive). In an example, an interrupt can be raised by the cryptographic processor to indicate completion of the cryptographic primitive.
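The poll-mode completion flow can be sketched as follows, with a thread standing in for the cryptographic processor. The status codes, the completion structure, and the use of HMAC-SHA256 as the offloaded primitive are illustrative assumptions, not the actual completion-word format:

```python
import hashlib
import hmac
import threading
import time

# Minimal poll-mode sketch: the requester spins on a shared completion
# word that the (simulated) cryptographic processor writes when the
# offloaded primitive finishes.
PENDING, DONE = 0, 1
completion = {"status": PENDING, "output": b""}

def simulated_processor(key: bytes, data: bytes) -> None:
    completion["output"] = hmac.new(key, data, hashlib.sha256).digest()
    completion["status"] = DONE  # written last, like a completion word

worker = threading.Thread(target=simulated_processor, args=(b"key", b"message"))
worker.start()

while completion["status"] != DONE:  # poll mode: spin on the completion word
    time.sleep(0.001)

mac = completion["output"]  # read output only after completion is signaled
worker.join()
```

An interrupt-driven variant would replace the spin loop with a callback raised by the processor, as the description notes.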
Responsive to the completion of the cryptographic primitive, the cryptographic-primitive requester can read a status indicator (e.g., an error condition or successful completion) and/or read the output from an output memory space, as described herein. The apparatus102can securely communicate with the apparatus104via the network106. That the apparatus102securely communicates with the apparatus104can include that a cryptographic-primitive requester of the apparatus102(such as an application, a utility, a library, or the like stored in the memory110or an HSM (not shown) of the apparatus102) offloads cryptographic primitives to the cryptographic processor109via the generic cryptographic wrapper111to effectuate the secure communication. The apparatus104can have a configuration that may be similar to that of the apparatus102. The apparatus102can communicate with the apparatus104via the network106. The apparatus102and the apparatus104can also communicate with other apparatuses (not shown) connected to the network106. It should also be noted that parts or components of the apparatus102and the apparatus104and the environment100can include elements not limited to those shown inFIG.1. Without departing from the scope of this disclosure, the apparatus102and the apparatus104and the environment100can include more or fewer parts, components, and hardware or software modules for performing various functions in addition to or related to offloading cryptographic primitives to a cryptographic processor using a generic cryptographic wrapper. FIG.2is a diagram of an example200of using a cryptographic wrapper. The example200includes a cryptographic processor202, which can be, or can be similar to, the cryptographic processor109ofFIG.1; and a generic cryptographic wrapper204, which can be, or can be similar to, the generic cryptographic wrapper111ofFIG.1. One or more cryptographic-primitive requesters can offload cryptographic primitives to the cryptographic processor202.
Offloading a cryptographic primitive to the cryptographic processor202can mean causing the cryptographic processor202to perform the cryptographic primitive. The one or more cryptographic-primitive requesters can offload cryptographic primitives via requests to the generic cryptographic wrapper204. The one or more cryptographic-primitive requesters shown inFIG.2include an application206, a utility or library208, and a framework210. However, the disclosure is not so limited and other types of cryptographic-primitive requesters are possible. By way of illustrative examples, the utility or library208can be an implementation of the open source OpenSSL that offloads cryptographic primitives to a hardware cryptographic processor, such as the cryptographic processor202. As is known, OpenSSL is a general-purpose cryptography library that provides an open source implementation of the SSL and the TLS protocols; and the application206can be an RTC video application or an email application or some other application that may encrypt data before transmission or decrypt received data. In some examples, the application206can, or can additionally, use the utility or library208or the framework210. As such, cryptographic primitives may be performed (e.g., offloaded to the cryptographic processor202) by the utility or library208or the framework210responsive to the application206. During the development of a cryptographic-primitive requester (such as of the application206, the utility or library208, or the framework210), the generic cryptographic wrapper204may be, or may have a corresponding, development component (e.g., a library) or the like that can be used by the developer. The developer can include, in the source instructions (e.g., source code, etc.) of the cryptographic-primitive requester, instructions to the development component.
The developer may provide configuration information to the instructions to the development component according to the application needs of the cryptographic primitives to be offloaded. The source instructions, including the instructions to the development component, may be compiled, linked, assembled, or the like, such that, when executed, the instructions to the development component cause the cryptographic primitives described in the instructions according to the configuration information to be offloaded to the cryptographic processor202. FIG.3is a diagram of an example300of use cases of using a cryptographic wrapper. The example300includes a cryptographic processor302, which can be the cryptographic processor202ofFIG.2or the cryptographic processor109ofFIG.1; and a cryptographic wrapper304, which can be the generic cryptographic wrapper111ofFIG.1or the generic cryptographic wrapper204ofFIG.2. The example300includes several examples of cryptographic-primitive requesters. However, other cryptographic-primitive requesters are possible. The cryptographic-primitive requesters of the example300include a wireless application306, an application308that uses SSL, an application310that uses IPsec, an application312that uses HSM, an ODP framework implementation314, and a DPDK framework implementation316. The cryptographic-primitive requesters of the example300offload at least some cryptographic primitives to the cryptographic processor302by making requests to the cryptographic wrapper304. Responsive to the requests to the cryptographic wrapper304, the cryptographic processor302performs the cryptographic primitives. FIG.4is a flowchart of a technique400for performing cryptographic primitives. Performing a cryptographic primitive can include receiving402an instruction to perform a cryptographic primitive and performing404the cryptographic primitive and storing an output of the cryptographic primitive in an output data structure.
The technique400can be implemented, in part or in whole, by a generic cryptographic wrapper, such as the generic cryptographic wrapper111ofFIG.1, the generic cryptographic wrapper204ofFIG.2, or the cryptographic wrapper304ofFIG.3. The technique400can be implemented, in part or in whole, by a cryptographic processor, such as the cryptographic processor109ofFIG.1, the cryptographic processor202ofFIG.2, or the cryptographic processor302ofFIG.3. The technique400can be implemented by an apparatus, such as the apparatus102ofFIG.1. The technique400can be implemented, in whole or in part, as a software program that may be executed by the apparatus. The software program can include machine-readable instructions that may be stored in a memory such as the memory110, and that, when executed by a processor, such as the processor108or the cryptographic processor109ofFIG.1, may cause the apparatus to perform at least portions of the technique400. At402, the technique400receives an instruction to perform a cryptographic primitive. In an example, the instruction can be an instruction of an instruction set of the cryptographic processor. In an example, the instruction can be an instruction of the generic cryptographic wrapper. In an example, the instruction can be translated into one or more instructions of an instruction set of the cryptographic processor. The translation into one or more instructions of the cryptographic processor can depend on operand values (e.g., a configuration) of the instruction. For example, the cryptographic processor can read the configuration information to determine the cryptographic primitive to be performed and to perform the cryptographic primitive. As described herein, the configuration information can include at least one of contextual data or input data. 
The cryptographic primitive can be an instruction to perform an encryption of input, an instruction to perform hash-based message authentication code (HMAC) authentication of the input data, other instruction, or a combination thereof. The instruction can include one or more operands. The instruction and the one or more operands can be as described with respect toFIG.5. At404, the technique400performs the cryptographic primitive and stores an output of the cryptographic primitive in an output data structure. The output data structure can be as described with respect toFIG.9. FIG.5is a diagram of an example500of a layout of an instruction and operands. In an example, the instruction can be given by a combination of a major opcode502and a minor opcode504. The example500includes a context operand506, an input data operand508, and an output data operand510. As mentioned above, a security context can be set up (such as between communicating peers) during a handshake process. The security context can be used to configure at least one of a minor opcode504, the context operand506, the input data operand508, or a combination thereof. In an example, the cryptographic-primitive requester can set (e.g., configure, etc.) one or more of the fields of the minor opcode504, such as on a per-request basis. In an example, the context operand506can be used to provide configurations, which may not be changeable in a session. For example, the same context provided by the context operand506can be used for encryption or decryption by providing different values in the minor opcode504. FIG.6is a table600that illustrates semantics of bits of the major opcode502and the minor opcode504. The minor opcode504can be configured by a cryptographic-primitive requester.
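A rough Python model of the FIG.5 layout is shown below. The opcode values, the dataclass field names, and the dict used to simulate the requester's address space are assumptions for illustration; only the overall shape (major/minor opcode plus context, input data, and output data operands holding memory addresses) follows the description:

```python
from dataclasses import dataclass

# Simulated requester address space: address -> bytes. A real requester
# would allocate buffers in its own memory; this dict is a stand-in.
address_space = {}

@dataclass
class CryptoRequest:
    major_opcode: int   # major opcode 502 (value here is hypothetical)
    minor_opcode: int   # minor opcode 504, set per request
    context_addr: int   # context operand 506: session-stable configuration
    input_addr: int     # input data operand 508: data to process
    output_addr: int    # output data operand 510: where results are written

# The requester allocates context and input buffers and passes their
# addresses; processor and wrapper both dereference them.
address_space[0x1000] = b"\x01"       # context blob (opaque in this sketch)
address_space[0x2000] = b"plaintext"  # input data
request = CryptoRequest(major_opcode=0x40, minor_opcode=0b0000,
                        context_addr=0x1000, input_addr=0x2000,
                        output_addr=0x3000)
```

The same context address can be reused across requests while the per-request minor opcode varies, mirroring the encryption/decryption reuse described above.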
It is noted that in the description herein, and unless the context indicates otherwise, statements such as “field X has a value Y,” where the field is a context data field or an input data field, should be understood to mean that the cryptographic-primitive requester configured the field X with the value Y or that the cryptographic-primitive requester set the field X to the value Y. A combination of values of the major opcode502and the minor opcode504can indicate the cryptographic primitive to be performed. In an example, the cryptographic primitive can be configured with (or, equivalently, the generic cryptographic wrapper can be configured so that the cryptographic processor can perform) one or more cryptographic operations related to one or both of message authentication or encryption. For example, the cryptographic primitive can be configured with one of an encrypt-then-authenticate primitive (in which an authentication operation is performed after an encryption operation has been performed), an authenticate-then-encrypt primitive (in which an encryption operation is performed after an authentication operation has been performed), an authenticate only primitive, an encrypt only primitive, more primitives, fewer primitives, other primitives, or a combination thereof. A field602(e.g., bitfield4of the minor opcode504ofFIG.5) of the table600indicates a mode of encryption. If the field602has a first value (e.g., 0), then the mode of encryption is to encrypt a message first and then perform an authentication primitive, which may be referred to as encrypt-then-authenticate, as described below. If the field602is configured to have a second value (e.g., 1), then the mode of encryption is to authenticate a message first and then perform an encryption primitive, which may be referred to as authenticate-then-encrypt, as described below.
The cryptographic-primitive requester can set the value of the field602based on, for example, a protocol that is used by the cryptographic-primitive requester. For example, if the cryptographic-primitive requester is, or is using, the IPsec stack, then the cryptographic-primitive requester can set the field602to the first value. For example, if the cryptographic-primitive requester is, or is using, the SSL stack, then the cryptographic-primitive requester can set the field602to the second value. Encrypting a message (e.g., a cleartext or plaintext) can include authenticating the message. Message authentication or message origin authentication can be used to ensure that a message has not been modified while in transit and that the receiving party can verify the source (e.g., the identity of the sender) of the message. Authenticating a message can include performing a hash function to obtain a message authentication code (MAC). Encrypting and hashing can be separate steps. However, some encryption modes (such as Authenticated Encryption with Associated Data (AEAD) algorithms) include a MAC therewith, combining the encryption and authentication steps. As is known, the encrypt-then-authenticate (also referred to as encrypt-then-MAC) primitive can encrypt a plaintext to obtain a ciphertext, compute a message authentication code (MAC) on the ciphertext, and append it to the ciphertext. The initialization vector (IV) and an encryption method identifier can be included in the MACed data. As is also known, the authenticate-then-encrypt (also referred to as MAC-then-encrypt) primitive can compute a MAC on a plaintext, append the MAC to the plaintext, and then encrypt the combination of the cleartext with the MAC. As is also known, an encrypt-and-authenticate primitive can compute a MAC on a plaintext, encrypt the plaintext to obtain a ciphertext, and then append the MAC at the end of the ciphertext.
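The ordering of the encryption and MAC steps can be illustrated with a short sketch. The MAC step uses Python's standard hmac module; the "cipher" is a toy SHA-256 XOR keystream that merely stands in for a real cipher (it is not secure), so that only the ordering of the two steps is of interest:

```python
import hmac
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream derived from SHA-256 -- illustrative only, NOT secure.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR keystream: decryption is the same operation

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt first, then compute a MAC on the ciphertext and append it.
    ct = toy_encrypt(enc_key, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def mac_then_encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Compute a MAC on the plaintext, append it, then encrypt the combination.
    tag = hmac.new(mac_key, plaintext, hashlib.sha256).digest()
    return toy_encrypt(enc_key, plaintext + tag)

ek, mk, msg = b"enc-key", b"mac-key", b"hello, world"

# Encrypt-then-MAC: the receiver verifies the tag before decrypting.
out = encrypt_then_mac(ek, mk, msg)
ct, tag = out[:-32], out[-32:]
assert hmac.compare_digest(tag, hmac.new(mk, ct, hashlib.sha256).digest())
assert toy_decrypt(ek, ct) == msg

# MAC-then-encrypt: the receiver decrypts first, then verifies the inner tag.
inner = toy_decrypt(ek, mac_then_encrypt(ek, mk, msg))
assert inner[:-32] == msg
assert hmac.compare_digest(inner[-32:], hmac.new(mk, msg, hashlib.sha256).digest())
```

The verification order differs between the two modes: encrypt-then-MAC lets the receiver reject a forged message before any decryption work, whereas MAC-then-encrypt must decrypt before the tag can be checked.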
A field604, when configured to a certain value (e.g., 1), indicates that 32-bit cyclic redundancy check (CRC32) is supported. For example, if the field604is set to 1, then a CRC-32 operation can be performed for checksum with encryption or decryption, as the case may be. A field606, when configured to a certain value (e.g., 1), indicates that DOCSIS is supported. That is, the DOCSIS encryption with the DOCSIS padding pattern is to be used. A field608(e.g., bit 0 of the minor opcode504ofFIG.5) of the table600indicates, in combination with other fields, which of authentication, encryption, or decryption is to be performed. For example, if the field608is set to a first value (e.g., 0), then encryption and/or authentication (e.g., encryption+MAC) are to be performed. For example, if the field608is set to a second value (e.g., 1), then the authentication and/or decryption (MAC+decryption) are to be performed. More specifically, and in an example, if a field702ofFIG.7is set to NULL, then the first value of the field608can indicate that the cryptographic primitive is authentication only; if a field708ofFIG.7is set to NULL, then the first value of the field608can indicate encryption only; and if the field608is set to the second value and the field602is set to the second value, then the cryptographic primitive can be to perform SSL record processing decryption. Referring toFIG.5again, in an example, each of the context operand506, the input data operand508, and the output data operand510can be or can include respective memory addresses of corresponding data structures in an address space of the cryptographic-primitive requester. As such, the cryptographic-primitive requester can allocate memory in its address space for the context, the input data, and the output data of the cryptographic primitive. The cryptographic processor and the generic cryptography wrapper can have access to the memory indicated by the respective memory addresses.
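The primitive-selection logic of the field608described above can be sketched as a small decode function; the bit encodings below are assumptions used only for illustration:

```python
# Hypothetical decode of the primitive selection: field 608 (bit 0 of the
# minor opcode) combined with whether the cipher-type field (702) and the
# hash-select field (708) are NULL. Encodings are illustrative assumptions.
NULL = 0b0000

def select_primitive(field_608: int, cipher_type: int, hash_type: int) -> str:
    if field_608 == 0:
        if cipher_type == NULL:
            return "authenticate-only"   # no cipher configured, MAC only
        if hash_type == NULL:
            return "encrypt-only"        # no hash configured, cipher only
        return "encrypt+MAC"
    return "MAC+decrypt"

assert select_primitive(0, NULL, 0b0010) == "authenticate-only"
assert select_primitive(0, 0b0011, NULL) == "encrypt-only"
assert select_primitive(0, 0b0011, 0b0010) == "encrypt+MAC"
assert select_primitive(1, 0b0011, 0b0010) == "MAC+decrypt"
```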
The cryptographic processor and the generic cryptography wrapper can have write access to at least the memory indicated by the output data operand510. Each of the memory spaces indicated by the context operand506, the input data operand508, and the output data operand510can have defined semantics, structure, and size (such as in bytes). The semantics of a field can change depending on values of one or more other fields. Examples of the context, input, and output memory spaces are described with respect toFIGS.7-9, respectively. FIG.7is an example of a context memory space700. The context memory space700can be configured (such as by a cryptographic-primitive requester) to include initialization vectors (IVs), encryption keys, decryption keys, processing options, more options, fewer options, or a combination thereof. While one configuration and a set of semantics are described herein, other configurations or semantics are possible. Some of the fields may have overloaded semantics. That is, some of the fields may have different semantics depending on the values of some other fields. That is, a field may contain first data that is used in a first way when one or more other fields contains second data; and the field may contain third data that is used in a second way when one or more other fields contains fourth data. Each of the fields can be sized (such as in bits, bytes, or words) to fit the data to be stored therein. At least some of the context information may be optionally provided. For example, some cryptographic primitives can use initialization vectors to prevent identical sequences of text from producing the same ciphertext when encrypted. As such, an IV may not be configured in the context memory space700depending on the cryptographic primitive; rather, the IV may be provided on a per-request basis as input data.
As such, and as further described with respect toFIG.8, the IVs can be optionally provided as input data or as contextual data. A field702can indicate the cipher type to be used. A table730illustrates possible values for the field702. The field702can be configured to be the value 0000 (i.e., the NULL value) to indicate that encryption is not to be performed by the cryptographic processor but that a MAC is to be obtained (e.g., calculated, etc.) by the cryptographic processor. The field702can be configured to be a first value (e.g., the bit string 0001) to indicate that the Triple Data Encryption Algorithm (also referred to as Triple DES or 3DES) in cipher block chaining (CBC) mode is to be used for encrypting data in the input memory space. The field702can be configured to be a second value (e.g., the bit string 0011) to indicate that the Advanced Encryption Standard (AES) in cipher block chaining (CBC) mode is to be used for encrypting data in the input memory space. And so on. In the case that the cipher type is an AES cipher type (e.g., AES-CBC, AES-ECB, AES-CFB, AES-CTR, etc.), a field704can be used to indicate a length of the AES key. As is known, AES allows for three different key lengths: 128, 192, or 256 bits. As such, values 01, 10, and 11 of the field704can be used to indicate a key length of 128, 192, and 256 bits, respectively, as shown in a table732. However, other values of the field704are possible. A field706can be used to indicate whether the encryption IV is provided as context data (i.e., in the memory space indicated by the context operand506) or as input data (i.e., in the memory space indicated by the input data operand508). For example, if the field706is configured to be (e.g., is set to, etc.)
a first value (e.g., 0), then the cryptographic processor can read the encryption IV from the context data; and if the field706is configured to be a second value (e.g., 1), then the cryptographic processor can read the encryption IV from the input data in cases where IVs are provided by cryptographic-primitive requesters (such as on a per-request basis). A field708can be used to indicate a hash function or a message authentication code generation cipher algorithm. A table734illustrates an example of values of the field708and the corresponding hash functions to use. For example, if the field708is configured with a first value (e.g., 0000), then message authentication is not to be performed by the cryptographic processor; if the field708is configured with a second value (e.g., 0001), then the MD5 message-digest hash algorithm, which produces a 128-bit hash value, is to be performed; if the field708is configured with a third value (e.g., 0010), then the Secure Hash Algorithm 1 (SHA1), which is a cryptographic hash function that produces a 160-bit hash value (known as a message digest) from an input, is to be performed; and so on. In a case where the field702is configured with a cipher type of ChaCha20 (e.g., the field702is set to a value 1100 as shown in the table730), then a field710can be configured with a first value (e.g., 0) or a second value (e.g., 1). The first value can indicate that a keystream is to be calculated for every packet. The second value can indicate that the keystream is to be obtained from another field of the context memory space700(i.e., a field722). A first value (e.g., 1) of a field712can indicate that the cryptographic processor is to read a key used by the authentication algorithm from the input memory space indicated by the input data operand508; and a second value (e.g., 0) of the field712can indicate that the key is to be read from the context memory space indicated by the context operand506.
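The values that the description spells out for the tables 730, 732, and 734 can be collected into lookup tables; entries the text does not enumerate are omitted here:

```python
# Partial lookup tables reconstructed from the values given in the text
# for tables 730, 732, and 734. Only the entries the description spells
# out are included; the remaining encodings are unspecified here.
CIPHER_TYPE = {            # field 702 (table 730)
    0b0000: "NULL (MAC only, no encryption)",
    0b0001: "3DES-CBC",
    0b0011: "AES-CBC",
    0b1000: "AES-XTS",
    0b1100: "ChaCha20",
}
AES_KEY_BITS = {           # field 704 (table 732)
    0b01: 128,
    0b10: 192,
    0b11: 256,
}
HASH_SELECT = {            # field 708 (table 734)
    0b0000: "no authentication",
    0b0001: "MD5",
    0b0010: "SHA1",
    0b1001: "AES-CMAC",
}

assert AES_KEY_BITS[0b10] == 192
assert CIPHER_TYPE[0b1100] == "ChaCha20"
assert HASH_SELECT[0b0010] == "SHA1"
```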
To illustrate, the field708may be set to a value 1001 indicating that AES with the cipher-based MAC (CMAC) is to be performed. CMAC is a block cipher-based message authentication code algorithm that can be used to provide authentication and integrity of binary data. CMAC uses a key, which can be different from the key used by the encryption method (e.g., AES). As such, the field712can be configured to direct the cryptographic primitive to read the CMAC key either from the context data or from the input data. In the case of HMAC, inner padding (ipad) and outer padding (opad) values are used. In a first example, the context data may be configured to include the ipad and opad values. In another example, for at least some input data, the authentication keys can be used to obtain respective ipad and opad values. To reduce subsequent processing, the first time that the ipad and the opad values are calculated, they can be stored as context data. A field714can be used to indicate whether the ipad and opad values are provided as context data for use by the cryptographic processor; or whether the cryptographic processor is to calculate the ipad and opad for input data. As such, a first value (e.g., 1) of the field714can indicate that the cryptographic processor is to use the IV and keys to obtain the ipad and opad values; and a second value (e.g., 0) of the field714can indicate that the context data is configured to include the ipad and opad values to be used. A field716can be configured to indicate a length of the MAC to be calculated by the cryptographic processor. A field718can be configured to include the encryption key. In the case that the field702indicates an AES (except AES-XTS) or a DES cipher type, the field718includes the key to be used. In the case that the cipher type is AES-XTS (e.g., a value of 1000 as shown in the table730), multiple keys are derived from the received key, as is known.
The first key (e.g., KEY_1) can be stored in the field718and the second key (e.g., KEY_2) can be stored in another field (e.g., a field722). However, KEY_1 and KEY_2 can be stored in other fields. In the case that the field706indicates that the encryption IV is provided as context data, then a field720can include the encryption IV. In the case that the field702indicates that the AES-XTS cipher type is to be used, then the field720can include the tweak that is to be used along with the AES key. The field722can include a first, a second, or a third value depending on other field values. The first value can be the ipad value, as described with respect to the field714. The second value can be the second key (KEY_2) for the cipher AES-XTS, as described with respect to the field718. The third value can be a key handle (e.g., a memory address of the value of the key) in the case that the field702is configured (e.g., a value of 1100) to indicate the ChaCha20 cipher type and the field710is configured with the second value indicating that the keystream is to be obtained from a location indicated by the key handle. In some special scenarios, such as ChaCha20-Poly1305, the key stream size can be in kilobytes. In such cases, the field722can include a pointer to the key stream. The actual key stream may be in the memory space of the cryptographic-primitive requester. A field724can include a first value or a second value. The first value can be the opad value as described above. In a case that the field714indicates that the cryptographic processor is to use the IV and keys to obtain the ipad and opad values, the second value can be the key to be used by the authentication algorithm. FIG.8is an example of an input memory space800. The input memory space800can be configured by the cryptographic-primitive requester. The input memory space800can include a header area and a data field812. The header area can include fields802-810.
The data field812can be configured to include at least one of data to be encrypted, data to be authenticated, the initialization vector, the authentication key, other data, or a combination thereof. The field802can include a length (such as in bits, bytes, words, or the like) of first data, within the data field812, to be encrypted. The field804can include a length of second data, within the data field812, to be authenticated. In an example, the first data to be encrypted may be the same as the second data to be authenticated. The first data and the second data are referred to herein as plaintext. In an example, the first data to be encrypted may be different from the second data to be authenticated. To illustrate the use of the fields802and804, and without limitation, assume that the data field812is configured with the data of an IPsec packet. The IPsec packet includes a packet header and a packet payload. In IPsec, authentication starts from the packet header to include authentication of the header and the payload; and encryption is performed on the payload data. As such, the field802can be configured to be the length of the payload data and the field804can be configured to be the sum of the length of the packet header and the length of the packet payload. In some situations, some data (i.e., pass-through data) of the data field812may be passed through unmodified (e.g., as is) without being encrypted and/or authenticated. The field806indicates an offset within the data field812of the start of the data to be encrypted. The data before the offset is pass-through data. In the case that the field706ofFIG.7is configured (e.g., set to the second value) to indicate that the encryption IV can be read from the input data, then the field808can indicate an offset within the data field812of the IV. The field810can indicate the offset of the data within the data field812to be authenticated.
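The header fields of the input memory space can be sketched as follows. The patent does not specify field widths, so 32-bit little-endian fields are assumed here purely for illustration, and the IPsec-style split of the example above (authenticate header plus payload, encrypt only the payload) is reused:

```python
import struct

# Assumed layout of the FIG. 8 input-space header, five 32-bit fields:
# encrypt length (802), authenticate length (804), encrypt offset (806),
# IV offset (808), authenticate offset (810). Widths are illustrative.
HEADER = struct.Struct("<5I")

def build_input_space(enc_len, auth_len, enc_off, iv_off, auth_off, data):
    return HEADER.pack(enc_len, auth_len, enc_off, iv_off, auth_off) + data

def parse_input_space(blob):
    fields = HEADER.unpack_from(blob)
    return fields, blob[HEADER.size:]

# IPsec-style example: authenticate header+payload, encrypt only the payload.
header, payload = b"HDR!", b"secret payload"
data = header + payload
blob = build_input_space(len(payload), len(data), len(header), 0, 0, data)

(enc_len, auth_len, enc_off, _, auth_off), body = parse_input_space(blob)
assert body[enc_off:enc_off + enc_len] == payload   # data to encrypt
assert body[auth_off:auth_off + auth_len] == data   # data to authenticate
```

In this framing the pass-through region is simply the bytes of the data field that fall before the encryption offset, consistent with the offsets described above.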
The pass-through data can be determined using a difference between (e.g., between the addresses of) the authentication data start (i.e., offset) and start of the data for encrypt+HMAC. That is, the pass-through data can be obtained using the difference between encryption data start and start of the data for encryption only. The semantics of the data field812can be as follows. In a first case, no encryption is to be performed by the cryptographic processor (e.g., the field702ofFIG.7is set to the NULL value) and the authentication type is AES_CMAC (e.g., the field708is set to the value 1001 of table734ofFIG.7). As such, in the first case, the cryptographic processor performs only HMAC. In the first case, if the cryptographic processor determines that the field712is configured to indicate that the authentication key is in the data field812, then a number of bytes indicated by the field704ofFIG.7can be the bytes that the cryptographic processor reads (e.g., uses) as the authentication key. The bytes of the authentication key can be followed by the plaintext to be authenticated. If the field712is configured to indicate that the authentication key is in the context memory space, then the data field812includes only the plaintext. The cryptographic processor can read the plaintext, which can include pass-through data and the data to be authenticated (which may be referred to as additional authenticated data (AAD)). In a second case, the cryptographic processor is to perform encryption. In a first example of the second case, the field602ofFIG.6may be set such that the cryptographic processor is to authenticate-then-encrypt (e.g., the field602is set to 1). As such, the cryptographic processor obtains a MAC according to the MAC select algorithm indicated by the field708ofFIG.7. 
The cryptographic processor inserts the MAC (which is of a length indicated by the field716ofFIG.7) at the address indicated by the addition of the values of the fields804and810(e.g., value of the field804+value of the field810). The memory space of the data field812where the MAC is to be inserted by the cryptographic processor may be initialized, such as by the cryptographic-primitive requester, to zeros. As such, the data field812can include the plaintext data, followed by a memory space for the MAC, and followed by a padding area (in bytes). To illustrate, IPsec and SSL may perform different types of padding. As such, the cryptographic-primitive requester can configure portions of the data field812with the padding data. In a second example of the second case, the field602ofFIG.6may be set such that the cryptographic processor is to perform encryption-then-authentication (e.g., the field602is set to 0). In this case, the data field812can be configured to include the plaintext, which may include pass-through data, the encryption IV (if the field706ofFIG.7is configured to indicate that the encryption IV is provided as input data), the AAD, and any plaintext data to be encrypted. FIG.9is an example of an output memory space900. Corresponding to the first case described with respect toFIG.8(i.e., that no encryption is to be performed by the cryptographic processor and the authentication type is AES_CMAC), the output memory space900can include a field902that includes the MAC that is obtained (e.g., calculated, determined, etc.) by the cryptographic processor. As such, the cryptographic processor can write to the field902.
Corresponding to the second case described with respect toFIG.8, the output memory space900can include a field904that includes the pass-through data, which the cryptographic processor may copy from the input memory space, an optional field906, and a field908in which the cryptographic processor writes the encrypted data that is the output of the cryptographic primitive. The optional field906can include the encryption IV in the case that the field706ofFIG.7indicates that the encryption IV is provided as input data. The cryptographic processor can copy the encryption IV from the field720ofFIG.7. The field908can include encrypted data as described with respect to a table910. The table910describes how the cryptographic processor formats the contents of the encrypted data field (i.e., the field908) based on the cipher type (if applicable) as indicated in a column912. A row916illustrates that if the cryptographic primitive is such that the cryptographic processor is to perform authenticate-then-encrypt (i.e., in the case that the field602ofFIG.6is configured to have the second value), and the field702or the field708ofFIG.7, as the case may be, is configured to be any of AES-CBC, 3DES-CBC, AES-CTR, SHA1, SHA2, or MD5, then the cryptographic processor outputs in the field908any pass-through data, followed by the AAD, and followed by the encrypted data of a size given by the field802ofFIG.8. 
A row918illustrates that if the cryptographic primitive is such that the cryptographic processor is to perform encrypt-then-authenticate (i.e., in the case that the field602ofFIG.6is configured to have the first value), and the field702or the field708ofFIG.7, as the case may be, is configured to be any of AES-CBC without DOCSIS (without DOCSIS means that the field606ofFIG.6is configured to a value (e.g., 0) that indicates that DOCSIS is not supported) or AES-ECB, then the cryptographic processor outputs in the field908the encrypted data ROUNDUP16 (i.e., the encrypted data rounded up in size to the nearest multiple of 16 bytes) to a length given by the field802ofFIG.8. For brevity, explanations of the other rows of the table910are omitted as they are clear to a person skilled in the art. The cryptographic processor can set a completion code, such as in a register or a memory location or the like that the cryptographic-primitive requester can read to determine the status of the cryptographic primitive. Respective completion codes can be associated with the following conditions that the cryptographic processor can indicate: successful completion of the cryptographic primitive, invalid data length (such as in the case that the value of the field802ofFIG.8is less than 16 bytes for the cipher types AES-XTS and AES-CTS), invalid context length (such as in the case that the context operand506ofFIG.5is not 23 words), invalid cipher type (such as in the case that the field702ofFIG.7is set to an unsupported value, such as a value that is not listed in the table730ofFIG.7), invalid HMAC type (in the case that the field708is set to an unsupported value such as a value that is not listed in the table734ofFIG.7). Other conditions can be available. FIG.10is a flowchart of an example of a technique1000for using a cryptographic processor. The technique1000can be implemented, for example, as a software program that may be executed by an apparatus such as the apparatus102ofFIG.1.
The software program can be a cryptographic-primitive requester that can include machine-readable instructions that may be stored in a memory such as the memory110ofFIG.1, and that, when executed by a processor, such as the processor108ofFIG.1, may cause the computing device to perform the technique1000. The technique1000can be implemented using specialized hardware or firmware. Multiple processors, memories, or both, may be used. At1002, the technique1000establishes a secure communications session with a peer. For example, the apparatus102can establish a secure communications session with the apparatus104ofFIG.1. As described herein, establishing the secure communications session can include identifying at least one of an encryption cipher or a hashing cipher, and an encryption key, such as during a handshake process. At1004, the technique1000configures one or more data structures for a cryptographic primitive to be performed by the cryptographic processor. As described above, the one or more data structures can include (and the cryptographic-primitive requester can configure) an encryption cipher type (such as described with respect to the field702ofFIG.7), an encryption initialization vector source (such as described with respect to the field720ofFIG.7), a message authentication code cipher algorithm (such as described with respect to the field708ofFIG.7), and a mode of encryption relating to an order of performing an encryption operation and an authentication operation (such as described with respect to the field602ofFIG.6). At1006, the technique1000identifies plaintext data for transmission to the peer. The plaintext may be a packet that includes a payload to be transmitted to the peer. However, other plaintext is possible. The payload can be of any type of data including but not limited to text data, audio data, image data, video data, and the like. At1008, the technique1000configures an input data structure for the cryptographic primitive.
For example, the cryptographic-primitive requester can configure the input data structure by adding the plaintext data for transmission to the peer to the input data structure. At1010, the technique1000invokes an opcode that causes the cryptographic processor to perform the cryptographic primitive and place a ciphertext that is output by the cryptographic primitive in an output structure. At1012, the cryptographic-primitive requester transmits a secure message using the ciphertext to the peer. In an example, the cryptographic processor places a message authentication code in the output structure. In an example, and as described with respect toFIG.8, the one or more data structures can include a first length of data to be encrypted and a second length of data to be authenticated. For simplicity of explanation, the techniques400and1000are depicted and described as a series of blocks, steps, or operations. However, the blocks, steps, or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. The word “example” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” is not necessarily to be construed as being preferred or advantageous over other aspects or designs. Rather, use of the word “example” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise or clearly indicated otherwise by the context, the statement “X includes A or B” is intended to mean any of the natural inclusive permutations thereof.
That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more,” unless specified otherwise or clearly indicated by the context to be directed to a singular form. Moreover, use of the term “an implementation” or the term “one implementation” throughout this disclosure is not intended to mean the same implementation unless described as such. Implementations of the techniques400and1000(and the algorithms, methods, instructions, etc., stored thereon and/or executed thereby, including by techniques400and1000) can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, all or a portion of implementations of this disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. 
The above-described implementations and other aspects have been described in order to facilitate easy understanding of this disclosure and do not limit this disclosure. On the contrary, this disclosure is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation as is permitted under the law so as to encompass all such modifications and equivalent arrangements.
11943368

The features and advantages of the disclosed technologies will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

DETAILED DESCRIPTION

I. Introduction

The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments of the present invention. However, the scope of the present invention is not limited to these embodiments, but is instead defined by the appended claims. Thus, embodiments beyond those shown in the accompanying drawings, such as modified versions of the illustrated embodiments, may nevertheless be encompassed by the present invention. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art(s) to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Descriptors such as “first”, “second”, “third”, etc. are used to reference some elements discussed herein.
Such descriptors are used to facilitate the discussion of the example embodiments and do not indicate a required order of the referenced elements, unless an affirmative statement is made herein that such an order is required.

II. Example Embodiments

Example embodiments described herein are capable of provisioning a trusted execution environment (TEE) based on (e.g., based at least in part on) a chain of trust that includes a platform on which the TEE executes. A TEE is a secure area associated with a platform in a computing system. For example, a TEE may ensure that sensitive data is stored, processed, and protected in an isolated, trusted environment. In another example, a TEE may provide isolated, safe execution of authorized software. Accordingly, a TEE may provide end-to-end security by enforcing protected execution of authenticated code, confidentiality, authenticity, privacy, system integrity, and data access rights. Any suitable number (e.g., 1, 2, 3, or 4) of TEEs may be provisioned. A chain of trust may be established from each TEE to the platform on which an operating system that launched the TEE runs. For example, a client device may establish a first chain of trust from a first TEE to a platform on which an operating system that launched the first TEE runs; the first TEE may establish a second chain of trust from a second TEE to a platform on which an operating system that launched the second TEE runs; the second TEE may establish a third chain of trust from a third TEE to a platform on which an operating system that launched the third TEE runs, and so on. Any two or more TEEs may be launched by the same operating system or different operating systems running on the same platform or by different operating systems running on respective platforms. Once the chain of trust is established for a TEE, the TEE can be provisioned with information, including but not limited to policies, secret keys, secret data, and/or secret code.
Accordingly, the TEE can be customized with the information without other parties, such as a cloud provider, being able to know or manipulate the information. In accordance with the aforementioned example, the client device may provision the first TEE with first information; the first TEE may provision the second TEE with second information; the second TEE may provision the third TEE with third information, and so on. It will be recognized that the first, second, and third information may be the same or different. Example techniques described herein have a variety of benefits as compared to conventional techniques for provisioning TEEs. For instance, the example techniques may be capable of increasing security of a distributed computing system. For instance, example techniques may increase security of TEE(s) in the distributed computing system and information with which the TEE(s) are provisioned. The example techniques may be capable of provisioning a TEE with any suitable information (e.g., a customer's policies, keys, data, and/or code) in an untrusted environment (e.g., in view of an untrusted cloud service or a malicious attacker) based on (e.g., based at least in part on) trust in a platform on which an operating system that launches the TEE runs. The TEE may be customized with the information without other entities (e.g., a provider of the cloud service or the malicious attacker) being able to know or manipulate the information. The example embodiments may provide “opaque computing.” Opaque computing is a superset of “confidential computing” that conceals sensitive code and data from a cloud provider. “Confidential computing” provides encryption of data when the data is at rest and while the data is in use. Accordingly, by employing confidential computing, the data can be processed in a distributed computing system (e.g., a public or private cloud) with assurance that the data remains under customer control. 
By employing opaque computing, the cloud provider may charge customers for compute time, data storage, and network traffic, but the cloud provider has no access to plaintext of the code or data no matter where in the distributed computing system such code or data are stored. The goal of opaque computing is achieved when the cloud provider cannot be cryptographically compelled to disclose customers' code and data. Accordingly, confidential computing shields sensitive bits on a single machine, and opaque computing extends this paradigm to the entire distributed computing system and all the services that are offered by the distributed computing system to a customer. The example techniques may enable a customer of a cloud service to run publicly available code (e.g., open source code) in secret (e.g., in presence of a hostile cloud operator) and have the code enforce the customer's rules and policies. The example techniques may enable multiple customers of a cloud service to operate respective TEEs side-by-side on a common platform, and each customer may be unable to know or manipulate information with which the other customers' TEEs are provisioned. The example techniques may be capable of utilizing consensus algorithms (e.g., without modification) to establish chains of trust from TEEs to their platforms and to provision the TEEs with information to customize the TEEs in a distributed computing system. A consensus algorithm may utilize the same trust mechanism to establish trust in each successive TEE. The set of communications that is used to establish a chain of trust from a first TEE to its platform and to provision the first TEE may be the same as the set of communications that is used to establish a chain of trust from a second TEE to its platform and to provision the second TEE, which may be the same as the set of communications that is used to establish a chain of trust from a third TEE to its platform and to provision the third TEE, and so on.
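The uniform round of communications described above can be sketched in Python. This is an illustrative sketch only: the function and key names are assumptions, and a symmetric HMAC stands in for the platform signing key (the description elsewhere notes that symmetric key-based MACs may serve as proof-of-authenticity of a report).

```python
import hashlib
import hmac
import json

# Assumed pre-shared key for this sketch; a real platform would use an
# asymmetric platform signing key or a provisioned MAC key.
PLATFORM_SIGNING_KEY = b"example-platform-signing-key"

def platform_sign_report(measurements: dict) -> dict:
    """The platform gathers measurements of a TEE and signs them."""
    blob = json.dumps(measurements, sort_keys=True).encode()
    tag = hmac.new(PLATFORM_SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"measurements": measurements, "signature": tag}

def verify_report(report: dict, key: bytes) -> bool:
    """The provisioner checks that the signature chains up to the platform."""
    blob = json.dumps(report["measurements"], sort_keys=True).encode()
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["signature"])

def establish_and_provision(measurements: dict, information: dict) -> dict:
    """One uniform round: establish a chain of trust from a TEE to its
    platform, then provision the TEE with information."""
    report = platform_sign_report(measurements)
    if not verify_report(report, PLATFORM_SIGNING_KEY):
        raise RuntimeError("chain of trust could not be established")
    return {"tee": measurements["name"], "provisioned_with": information}

# The same set of communications repeats for each successive TEE.
first = establish_and_provision({"name": "first TEE"}, {"rules": "first rules"})
second = establish_and_provision({"name": "second TEE"}, {"rules": "second rules"})
```

The point of the sketch is that a single trust mechanism, applied repeatedly, suffices for every TEE in the chain.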
FIG.1is a block diagram of an example trusted execution environment (TEE) provisioning system100in accordance with an embodiment. Generally speaking, TEE provisioning system100operates to provide information to users (e.g., software engineers, application developers, etc.) in response to requests (e.g., hypertext transfer protocol (HTTP) requests) that are received from the users. The information may include documents (e.g., Web pages, images, audio files, video files, etc.), output of executables, and/or any other suitable type of information. In accordance with example embodiments described herein, TEE provisioning system100provisions a TEE based on a chain of trust that includes a platform on which the TEE executes. Detail regarding techniques for provisioning a TEE based on a chain of trust that includes a platform on which the TEE executes is provided in the following discussion. As shown inFIG.1, TEE provisioning system100includes a plurality of user systems102A-102M, a network104, and a plurality of servers106A-106N. Communication among user systems102A-102M and servers106A-106N is carried out over network104using well-known network communication protocols. Network104may be a wide-area network (e.g., the Internet), a local area network (LAN), another type of network, or a combination thereof. Servers106A-106N and network104are shown to be included in a distributed computing system108(e.g., a public cloud or a private cloud). User systems102A-102M are processing systems that are capable of communicating with servers106A-106N. An example of a processing system is a system that includes at least one processor that is capable of manipulating data in accordance with a set of instructions. For instance, a processing system may be a computer, a personal digital assistant, etc. User systems102A-102M are configured to provide requests to servers106A-106N for requesting information stored on (or otherwise accessible via) servers106A-106N. 
For instance, a user may initiate a request for executing a computer program (e.g., an application) using a client (e.g., a Web browser, Web crawler, or other type of client) deployed on a user system102that is owned by or otherwise accessible to the user. In accordance with some example embodiments, user systems102A-102M are capable of accessing domains (e.g., Web sites) hosted by servers106A-106N, so that user systems102A-102M may access information that is available via the domains. Such domains may include Web pages, which may be provided as hypertext markup language (HTML) documents and objects (e.g., files) that are linked therein, for example. Each of user devices102A-102M may include any client-enabled system or device, including but not limited to a desktop computer, a laptop computer, a tablet computer, a wearable computer such as a smart watch or a head-mounted computer, a personal digital assistant, a cellular telephone, an Internet of things (IoT) device, or the like. It will be recognized that any one or more user systems102A-102M may communicate with any one or more servers106A-106N. First user device102A is shown to include client-side TEE provision logic110for illustrative purposes. Client-side TEE provision logic110is configured to perform client-side aspects of the example techniques described herein. For instance, client-side TEE provision logic110may establish a chain of trust from a TEE to a platform based at least in part on receipt of measurements of the TEE that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the TEE. The TEE is hosted by distributed computing system108. For instance, the TEE may be hosted by any of servers106A-106N. The platform is configured to execute an operating system. The operating system is configured to launch the TEE from a template. 
Client-side TEE provision logic110may provision the TEE with information in absence of a secure channel between the client device and the TEE to customize the TEE with the information based at least in part on the chain of trust. Servers106A-106N are processing systems that are capable of communicating with user systems102A-102M. Servers106A-106N are configured to execute computer programs that provide information to user devices102A-102M. For instance, servers106A-106N may push such information to user devices102A-102M or provide the information in response to requests that are received from user devices102A-102M. The requests may be user-generated or generated without user involvement. For example, policies may be applied to a user device without explicit user requests. In accordance with this example, the policies are applied in the background even if no user is logged onto the user device. In further accordance with this example, the user device (e.g., an agent thereon) may poll a server for policy on a schedule (e.g., once per hour) or on events (e.g., device wakeup, user unlock, etc.). In further accordance with this example, the server may push the policy to the user device (e.g., an agent thereon) via an open HTTP endpoint. The information provided by servers106A-106N may include documents (e.g., Web pages, images, audio files, video files, etc.), output of executables, or any other suitable type of information. In accordance with some example embodiments, servers106A-106N are configured to host respective Web sites, so that the Web sites are accessible to users of TEE provisioning system100. First server(s)106A is shown to include server-side TEE provision logic112for illustrative purposes. Server-side TEE provision logic112is configured to perform server-side aspects of the example techniques described herein.
For instance, server-side TEE provision logic112may include a first TEE (e.g., the TEE provisioned by client-side TEE provision logic110, as described above) that establishes a chain of trust from a second TEE to a platform based at least in part on receipt of measurements of the second TEE that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the second TEE. The first and second TEEs are hosted by distributed computing system108. The platform is configured to execute an operating system. The operating system is configured to launch the second TEE from a template. The platform associated with the second TEE and a platform associated with the first TEE may be the same or different. The first TEE provisions the second TEE with information in absence of a secure channel between the first TEE and the second TEE to customize the second TEE with the information based at least in part on the chain of trust. Each of client-side TEE provision logic110and server-side TEE provision logic112may be implemented in various ways to provision a TEE based on a chain of trust that includes a platform on which the TEE executes, including being implemented in hardware, software, firmware, or any combination thereof. For example, each of client-side TEE provision logic110and server-side TEE provision logic112may be implemented as computer program code configured to be executed in one or more processors. In another example, each of client-side TEE provision logic110and server-side TEE provision logic112may be at least partially implemented as hardware logic/electrical circuitry. 
For instance, each of client-side TEE provision logic110and server-side TEE provision logic112may be at least partially implemented in a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system-on-a-chip system (SoC), a complex programmable logic device (CPLD), etc. Each SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions. Client-side TEE provision logic110is shown to be incorporated in first user device102A for illustrative purposes and is not intended to be limiting. It will be recognized that client-side TEE provision logic110may be incorporated in any of the user systems102A-102M. Server-side TEE provision logic112is shown to be incorporated in first server(s)106A for illustrative purposes and is not intended to be limiting. It will be recognized that server-side TEE provision logic112may be incorporated in any one or more of the servers106A-106N. FIG.2is an example activity diagram200in accordance with an embodiment.FIG.2depicts a client device202, a platform204, a trusted execution environment (TEE)206, and an operating system208. Activities212,214,216,218,220,222,224,226,228,230,232,234,236,238, and240will now be described with reference to client device202, platform204, TEE206, and operating system208. In activity212, client device202generates a request for a TEE. For example, client device202may be owned or controlled by a customer of a cloud service. Client device202may generate the request based on instructions that are received from the customer. The instructions from the customer may indicate that the customer wishes to set up a web service, attestation service, database, machine learning system, etc. 
Client device202may generate the request for the TEE for purposes of setting up the web service, attestation service, database, machine learning system, etc. In activity214, operating system208, which runs on platform204, launches the TEE from a template. The template is executable code. For instance, the template may be a piece of executable code that has not been customized with regard to a client device or customer associated therewith. The template represents a known starting point for customizing the TEE. Accordingly, the template may include nothing that is customer-specific. For instance, the template may not include any customer secrets (e.g., secret keys, code, and/or data of the customer). In one example, the template may have been previously received from client device202. For instance, the customer may have commissioned a developer to create the template. In another example, the template may have been selected by the customer from a gallery that is provided by the cloud service. In an example embodiment, the TEE is an enclave, and the platform is a central processing unit (CPU). In one aspect of this embodiment, the enclave may run on a virtual machine, and the virtual machine may run on the CPU. In another aspect of this embodiment, the enclave may run on a virtual machine; the virtual machine may run on a host operating system (e.g., operating system208); and the host operating system may run on the CPU. The level of nesting is arbitrary. In another embodiment, a blind hypervisor is used, in which case the virtual machine is the TEE. In activity216, client device202requests identification information that identifies TEE206. It should be noted that the request is shown inFIG.2to be sent directly from client device202to TEE206for purposes of illustration and is not intended to be limiting. For instance, the request may be sent from client device202to software running on platform204, and the software may forward the request to TEE206. 
In accordance with this example, the software may provide message-passing facilities between client device202and TEE206. In activity218, TEE206requests a report that includes the identification information from platform204. For instance, the request may be a GET REPORT request as will be understood by persons skilled in the relevant art. In activity220, platform204provides the report to TEE206. The report includes measurements of TEE206. The measurements include the identification information. For instance, the measurements may indicate unforgeable attributes of TEE206(e.g., an author, publisher, security version number, code type, and/or compilation date of TEE206and/or a key used to sign the measurements of TEE206). It will be recognized that asymmetric and/or symmetric authentication techniques may be used to authenticate the measurements. For example, platform204may sign the measurements with a platform signing key (PSK) before providing the measurements to TEE206. In another example, one or more symmetric key-based message authentication codes (MACs) may be used as proof-of-authenticity of a report. Examples of a symmetric key-based MAC include but are not limited to keyed-hash MAC (HMAC) and cipher-based MAC (CMAC). In activity222, TEE206adds self-reported measurements to the report, resulting in an updated report. The self-reported measurements are measurements that TEE206gathers or generates about itself. For instance, the self-reported measurements may be a hash (e.g., having a fixed length value) of a structure that includes any of a variety of keys, policies, or other suitable information. In activity222, TEE206may further request that platform204sign the updated report. In activity224, platform204signs the updated report with the PSK and provides the signed, updated report to TEE206. In activity226, TEE206forwards the signed, updated report to client device202. In activity228, client device202provides rules (a.k.a. policies) for TEE206to enforce. 
For instance, the rules may specify conditions under which TEE206is to apply a signature to information, a rule for updating rules, or keys of customers that are to connect to TEE206. In activity230, TEE206provides confirmation that the rules have been received from client device202. For example, TEE206may provide cryptographic proof that the rules have been received. The cloud provider must not be able to manipulate the cryptographic proof. In another example, TEE206may provide the confirmation by repeating the rules back to client device202as part of a report that is signed by the platform. In activity232, client device202provides a public portion of a policy update key (i.e., PUKpub) to TEE206. The PUKpub may correspond to a private portion of the policy update key (i.e., PUKpri) that client device202intends to use to sign subsequent (e.g., updated) rules that are to be sent to TEE206. For instance, the PUKpub may be used by TEE206to verify that the subsequent rules are provided by client device202. It may be said that activity232personalizes TEE206so that client device202need not kill TEE206and request another TEE to implement the subsequent rules. In activity234, TEE206provides a public portion of a secret import key (i.e., SIKpub) to client device202, so that client device202may use the SIKpub to encrypt secret information (e.g., keys, data, and/or code) that is to be sent to TEE206. The SIKpub corresponds to a private portion of the secret import key (i.e., SIKpri) that is usable by TEE206to decrypt the secret information. The secret information is capable of being decrypted only by TEE206because TEE206is the only entity in possession of the SIKpri. In activity236, client device202provides the secret information, which is encrypted with the SIKpub, to TEE206. In activity238, client device202requests auditing information from TEE206(e.g., to determine whether TEE206is operating as expected). 
In activity240, TEE206provides the auditing information to client device202. Any one or more of activities216,218,220,222,224, and/or226may be used to establish a chain of trust from TEE206to platform204. Any one or more of activities228,230,232,234, and/or236may be used to provision TEE206with information for purposes of customizing TEE206with the information. For example, any of activities228,230, and/or232may be used to provision TEE206with rules. In another example, any of activities234and/or236may be used to provision TEE206with secret information. In some example embodiments, one or more activities212,214,216,218,220,222,224,226,228,230,232,234,236,238, and/or240of activity diagram200may not be performed. Moreover, activities in addition to or in lieu of activities212,214,216,218,220,222,224,226,228,230,232,234,236,238, and/or240may be performed. It will be recognized that some activities may be combined. For example, client device202may combine activities228and232. In another example, TEE206may combine activities226and234. In accordance with this example, TEE206may include SIKpub in the signed, updated report. FIG.3is a block diagram of an example trusted execution environment (TEE) provisioning system300in accordance with an embodiment. As shown inFIG.3, TEE provisioning system300includes a client device302and a distributed computing system308. Client device302is an example implementation of first user device102A shown inFIG.1. Client device302may perform any one or more of the activities performed by client device202ofFIG.2. Client device302includes client-side TEE provision logic310, which is operable in a manner similar to client-side TEE provision logic110shown inFIG.1. Client-side TEE provision logic310generates a TEE request332in response to (e.g., based on) instructions that are received from a user associated with client device302. The TEE request332requests that a trusted execution environment (TEE)316be created.
Client-side TEE provision logic310provides the TEE request to an operating system330. Client-side TEE provision logic310then generates an identification (ID) information request322. The ID information request322requests identifying information that identifies TEE316. Client-side TEE provision logic310provides the ID information request322to TEE316. Client-side TEE provision logic310receives measurements326, which are signed with a platform signing key320, from distributed computing system308in response to providing the ID information request322. The measurements326include the identifying information, which is specified by the ID information request322. Client-side TEE provision logic310verifies that the measurements326are received from TEE316based on the measurements326being signed with the platform signing key320. For instance, client-side TEE provision logic310may use a public key that corresponds to the platform signing key320to verify that the measurements326are received from TEE316. The public key may be received from a manufacturer of platform318. For instance, client device302may download the public key from a website of the manufacturer (e.g., based on a user of client device302selecting a hyperlink associated with the public key on the website). Distributed computing system308includes a service314and server-side TEE provision logic312. Service314hosts a platform318on which TEE316executes. Service314may utilize web technology, such as Hypertext Transfer Protocol (HTTP), for machine-to-machine communication. For instance, service314may transfer machine-readable file formats, such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON), between client-side TEE provision logic310and server-side TEE provision logic312. Service314may serve as an intermediary through which communications, such as TEE request332, ID information request322, and measurements326pass.
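The report flow in this system (the TEE requests a report from the platform, adds self-reported measurements, and has the platform re-sign the updated report) might be sketched as below. All names are hypothetical, and an HMAC keyed with the platform signing key stands in for the platform's signature.

```python
import hashlib
import hmac
import json

# Assumed key for the sketch; stands in for the platform signing key.
PLATFORM_SIGNING_KEY = b"example-platform-signing-key"

def platform_sign(measurements: dict) -> dict:
    """Platform signs a measurement structure (stand-in for the PSK)."""
    blob = json.dumps(measurements, sort_keys=True).encode()
    tag = hmac.new(PLATFORM_SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"measurements": measurements, "signature": tag}

def platform_gather_and_sign(tee_name: str) -> dict:
    """Platform gathers unforgeable attributes of the TEE and signs them."""
    measurements = {"name": tee_name, "author": "example author",
                    "security_version": 1}
    return platform_sign(measurements)

def tee_build_report(tee_name: str, keys_and_policies: dict) -> dict:
    """TEE requests a report, adds self-reported measurements (a fixed-length
    hash of a structure holding its keys and policies), and asks the platform
    to sign the updated report."""
    report = platform_gather_and_sign(tee_name)
    self_reported = hashlib.sha256(
        json.dumps(keys_and_policies, sort_keys=True).encode()).hexdigest()
    updated = dict(report["measurements"], self_reported=self_reported)
    return platform_sign(updated)

# The signed, updated report is what gets forwarded to the client.
signed_report = tee_build_report("TEE 316", {"policy": "example policy"})
```

The client can then verify the whole report, including the self-reported hash, against the platform's key.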
Server-side TEE provision logic312includes TEE316, platform318, and operating system330. Operating system330runs on platform318. Operating system330receives the TEE request from client-side TEE provision logic310via service314. Operating system330launches TEE316from a template based on receipt of the TEE request332from client-side TEE provision logic310. TEE316may perform any one or more of the activities performed by TEE206ofFIG.2. As shown in FIG.3, TEE316receives the ID information request322from client-side TEE provision logic310via service314. TEE316generates a report request324based on receipt of the ID information request322from client-side TEE provision logic310. The report request324requests the identifying information, which is specified by the ID information request322. TEE316provides the report request324to platform318. TEE316receives the measurements326from platform318in response to the report request324being provided to platform318. The measurements326include the identifying information. The measurements326may be signed with the platform signing key320. TEE316may add self-reported measurements to the measurements326that are received from platform318, resulting in an updated version of the measurements326. TEE316may provide the updated version of the measurements326to platform318so that platform318can sign the updated version of the measurements326with the platform signing key320. TEE316forwards the measurements326(e.g., the updated version of the measurements326) to client-side TEE provision logic310in satisfaction of the ID information request322. Platform318may be one or more processors (e.g., a central processing unit (CPU)). Platform318may perform any one or more of the activities performed by platform204ofFIG.2. As shown inFIG.3, platform318receives the report request324from TEE316. Platform318gathers the measurements326, including the identification information.
For instance, platform318may gather the measurements326in response to receipt of the report request324. Platform318provides the measurements326to TEE316in satisfaction of the report request324. It should be noted that platform318is unable to change TEE316once TEE316is launched, though platform318may terminate TEE316. For instance, platform318is unable to change the identifying information that is included in the measurements326. Platform318may be certified by an entity that a user of client device302(e.g., a customer of service314) trusts. For instance, a signing key certificate of platform318may be issued (e.g., signed) by a manufacturer of platform318or by an attestation service. It will be recognized that platform318need not necessarily be included in server-side TEE provision logic312. For instance, platform318may be external to server-side TEE provision logic312. FIG.4is a block diagram of another example TEE provisioning system400in accordance with an embodiment. As shown inFIG.4, TEE provisioning system400includes a client device402and a distributed computing system408. Client device402includes client-side TEE provision logic410, which is operable in a manner similar to client-side TEE provision logic310shown inFIG.3. For instance, client-side TEE provision logic410provides ID information requests to respective TEEs416A-416P and receives corresponding measurements from the respective TEEs416A-416P. The measurements from each TEE identify the respective TEE. For instance, client-side TEE provision logic410may provide a first ID information request to a first TEE416A and receive first measurements from first TEE416A; client-side TEE provision logic410may provide a second ID information request to a second TEE416B and receive second measurements from second TEE416B, and so on. Distributed computing system408includes a service414and server-side TEE provision logic412. Service414is operable in a manner similar to service314shown inFIG.3.
For instance, service414hosts a platform418, which executes TEEs416A-416P. Server-side TEE provision logic412includes TEEs416A-416P and platform418. Each of TEEs416A-416P is operable in a manner similar to TEE316shown inFIG.3. For instance, each of TEEs416A-416P requests measurements that identify the respective TEE from platform418and forwards the measurements to client-side TEE provision logic410upon receipt of the measurements from platform418. Each of the TEEs416A-416P may add its own self-reported measurements to the measurements that are received from platform418before forwarding the measurements to client-side TEE provision logic410. The measurements that are provided to client-side TEE provision logic410may be signed with a platform signing key420associated with platform418. Platform418is operable in a manner similar to platform318shown inFIG.3. For instance, platform418executes an operating system that launches TEEs416A-416P; platform418gathers the measurements for each of the TEEs416A-416P; and platform418provides the measurements to the respective TEEs416A-416P in satisfaction of the requests that are received from those TEEs416A-416P. Platform418may sign the measurements for each of the TEEs416A-416P before providing those measurements to the respective TEE. It may be beneficial to establish initial consensus among TEEs416A-416P before server-side TEE provision logic412begins to service requests from client-side TEE provision logic410. The goal of initial consensus establishment is to make all the TEEs running on all platforms agree on an asymmetric “Provisioning Encryption Key” (PEK). To establish initial consensus among TEEs416A-416P, one of the TEEs416A-416P may be designated as the “primary TEE”. Any of TEEs416A-416P may serve as the primary TEE. The primary TEE generates the PEK. Each peer TEE (i.e., each of the TEEs416A-416P that is a peer of the primary TEE) generates a unique asymmetric Key Import Key (KIK). 
The primary TEE contacts each of its peers and imparts the PEK onto them by executing a sequence of steps once for each peer. In a first step of the sequence, the primary TEE retrieves a quote from the peer. The quote includes a public portion of the KIK (KIKpub) that is generated by that peer. In a second step, the primary TEE validates the received quote (e.g., by comparing the peer's measurements to its own or by other means, such as a higher version number of the same enclave) and confirms that the signature over the quote chains up to the platform associated with the TEE. If the check fails, the primary TEE terminates the attempt. In a third step, the primary TEE encrypts its PEK to KIKpub and sends them to the peer potentially along with its own quote, which includes PEKpub. In a fourth step, the peer may decrypt the received PEK and SSK using its private portion of the KIK (i.e., KIKpri). At this point, any peer can start any suitable number of identical enclaves, all of which have access to the same PEK. Use of multiple identical peer enclaves on each TEE is supported so that each TEE can service multiple concurrent requests in a multi-threaded (thread pool) fashion. The same provisioning protocol may be used at any time to introduce another TEE into the set. A primary TEE may be removed just like any other peer TEE; the stateful service may automatically ensure that another TEE is designated as the primary TEE. With the initial consensus established, TEEs416A-416P may start provisioning flows, receiving customer-specific keys and policies. The steps of the sequence described above are provided for illustrative purposes and are not intended to be limiting. It will be recognized that the sequence may not include one or more of the steps. Moreover, the sequence may include one or more steps in addition to or in lieu of any one or more of the steps described above.
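The four-step consensus sequence above can be sketched as follows. This is a toy model: the sealed-box helpers stand in for real asymmetric encryption, quote validation is reduced to a measurement comparison, and all names are assumptions rather than part of the patent.

```python
import hashlib
import os

def gen_keypair():
    """Toy keypair: the 'public' key is a hash of the private key."""
    priv = os.urandom(16)
    pub = hashlib.sha256(priv).hexdigest()
    return pub, priv

def seal(pub: str, payload: bytes) -> dict:
    """Stand-in for encrypting to a public key: records the recipient's
    public key; only the holder of the matching private key may open it."""
    return {"to": pub, "payload": payload}

def unseal(priv: bytes, box: dict) -> bytes:
    if hashlib.sha256(priv).hexdigest() != box["to"]:
        raise PermissionError("wrong private key")
    return box["payload"]

def impart_pek(primary_measurements: dict, peers: list) -> list:
    """Primary TEE generates the PEK and runs the sequence once per peer."""
    pek = os.urandom(16)  # stands in for the asymmetric PEK
    received = []
    for peer in peers:
        quote = peer["quote"]                           # step 1: retrieve quote
        if quote["measurements"] != primary_measurements:
            continue                                    # step 2: validation failed
        box = seal(quote["kik_pub"], pek)               # step 3: encrypt PEK to KIKpub
        received.append(unseal(peer["kik_priv"], box))  # step 4: peer decrypts with KIKpri
    return received

# Two peers, each with its own KIK, end up agreeing on the same PEK.
measurements = {"enclave": "example enclave", "version": 1}
peers = []
for _ in range(2):
    pub, priv = gen_keypair()
    peers.append({"quote": {"measurements": measurements, "kik_pub": pub},
                  "kik_priv": priv})
shared = impart_pek(measurements, peers)
```

After the loop, every validated peer holds the same PEK, which is the stated goal of initial consensus establishment.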
A protocol to provide customer-specific provisioning may be executed once per TEE when client-side TEE provision logic410indicates that a new TEE is to be configured. In a first step of the protocol, client-side TEE provision logic410communicates with one of the TEEs416A-416P randomly through a load balancer. Client-side TEE provision logic410retrieves a quote, which includes a public portion of the PEK (i.e., PEKpub), from the TEE. Since the TEEs416A-416P share the same PEK, it does not matter which TEE client-side TEE provision logic410contacts first, or whether subsequent messages in this flow are received by the same TEE. However, data written out to shared storage by any TEE may be quickly made available to all peer TEEs. In a second step, client-side TEE provision logic410receives and validates the quote in the response. If the response does not match expectations (e.g., the wrong TEE measurements are reported, or the signature does not check out), the provisioning process may be terminated and the attestation service instance may not be created. It should be noted that the TEE that client-side TEE provision logic410chooses to trust with its keys and policies is established out of band. In a third step, client-side TEE provision logic410encrypts rules and secrets that are requested by the TEE with PEKpub and provides the encrypted rules and secrets to the TEE. In a fourth step, any of TEEs416A-416P receives the provisioning blob, decrypts the provisioning blob using a private portion of the PEK (i.e., PEKpri), and makes the provisioning blob available to its peer TEEs. In a fifth step, the TEE responds with a quote that includes information from the provisioning blob. Client-side TEE provision logic410receives this quote, validates the quote, and has an option of deleting the new TEE if the quote does not validate correctly. The client-side TEE provision logic410has the option of caching the PEKpub or re-querying the PEKpub from the TEE each time.
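The client side of the customer-specific provisioning protocol might be sketched as below. As with the consensus sketch, the sealed-box helpers are a stand-in for asymmetric encryption under PEKpub/PEKpri, and every name is an assumption made for illustration.

```python
import hashlib
import os

def gen_keypair():
    """Toy keypair: the 'public' key is a hash of the private key."""
    priv = os.urandom(16)
    return hashlib.sha256(priv).hexdigest(), priv

def seal(pub: str, payload: bytes) -> dict:
    """Stand-in for encrypting rules and secrets to PEKpub."""
    return {"to": pub, "payload": payload}

def unseal(priv: bytes, box: dict) -> bytes:
    if hashlib.sha256(priv).hexdigest() != box["to"]:
        raise PermissionError("wrong private key")
    return box["payload"]

def client_provision(quote: dict, expected_measurements: dict, secrets: bytes):
    """Steps 1-3 on the client: retrieve and validate the quote, then encrypt
    rules and secrets to PEKpub. Returns None if validation fails (the
    provisioning process is terminated)."""
    if quote["measurements"] != expected_measurements:
        return None
    return seal(quote["pek_pub"], secrets)

pek_pub, pek_priv = gen_keypair()
quote = {"measurements": {"enclave": "example enclave"}, "pek_pub": pek_pub}
blob = client_provision(quote, {"enclave": "example enclave"}, b"rules+secrets")
# Step 4: any TEE sharing the PEK decrypts the provisioning blob with PEKpri.
recovered = unseal(pek_priv, blob)
```

Because all peer TEEs share the PEK, the decrypting TEE need not be the one that answered the first message, matching the load-balanced flow described above.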
In one example, client-side TEE provision logic410is responsible for maintaining a private portion of the PUK (i.e., PUKpri). In another example, client-side TEE provision logic410uses a web service, such as Azure Key Vault, to maintain PUKpri. The steps of the protocol described above are provided for illustrative purposes and are not intended to be limiting. It will be recognized that the protocol may not include one or more of the steps. Moreover, the protocol may include one or more steps in addition to or in lieu of any one or more of the steps described above. FIG.5is a block diagram of yet another example TEE provisioning system500in accordance with an embodiment. As shown inFIG.5, TEE provisioning system500includes a first client device502A, a second client device502B, and a distributed computing system508. First client device502A may be owned or operated by customer A of a service514, and second client device502B may be owned or operated by customer B of service514. The first and second client devices502A and502B include client-side TEE provision logic510A and510B, respectively. Each of the first client-side TEE provision logic510A and the second client-side TEE provision logic510B is operable in a manner similar to client-side TEE provision logic310shown inFIG.3. For instance, first client-side TEE provision logic510A provides first ID information requests to respective TEEs516A-516Z and receives corresponding measurements from the respective TEEs516A-516Z. Second client-side TEE provision logic510B provides second ID information requests to respective TEEs536A-536Z and receives corresponding measurements from the respective TEEs536A-536Z. The measurements from each TEE identify the respective TEE. TEEs516A-516Z are labeled as “first TEE A,” “second TEE A,” and so on inFIG.5to indicate that TEEs516A-516Z correspond to customer A. TEEs536A-536Z are labeled as “first TEE B,” “second TEE B,” and so on inFIG.5to indicate that TEEs536A-536Z correspond to customer B. 
Distributed computing system508includes a service514and a plurality of computers528A-528Z. Service514is operable in a manner similar to service314shown inFIG.3. For instance, service514hosts platforms associated with the respective computers528A-528Z. The platforms execute respective TEEs516A-516Z, which correspond to customer A, and respective TEEs536A-536Z, which correspond to customer B. Accordingly, each of the platforms executes a TEE that corresponds to customer A and a TEE that corresponds to customer B. For instance, the platform associated with first computer528A executes first TEE A516A, which corresponds to customer A, and first TEE B536A, which corresponds to customer B; the platform associated with second computer528B executes second TEE A516B, which corresponds to customer A, and second TEE B536B, which corresponds to customer B, and so on. It will be recognized that each of the platforms is described as executing a single TEE corresponding to each of the customers A and B for illustrative purposes and is not intended to be limiting. For instance, each of the platforms associated with the respective computers528A-528Z may host any suitable number (e.g., 1, 2, 3, or 4) of TEEs corresponding to each customer. Distributed computing system508includes first server-side TEE provision logic512A and second server-side TEE provision logic512B. First server-side TEE provision logic512A includes TEEs516A-516Z, which correspond to customer A. Second server-side TEE provision logic512B includes TEEs536A-536Z, which correspond to customer B. Each of TEEs516A-516Z and each of TEEs536A-536Z is operable in a manner similar to TEE316shown inFIG.3. For example, TEEs516A-516Z request measurements that identify the respective TEEs516A-516Z from the respective platforms associated with the respective computers528A-528Z and forward the measurements to first client-side TEE provision logic510A upon receipt of the measurements from the respective platforms. 
The TEEs516A-516Z may add their own self-reported measurements to the measurements that are received from the respective platforms associated with the respective computers528A-528Z before forwarding the measurements to first client-side TEE provision logic510A. The measurements that are provided to first client-side TEE provision logic510A by the respective TEEs516A-516Z may be signed with platform signing keys associated with the respective platforms. In another example, TEEs536A-536Z request measurements that identify the respective TEEs536A-536Z from the respective platforms associated with the respective computers528A-528Z and forward the measurements to second client-side TEE provision logic510B upon receipt of the measurements from the respective platforms. The TEEs536A-536Z may add their own self-reported measurements to the measurements that are received from the respective platforms associated with the respective computers528A-528Z before forwarding the measurements to second client-side TEE provision logic510B. The measurements that are provided to second client-side TEE provision logic510B by the respective TEEs536A-536Z may be signed with platform signing keys associated with the respective platforms. FIG.6depicts a flowchart600of an example method for provisioning a TEE based on a chain of trust that includes a platform on which the TEE executes in accordance with an embodiment. Flowchart600may be performed by any of user devices102A-102M shown inFIG.1, for example. For illustrative purposes, flowchart600is described with respect to client device700shown inFIG.7. Client device700includes client-side TEE provision logic710. Client-side TEE provision logic710includes trust logic702, provision logic704, and audit logic706. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart600. The method ofFIG.6is described from the perspective of a client device. 
As shown inFIG.6, the method of flowchart600begins at step602. In step602, a chain of trust from a trusted execution environment to a platform is established based at least in part on receipt of measurements of the trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. For instance, the chain of trust may be established out-of-band (e.g., not as part of the protocol of communication between the client device and the trusted execution environment). The measurements indicate attributes of the trusted execution environment, which is hosted by a distributed computing system (e.g., private cloud or public cloud) coupled to the client device. For instance, the measurements may indicate unforgeable attributes of the trusted execution environment (e.g., an author, publisher, security version number, code type, and/or compilation date of the trusted execution environment and/or a key used to sign the measurements). The platform vouches for the measurements (e.g., by signing the measurements with the platform signing key). The platform is configured to execute an operating system that launches the trusted execution environment from a template. For instance, the template may be a piece of executable code that has not been customized with regard to a client device or customer associated therewith. The template represents a known starting point for customizing the TEE. Accordingly, the template may include nothing that is customer-specific. For instance, the template may not include any customer secrets (e.g., secret keys, code, and/or data of the customer). In an example implementation, trust logic702establishes the chain of trust based at least in part on receipt of measurements722of the trusted execution environment that are gathered by the platform and signed with the platform signing key. 
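The unforgeable attributes enumerated above can be checked field by field before trust is extended. The sketch below uses a hypothetical attribute schema; the rule of accepting a higher security version number of the same enclave follows the peer-validation discussion earlier in this description.

```python
def measurements_acceptable(reported: dict, expected: dict) -> bool:
    # Identity fields must match exactly (hypothetical field names).
    for field in ("author", "publisher", "code_type"):
        if reported.get(field) != expected.get(field):
            return False
    # A higher security version number of the same enclave is acceptable.
    return reported.get("svn", 0) >= expected.get("svn", 0)
```

A quote whose signature verifies but whose attributes fail this check would still cause provisioning to be terminated.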
It should be noted that trust logic702may receive self-reported measurements from the trusted execution environment in addition to the measurements that are gathered by the platform. The self-reported measurements also may be signed with the platform signing key. Trust logic702may generate a provisioning instruction726based on the chain of trust being established. For instance, the provisioning instruction726may instruct provision logic704to provision the trusted execution environment with information728. In an example embodiment, establishing the chain of trust at step602includes using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. For instance, the platform signing key and the public key may form an asymmetric key pair. The platform signing key is in the possession of the platform. The certificate that corresponds to the platform signing key (a.k.a. the platform signing key certificate) may or may not be in the possession of the platform. For instance, the platform may provide an identifier that can be used with a service associated with the platform to obtain the platform signing key certificate. The platform signing key certificate may be queried from the platform or from a manufacturer of the platform. At step604, the trusted execution environment is provisioned (e.g., imbued) with information in absence of a secure channel between the client device and the trusted execution environment to customize the trusted execution environment with the information based at least in part on the chain of trust. For instance, the trusted execution environment may be provisioned with the information to set up a web service, attestation service, database, machine learning system, etc. 
In an example implementation, provision logic704provisions the trusted execution environment with the information728to customize the trusted execution environment with the information728based at least in part on the chain of trust. For instance, provision logic704may provision the trusted execution environment with the information728based on receipt of the provisioning instruction726. In an example embodiment, the trusted execution environment is securely provisioned with the information. Securely provisioning the trusted execution environment with the information means being cryptographically convinced that the trusted execution environment has received rules that have been provided (e.g., from the client device in this example) to the trusted execution environment for enforcement, and that the information has been provided to the trusted execution environment without the information being disclosed to entities other than the trusted execution environment. For instance, securely provisioning the trusted execution environment with the information may require that the trusted execution environment is able to repeat the information back to the provider of the information (e.g., back to the client device in this example). At step606, auditing information is requested from the trusted execution environment. The auditing information indicates a detected state of the trusted execution environment. For example, the detected state may indicate whether the trusted execution environment is operating on the latest policies, with the latest keys, with the latest data, with the latest code, etc. In another example, the detected state may confirm receipt of specified information (e.g., specified rules, keys, data, or code). In an example implementation, audit logic706requests auditing information724from the trusted execution environment. For instance, audit logic706may generate an audit request730, which requests the auditing information724. 
At step608, a determination is made whether the detected state and a reference state are the same. For instance, a determination as to whether the trusted execution environment is to be provisioned with additional information may be based on whether the detected state and the reference state are the same. In an example implementation, trust logic702determines whether the detected state and the reference state are the same. For example, trust logic702may receive the auditing information724. In accordance with this example, trust logic702may analyze the auditing information724to determine the detected state of the trusted execution environment. In further accordance with this example, trust logic702may compare the detected state to the reference state to determine whether the detected state and the reference state are the same. In an aspect of this implementation, trust logic702may be configured to generate an additional provisioning instruction in response to the detected state and the reference state being the same. The additional provisioning instruction may instruct provision logic704to provision the trusted execution environment with additional information. Trust logic702may be configured to not generate the additional provisioning instruction in response to the detected state and the reference state not being the same. If the detected state and the reference state are the same, flow continues to step610. Otherwise, flow continues to step612. At step610, the trusted execution environment is provisioned with the additional information to further customize the trusted execution environment with the additional information. For example, the trusted execution environment may be provisioned with the additional information in absence of the secure channel between the client device and the trusted execution environment. 
In another example, the trusted execution environment may be provisioned with the additional information based at least in part on the detected state and the reference state being the same. In an example implementation, provision logic704provisions the trusted execution environment with the additional information. At step612, the trusted execution environment is not provisioned with the additional information. In an example implementation, provision logic704does not provision the trusted execution environment with the additional information. In some example embodiments, one or more steps602,604,606,608,610, and/or612of flowchart600may not be performed. Moreover, steps in addition to or in lieu of steps602,604,606,608,610, and/or612may be performed. For instance, in an example embodiment, the distributed computing system provides a cloud service. In accordance with this embodiment, the method of flowchart600further includes using cryptographic communications to exclude entities other than the client device from knowing the information and to exclude the other entities from manipulating the information. In further accordance with this embodiment, the other entities include a provider of the cloud service. The provider of the cloud service may own the platform, though the example embodiments are not limited in this respect. In an aspect of this embodiment, using the cryptographic communications may be further to exclude the other entities from knowing responses from the trusted execution environment regarding the information and to exclude the other entities from manipulating the responses. In another aspect of this embodiment, provisioning the trusted execution environment with the information at step604may include provisioning the trusted execution environment with the information in plain view of the provider of the cloud service. 
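Steps 606 through 612 amount to a compare-then-provision decision. The sketch below is schematic, with hypothetical names, and treats the audited state as a plain dictionary.

```python
class AuditedTEE:
    # Minimal stand-in for a TEE that reports its state and accepts info.
    def __init__(self, state: dict):
        self.state = dict(state)
        self.received = []

    def report_state(self) -> dict:
        # Answers the audit request with the detected state.
        return dict(self.state)

    def provision(self, info) -> None:
        self.received.append(info)


def audit_and_provision(tee: AuditedTEE, reference_state: dict,
                        additional_info) -> bool:
    detected = tee.report_state()          # step 606: request auditing info
    if detected != reference_state:        # step 608: compare states
        return False                       # step 612: withhold the info
    tee.provision(additional_info)         # step 610: further customize
    return True
```

A TEE whose detected state lags the reference state (e.g., stale policies) receives nothing until it is brought current.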
In an example implementation, trust logic702and/or provision logic704may use the cryptographic communications to exclude the entities other than client device700from knowing the information728and to exclude the other entities from manipulating the information728. In another example embodiment, provisioning the trusted execution environment with the information at step604includes provisioning the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. Examples of a policy include but are not limited to (1) a requirement that specified condition(s) are satisfied in order for a signature to be applied to information (e.g., information of a designated type) and (2) a requirement that a specified procedure is to be followed in order to update rules (e.g., a designated signing key is to be used to sign the rules). In an example implementation, provision logic704provisions the trusted execution environment with the at least one policy. In an aspect of this embodiment, the method of flowchart600further includes providing an update regarding the at least one policy to the trusted execution environment. For instance, the at least one policy may indicate that the update is to be verified by the trusted execution environment based at least in part on the update being signed with the policy update key. In an example implementation, provision logic704may provide the update to the trusted execution environment. In further accordance with this aspect, the update is signed with a policy update key. The policy update key corresponds to a public key that is usable by the trusted execution environment to verify that the update is provided by the client device. 
In yet another example embodiment, provisioning the trusted execution environment with the information at step604includes provisioning the trusted execution environment with secret information in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. The secret information includes data, software code, and/or key(s). In an example implementation, provision logic704provisions the trusted execution environment with the secret information. In an aspect of this embodiment, the method of flowchart600further includes encrypting the secret information with a secret import key that is received from the trusted execution environment. The secret import key corresponds to a private key that is usable by the trusted execution environment to decrypt the secret information. In an example implementation, provision logic704encrypts the secret information with the secret import key. In another aspect of this embodiment, provisioning the trusted execution environment with the information at step604includes provisioning the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. In accordance with this aspect, the method of flowchart600further includes determining whether to provision the trusted execution environment with the secret information based at least in part on whether the client device receives confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. 
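The confirmation gate just described, where secret information is released only after the TEE repeats the policy back, can be sketched as follows; the classes and method names are hypothetical stand-ins.

```python
class PolicyTEE:
    # Stand-in TEE that stores a policy and echoes it on request.
    def __init__(self):
        self.policy = None
        self.secret = None

    def set_policy(self, policy: dict) -> None:
        self.policy = dict(policy)

    def confirm_policy(self) -> dict:
        # The echo that lets the provider verify what was received.
        return dict(self.policy) if self.policy is not None else {}

    def import_secret(self, secret: bytes) -> None:
        self.secret = secret


def provision_with_gate(tee: PolicyTEE, policy: dict, secret: bytes) -> bool:
    tee.set_policy(policy)
    if tee.confirm_policy() != policy:   # confirmation must match exactly
        return False                     # withhold the secret information
    tee.import_secret(secret)
    return True
```

A TEE that drops or alters the policy never receives the secret, which mirrors the contingency described above.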
In an example implementation, provision logic704determines whether to provision the trusted execution environment with the secret information based at least in part on whether client device700receives confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from client device700. For instance, provision logic704may be configured to provision the trusted execution environment with the secret information in response to client device700receiving the confirmation. Provision logic704may be configured to not provision the trusted execution environment with the secret information in response to client device700not receiving the confirmation. In further accordance with this aspect, provisioning the trusted execution environment with the secret information is performed based at least in part on receipt of the confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. For example, the confirmation may include the at least one policy. In accordance with this example, the trusted execution environment may repeat the at least one policy back to the client device to enable the client device to confirm that the at least one policy that is repeated back to the client device is the same as the at least one policy with which the trusted execution environment was provisioned. It should be noted that provisioning the trusted execution environment with the secret information may be contingent on the trusted execution environment providing assurance that the trusted execution environment will enforce the at least one policy. In still another example embodiment, the method of flowchart600further includes encrypting a public portion of a policy update key that is associated with the client device with a public portion of a provisioning encryption key. 
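The key-wrapping chain in the embodiment just introduced (the policy update key's public portion encrypted to the provisioning encryption key) can be sketched with the same toy primitives used earlier: a reversible keyed transform stands in for the asymmetric PEK, and HMAC stands in for signatures made with the policy update key's private portion. All names are hypothetical.

```python
import hashlib
import hmac
import os


def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy reversible cipher standing in for encryption to PEKpub.
    out, stream = bytearray(), hashlib.sha256(key).digest()
    for i, b in enumerate(data):
        if i and i % 32 == 0:
            stream = hashlib.sha256(stream).digest()
        out.append(b ^ stream[i % 32])
    return bytes(out)


# Client side: wrap the policy update key's public portion with PEKpub.
pek = os.urandom(32)        # shared by the plurality of TEEs
puk = os.urandom(32)        # policy update key (toy: one value plays
                            # both the "public" and "private" portions)
wrapped_puk = xor_crypt(pek, puk)

# TEE side: unwrap with PEKpri, then use the result to verify updates.
puk_in_tee = xor_crypt(pek, wrapped_puk)

update = b'{"policy": "rotate-signing-key"}'
signature = hmac.new(puk, update, hashlib.sha256).digest()   # client signs
accepted = hmac.compare_digest(
    signature, hmac.new(puk_in_tee, update, hashlib.sha256).digest())
```

Once unwrapped, the key lets every TEE in the plurality verify that a policy update originated with the client device.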
For instance, trust logic702may encrypt the public portion of the policy update key with the public portion of the provisioning encryption key. The policy update key has a private portion that is usable by the client device to sign an update regarding at least one policy that is to be provided to the trusted execution environment. The provisioning encryption key has a private portion that is associated with a plurality of trusted execution environments. The private portion of the provisioning encryption key is usable by the trusted execution environment to decrypt the public portion of the policy update key. The plurality of trusted execution environments includes the trusted execution environment. It will be recognized that client device700may not include one or more of trust logic702, provision logic704, and/or audit logic706. Furthermore, client device700may include components in addition to or in lieu of trust logic702, provision logic704, and/or audit logic706. FIG.8depicts a flowchart800of another example method for provisioning a TEE based on a chain of trust that includes a platform on which the TEE executes in accordance with an embodiment. Flowchart800may be performed by any one or more of servers106A-106N shown inFIG.1(e.g., by a TEE executing thereon), for example. For illustrative purposes, flowchart800is described with respect to first TEE900shown inFIG.9. First TEE900includes server-side TEE provision logic912. Server-side TEE provision logic912includes trust logic902, provision logic904, and audit logic906. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart800. The method ofFIG.8is described from the perspective of a first trusted execution environment. As shown inFIG.8, the method of flowchart800begins at step802. 
In step802, a chain of trust from a second trusted execution environment to a platform is established based at least in part on receipt of measurements of the second trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the second trusted execution environment. The second trusted execution environment is hosted by a distributed computing system that hosts the first trusted execution environment. The platform is configured to execute an operating system that launches the second trusted execution environment from a template. The operating system may be configured to launch the first trusted execution environment from the template, as well. In an example implementation, trust logic902establishes the chain of trust based at least in part on receipt of measurements922of the second trusted execution environment that are gathered by the platform and signed with the platform signing key. It should be noted that trust logic902may receive self-reported measurements from the second trusted execution environment in addition to the measurements that are gathered by the platform. The self-reported measurements also may be signed with the platform signing key. Trust logic902may generate a provisioning instruction926based on the chain of trust being established. For instance, the provisioning instruction926may instruct provision logic904to provision the second trusted execution environment with information928. In an example embodiment, establishing the chain of trust at step802includes establishing the chain of trust from the second trusted execution environment to the platform further based at least in part on receipt of a notice from the second trusted execution environment. In accordance with this embodiment, the notice informs the first trusted execution environment of existence of the second trusted execution environment. 
In another example embodiment, establishing the chain of trust at step802includes using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. At step804, the second trusted execution environment is provisioned (e.g., securely provisioned) with information in absence of a secure channel between the first trusted execution environment and the second trusted execution environment to customize the second trusted execution environment with the information based at least in part on the chain of trust. In an example implementation, provision logic904provisions the second trusted execution environment with the information928to customize the second trusted execution environment with the information928based at least in part on the chain of trust. For instance, provision logic904may provision the second trusted execution environment with the information928based on receipt of the provisioning instruction926. At step806, auditing information is requested from the second trusted execution environment. The auditing information indicates a detected state of the second trusted execution environment. In an example implementation, audit logic906requests auditing information924from the second trusted execution environment. For instance, audit logic906may generate an audit request930, which requests the auditing information924. At step808, a determination is made whether the detected state and a reference state are the same. In an example implementation, trust logic902determines whether the detected state and the reference state are the same. For example, trust logic902may analyze the auditing information924to determine the detected state of the second trusted execution environment. In accordance with this example, trust logic902may compare the detected state to the reference state to determine whether the detected state and the reference state are the same. 
In an aspect of this implementation, trust logic902may be configured to generate an additional provisioning instruction in response to the detected state and the reference state being the same. The additional provisioning instruction may instruct provision logic904to provision the second trusted execution environment with additional information. Trust logic902may be configured to not generate the additional provisioning instruction in response to the detected state and the reference state not being the same. If the detected state and the reference state are the same, flow continues to step810. Otherwise, flow continues to step812. At step810, the second trusted execution environment is provisioned with the additional information to further customize the second trusted execution environment with the additional information. For example, the second trusted execution environment may be provisioned with the additional information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment. In another example, the second trusted execution environment may be provisioned with the additional information based at least in part on the detected state and the reference state being the same. In an example implementation, provision logic904provisions the second trusted execution environment with the additional information. At step812, the second trusted execution environment is not provisioned with the additional information. In an example implementation, provision logic904does not provision the second trusted execution environment with the additional information. In an example embodiment, establishing the chain of trust at step802includes establishing the chain of trust in accordance with a consensus algorithm that is configured to provide redundancy of the first trusted execution environment. A consensus algorithm is an algorithm that is configured to achieve consensus among multiple unreliable entities (e.g., TEEs). 
Consensus is a process by which the multiple unreliable entities agree on a common (e.g., single) result. In accordance with this embodiment, provisioning the second trusted execution environment with the information at step804includes provisioning the second trusted execution environment with the information in accordance with the consensus algorithm. In further accordance with this embodiment, the information includes policies and secret information, which are copied from the first trusted execution environment. In an aspect of this embodiment, establishing the chain of trust at step802is performed in response to a cloud service that is provided by the distributed computing system instructing an operating system that runs on the platform to launch the second trusted execution environment in accordance with the consensus algorithm. In some example embodiments, one or more steps802,804,806,808,810, and/or812of flowchart800may not be performed. Moreover, steps in addition to or in lieu of steps802,804,806,808,810, and/or812may be performed. For instance, in an example embodiment, the method of flowchart800further includes using cryptographic communications to exclude entities other than the first trusted execution environment from knowing the information and to exclude the other entities from manipulating the information. The other entities include a provider of a cloud service that is hosted by the distributed computing system. In an example implementation, trust logic902and/or provision logic904may use the cryptographic communications to exclude the entities other than first trusted execution environment900from knowing the information928and to exclude the other entities from manipulating the information928. 
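In the consensus embodiment described above, agreement means every TEE in the set ends up holding the same copied policies and secret information, with a primary that can be re-designated if removed. A minimal sketch (hypothetical names, no fault tolerance) is:

```python
class ConsensusSet:
    """Each member keeps its own view; agreement means all views match."""

    def __init__(self, member_ids):
        self.views = {m: None for m in member_ids}

    def primary(self):
        # Deterministic designation; the stateful service re-runs this
        # automatically if the current primary is removed.
        return min(self.views)

    def replicate(self, policies_and_secrets) -> None:
        # The primary copies its policies and secret information
        # to every member of the set.
        for m in self.views:
            self.views[m] = policies_and_secrets

    def add_member(self, member_id) -> None:
        # A newly launched TEE is provisioned with the copied state.
        self.views[member_id] = self.views[self.primary()]

    def remove_member(self, member_id) -> None:
        del self.views[member_id]

    def agreed(self) -> bool:
        values = list(self.views.values())
        return all(v == values[0] for v in values)
```

Real consensus algorithms handle crashed or malicious members; this sketch only models the end state the text requires, namely a common result shared by all members.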
In another example embodiment, provisioning the second trusted execution environment at step804includes provisioning the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. In an example implementation, provision logic904provisions the second trusted execution environment with the at least one policy. In an aspect of this embodiment, the method of flowchart800further includes providing an update regarding the at least one policy to the second trusted execution environment. In an example implementation, provision logic904may provide the update to the second trusted execution environment. In accordance with this aspect, the update is signed with a policy update key, which corresponds to a public key that is usable by the second trusted execution environment to verify that the update is provided by the first trusted execution environment. In yet another example embodiment, provisioning the second trusted execution environment at step804includes provisioning the second trusted execution environment with secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. The secret information includes data, software code, and/or key(s). In an example implementation, provision logic904provisions the second trusted execution environment with the secret information. In an aspect of this embodiment, the method of flowchart800further includes encrypting the secret information with a secret import key that is received from the second trusted execution environment. In accordance with this embodiment, the secret import key corresponds to a private key that is usable by the second trusted execution environment to decrypt the secret information. 
In an example implementation, provision logic904encrypts the secret information with the secret import key. In another aspect of this embodiment, provisioning the second trusted execution environment with the information at step804includes provisioning the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. In accordance with this aspect, the method of flowchart800further includes determining whether to provision the second trusted execution environment with the secret information based at least in part on whether the first trusted execution environment receives confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. In an example implementation, provision logic904determines whether to provision the second trusted execution environment with the secret information based at least in part on whether first trusted execution environment900receives confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from first trusted execution environment900. For instance, provision logic904may be configured to provision the second trusted execution environment with the secret information in response to first trusted execution environment900receiving the confirmation. Provision logic904may be configured to not provision the second trusted execution environment with the secret information in response to first trusted execution environment900not receiving the confirmation. 
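The confirmation gate described above (release the secret information only after the second environment confirms receipt of the policies) can be sketched as follows. `FakeTEE` and its method names are hypothetical stand-ins for the provisioning transport, not an API from the source.

```python
class FakeTEE:
    """Hypothetical stand-in for the second trusted execution environment."""
    def __init__(self):
        self.policies = None
        self.secrets = None
    def send_policies(self, policies):
        self.policies = policies
    def await_confirmation(self):
        # The confirmation echoes the received policies back, as in the
        # example where the confirmation includes the at least one policy.
        return self.policies
    def send_secrets(self, secrets_blob):
        self.secrets = secrets_blob

def provision(tee, policies, secrets_blob):
    # Provision the policies first; release the secret information only if
    # the TEE confirms it received exactly those policies.
    tee.send_policies(policies)
    if tee.await_confirmation() != policies:
        return False  # no (or wrong) confirmation: withhold the secrets
    tee.send_secrets(secrets_blob)
    return True
```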
In further accordance with this aspect, provisioning the second trusted execution environment at step804includes provisioning the second trusted execution environment with the secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on receipt of the confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. For instance, the confirmation may include the at least one policy. In still another example embodiment, the method of flowchart800includes collaborating with other trusted execution environments in a plurality of trusted execution environments to have a provisioning encryption key provided to each of the plurality of trusted execution environments in accordance with a consensus algorithm. For instance, provision logic904may collaborate with the other trusted execution environments to have the provisioning encryption key provided to each of the plurality of trusted execution environments in accordance with the consensus algorithm. The plurality of trusted execution environments includes the first and second trusted execution environments. In accordance with this embodiment, the method of flowchart800includes encrypting the information with a public portion of the provisioning encryption key. For instance, provision logic904may encrypt the information928with the public portion of the provisioning encryption key. The information may be capable of being decrypted with a private portion of the provisioning encryption key by the second trusted execution environment. In further accordance with this embodiment, the method of flowchart800includes provisioning the second trusted execution environment with the information in response to encrypting the information with the public portion of the provisioning encryption key. 
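One way to read the provisioning-encryption-key flow above: the information is encrypted once with the key's public portion, and any trusted execution environment in the plurality that holds the private portion can decrypt it. A sketch with textbook RSA (tiny toy parameters, no padding; not secure, for illustration only):

```python
# Textbook RSA with tiny parameters stands in for the provisioning
# encryption key (illustrative only; not secure).
P, Q = 61, 53
N = P * Q                           # modulus shared by both portions
E = 17                              # public portion of the provisioning encryption key
D = pow(E, -1, (P - 1) * (Q - 1))   # private portion, held by each TEE in the group

def encrypt_info(message: int) -> int:
    # The provisioner encrypts the information once with the public portion.
    return pow(message, E, N)

def decrypt_info(ciphertext: int) -> int:
    # Any TEE in the plurality holding the private portion can decrypt.
    return pow(ciphertext, D, N)
```

(The three-argument modular-inverse form of `pow` requires Python 3.8 or later; messages must be smaller than the modulus.)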
In an aspect of this embodiment, collaborating with the other trusted execution environments includes encrypting the provisioning encryption key, for each of the other trusted execution environments that is to receive the provisioning encryption key from the first trusted execution environment, with a public portion of a secret import key associated with the respective trusted execution environment. The provisioning encryption key may be capable of being decrypted by each trusted execution environment that is to receive the provisioning encryption key from the first trusted execution environment with a private portion of the secret import key that is associated with the respective trusted execution environment. It will be recognized that first TEE900may not include one or more of trust logic902, provision logic904, and/or audit logic906. Furthermore, first TEE900may include components in addition to or in lieu of trust logic902, provision logic904, and/or audit logic906. FIG.10is a system diagram of an exemplary mobile device1000including a variety of optional hardware and software components, shown generally as1002. Any components1002in the mobile device may communicate with any other component, though not all connections are shown, for ease of illustration. The mobile device1000may be any of a variety of computing devices (e.g., cell phone, smartphone, handheld computer, Personal Digital Assistant (PDA), etc.) and may allow wireless two-way communications with one or more mobile communications networks1004, such as a cellular or satellite network, or with a local area or wide area network. The mobile device1000may include a processor1010(e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions. 
An operating system1012may control the allocation and usage of the components1002and support for one or more applications1014(a.k.a. application programs). The applications1014may include common mobile computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications) and any other computing applications (e.g., word processing applications, mapping applications, media player applications). The mobile device1000may include memory1020. The memory1020may include non-removable memory1022and/or removable memory1024. The non-removable memory1022may include RAM, ROM, flash memory, a hard disk, or other well-known memory storage technologies. The removable memory1024may include flash memory or a Subscriber Identity Module (SIM) card, which is well known in GSM communication systems, or other well-known memory storage technologies, such as “smart cards.” The memory1020may store data and/or code for running the operating system1012and the applications1014. Example data may include web pages, text, images, sound files, video data, or other data sets to be sent to and/or received from one or more network servers or other devices via one or more wired or wireless networks. Memory1020may store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers may be transmitted to a network server to identify users and equipment. The mobile device1000may support one or more input devices1030, such as a touch screen1032, microphone1034, camera1036, physical keyboard1038and/or trackball1040and one or more output devices1050, such as a speaker1052and a display1054. Touch screens, such as the touch screen1032, may detect input in different ways. For example, capacitive touch screens detect touch input when an object (e.g., a fingertip) distorts or interrupts an electrical current running across the surface. 
As another example, touch screens may use optical sensors to detect touch input when beams from the optical sensors are interrupted. Physical contact with the surface of the screen is not necessary for input to be detected by some touch screens. For example, the touch screen1032may support a finger hover detection using capacitive sensing, as is well understood in the art. Other detection techniques may be used, including but not limited to camera-based detection and ultrasonic-based detection. To implement a finger hover, a user's finger is typically within a predetermined spaced distance above the touch screen, such as between 0.1 and 0.25 inches, or between 0.25 inches and 0.5 inches, or between 0.5 inches and 0.75 inches, or between 0.75 inches and 1 inch, or between 1 inch and 1.5 inches, etc. The mobile device1000may include client-side TEE provision logic1092. The client-side TEE provision logic1092is configured to provision a TEE based on a chain of trust that includes a platform on which the TEE executes in accordance with any one or more of the techniques described herein. Other possible output devices (not shown) may include piezoelectric or other haptic output devices. Some devices may serve more than one input/output function. For example, touch screen1032and display1054may be combined in a single input/output device. The input devices1030may include a Natural User Interface (NUI). An NUI is any interface technology that enables a user to interact with a device in a “natural” manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like. Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence.
Other examples of a NUI include motion gesture detection using accelerometers/gyroscopes, facial recognition, 3D displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods). Thus, in one specific example, the operating system1012or applications1014may include speech-recognition software as part of a voice control interface that allows a user to operate the mobile device1000via voice commands. Furthermore, the mobile device1000may include input devices and software that allows for user interaction via a user's spatial gestures, such as detecting and interpreting gestures to provide input to a gaming application. Wireless modem(s)1060may be coupled to antenna(s) (not shown) and may support two-way communications between the processor1010and external devices, as is well understood in the art. The modem(s)1060are shown generically and may include a cellular modem1066for communicating with the mobile communication network1004and/or other radio-based modems (e.g., Bluetooth1064and/or Wi-Fi1062). At least one of the wireless modem(s)1060is typically configured for communication with one or more cellular networks, such as a GSM network for data and voice communications within a single cellular network, between cellular networks, or between the mobile device and a public switched telephone network (PSTN). The mobile device may further include at least one input/output port1080, a power supply1082, a satellite navigation system receiver1084, such as a Global Positioning System (GPS) receiver, an accelerometer1086, and/or a physical connector1090, which may be a USB port, IEEE 1394 (FireWire) port, and/or RS-232 port. 
The illustrated components1002are not required or all-inclusive, as any components may be deleted and other components may be added as would be recognized by one skilled in the art. Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods may be used in conjunction with other methods. Any one or more of client-side TEE provision logic110, server-side TEE provision logic112, platform204, TEE206, client-side TEE provision logic310, server-side TEE provision logic312, service314, TEE316, platform318, operating system330, client-side TEE provision logic410, server-side TEE provision logic412, service414, any one or more of TEEs416A-416P, platform418, first client-side TEE provision logic510A, second client-side TEE provision logic510B, first server-side TEE provision logic512A, second server-side TEE provision logic512B, service514, any one or more of TEEs516A-516Z, any one or more of TEEs536A-536Z, activity diagram200, flowchart600, and/or flowchart800may be implemented in hardware, software, firmware, or any combination thereof.
For example, any one or more of client-side TEE provision logic110, server-side TEE provision logic112, platform204, TEE206, client-side TEE provision logic310, server-side TEE provision logic312, service314, TEE316, platform318, operating system330, client-side TEE provision logic410, server-side TEE provision logic412, service414, any one or more of TEEs416A-416P, platform418, first client-side TEE provision logic510A, second client-side TEE provision logic510B, first server-side TEE provision logic512A, second server-side TEE provision logic512B, service514, any one or more of TEEs516A-516Z, any one or more of TEEs536A-536Z, activity diagram200, flowchart600, and/or flowchart800may be implemented, at least in part, as computer program code configured to be executed in one or more processors. In another example, any one or more of client-side TEE provision logic110, server-side TEE provision logic112, platform204, TEE206, client-side TEE provision logic310, server-side TEE provision logic312, service314, TEE316, platform318, operating system330, client-side TEE provision logic410, server-side TEE provision logic412, service414, any one or more of TEEs416A-416P, platform418, first client-side TEE provision logic510A, second client-side TEE provision logic510B, first server-side TEE provision logic512A, second server-side TEE provision logic512B, service514, any one or more of TEEs516A-516Z, any one or more of TEEs536A-536Z, activity diagram200, flowchart600, and/or flowchart800may be implemented, at least in part, as hardware logic/electrical circuitry. Such hardware logic/electrical circuitry may include one or more hardware logic components. Examples of a hardware logic component include but are not limited to a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system-on-a-chip system (SoC), a complex programmable logic device (CPLD), etc. 
For instance, a SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions.

III. Further Discussion of Some Example Embodiments

An example client device comprises memory and one or more processors coupled to the memory. The one or more processors are configured to perform operations comprising establish a chain of trust from a trusted execution environment to a platform based at least in part on receipt of measurements of the trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the trusted execution environment, which is hosted by a distributed computing system coupled to the client device. The platform is configured to execute an operating system. The operating system is configured to launch the trusted execution environment from a template. The one or more processors are configured to perform the operations further comprising provision the trusted execution environment with information in absence of a secure channel between the client device and the trusted execution environment to customize the trusted execution environment with the information based at least in part on the chain of trust. In a first aspect of the example client device, the one or more processors are configured to establish the chain of trust from the trusted execution environment to the platform by using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. In a second aspect of the example client device, the distributed computing system provides a cloud service.
In accordance with the second aspect, the one or more processors are configured to perform the operations further comprising use cryptographic communications to exclude entities other than the client device from knowing the information and to exclude the other entities from manipulating the information. The other entities include a provider of the cloud service. The second aspect of the example client device may be implemented in combination with the first aspect of the example client device, though the example embodiments are not limited in this respect. In a third aspect of the example client device, the one or more processors are configured to perform the operations comprising securely provision the trusted execution environment with the information. The third aspect of the example client device may be implemented in combination with the first and/or second aspect of the example client device, though the example embodiments are not limited in this respect. In a fourth aspect of the example client device, the one or more processors are configured to perform the operations comprising provision the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. The fourth aspect of the example client device may be implemented in combination with the first, second, and/or third aspect of the example client device, though the example embodiments are not limited in this respect. In an implementation of the fourth aspect of the example client device, the one or more processors are configured to perform the operations comprising provide an update regarding the at least one policy to the trusted execution environment. 
In accordance with this implementation, the update is signed with a policy update key, which corresponds to a public key that is usable by the trusted execution environment to verify that the update is provided by the client device. In a fifth aspect of the example client device, the one or more processors are configured to perform the operations comprising provision the trusted execution environment with secret information in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. The secret information includes at least one of data, software code, or one or more keys. The fifth aspect of the example client device may be implemented in combination with the first, second, third, and/or fourth aspect of the example client device, though the example embodiments are not limited in this respect. In a first implementation of the fifth aspect of the example client device, the one or more processors are configured to perform the operations comprising encrypt the secret information with a secret import key that is received from the trusted execution environment. The secret import key corresponds to a private key that is usable by the trusted execution environment to decrypt the secret information. In a second implementation of the fifth aspect of the example client device, the one or more processors are configured to perform the operations comprising provision the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. 
In accordance with the second implementation, the one or more processors are configured to perform the operations comprising determine whether to provision the trusted execution environment with the secret information based at least in part on whether the client device receives confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. In further accordance with the second implementation, the one or more processors are configured to perform the operations comprising provision the trusted execution environment with the secret information in absence of the secure channel between the client device and the trusted execution environment based at least in part on receipt of the confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. In an example of the second implementation, the confirmation includes the at least one policy. In a sixth aspect of the example client device, the one or more processors are configured to perform the operations comprising request auditing information from the trusted execution environment. The auditing information indicates a detected state of the trusted execution environment. In accordance with the sixth aspect, the one or more processors are configured to perform the operations comprising determine whether to provision the trusted execution environment with additional information based at least in part on whether the detected state and a reference state are the same.
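The auditing gate just described reduces to a comparison between the detected state reported by the trusted execution environment and a reference state held by the provisioner. A minimal sketch, with hypothetical state fields:

```python
def should_provision_more(detected_state: dict, reference_state: dict) -> bool:
    # Additional information is provisioned only when the detected state
    # reported in the auditing information matches the reference state.
    return detected_state == reference_state

reference = {"policy_version": 7, "code_hash": "abc123"}  # hypothetical fields
# Matching detected state: additional information may be provisioned.
matching = should_provision_more({"policy_version": 7, "code_hash": "abc123"}, reference)
# Drifted state (e.g., a stale policy): additional information is withheld.
drifted = should_provision_more({"policy_version": 6, "code_hash": "abc123"}, reference)
```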
In further accordance with the sixth aspect, the one or more processors are configured to perform the operations comprising provision the trusted execution environment with the additional information in absence of the secure channel between the client device and the trusted execution environment to further customize the trusted execution environment with the additional information based at least in part on the detected state and the reference state being the same. The sixth aspect of the example client device may be implemented in combination with the first, second, third, fourth, and/or fifth aspect of the example client device, though the example embodiments are not limited in this respect. In a seventh aspect of the example client device, the one or more processors are configured to perform the operations comprising encrypt a public portion of a policy update key that is associated with the client device with a public portion of a provisioning encryption key. The policy update key has a private portion that is usable by the client device to sign an update regarding at least one policy that is to be provided to the trusted execution environment. The provisioning encryption key has a private portion that is associated with a plurality of trusted execution environments and that is usable by the trusted execution environment to decrypt the public portion of the policy update key. The plurality of trusted execution environments includes the trusted execution environment. The seventh aspect of the example client device may be implemented in combination with the first, second, third, fourth, fifth, and/or sixth aspect of the example client device, though the example embodiments are not limited in this respect. An example system comprises memory and one or more processors coupled to the memory. The one or more processors are configured to execute a plurality of trusted execution environments. 
The plurality of trusted execution environments includes at least a first trusted execution environment and a second trusted execution environment. The first trusted execution environment is configured to perform operations comprising establish a chain of trust from the second trusted execution environment to a platform based at least in part on receipt of measurements of the second trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the second trusted execution environment, which is hosted by a distributed computing system that hosts the first trusted execution environment. The platform is configured to execute an operating system. The operating system is configured to launch the second trusted execution environment from a template. The first trusted execution environment is configured to perform the operations further comprising provision the second trusted execution environment with information in absence of a secure channel between the first trusted execution environment and the second trusted execution environment to customize the second trusted execution environment with the information based at least in part on the chain of trust. In a first aspect of the example system, the first trusted execution environment is configured to perform the operations comprising establish the chain of trust from the second trusted execution environment to the platform further based at least in part on receipt of a notice from the second trusted execution environment. The notice informs the first trusted execution environment of existence of the second trusted execution environment. 
In a second aspect of the example system, the first trusted execution environment is configured to perform the operations comprising establish the chain of trust and provision the second trusted execution environment in accordance with a consensus algorithm that is configured to provide redundancy of the first trusted execution environment. In accordance with the second aspect, the information includes policies and secret information, which are copied from the first trusted execution environment. The second aspect of the example system may be implemented in combination with the first aspect of the example system, though the example embodiments are not limited in this respect. In an implementation of the second aspect of the example system, the distributed computing system provides a cloud service. In accordance with this implementation, the first trusted execution environment is configured to perform the operations comprising establish the chain of trust from the second trusted execution environment to the platform in response to the cloud service instructing the operating system to launch the second trusted execution environment in accordance with the consensus algorithm. In a third aspect of the example system, the first trusted execution environment is configured to establish the chain of trust from the second trusted execution environment to the platform by using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. The third aspect of the example system may be implemented in combination with the first and/or second aspect of the example system, though the example embodiments are not limited in this respect. In a fourth aspect of the example system, the distributed computing system provides a cloud service. 
In accordance with the fourth aspect, the first trusted execution environment is configured to perform the operations further comprising use cryptographic communications to exclude entities other than the first trusted execution environment from knowing the information and to exclude the other entities from manipulating the information, the other entities including a provider of the cloud service. The fourth aspect of the example system may be implemented in combination with the first, second, and/or third aspect of the example system, though the example embodiments are not limited in this respect. In a fifth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising securely provision the second trusted execution environment with the information. The fifth aspect of the example system may be implemented in combination with the first, second, third, and/or fourth aspect of the example system, though the example embodiments are not limited in this respect. In a sixth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. The sixth aspect of the example system may be implemented in combination with the first, second, third, fourth, and/or fifth aspect of the example system, though the example embodiments are not limited in this respect. In an implementation of the sixth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising provide an update regarding the at least one policy to the second trusted execution environment. 
In accordance with this implementation, the update is signed with a policy update key, which corresponds to a public key that is usable by the second trusted execution environment to verify that the update is provided by the first trusted execution environment. In a seventh aspect of the example system, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. The secret information includes at least one of data, software code, or one or more keys. The seventh aspect of the example system may be implemented in combination with the first, second, third, fourth, fifth, and/or sixth aspect of the example system, though the example embodiments are not limited in this respect. In a first implementation of the seventh aspect of the example system, the first trusted execution environment is configured to perform the operations comprising encrypt the secret information with a secret import key that is received from the second trusted execution environment. The secret import key corresponds to a private key that is usable by the second trusted execution environment to decrypt the secret information. In a second implementation of the seventh aspect of the example system, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. 
In accordance with the second implementation, the first trusted execution environment is configured to perform the operations comprising determine whether to provision the second trusted execution environment with the secret information based at least in part on whether the first trusted execution environment receives confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. In further accordance with the second implementation, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with the secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on receipt of the confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. In an example of the second implementation, the confirmation includes the at least one policy. In an eighth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising request auditing information from the second trusted execution environment. The auditing information indicates a detected state of the second trusted execution environment. In accordance with the eighth aspect, the first trusted execution environment is configured to perform the operations comprising determine whether to provision the second trusted execution environment with additional information based at least in part on whether the detected state and a reference state are the same.
In further accordance with the eighth aspect, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with the additional information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment to further customize the second trusted execution environment with the additional information based at least in part on the detected state and the reference state being the same. The eighth aspect of the example system may be implemented in combination with the first, second, third, fourth, fifth, sixth, and/or seventh aspect of the example system, though the example embodiments are not limited in this respect. In a ninth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising collaborate with other trusted execution environments in a plurality of trusted execution environments to have a provisioning encryption key propagated to each of the plurality of trusted execution environments in accordance with a consensus algorithm. The plurality of trusted execution environments includes the first and second trusted execution environments. In accordance with the ninth aspect, the first trusted execution environment is configured to perform the operations comprising encrypt the information with a public portion of the provisioning encryption key. In further accordance with the ninth aspect, the first trusted execution environment is configured to perform the operations comprising provision the second trusted execution environment with the information in response to encrypting the information with the public portion of the provisioning encryption key. 
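The ninth aspect's key propagation can be sketched as below, under stated assumptions: a SHA-256-derived XOR wrap stands in for real public-key key wrapping, the peer names are hypothetical, and the consensus round itself is elided (only its result, a shared provisioning encryption key, is shown).

```python
import hashlib
import secrets

def wrap(wrapping_key, payload):
    # XOR stand-in for wrapping under the public portion of a peer's
    # secret import key; unwrapping uses the same operation here.
    pad = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(payload, pad))

# Each peer TEE has previously shared its secret import key (public part).
import_keys = {"tee-b": secrets.token_bytes(32), "tee-c": secrets.token_bytes(32)}

# Provisioning encryption key agreed during the consensus round.
provisioning_key = secrets.token_bytes(32)

# First TEE wraps the provisioning encryption key once per peer TEE.
wrapped = {tee: wrap(k, provisioning_key) for tee, k in import_keys.items()}

# Each peer unwraps with its own import key; all now hold the same key,
# so information encrypted with its public portion reaches every TEE.
assert all(wrap(import_keys[t], c) == provisioning_key for t, c in wrapped.items())
```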
The ninth aspect of the example system may be implemented in combination with the first, second, third, fourth, fifth, sixth, seventh, and/or eighth aspect of the example system, though the example embodiments are not limited in this respect. In an implementation of the ninth aspect of the example system, the first trusted execution environment is configured to perform the operations comprising collaborate with the other trusted execution environments by encrypting the provisioning encryption key, for each of the other trusted execution environments that is to receive the provisioning encryption key from the first trusted execution environment, with a public portion of a secret import key associated with the respective trusted execution environment. In a first example method performed using at least one processor of a client device, a chain of trust from a trusted execution environment to a platform is established based at least in part on receipt of measurements of the trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the trusted execution environment, which is hosted by a distributed computing system coupled to the client device. The platform is configured to execute an operating system. The operating system is configured to launch the trusted execution environment from a template. The trusted execution environment is provisioned with information in absence of a secure channel between the client device and the trusted execution environment to customize the trusted execution environment with the information based at least in part on the chain of trust. In a first aspect of the first example method, establishing the chain of trust comprises using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. 
In a second aspect of the first example method, the distributed computing system provides a cloud service. In accordance with the second aspect, the first example method further comprises using cryptographic communications to exclude entities other than the client device from knowing the information and to exclude the other entities from manipulating the information, the other entities including a provider of the cloud service. The second aspect of the first example method may be implemented in combination with the first aspect of the first example method, though the example embodiments are not limited in this respect. In a third aspect of the first example method, provisioning the trusted execution environment comprises securely provisioning the trusted execution environment with the information. The third aspect of the first example method may be implemented in combination with the first and/or second aspect of the first example method, though the example embodiments are not limited in this respect. In a fourth aspect of the first example method, provisioning the trusted execution environment comprises provisioning the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. The fourth aspect of the first example method may be implemented in combination with the first, second, and/or third aspect of the first example method, though the example embodiments are not limited in this respect. In an implementation of the fourth aspect of the first example method, the first example method further comprises providing an update regarding the at least one policy to the trusted execution environment. In accordance with this implementation, the update is signed with a policy update key, which corresponds to a public key that is usable by the trusted execution environment to verify that the update is provided by the client device.
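The signed-policy-update flow can be sketched as follows. An HMAC stands in for the asymmetric policy update key pair (the client would sign with a private key and the TEE verify with the public key); the policy fields are illustrative.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the policy update key; in the described scheme this is
# a key pair, with only the public half held by the TEE.
policy_update_key = secrets.token_bytes(32)

def sign_update(key, policy):
    # Client side: serialize the policy canonically and sign it.
    blob = json.dumps(policy, sort_keys=True).encode()
    return blob, hmac.new(key, blob, hashlib.sha256).digest()

def verify_update(key, blob, sig):
    # TEE side: accept the update only if the signature checks out.
    return hmac.compare_digest(hmac.new(key, blob, hashlib.sha256).digest(), sig)

blob, sig = sign_update(policy_update_key, {"allow_debug": False, "version": 2})
assert verify_update(policy_update_key, blob, sig)             # genuine update accepted
assert not verify_update(policy_update_key, blob + b"x", sig)  # tampered update rejected
```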
In a fifth aspect of the first example method, provisioning the trusted execution environment comprises provisioning the trusted execution environment with secret information in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. The secret information includes at least one of data, software code, or one or more keys. The fifth aspect of the first example method may be implemented in combination with the first, second, third, and/or fourth aspect of the first example method, though the example embodiments are not limited in this respect. In a first implementation of the fifth aspect of the first example method, the first example method further comprises encrypting the secret information with a secret import key that is received from the trusted execution environment, the secret import key corresponding to a private key that is usable by the trusted execution environment to decrypt the secret information. In a second implementation of the fifth aspect of the first example method, provisioning the trusted execution environment comprises provisioning the trusted execution environment with at least one policy in absence of the secure channel between the client device and the trusted execution environment based at least in part on the chain of trust. In accordance with the second implementation, the first example method further comprises determining whether to provision the trusted execution environment with the secret information based at least in part on whether the client device receives confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. 
In further accordance with the second implementation, provisioning the trusted execution environment comprises provisioning the trusted execution environment with the secret information in absence of the secure channel between the client device and the trusted execution environment based at least in part on receipt of the confirmation from the trusted execution environment that the trusted execution environment has received the at least one policy from the client device. In an example of the second implementation, the confirmation includes the at least one policy. In a sixth aspect of the first example method, the first example method further comprises requesting auditing information from the trusted execution environment. The auditing information indicates a detected state of the trusted execution environment. In accordance with the sixth aspect, the first example method further comprises determining whether to provision the trusted execution environment with additional information based at least in part on whether the detected state and a reference state are the same. In further accordance with the sixth aspect, the first example method further comprises provisioning the trusted execution environment with the additional information in absence of the secure channel between the client device and the trusted execution environment to further customize the trusted execution environment with the additional information based at least in part on the detected state and the reference state being the same. The sixth aspect of the first example method may be implemented in combination with the first, second, third, fourth, and/or fifth aspect of the first example method, though the example embodiments are not limited in this respect. In a seventh aspect of the first example method, the first example method further comprises encrypting a public portion of a policy update key that is associated with the client device with a public portion of a provisioning encryption key.
The policy update key has a private portion that is usable by the client device to sign an update regarding at least one policy that is to be provided to the trusted execution environment. The provisioning encryption key has a private portion that is associated with a plurality of trusted execution environments and that is usable by the trusted execution environment to decrypt the public portion of the policy update key. The plurality of trusted execution environments includes the trusted execution environment. The seventh aspect of the first example method may be implemented in combination with the first, second, third, fourth, fifth, and/or sixth aspect of the first example method, though the example embodiments are not limited in this respect. In a second example method performed by a first trusted execution environment using one or more processors of a processor-based system, a chain of trust from a second trusted execution environment to a platform is established based at least in part on receipt of measurements of the second trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the second trusted execution environment, which is hosted by a distributed computing system that hosts the first trusted execution environment. The platform is configured to execute an operating system. The operating system is configured to launch the second trusted execution environment from a template. The second trusted execution environment is provisioned with information in absence of a secure channel between the first trusted execution environment and the second trusted execution environment to customize the second trusted execution environment with the information based at least in part on the chain of trust. 
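The chain-of-trust step shared by both example methods can be sketched as below: the platform gathers measurements of the TEE, signs them, and the verifier trusts the measurements only after checking the signature. An HMAC stands in for the platform signing key pair, and the measurement fields are illustrative.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the platform signing key; the real verifier would hold
# only the public half of an asymmetric key pair.
platform_signing_key = secrets.token_bytes(32)

def platform_report(measurements):
    # Platform side: gather measurements of the launched TEE, sign them.
    blob = json.dumps(measurements, sort_keys=True).encode()
    return blob, hmac.new(platform_signing_key, blob, hashlib.sha256).digest()

def establish_chain_of_trust(blob, sig):
    # Verifier side: check the signature before trusting the measurements.
    expected = hmac.new(platform_signing_key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None  # no chain of trust: do not provision
    return json.loads(blob)

blob, sig = platform_report({"template": "worker-v1", "code_hash": "ab12"})
assert establish_chain_of_trust(blob, sig) is not None      # measurements trusted
assert establish_chain_of_trust(blob + b" ", sig) is None   # forged report rejected
```

Only after this check succeeds would the verifier proceed to provision the TEE, which is why no prior secure channel is needed.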
In a first aspect of the second example method, establishing the chain of trust comprises establishing the chain of trust from the second trusted execution environment to the platform further based at least in part on receipt of a notice from the second trusted execution environment, the notice informing the first trusted execution environment of existence of the second trusted execution environment. In a second aspect of the second example method, establishing the chain of trust comprises establishing the chain of trust in accordance with a consensus algorithm that is configured to provide redundancy of the first trusted execution environment. In accordance with the second aspect, provisioning the second trusted execution environment comprises provisioning the second trusted execution environment in accordance with the consensus algorithm. In further accordance with the second aspect, the information includes policies and secret information, which are copied from the first trusted execution environment. The second aspect of the second example method may be implemented in combination with the first aspect of the second example method, though the example embodiments are not limited in this respect. In an implementation of the second aspect of the second example method, establishing the chain of trust comprises establishing the chain of trust from the second trusted execution environment to the platform in response to a cloud service that is provided by the distributed computing system instructing the operating system to launch the second trusted execution environment in accordance with the consensus algorithm. In a third aspect of the second example method, establishing the chain of trust comprises using a public key that corresponds to the platform signing key to verify that the measurements are signed with the platform signing key. 
The third aspect of the second example method may be implemented in combination with the first and/or second aspect of the second example method, though the example embodiments are not limited in this respect. In a fourth aspect of the second example method, the second example method comprises using cryptographic communications to exclude entities other than the first trusted execution environment from knowing the information and to exclude the other entities from manipulating the information, the other entities including a provider of a cloud service that is hosted by the distributed computing system. The fourth aspect of the second example method may be implemented in combination with the first, second, and/or third aspect of the second example method, though the example embodiments are not limited in this respect. In a fifth aspect of the second example method, provisioning the second trusted execution environment comprises securely provisioning the second trusted execution environment with the information. The fifth aspect of the second example method may be implemented in combination with the first, second, third, and/or fourth aspect of the second example method, though the example embodiments are not limited in this respect. In a sixth aspect of the second example method, provisioning the second trusted execution environment comprises provisioning the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. The sixth aspect of the second example method may be implemented in combination with the first, second, third, fourth, and/or fifth aspect of the second example method, though the example embodiments are not limited in this respect. 
In an implementation of the sixth aspect of the second example method, the second example method further comprises providing an update regarding the at least one policy to the second trusted execution environment. In accordance with this implementation, the update is signed with a policy update key, which corresponds to a public key that is usable by the second trusted execution environment to verify that the update is provided by the first trusted execution environment. In a seventh aspect of the second example method, provisioning the second trusted execution environment comprises provisioning the second trusted execution environment with secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust, the secret information including at least one of data, software code, or one or more keys. The seventh aspect of the second example method may be implemented in combination with the first, second, third, fourth, fifth, and/or sixth aspect of the second example method, though the example embodiments are not limited in this respect. In a first implementation of the seventh aspect of the second example method, the second example method further comprises encrypting the secret information with a secret import key that is received from the second trusted execution environment, the secret import key corresponding to a private key that is usable by the second trusted execution environment to decrypt the secret information. In a second implementation of the seventh aspect of the second example method, provisioning the second trusted execution environment comprises provisioning the second trusted execution environment with at least one policy in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on the chain of trust. 
In accordance with the second implementation, the second example method further comprises determining whether to provision the second trusted execution environment with the secret information based at least in part on whether the first trusted execution environment receives confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. In further accordance with the second implementation, provisioning the second trusted execution environment comprises provisioning the second trusted execution environment with the secret information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment based at least in part on receipt of the confirmation from the second trusted execution environment that the second trusted execution environment has received the at least one policy from the first trusted execution environment. In an example of the second implementation, the confirmation includes the at least one policy. In an eighth aspect of the second example method, the second example method comprises requesting auditing information from the second trusted execution environment, the auditing information indicating a detected state of the second trusted execution environment. In accordance with the eighth aspect, the second example method comprises determining whether to provision the second trusted execution environment with additional information based at least in part on whether the detected state and a reference state are the same.
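The auditing gate in the eighth aspect reduces to a state comparison: hash the reported state canonically, compare it to the reference, and release the additional information only on a match. A minimal sketch, with hypothetical state fields:

```python
import hashlib
import json

def state_digest(state):
    # Canonical one-way digest of a TEE's reported configuration.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

reference_state = {"policies": ["p1"], "code_hash": "ab12"}

def maybe_provision(audited_state, additional_info):
    # Provision the additional information only if the detected state
    # and the reference state are the same.
    if state_digest(audited_state) != state_digest(reference_state):
        return None
    return additional_info  # would be encrypted for the TEE in practice

assert maybe_provision({"policies": ["p1"], "code_hash": "ab12"}, b"extra") == b"extra"
assert maybe_provision({"policies": [], "code_hash": "ab12"}, b"extra") is None
```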
In further accordance with the eighth aspect, the second example method comprises provisioning the second trusted execution environment with the additional information in absence of the secure channel between the first trusted execution environment and the second trusted execution environment to further customize the second trusted execution environment with the additional information based at least in part on the detected state and the reference state being the same. The eighth aspect of the second example method may be implemented in combination with the first, second, third, fourth, fifth, sixth, and/or seventh aspect of the second example method, though the example embodiments are not limited in this respect. In a ninth aspect of the second example method, the second example method comprises collaborating with other trusted execution environments in a plurality of trusted execution environments to have a provisioning encryption key propagated to each of the plurality of trusted execution environments in accordance with a consensus algorithm. The plurality of trusted execution environments includes the first and second trusted execution environments. In accordance with the ninth aspect, the second example method comprises encrypting the information with a public portion of the provisioning encryption key. In further accordance with the ninth aspect, the second example method comprises provisioning the second trusted execution environment with the information in response to encrypting the information with the public portion of the provisioning encryption key. The ninth aspect of the second example method may be implemented in combination with the first, second, third, fourth, fifth, sixth, seventh, and/or eighth aspect of the second example method, though the example embodiments are not limited in this respect. 
In an implementation of the ninth aspect of the second example method, the second example method comprises collaborating with the other trusted execution environments by encrypting the provisioning encryption key, for each of the other trusted execution environments that is to receive the provisioning encryption key from the first trusted execution environment, with a public portion of a secret import key associated with the respective trusted execution environment. A first example computer program product comprises a computer-readable storage medium having instructions recorded thereon for enabling a processor-based system to perform steps. The steps comprise establish a chain of trust from a trusted execution environment to a platform based at least in part on receipt of measurements of the trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the trusted execution environment, which is hosted by a distributed computing system coupled to the processor-based system. The platform is configured to execute an operating system. The operating system is configured to launch the trusted execution environment from a template. The steps further comprise provision the trusted execution environment with information in absence of a secure channel between the processor-based system and the trusted execution environment to customize the trusted execution environment with the information based at least in part on the chain of trust. A second example computer program product comprises a computer-readable storage medium having instructions recorded thereon for enabling a processor-based system to perform steps. 
The steps comprise establish, by a first trusted execution environment, a chain of trust from a second trusted execution environment to a platform based at least in part on receipt of measurements of the second trusted execution environment that are gathered by the platform and that are signed with a platform signing key of the platform. The measurements indicate attributes of the second trusted execution environment, which is hosted by a distributed computing system that hosts the first trusted execution environment. The platform is configured to execute an operating system. The operating system is configured to launch the second trusted execution environment from a template. The steps further comprise provision, by the first trusted execution environment, the second trusted execution environment with information in absence of a secure channel between the first trusted execution environment and the second trusted execution environment to customize the second trusted execution environment with the information based at least in part on the chain of trust.

IV. Example Computer System

FIG. 11 depicts an example computer 1100 in which embodiments may be implemented. Any one or more of user systems 102A-102M and/or any one or more of servers 106A-106N shown in FIG. 1; client device 202 shown in FIG. 2; client device 302 shown in FIG. 3; client device 402 shown in FIG. 4; first client device 502A, second client device 502B, and/or any one or more of computers 528A-528Z shown in FIG. 5; and/or client device 700 shown in FIG. 7 may be implemented using computer 1100, including one or more features of computer 1100 and/or alternative features. Computer 1100 may be a general-purpose computing device in the form of a conventional personal computer, a mobile computer, or a workstation, for example, or computer 1100 may be a special purpose computing device. The description of computer 1100 provided herein is provided for purposes of illustration, and is not intended to be limiting.
Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). As shown in FIG. 11, computer 1100 includes a processing unit 1102, a system memory 1104, and a bus 1106 that couples various system components including system memory 1104 to processing unit 1102. Bus 1106 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory 1104 includes read only memory (ROM) 1108 and random access memory (RAM) 1110. A basic input/output system 1112 (BIOS) is stored in ROM 1108. Computer 1100 also has one or more of the following drives: a hard disk drive 1114 for reading from and writing to a hard disk, a magnetic disk drive 1116 for reading from or writing to a removable magnetic disk 1118, and an optical disk drive 1120 for reading from or writing to a removable optical disk 1122 such as a CD ROM, DVD ROM, or other optical media. Hard disk drive 1114, magnetic disk drive 1116, and optical disk drive 1120 are connected to bus 1106 by a hard disk drive interface 1124, a magnetic disk drive interface 1126, and an optical drive interface 1128, respectively. The drives and their associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include an operating system 1130, one or more application programs 1132, other program modules 1134, and program data 1136.
Application programs 1132 or program modules 1134 may include, for example, computer program logic for implementing any one or more of (e.g., at least a portion of) client-side TEE provision logic 110, server-side TEE provision logic 112, platform 204, TEE 206, client-side TEE provision logic 310, server-side TEE provision logic 312, service 314, TEE 316, platform 318, operating system 330, client-side TEE provision logic 410, server-side TEE provision logic 412, service 414, any one or more of TEEs 416A-416P, platform 418, first client-side TEE provision logic 510A, second client-side TEE provision logic 510B, first server-side TEE provision logic 512A, second server-side TEE provision logic 512B, service 514, any one or more of TEEs 516A-516Z, any one or more of TEEs 536A-536Z, activity diagram 200 (including any activity of activity diagram 200), flowchart 600 (including any step of flowchart 600), and/or flowchart 800 (including any step of flowchart 800), as described herein. A user may enter commands and information into the computer 1100 through input devices such as keyboard 1138 and pointing device 1140. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, touch screen, camera, accelerometer, gyroscope, or the like. These and other input devices are often connected to the processing unit 1102 through a serial port interface 1142 that is coupled to bus 1106, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A display device 1144 (e.g., a monitor) is also connected to bus 1106 via an interface, such as a video adapter 1146. In addition to display device 1144, computer 1100 may include other peripheral output devices (not shown) such as speakers and printers. Computer 1100 is connected to a network 1148 (e.g., the Internet) through a network interface or adapter 1150, a modem 1152, or other means for establishing communications over the network.
Modem 1152, which may be internal or external, is connected to bus 1106 via serial port interface 1142. As used herein, the terms “computer program medium” and “computer-readable storage medium” are used to generally refer to media (e.g., non-transitory media) such as the hard disk associated with hard disk drive 1114, removable magnetic disk 1118, removable optical disk 1122, as well as other media such as flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Example embodiments are also directed to such communication media. As noted above, computer programs and modules (including application programs 1132 and other program modules 1134) may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. Such computer programs may also be received via network interface 1150 or serial port interface 1142. Such computer programs, when executed or loaded by an application, enable computer 1100 to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of the computer 1100. Example embodiments are also directed to computer program products comprising software (e.g., computer-readable instructions) stored on any computer-useable medium.
Such software, when executed in one or more data processing devices, causes data processing device(s) to operate as described herein. Embodiments may employ any computer-useable or computer-readable medium, known now or in the future. Examples of computer-readable mediums include, but are not limited to, storage devices such as RAM, hard drives, floppy disks, CD ROMs, DVD ROMs, zip disks, tapes, magnetic storage devices, optical storage devices, MEMS-based storage devices, nanotechnology-based storage devices, and the like. It will be recognized that the disclosed technologies are not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure.

V. Conclusion

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.
11943369

DETAILED DESCRIPTION

An automated solution is described that enables the creation of one-way signatures of proprietary technology data, such as software code (which may be either in source code or object code format) or hardware code, such as HDL code or code in another hardware descriptive language or format, and records them into a Global signature database. In one embodiment, prior to recording signatures, they are validated for uniqueness and origin. In one embodiment, once signatures are in the Global signature database, a builder, perhaps more commonly referred to as a vendor, may receive alerts if their data is seen in the public domain or outside the organization. In one embodiment, the system may be used to alert vendor A that IP belonging to someone else is being introduced to their proprietary code base or IP. In one embodiment, the system may be used to track where the proprietary IP is being detected. In one embodiment, the system also ensures that the components used in the proprietary IP are of high quality and can be legally used, without risk of contaminating the proprietary code bases with incompatible or ‘toxic’ free or open source software (FOSS) or commercial licenses or potentially illegally obtained commercial IP. In one embodiment, the system allows effective protection of vendor proprietary technology data, managing the risk of using 3rd party code, and alerting if IP theft or leakage is detected. In situations where ownership is contested, it can provide a proof of existence and ownership at a given point in time. In one embodiment, the Global signature database may be a distributed database, and the system may use public blockchains as ledgers to record signatures in a decentralized, difficult to forge manner.
The following detailed description of embodiments of the invention makes reference to the accompanying drawings in which like references indicate similar elements, showing by way of illustration specific embodiments of practicing the invention. Description of these embodiments is in sufficient detail to enable those skilled in the art to practice the invention. One skilled in the art understands that other embodiments may be utilized and that logical, mechanical, electrical, functional and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. FIG.1is a network diagram showing one embodiment of a technology and ownership validation system, at a high level. The system includes a plurality of vendors110,120, with proprietary files. The proprietary files may be software code, hardware description language (HDL), IP blocks in various languages, or other proprietary files representing code for software, hardware, or a combination. Note that the proprietary files may include FPGA code, and other descriptors. The protection server160is designed to create a system in which vendors can, in some embodiments, track their own proprietary files securely without disclosing them to any third party, as well as verify that their files are not leaking (being released as open source) and that they are not bringing on board the proprietary files of others, or open source code, without awareness. The protection server160in one embodiment makes local signature generators115,125available to vendors. The vendors110,120can use the signature generators to generate unique, trackable, unforgeable, and non-reverse-engineerable signatures for their proprietary files. Those signatures are then shared with protection server160.
In one embodiment, the signature may be made available via a distributed database190. The distributed database190, in one embodiment, stores blockchain signed versions of signatures, in one embodiment generated by signature generation system170. In one embodiment, in addition to the proprietary files of vendors110,120, the system may also obtain files from one or more open source databases130and repositories180and other sources185. Other sources185may include drop file sources, such as paste.bin, wikileaks, and other drop sites. The signature generation system170may process these files to generate the unique signatures for open source files. This enables the IP protection server160to perform comparisons not only between the files of different vendors, but also between the files of vendors and open source files. The protection server160performs comparisons, and provides alerts to vendors, as will be described below. In one embodiment, the IP protection server160also provides validation of ownership, and chain of use. FIGS.2A-2Fare diagrams illustrating various use cases for the system.FIG.2Aillustrates an exemplary use case. In this scenario, a vendor creates a signature of all or a portion of their proprietary files, or code base. In one embodiment, metadata is added. Metadata may include the vendor identity, copyright date, license data, and other relevant information. Other relevant information may include supported chipsets/devices, compilation targets, memory requirements, associated other files, etc. In one embodiment, each signature is of a segment of a file. In one embodiment, the metadata associated with one segment indicates the other signature segments associated with a particular complete file. The signatures are processed at the vendor site, enabling the system to be used without providing copies of the proprietary files to the system. The signatures are then submitted to the signature database.
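By way of illustration, a submitted signature record of the kind described for FIG.2A, pairing a one-way digest of a file segment with its metadata, might be sketched as follows. The class and field names, and the use of SHA-256 as the one-way function, are illustrative assumptions rather than part of the described system:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class SignatureRecord:
    """One signature covering a segment of a proprietary file, plus its metadata."""
    digest: str                   # one-way signature of the code segment
    vendor: str                   # vendor identity
    copyright_date: str
    license_data: str
    related_segments: list = field(default_factory=list)  # other segments of the same file

def sign_segment(segment_text: str, vendor: str,
                 copyright_date: str, license_data: str) -> SignatureRecord:
    # Only the digest leaves the vendor site; the segment text itself does not.
    digest = hashlib.sha256(segment_text.encode("utf-8")).hexdigest()
    return SignatureRecord(digest, vendor, copyright_date, license_data)

record = sign_segment("always @(posedge clk) q <= d;",
                      "VendorA", "2017-01-01", "proprietary")
```

Because only the digest is submitted, the database can detect exact reuse of the segment without ever holding the proprietary text.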
In one embodiment, the proprietary files may be sent to another system, to enable processing of the files off-premise. In one embodiment, the database may be a database maintained by the IP protection server. In one embodiment, the database may be a publicly available distributed database. The system validates that the signatures are unique and of high quality. The validated signatures are then added into the database. Open source data is obtained from various publicly available databases and sources, such as GitHub, SourceForge, Bitbucket, paste.bin, Wikileaks, and others. The files from these open source repositories are processed to generate signatures as well. The system then monitors the proprietary code, to ensure that no open source file signatures are found in the proprietary data, which would indicate that open source information has been entered into the vendor's proprietary files or that the vendor's proprietary data/IP exists in some public database. If such a match is found, the vendor may be alerted, to enable them to take action. FIG.2Billustrates one embodiment of another use case. In this use case, the signatures are matched against signatures from another vendor. When a match is found, an alert is sent to the vendor whose files are contaminated. FIG.2Cillustrates another example use case, in which when a match is found between the files of two vendors, the alert is sent to the vendor whose files are leaked/misappropriated. FIG.2Dillustrates another example use case in which a vendor creates signatures of licensed files, with metadata. The metadata may identify the type of licenses provided, and other relevant data. When the data of other vendors, and optionally open source files, are scanned, the use of the licensed code is identified. Furthermore, it enables identification of the code that is not properly licensed.
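The monitoring step described above, checking whether any open source signature appears among a vendor's proprietary signatures (or vice versa), reduces to a set intersection over the two signature collections. A minimal sketch, with placeholder signature values:

```python
def find_overlaps(proprietary_sigs, open_source_sigs):
    """Return the signatures present in both collections -- evidence of
    contamination (open source entering proprietary code) or leakage
    (proprietary code appearing in a public repository)."""
    return set(proprietary_sigs) & set(open_source_sigs)

vendor_sigs = {"sig-a1", "sig-b2", "sig-c3"}
foss_sigs = {"sig-c3", "sig-d4"}

overlaps = find_overlaps(vendor_sigs, foss_sigs)
if overlaps:
    # In the described system this would trigger an alert to the vendor.
    print(f"ALERT: {len(overlaps)} matching signature(s): {sorted(overlaps)}")
```

Whether a given overlap is contamination or leakage depends on the metadata (origin, dates) attached to each signature, as discussed below.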
FIG.2Eillustrates another example use case, in which proof of authorship, ownership, and existence is incorporated into the GIPS protection database. This enables the system to become a central registrar for authenticity of source code, based on proprietary data. This may be provided as an effective proof, without storing the actual source code. In one embodiment, the system also permits the owner of the IP to register multiple different versions of software, with similar, overlapping signatures for some parts of the IP. In one embodiment, the system also permits moving of a portfolio between companies, due to mergers & acquisitions (M&A), technology transfers, etc. In one embodiment, if such a transfer occurs, the system may also provide a complete audit trail of such transactions. FIG.2Fillustrates an example use case, in which the signature data is stored in the form of blockchains. A blockchain represents a public ledger, which is used in one embodiment to provide a one-way unforgeable signature of the files. The format of the blockchain selected may be bitcoin, or some other active blockchain (Ethereum, Litecoin, Doge, NXT, etc.). This enables the system to push verified, unchallenged signatures to a public blockchain, which is made freely available. This may be used to establish proof of existence, proof of ‘first’ creation, and proof of ‘prior art’. In one embodiment an ‘open’ signature algorithm is used. In one embodiment, the signature algorithm, unlike a simple hash, supports partial matching. The signature algorithm is robust against code alterations, and thus supports matching partial code snippets; simple modifications such as renaming functions and variables, or removing comments, do not impact the match. This enables the system to match code snippets, such as a function that has been copied, vs. the entire source file. FIG.3is a block diagram of one embodiment of the system.
In one embodiment, the system includes a vendor system305, central signature processor360, a signature validator330, and a matching and authentication server380. Although shown separately, the signature validator330, central signature processor360, and matching and authentication server380may be parts of the same system, located on the same server, or located on a distributed system which works together. The vendor system305, in one embodiment, is a downloadable tool, which is made available to a vendor. In one embodiment, the vendor system305enables a vendor to process their proprietary files locally, without providing them to the system. This enables the vendor to maintain trade secrets, and not reveal the exact details of their files. The vendor system305includes local signature generator310, and signature store315. In one embodiment, the signatures have associated metadata. The metadata may include the vendor's identification, licensing information, file associations, and other relevant data. The signatures and associated metadata generated are stored in signature store315, and in one embodiment communicated via communication system320to the signature validator330. Communications system320, in one embodiment, comprises a network connection, a secure upload mechanism or cloud storage mechanism, or another way of providing the signatures to the IP protection server. In one embodiment, the vendor may choose to send some or all of their proprietary files to the central signature processor360, which can generate signatures, instead of generating them on-site. Central signature processor360, in one embodiment, processes open source files, and optionally files provided by vendors who want off-site signature generation. The open source scraper365obtains open source files from repositories such as GitHub and SourceForge, as well as sites that provide a way to download files, such as Wikileaks, Tor, and Pastebin, or other known sources of open source files.
Signature & metadata generator370generates the signatures and metadata for open source files. For files obtained from vendors, the vendor provides the metadata for inclusion. The metadata for open source files in one embodiment includes source (e.g. GitHub), file associations, license, creation date, version, and other relevant information. Signature store375temporarily stores the generated signatures, while communications system363provides the signatures to the signature validator330. Signature validator330includes comparator335to compare the signatures from vendor system305, and central signature processor360, which are stored in its storage355. If a conflict is identified, validator340attempts to resolve the conflict, and if there is insufficient information, alerts the vendor. In one embodiment, signature validator330is used to ensure that signatures are unique, and that multiple copies of the same file are not claimed by different originators. In one embodiment, signature validator330includes a block chain generator345. Blockchain generator345creates a unique validation key for each signature, in one embodiment once the signatures are validated as being unique. Using blockchain enables the use of a distributed database399, which can serve as an escrow and validation, as will be described below. The signature data is sent, via communication system350to matching and authentication server380, and distributed database399. Matching and authentication server380in one embodiment maintains a global database385of signatures. Since the signatures are validated by validator330, each signature in the database385is unique. The signatures also include metadata, providing information about the file(s) associated with the signature.
In one embodiment, the matching and authentication server380includes a signature matcher390, which enables matching of signatures in the database, whether proprietary or open source, to identify leakage/misappropriation (when proprietary files of one vendor appear in the files of an open source project or another vendor) and contamination (when open source files, or files of another vendor, appear in the files of a vendor). Alert system395sends out alerts, via communication system383, to the appropriate vendor(s). In one embodiment, a vendor is informed of leakage/misappropriation or contamination, to enable them to take action. Updater/versioning logic enables the system to update signatures when new versions of products or files are released. In one embodiment, the system does not re-generate all signatures, but only tracks alterations, and provides versioning and changes in ownership or licensing. In one embodiment, the blockchain generator345is used to update the blockchain to reflect such changes. In another embodiment, a new blockchain transaction may be generated when such changes are made. Each of the systems and logics described herein runs on a computer system or processor, and is an algorithmic implementation to solve the technological problem presented by validating the authenticity and uniqueness of code. In one embodiment, the algorithms are implemented in software, such as C/C++, Go, Java, and Python. This problem, and thus this solution, is inherently linked to computing technology, since this problem only occurs because computer software and hardware IP have issues of leakage and contamination. In one embodiment, signature generators115,125are embedded in one or more electronic design automation (EDA) tools and automatically generate signatures each time the tool is invoked by the vendor throughout the EDA flow. An EDA flow can include multiple steps, and each step can involve using one or more EDA software tools.
Some EDA steps and software tools are described below, with respect toFIG.13. These examples of EDA steps and software tools are for illustrative purposes only and are not intended to limit the embodiments to the forms disclosed. To illustrate the EDA flow, consider an EDA system that receives one or more high level behavioral descriptions of an IC device (e.g., in HDL languages like VHDL, Verilog, etc.) and translates (“synthesizes”) this high level design language description into netlists of various levels of abstraction. A netlist describes the IC design and is composed of nodes (functional elements) and edges, e.g., connections between nodes. At a higher level of abstraction, a generic netlist is typically produced based on technology independent primitives. The generic netlist can be translated into a lower level technology-specific netlist based on a technology-specific (characterized) cell library that has gate-specific models for each cell (functional element). The models define performance parameters for the cells; e.g., parameters related to the operational behavior of the cells, such as power consumption, delay, transition time, and noise. The netlist and cell library are typically stored in computer readable media within the EDA system and are processed and verified using many well-known techniques. Before proceeding further with the description, it may be helpful to place these processes in context.FIG.13shows a simplified representation of an exemplary digital ASIC design flow. At a high level, the process starts with the product idea (step E100) and is realized in an EDA software design process (step E110). When the design is finalized, it can be taped-out (event E140). After tape out, the fabrication process (step E150) and packaging and assembly processes (step E160) occur resulting, ultimately, in finished chips (result E170). 
The EDA software design process (step E110) is actually composed of a number of steps E112-E130, shown in linear fashion for simplicity. In an actual ASIC design process, the particular design might have to go back through steps until certain tests are passed. Similarly, in any actual design process, these steps may occur in different orders and combinations. This description is therefore provided by way of context and general explanation rather than as a specific, or recommended, design flow for a particular ASIC. A brief description of the component steps of the EDA software design process (step E110) will now be provided: System design (step E112): The designers describe the functionality that they want to implement and can perform what-if planning to refine functionality, check costs, etc. Hardware-software architecture partitioning can occur at this stage. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include Model Architect, Saber, System Studio, and DesignWare® products. Logic design and functional verification (step E114): At this stage, the VHDL or Verilog code for modules in the system is written and the design is checked for functional accuracy. More specifically, the design is checked to ensure that it produces the correct outputs. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include VCS, VERA, DesignWare®, Magellan, Formality, ESP and LEDA products. Synthesis and design for test (step E116): Here, the VHDL/Verilog is translated into a netlist. The netlist can be optimized for the target technology. Additionally, the design and implementation of tests to permit checking of the finished chip occurs. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include Design Compiler®, Physical Compiler, Test Compiler, Power Compiler, FPGA Compiler, Tetramax, and DesignWare® products.
Design planning (step E118): Here, an overall floorplan for the chip is constructed and analyzed for timing and top-level routing. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include Jupiter and Floorplan Compiler products. Netlist verification (step E120): At this step, the netlist is checked for compliance with timing constraints and for correspondence with the VHDL/Verilog source code. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include VCS, VERA, Formality and PrimeTime products. Physical implementation (step E122): The placement (positioning of circuit elements) and routing (connection of the same) occurs at this step. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include the Astro product. Analysis and extraction (step E124): At this step, the circuit function is verified at a transistor level; this in turn permits what-if refinement. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include Star RC/XT, Raphael, and Aurora products. Physical verification (step E126): At this step various checking functions are performed to ensure correctness for: manufacturing, electrical issues, lithographic issues, and circuitry. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include the Hercules product. Resolution enhancement (step E128): This step involves geometric manipulations of the layout to improve manufacturability of the design. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include iN-Phase, Proteus, and AFGen products. Mask data preparation (step E130): This step provides the “tape-out” data for production of masks for lithographic use to produce finished chips. Exemplary EDA software products from Synopsys, Inc. that can be used at this step include the CATS(R) family of products.
With embedded signature generators115,125, each of the above-described EDA tools can generate and transmit unique signatures upon completion of each portion of the EDA flow. Thus a signature can be generated at the HDL stage, the netlist stage, or after completion of place and route. Similarly, the software design flow can include various tools, each of which can include signature generators115,125. By way of example, Synopsys Software Security includes various tools, such as Synopsys' state-of-the-art static application security testing (SAST) product, Coverity. The Coverity tool can generate signatures on code following the completion of a static check prior to checking new code into a build. For the present application, regardless of which version of the design is used, the application will reference “language” and “code” and “code segment,” for simplicity. However, it should be understood that these terms are meant to encompass the various versions of the EDA generated elements. FIG.4is an overview flowchart of one embodiment of the system. The process starts at block410. At block420, signatures are generated locally for proprietary files. In one embodiment, the proprietary files may be hardware description language, such as HDL files. The signatures are generated, in one embodiment, using the process described below. At block430, the system determines whether the signatures are unique. This ensures that the system can uniquely identify the file segment associated with the signature. Note that the signature generation algorithm is such that the signatures are unique. Therefore, if the signature is not unique, that means that the same code was submitted multiple times to signature generation. If the signatures are unique, they are added to a database at block440. In one embodiment, in addition to the signature, the relevant metadata is also added to the database. The metadata may include information about the vendor, license, and other relevant information.
At block445, a blockchain transaction is generated for each of the validated signatures, and the transactions are recorded to the blockchain that acts as a distributed database. The distributed database makes the signature available. This enables the use of the signature for authentication, proof of authorship, ownership, and existence. In one embodiment, this enables the distributed database to become a central ‘registrar’ for authenticity of the proprietary files. In one embodiment, the blockchain acts as a sort of ‘escrow’ for validation that does not require users to store their proprietary files with a third party. This is cheaper to manage than traditional escrow services. In one embodiment, submissions to the blockchain are securely signed to identify the submitting organization, with associated metadata to support a trail of ownership, licensing, and other information. The process then continues to block460. If the signature was not unique, at block450the vendor is alerted to the policy violation, and directed to resolve it. In one embodiment, such issues may be resolved by identifying licensed content, acquisitions, or other reasons for overlap. At block460, the system processes open source content to generate signatures. In one embodiment, the system scrapes multiple repositories of open source data. In one embodiment, the system scrapes data from appropriate type(s) of repositories. For example, there may be repositories of hardware description language (HDL), which may be processed for a system which evaluates HDL. One example of an open source hardware repository is OpenCores found at http://opencores.org/. At block470, the process determines whether there are any overlaps. Overlaps may be evidence of open source data contaminating a vendor's product, or the vendor's proprietary code being leaked into open source. If overlap is detected, at block480the vendor is alerted to the policy violation, and the open source issue detected. The process then ends, at block490.
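The per-signature recording of block445 can, in a simplified form, be modeled as a hash-chained ledger in which each entry commits to its predecessor, so that no recorded signature can later be altered without breaking the chain. This sketch is illustrative only; a deployment on bitcoin or another public blockchain would encode the commitment in actual signed transactions:

```python
import hashlib
import json

def record_to_ledger(ledger, signature, metadata):
    """Append a signature to a hash-chained ledger; each entry commits
    to its predecessor via prev_hash."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {"signature": signature, "metadata": metadata, "prev_hash": prev_hash}
    # Hash the entry itself (deterministic serialization) to form the link
    # that the next entry will commit to.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
record_to_ledger(ledger, "sig-001", {"vendor": "VendorA", "license": "proprietary"})
record_to_ledger(ledger, "sig-002", {"vendor": "VendorA", "license": "proprietary"})
# Tampering with the first entry would change its hash and break the
# link stored in the second entry.
assert ledger[1]["prev_hash"] == ledger[0]["entry_hash"]
```

The metadata carried in each entry is what supports the trail of ownership and licensing described above.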
In one embodiment, this process runs continuously as new data is acquired from vendors and/or open source repositories. In one embodiment, as versions are released and updated, the process is again run. In one embodiment, the process is only run on newly added content. Of course, though this is shown as a flowchart, in one embodiment it may be implemented as an interrupt-driven system, or executed over multiple devices and in multiple time frames. For example, signature uniqueness verification may occur periodically, and at a remote system from the system which generates the signatures. Similarly, open source processing may occur in parallel with other processes. Therefore, one of skill in the art should understand this flowchart, and all other flowcharts in this application, to describe a set of actions that are related to a particular process, and should not assume that the ordering of the elements of the flowchart cannot be altered while staying within the scope of the disclosure. FIG.5is a flowchart of one embodiment of generating a code signature for a source file. The process begins at stage504by determining a language of the source file. In an embodiment, the language may be detected based on the file extension. For example, the file extension “py” may indicate the Python programming language. In an embodiment, the programming language may also be determined through analysis of the file content. For example, presence of ‘magic numbers,’ unique language-specific reserved keywords, or aspects of the code structure, such as text sequences or indentation, may be compared to known aspects associated with the language. In other instances, hardware components may be described by hardware description language at a level of abstraction that does not include HDL code. For example, the file may be a netlist file in ASCII text or EDIF (Electronic Design Interchange Format), which is a vendor-neutral format commonly used to store electronic netlists and schematics data.
The file may also be a GDSII file in the GDSII stream format, which is a database file format that is a de facto industry standard for data exchange of an integrated circuit or IC layout. It is a binary file format representing planar geometric shapes, text labels, and other information about the layout in hierarchical form. The file may also be in the form of a scripting language or interpretive code for use in a run-time environment that automates the execution of tasks to create a software or hardware build. For simplicity, all of these formats will be referred to as a “language” or “code” and the file that is being analyzed will be referred to as the source file. At stage506, a list of reserved keywords, key phrases, and magic numbers associated with the language is identified. For example, terms such as “break” and “return” are language reserved keywords in the C programming language. In an embodiment, the list of language reserved keywords and key phrases may be stored and maintained in a reference database. At stage508, text that does not match a language reserved keyword or key phrase of the identified list is removed from the source file. This removes variable names, comments, and other such parts of the code. At stage510, language-specific control characters and control character sequences are removed from the source file. This leaves only language reserved keywords and key phrases in the processed file. The removal of content from the source file that does not match language reserved keywords or key phrases addresses issues associated with, for example, variable, class, and function name changes within the source file, as the code signature no longer relies on naming conventions. At stage512, in one embodiment, each language reserved keyword and key phrase of the source file is replaced with a corresponding compact byte representation to produce an encoded sequence.
In an embodiment, each language reserved keyword and key phrase may be mapped to a byte representation, for example a single ASCII character. These mappings may be predefined or defined dynamically. This drastically reduces the size of the encoded sequence for storage and processing. One of skill in the art will appreciate that the corresponding compact byte representations need not be exactly one byte in size, but will typically be smaller in size than the corresponding language reserved keywords and key phrases. Stage512may be repeated for individual modules within the source file to create additional code sequences for those individual modules. Individual modules in a source file may be, for example, classes, functions, subroutines, or blocks of a predetermined number of lines of code. In this manner, creation of code sequences for individual modules may then represent code snippets within a source file. At stage514, the encoded sequences are hashed to produce code signatures for the source file including, in an embodiment, code signatures for individual modules of the source file. Any available hash function may be used for this purpose, such as, but not limited to, MD5, SHA1, SHA2, RIPEMD, or Whirlpool. The system stores, and utilizes, the data from stage514and stage512for matching. If only the signature from stage514is stored, then partial matching will be more difficult. In one embodiment, the signature from stage514helps pick full matches quickly, and the system can spend more computing time on the partial matching that is enabled by the encoded sequences from stage512. FIG.6is a flowchart of one embodiment of enumerating matched signatures. The process starts at block610. At block620, hardware description language (HDL) or other hardware file signatures are received from vendors and open source repositories or other public sources. The signatures are validated signatures from vendors and signatures from open source repositories.
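The signature pipeline of FIG.5 (stages 504-514) might be sketched as follows for a toy C-like keyword set. The keyword list, the one-character encoding, and the choice of SHA-256 are illustrative assumptions; a real implementation would load the full reserved-keyword list for the detected language from a reference database:

```python
import hashlib
import re

# Illustrative subset of C reserved keywords (stage 506).
KEYWORDS = ["break", "for", "if", "int", "return", "while"]
# Map each keyword to a compact one-byte representation (stage 512).
ENCODING = {kw: chr(ord("A") + i) for i, kw in enumerate(KEYWORDS)}

def code_signature(source: str) -> str:
    # Stages 508/510: keep only tokens that are reserved keywords,
    # discarding variable names, comments, and control characters.
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    kept = [t for t in tokens if t in ENCODING]
    # Stage 512: compact byte encoding of the keyword sequence.
    encoded = "".join(ENCODING[t] for t in kept)
    # Stage 514: hash the encoded sequence into a fixed-size signature.
    return hashlib.sha256(encoded.encode()).hexdigest()

# Renaming the variables does not change the signature:
a = code_signature("int total = 0; for (i = 0; i < n; i++) total += v[i]; return total;")
b = code_signature("int sum = 0; for (k = 0; k < m; k++) sum += x[k]; return sum;")
assert a == b
```

Because everything that is not a reserved keyword is discarded at stage 508, the two differently named loops above yield the same signature, which is what makes the scheme robust against renaming.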
At block630, the process compares the signatures to the signatures in the database. The system may include multiple databases. In one embodiment, open source signatures may be in a separate database from vendor signatures. At block640, the process determines whether there is a match. If no matches are found, the process ends at block660. If there is a match, at block650, the third parties identified via matches are enumerated. The enumeration indicates vendor or open source matches, and the sources of those matches. In one embodiment, the set of potential matches are further processed, as will be described below. In one embodiment, the vendor is simply alerted about each match. FIG.7is a flowchart of one embodiment of verifying HDL data against open source databases. The process starts at block710. At block720proprietary signatures are received from one or more vendors. The signatures, as previously noted, are generated by the vendors. At block730, the process compares the proprietary signatures to the open source signatures in the database. At block740, the process determines whether there is a match between an open source file and a proprietary signature. If there is no match, the process ends, at block750. If there is a match, the process determines whether this is free or open source software (FOSS) contamination. FOSS contamination occurs when a vendor inadvertently brings open source software into their proprietary files, for example when engineers bring in code that is open sourced. If FOSS contamination is detected, the vendor is alerted to fix the issue, at block765. If it is not FOSS contamination, then it is likely to be potential leakage. Leakage occurs when proprietary code is made available under an open source license, without the permission of the vendor. At block780, the vendor is alerted to the potential leakage. The process then ends. In one embodiment, the determination between leakage and contamination may not be possible to make.
If the data about the origination of either the open source or the proprietary files is not fully available, the system may simply alert the vendor of a problem, without specifying whether it was potential leakage or potential contamination. FIG.8is a flowchart of one embodiment of verifying HDL or other IC design data files against the files of other vendors. The process starts at block810. At block820proprietary signatures are received from one or more vendors. The signatures, as previously noted, are generated by the vendors. At block830, the process compares the proprietary signatures of a first vendor against proprietary signatures of other vendors. In one embodiment, comparisons are one way. At block840, the process determines whether there is a match between the first vendor's proprietary files and the proprietary signatures of another vendor. If there is no match, the process ends at block850. If there is a match, the process determines whether this is contamination, at block860. Contamination occurs when a vendor inadvertently brings another vendor's software into their proprietary files. This may happen as engineers move between vendors, through misappropriation, or otherwise. If contamination is detected, the vendor is alerted to fix the issue, at block865. If it is not contamination, then it is likely to be appropriation. Appropriation occurs when proprietary code is taken by another vendor, without a license or similar permission. At block875, the vendor is alerted to the potential appropriation. If the process cannot identify whether the match is contamination or appropriation, then at block880the issue is flagged for resolution. The process then ends. FIG.9is a flowchart of one embodiment of licensing and authentication using the system. The process starts at block910. At block920, proprietary signatures are received from Vendor A along with licensing data.
In one embodiment, the licensing data may include the types of licenses available. In one embodiment, the licensing data may be tied to a database of licensed companies. At block930, the proprietary signatures from Vendor A's licensed portfolio are compared to the code portfolios of other vendors. At block940, the process determines whether there is a match. If no match is found, the process continues directly to block970, to determine whether all signatures have been checked. If not, the process returns to block930, to check the next signature against all vendors in the database. If all signatures have been checked, at block980a usage trace is created for each signature. The usage trace identifies the travel of the code. It also permits Vendor A to identify unlicensed users. The process then ends at block990. If a match was found, the usage is traced in the database at block950. The usage data may include how and in what combination (e.g. combined with what other content) the code is used. At block960, Vendor A may be alerted if no license data is found for the use. At block970, the process determines whether all licensed signatures have been checked. If not, the process returns to block930, to check the next signature. If all signatures have been checked, at block980the usage trace data is made available to Vendor A. In one embodiment, the usage traces may be analyzed by the system to determine licensees, and enable the creation of a list of licensees as well. The process ends at block990. FIG.10is a flowchart of one embodiment of updating data in an existing signature. In one embodiment, as files are deprecated, licenses are altered, or software is sold or acquired, the system maintains the signatures in the databases but updates the metadata to reflect the current status. The process starts at block1010. At block1020, a notice of update of some proprietary files that have signatures is received.
At block1030, the process determines whether the update is from the verified originator. The verified originator is the same entity that originally provided the signatures. In one embodiment, public key cryptography is used to provide verification. If the update is not from the verified originator, the verified originator is notified, and validation is requested. If no validation is received, the process ends at block1050. If validation is received, the process continues to block1060. If the update is verified, as determined at block1030, the process continues to block1060. At block1060, the signature and/or metadata is updated to reflect the transfer, change of license, or other status change. In one embodiment, the history of prior statuses and ownerships is maintained. At block1070, the verified originator is notified of the update. This ensures that there cannot be an update by a third party, without the originator's consent. At block1080, the chain of ownership is updated. The process then ends at block1090. FIG.11is a flowchart of one embodiment of resolving conflicts. The process starts at block1110. At block1120, a plurality of signatures from a plurality of vendors are received and placed in a database. At block1125, new signatures are compared to existing signatures, to determine uniqueness. In one embodiment, because of the way signatures are generated, duplication inherently means that the code is substantially identical. If no conflict is found at block1130, a blockchain is generated for the signature at block1135, and it is added to the database and distributed database. The process then ends at block1199. If a conflict is found at block1130, the process continues to block1140. At block1140, the process determines whether the conflict is within the organization. Proprietary code is often reused within an organization in new projects. If the reuse is within the organization, the relationship between the elements is flagged, at block1145.
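The uniqueness and intra-organization checks just described can be sketched as a small dispatch function. The mapping of signature to owning vendor and the returned labels are illustrative assumptions, not the patented data model.

```python
def check_conflict(new_sig: str, new_vendor: str, existing: dict) -> str:
    """Disposition for a newly submitted signature.

    `existing` maps signature -> vendor that registered it. Unique
    signatures are accepted; reuse inside one organization is flagged as
    a relationship (the metadata records the reuse rather than creating
    a second entry); cross-organization duplicates fall through to the
    licensed/open-source/priority checks described in the text."""
    owner = existing.get(new_sig)
    if owner is None:
        return "unique"               # no conflict: record and distribute
    if owner == new_vendor:
        return "intra-org reuse"      # flag relationship, keep one entry
    return "cross-vendor conflict"    # needs further resolution
```

Because duplicate signatures imply substantially identical code, an exact dictionary lookup is sufficient for the conflict test in this sketch.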
In one embodiment, the blockchain is generated only once for each signature. However, because the metadata stores the relationship of the reuse, this is sufficient. If the conflict is not within the organization, as determined at block1140, at block1150, the process determines whether this is licensed software. If so, at block1155, the process flags the licensing relationship and adds the additional licensing terms. As a general rule, if the original signature submission includes the licensing term (e.g. that the code segment is not proprietary to the vendor but rather licensed), this check may not indicate a conflict with the licensor. However, if the original signature submission does not make this indication, the data is added at block1155. The relationship is then flagged, at block1145. If the software is not licensed software, as determined at block1150, the process determines whether the data is open source software. If so, the system flags the code segment, at block1170. The process then ends at block1199. If the conflict is not with open source, the process continues to block1180. At block1180, the process determines whether the priority is obvious. Priority indicates when the code was originally created, and can show that the later-added code is actually the code that should be in the system. If the priority is not obvious, the process flags this conflict as a problem to resolve, at block1170. If the priority is obvious, at block1185, the process determines whether this data predates the existing data in the system. If so, the prior signature is flagged, at block1190. In one embodiment, the prior vendor is alerted, as well as the new vendor. If this data does not predate the existing data, the conflict is flagged as a problem to resolve. The embodiments ofFIG.11may be utilized to resolve legal proceedings when allegations arise or to identify instances of overuse of licensed hardware or software components.
Overuse may occur, by way of example, when an organization licenses a circuit block for a limited number of uses but inadvertently uses the circuit block in a number of circuits that exceeds the authorized licensed limit. In such cases, resolution may be for the licensee to submit additional payment to the licensor for such overuse and to amend the license agreement to reflect such use. In other instances, a foundry business may require vendors to submit to a review prior to manufacture of an integrated circuit for a third party vendor to prevent (or at least reduce) piracy. FIG.12is a block diagram of one embodiment of a computer system. The computer system may be a desktop computer, a server, or part of a distributed set of computers, or “cloud” system which provides processing and storage capabilities. The elements described above with respect toFIG.3are implemented by one or more computer systems, which may correspond to the computer system described herein. It will be apparent to those of ordinary skill in the art, however, that other alternative systems of various system architectures may also be used. The system illustrated inFIG.12includes a bus or other internal communication means1240for communicating information, and a processing unit1210coupled to the bus1240for processing information. The processing unit1210may be a central processing unit (CPU), a digital signal processor (DSP), or another type of processing unit1210. The system further includes, in one embodiment, a random access memory (RAM) or other volatile storage device1220(referred to as memory), coupled to bus1240for storing information and instructions to be executed by processor1210. Main memory1220may also be used for storing temporary variables or other intermediate information during execution of instructions by processing unit1210.
The system also comprises in one embodiment a read only memory (ROM)1250and/or static storage device1250coupled to bus1240for storing static information and instructions for processor1210. In one embodiment, the system also includes a data storage device1230such as a magnetic disk or optical disk and its corresponding disk drive, or Flash memory or other storage which is capable of storing data when no power is supplied to the system. Data storage device1230in one embodiment is coupled to bus1240for storing information and instructions. The system may further be coupled to an output device1270, such as a cathode ray tube (CRT) or a liquid crystal display (LCD) coupled to bus1240through bus1260for outputting information. The output device1270may be a visual output device, an audio output device, and/or a tactile output device (e.g. vibrations, etc.). An input device1275may be coupled to the bus1260. The input device1275may be an alphanumeric input device, such as a keyboard including alphanumeric and other keys, for enabling a user to communicate information and command selections to processing unit1210. An additional user input device1280may further be included. One such user input device1280is a cursor control device, such as a mouse, a trackball, a stylus, cursor direction keys, or a touch screen, which may be coupled to bus1240through bus1260for communicating direction information and command selections to processing unit1210, and for controlling movement on display device1270. Another device, which may optionally be coupled to computer system1200, is a network device1285for accessing other nodes of a distributed system via a network. The communication device1285may include any of a number of commercially available networking peripheral devices such as those used for coupling to an Ethernet, token ring, Internet, or wide area network, personal area network, wireless network or other method of accessing other devices.
The communication device1285may further be a null-modem connection, or any other mechanism that provides connectivity between the computer system1200and the outside world. Note that any or all of the components of this system illustrated inFIG.12and associated hardware may be used in various embodiments of the present system. It will be appreciated by those of ordinary skill in the art that the particular machine that embodies the present system may be configured in various ways according to the particular implementation. The control logic or software implementing the present system can be stored in main memory1220, mass storage device1230, or other storage medium locally or remotely accessible to processor1210. It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory1220or read only memory1250and executed by processor1210. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein, readable by the mass storage device1230, for causing the processor1210to operate in accordance with the methods and teachings herein. The present system may also be embodied in a handheld or portable device containing a subset of the computer hardware components described above. For example, the handheld device may be configured to contain only the bus1240, the processor1210, and memory1250and/or1220. The handheld device may be configured to include a set of buttons or input signaling components with which a user may select from a set of available options. These could be considered input device #11275or input device #21280. The handheld device may also be configured to include an output device1270such as a liquid crystal display (LCD) or display element matrix for displaying information to a user of the handheld device.
Conventional methods may be used to implement such a handheld device. The implementation of the present system for such a device would be apparent to one of ordinary skill in the art given the disclosure of the present invention as provided herein. The present system may also be embodied in a special purpose appliance including a subset of the computer hardware components described above, such as a kiosk or a vehicle. For example, the appliance may include a processing unit1210, a data storage device1230, a bus1240, and memory1220, and no input/output mechanisms, or only rudimentary communications mechanisms, such as a small touch-screen that permits the user to communicate in a basic manner with the device. In general, the more special-purpose the device is, the fewer elements need be present for the device to function. In some devices, communications with the user may be through a touch-based screen, or similar mechanism. In one embodiment, the device may not provide any direct input/output signals, but may be configured and accessed through a website or other network-based connection through network device1285. It will be appreciated by those of ordinary skill in the art that any configuration of the particular machine implemented as the computer system may be used according to the particular implementation. The control logic or software implementing the present system can be stored on any machine-readable medium locally or remotely accessible to processor1210. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g. a computer). For example, a machine readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or other storage media which may be used for temporary or permanent data storage.
In one embodiment, the control logic may be implemented as transmittable data, such as electrical, optical, acoustical or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc.). In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. | 48,826 |
11943370

DETAILED DESCRIPTION

As described herein, one or more embodiments of the present invention utilize device-bound Public Key Infrastructure (PKI) credentials to sign an Open Authorization (OAuth) refresh token in the refresh token message payload. As such, and in one or more embodiments of the present invention, registration of the public key of a credential pair is performed during initial grant establishment, and the addition of a signature over the refresh token is provided during any subsequent refresh token flow. One or more embodiments of the present invention leverage patterns such as Fast Identity Online (FIDO) to establish a device-bound keypair, including the attestation of such a keypair, and do not require anything other than application-level payloads (i.e., no Transport Layer Security (TLS) extensions). In one or more embodiments of the present invention in which an OAuth system is utilized, a FIDO assertion payload, using the refresh token itself as the challenge, is added to the refresh token payload parameters and validated at the server to ensure the signature is performed with a key that was previously registered in a device-bound manner. Such OAuth systems provide an identity security pattern in native applications, such that a user participates in a registration ceremony after installing an application to establish a long-lived OAuth grant (first refresh token and access token) for the application to act as that user when calling Application Programming Interfaces (APIs) to a computer resource, such as data, programs, hardware, etc. An exemplary OAuth registration process for a user begins with the user entering a user profile (e.g., name, address, etc.) for initial identification.
In one or more registration scenarios, Multi-Factor Authentication (MFA) is used, in which the user not only provides a password, but also biometric information (e.g., the user's fingerprint); a current location of the user, as determined by a Global Positioning System (GPS) sensor in an electronic device used by the user, a current communication connection to a particular cell tower in a cellular network, etc.; or access to a security token, such as a dongle that generates ever-changing characters known to be valid at a current time by a security system, etc. Once the user enters his/her/their user profile, MFA credentials, etc., this information is exchanged for an initial OAuth grant from an authorization server. A native application may launch a system browser on the device, where the user authenticates via browser-based mechanisms to initiate an authorization code grant type flow, or a native application may offer this functionality built into the application itself for an initial OAuth grant. The native application honors any redirect Uniform Resource Identifier (URI) that is a registered custom launch URI for the app. Alternatively, the user authenticates on another device browser (e.g., on a laptop) and is presented with a Quick Response (QR) code, which contains the authorization code, such that the native application can scan the QR code. Regardless of the specific flow (those described above or similar variants), an initial grant is established for the application on the device. The refresh token is then stored in app-specific secure storage to prevent disclosure to other apps. After registration, whenever the native application on the device (e.g., a mobile device, such as a smart phone) needs a new access token for calling APIs, a refresh token flow is performed by the native application making a call to the authorization server's token endpoint. Only the refresh token itself serves to identify the grant being refreshed.
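The baseline refresh flow just described can be sketched as a toy token endpoint in Python. Note that possession of the bare refresh token value is the only thing checked; all names here are illustrative assumptions, not a real authorization-server API.

```python
import secrets

# refresh_token value -> grant record; the token value alone keys the grant
GRANTS = {}

def issue_grant(user: str) -> str:
    """Registration: establish a grant and hand out the first refresh token."""
    rt = secrets.token_urlsafe(32)
    GRANTS[rt] = {"user": user}
    return rt

def refresh(refresh_token: str):
    """Baseline refresh: any party holding the token value can redeem it."""
    grant = GRANTS.pop(refresh_token, None)  # refresh tokens are single-use
    if grant is None:
        raise PermissionError("unknown refresh token")
    new_rt = secrets.token_urlsafe(32)       # rotate the refresh token
    GRANTS[new_rt] = grant
    access_token = secrets.token_urlsafe(32)
    return access_token, new_rt
```

Nothing in this endpoint ties the request to the device where the grant was established, which is the gap device-bound signing is meant to close.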
The problem with the OAuth process just described, which one or more embodiments of the present invention overcome in a new and useful manner, is that there is no proof provided that the refresh token is being presented from the device to which it was originally issued. As such, without the present invention the authentication server is unable to determine/distinguish a scenario in which the refresh token has been compromised from the device, native application, and/or network, such that the refresh token is being presented from a different device or program. In order to address this issue, one or more embodiments of the present invention have the native application use an asymmetric device-bound credential that the application creates during the registration phase, to sign the refresh token each time a refresh token flow is performed. By ensuring that the signing operation can only take place on the device where the grant is initially established, the server has a much higher level of confidence that the OAuth grant is being used by the application on the device to which it was issued. One or more embodiments of the present invention provide a clear improvement over token binding. Token binding uses cryptographic certificates, but requires the use of a Transport Layer Security (TLS) extension, in order to obtain cryptographic certificates on both ends of a TLS connection (e.g., between a client and an authorization server). As such, user instrumentation of such a system requires a high level of expertise, time, and computer resources, which the present invention does not require. Furthermore, token binding does not mandate that the credential created at the client be device-bound, thus making it less secure. As such, one or more embodiments of the present invention provide a resource security solution that is at the application data level (e.g., using extra parameters in OAuth payloads), and is not dependent on token binding implementations in TLS. 
At a high level, one or more embodiments of the present invention have two phases: a Registration Phase and a Subsequent Grant Refresh.

Registration Phase

During registration, the native application receives or collects “initial grant data” from the user. In one or more embodiments of the present invention, this initial grant data is Resource Owner Password Credentials (ROPC) user authentication information, such as a username/password, which is supplied to the native application on the client device via a user interface. In one or more embodiments of the present invention, this initial grant data is an authorization code that was obtained either from a handoff (custom launch URI invocation) from the system browser, or scanned in via QR code. Before the native application sends this initial grant data to the token endpoint to obtain its first authorization grant, the native application creates a local device-bound public/private key credential. In one or more embodiments of the present invention, the device's operating system offers Fast Identity Online 2 (FIDO2) platform authenticator Application Program Interfaces (APIs), which are used to create the credential. In one or more embodiments of the present invention, platform APIs (e.g. the Secure Enclave APIs) create the credential. The credential that is created may or may not require user verification to be created and used. This choice comes down to the security requirements (policy) of the company offering API services and the desired user experience. In Fast Identity Online (FIDO) terminology and specifications, user verification is optional, and that remains so in the ideas described herein. Having created the keypair, the native application then constructs a FIDO attestationObject, which is included with the initial grant data in a request to the OAuth token endpoint in the authentication server.
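The registration steps above can be sketched as follows. This is a heavily simplified stand-in: real device-bound credentials come from platform authenticator APIs and use asymmetric keys, whereas here a random device-held secret and HMAC imitate the keypair and attestation signature so the example needs no cryptography library. All names and the dict layout are assumptions.

```python
import hashlib
import hmac
import json
import os

def create_device_credential():
    """Toy device-bound credential. A real implementation calls a platform
    authenticator (e.g. FIDO2 APIs backed by a secure element) and the
    private key never leaves the device."""
    private = os.urandom(32)                      # stays on the device
    public = hashlib.sha256(private).hexdigest()  # stand-in "public key"
    return private, public

def build_attestation_object(attestation_key: bytes, public_key: str, initial_grant_data: dict):
    """Packed-style attestation sketch: the statement signs the new
    credential's public key together with the hash of the client data.
    Per the text, the initial grant data takes the place of a
    server-issued challenge."""
    client_data = json.dumps(initial_grant_data, sort_keys=True).encode()
    client_data_hash = hashlib.sha256(client_data).digest()
    auth_data = {"credentialPublicKey": public_key, "userVerified": True}
    to_sign = json.dumps(auth_data, sort_keys=True).encode() + client_data_hash
    att_sig = hmac.new(attestation_key, to_sign, hashlib.sha256).hexdigest()
    return {"fmt": "packed", "authData": auth_data, "attStmt": {"sig": att_sig}}
```

The server-side counterpart would recompute the signed payload and check the statement against the trusted attestation signer configured in its metadata.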
In one or more embodiments of the present invention, a packed attestation format is used, such that the clientDataJSON that is input to the creation of the attestation object is the initial grant data (in lieu of a server-provided challenge). In one or more embodiments of the present invention, the attestation private key is embedded into the native application source itself. Protection of the attestation private key is performed using a technology such as white-box encryption and secure native code distribution practices. One or more embodiments of the present invention utilize an attestation protocol, such as FIDO2 attestation. An attestation, such as FIDO2 attestation, provides a proof to the authentication server of the make/model of the client authenticator that was used. In the case of a native application functioning as the client authenticator, this attests to the integrity of the application by verifying that the vendor and version of the native application are those specified by the application developer, such that the application is trusted to be doing what it was coded to do. In one or more embodiments of the present invention, the authentication server must be able to prove that the credential being created, which will be used to sign refresh token flows, is device-bound, such that the private key is only used on the device where it is established. Without such an attestation, the server does not know that the client communicating with it is a legitimate installation of the application. If the application is using native FIDO2 APIs provided by the operating system of the client device, the attestation is provided by the platform. If not, the attestation is supplied in the native application code.
The native application then presents the initial grant data, the attestationObject (which includes the public key of the device-bound credential) and the signature of the initial grant data (to verify the public key) to the token endpoint of the OAuth authorization server. The OAuth authorization server verifies the attestation using pre-configured metadata that includes the trusted signer of the attestation private key. This verification includes checking that the authenticator data in the attestation object indicates user verification was performed, if that is the policy of the server. The initial grant data is also verified. The public key of the device-bound credential is associated server-side with the OAuth grant, and the initial refresh token and access token are returned to the native application client.

Subsequent Grant Refresh

From time to time the native application will perform new refresh token flows as access tokens expire. In one or more embodiments of the present invention, the refresh token flow includes, alongside the refresh token, FIDO authenticatorData and a signature (of the hash of the refresh token), performed using the private key of the device-bound credential that was registered during initial grant establishment. This provides a proof to the server on each refresh that the request was generated at the device from which the grant was established. The clientDataJSON that is described in FIDO authentication documentation includes the refresh token in place of the server-provided challenge. In one or more embodiments of the present invention, the refresh token is given a unique type. Exemplary pseudocode for the clientDataJSON is:

{
  "type": "urn:mycustomapp:oauth",
  "refresh_token": "RT_value"
}

As such, there is no need for the WebAuthn-defined challenge, origin, or tokenBinding fields.
The entire refresh token flow post body then is described as:

POST /oauth/token

client_id=public_client
&grant_type=refresh_token
&refresh_token=RT_value
&authenticatorData=b64url_authenticator_data
&signature=b64url_sig

The authentication server ensures that the RT value is valid and represents a refresh token (RT) known to the server. The authentication server looks up the associated public key that was registered for the grant at grant establishment (this becomes the allowCredentials list in FIDO terminology), then makes sure that the rest of the validation procedure in “Verifying an Authentication Assertion” from the WebAuthn specification is successful, with some minor considerations given to data values (such as origin, which in one or more embodiments of the present invention is a Uniform Resource Name (URN)) since this is a native scenario rather than a browser with pure WebAuthn. In one or more embodiments of the present invention, and in order to prevent application-level Man In The Middle (MITM) attacks, other mitigations such as certificate pinning are also employed as part of the overall solution. With reference now to the figures, and in particular toFIG.1, there is depicted a block diagram of an exemplary system and network that may be utilized by and/or in the implementation of one or more embodiments of the present invention.
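Assembling the post body shown above can be sketched as follows. The `sign` callback stands in for the device-bound credential's signing operation (a platform authenticator in a real application), and the helper names are assumptions made for the example.

```python
import base64
import hashlib
import json
from urllib.parse import urlencode

def b64url(data: bytes) -> str:
    """Base64url without padding, as used for WebAuthn-style fields."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_signed_refresh_body(refresh_token: str, authenticator_data: bytes, sign) -> str:
    """Build the refresh POST body with the extra proof parameters.
    The clientDataJSON carries the refresh token in place of a
    server-provided challenge; its hash is what gets signed on-device."""
    client_data = json.dumps(
        {"type": "urn:mycustomapp:oauth", "refresh_token": refresh_token},
        separators=(",", ":"),
    ).encode()
    digest = hashlib.sha256(client_data).digest()
    signature = sign(authenticator_data + digest)  # performed on-device only
    return urlencode({
        "client_id": "public_client",
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "authenticatorData": b64url(authenticator_data),
        "signature": b64url(signature),
    })
```

The server reverses these steps: it rebuilds the signed payload from the presented token and authenticatorData, then checks the signature against the public key registered for the grant.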
Note that some or all of the exemplary architecture, including both depicted hardware and software, shown for and within server101may be utilized by client device123and/or software deploying server149and/or computer resource server151and/or computer resource(s)153and/or application attestation service server155and/or artificial intelligence (AI) system157and/or blockchain network159shown inFIG.1, and/or by resource owner200and/or authorization server201and/or client device223and/or computer resource server251and/or computer resource(s)253and/or application attestation service server255shown inFIG.2, and/or other processing devices depicted in other figures associated with one or more embodiments of the present invention. Exemplary server101includes a processor103that is coupled to a system bus105. Processor103may utilize one or more processors, each of which has one or more processor cores. A video adapter107, which drives/supports a display109, is also coupled to system bus105. System bus105is coupled via a bus bridge111to an input/output (I/O) bus113. An I/O interface115is coupled to I/O bus113. I/O interface115affords communication with various I/O devices, including a keyboard117, a mouse119, a media tray121(which may include storage devices such as CD-ROM drives, multi-media interfaces, etc.), and external USB port(s)125. While the format of the ports connected to I/O interface115may be any known to those skilled in the art of computer architecture, in one embodiment some or all of these ports are universal serial bus (USB) ports. As depicted, server101is able to communicate with a network127using a network interface129. Network interface129is a hardware network interface, such as a network interface card (NIC), etc. Network127may be an external network such as the Internet, or an internal network such as an Ethernet or a virtual private network (VPN). A hard drive interface131is also coupled to system bus105.
Hard drive interface131interfaces with a hard drive133. In one embodiment, hard drive133populates a system memory135which is also coupled to system bus105. System memory is defined as the lowest level of volatile memory in server101. This volatile memory includes additional higher levels of volatile memory (not shown), including, but not limited to, cache memory, registers and buffers. Data that populates system memory135includes server101's operating system (OS)137and application programs143. OS137includes a shell139, for providing transparent user access to resources such as application programs143. Generally, shell139is a program that provides an interpreter and an interface between the user and the operating system. More specifically, shell139executes commands that are entered into a command line user interface or from a file. Thus, shell139, also called a command processor, is generally the highest level of the operating system software hierarchy and serves as a command interpreter. The shell provides a system prompt, interprets commands entered by keyboard, mouse, or other user input media, and sends the interpreted command(s) to the appropriate lower levels of the operating system (e.g., a kernel141) for processing. Note that while shell139is a text-based, line-oriented user interface, one or more embodiments of the present invention will equally well support other user interface modes, such as graphical, voice, gestural, etc. As depicted, OS137also includes kernel141, which includes lower levels of functionality for OS137, including providing essential services required by other parts of OS137and application programs143, including memory management, process and task management, disk management, and mouse and keyboard management. Application programs143include a renderer, shown in exemplary manner as a browser145. 
Browser145includes program modules and instructions enabling a world wide web (WWW) client (i.e., server101) to send and receive network messages to the Internet using hypertext transfer protocol (HTTP) messaging, thus enabling communication with client device123and/or software deploying server149and/or computer resource server151and/or computer resource(s)153and/or application attestation service server155and/or other computer systems. Application programs143in server101's system memory also include a Program for Secure Access to Computer Resources (PSACR)147. PSACR147includes code for implementing the processes described below, including those described inFIGS.2-7. In one or more embodiments of the present invention, server101is able to download PSACR147from software deploying server149, including in an on-demand basis, wherein the code in PSACR147is not downloaded until needed for execution. Note further that, in one or more embodiments of the present invention, software deploying server149performs all of the functions associated with the present invention (including execution of PSACR147), thus freeing server101from having to use its own internal computing resources to execute PSACR147. Note that the hardware elements depicted in server101are not intended to be exhaustive, but rather are representative to highlight essential components required by one or more embodiments of the present invention. For instance, server101may include alternate memory storage devices such as magnetic cassettes, digital versatile disks (DVDs), Bernoulli cartridges, and the like. These and other variations are intended to be within the spirit and scope of the present invention. With reference now toFIG.2, a high-level overview of one or more embodiments of the present invention executed within a modified Open Authorization (OAuth) system206is presented. 
As shown in steps1and2, a resource owner200, which is either a human user interfacing with the client device223via a user interface, or a computer used by a user to present initial grant credentials to the client device via an application program interface (API), presents initial grant credentials to the native application202. Examples of such credentials include, but are not limited to, an authorization code, username and password, etc. As described in step2a(shown as occurring within client device223and in communication with application attestation service server255, analogous to application attestation service server155shown inFIG.1), the native application202sends the initial grant credentials, which were received from the resource owner200, to the application attestation service server255. The application attestation service server255provides a native application attestation service that validates that the native application202is unmodified and is running on a legitimate client device223. It provides a cryptographic proof of this attestation that is included in the payload of step3a. As shown in step2b, the client device223then generates a public/private key pair204, which is used as described below. As shown in step3a, the native application202sends, to the authorization server201, the initial authorization grant, the attestation result (generated by the native application attestation service), the public key (generated by the native application202), and the signature of the initial authorization grant, signed with the corresponding private key. As shown in step3b, the authorization server verifies the attestation result of the application with the application attestation service. If the attestation is not verified by the attestation service, the registration is aborted. 
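The registration steps just described (steps 1 through 3b) can be sketched in code. Python's standard library has no asymmetric cryptography, so a keyed hash stands in for the device-bound signature that a real implementation would produce with ECDSA or Ed25519; the function names and payload fields below are illustrative assumptions of this sketch, not the patent's own interfaces.

```python
import hashlib
import hmac
import os

def generate_key_pair():
    # Stand-in for step 2b: the "public key" is a digest of the private key so
    # the sketch stays self-contained; a real pair would come from ECDSA/Ed25519.
    private_key = os.urandom(32)
    public_key = hashlib.sha256(private_key).hexdigest()
    return private_key, public_key

def sign(private_key, message):
    # Stand-in for signing with the device-bound private key (step 3a).
    return hmac.new(private_key, message, hashlib.sha256).hexdigest()

def build_registration_payload(initial_grant, attestation_result):
    # Assemble the step-3a payload: the initial authorization grant, the
    # attestation result, the public key, and the signature of the grant made
    # with the corresponding private key.
    private_key, public_key = generate_key_pair()
    payload = {
        "grant": initial_grant,
        "attestation": attestation_result,
        "public_key": public_key,
        "grant_signature": sign(private_key, initial_grant.encode()),
    }
    return payload, private_key  # the private key never leaves the device

payload, device_private_key = build_registration_payload("auth-code-123", "attestation-ok")
```

The authorization server can then check the grant signature against the public key in the payload and, separately, verify the attestation result with the attestation service before completing registration.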
As shown in step3c, the authorization server201verifies the public key with the signature of the initial authorization grant, and stores the public key, which will be used in the steps below for granting access to the computer resource(s)253. As shown in step4, the authorization server201generates an initial access token and a refresh token and returns them to the native application. The initial access token allows the native application202to access the computer resource(s)253, and the refresh token allows the authorization server201to refresh/renew the access token. As shown in step5, the native application202uses an access token (i.e., either the initial access token or a new access token, whose generation by the authorization server201has been authorized by the refresh token) to access the computer resource(s)253on behalf of the resource owner200. Access tokens have a limited validity period (e.g., less than five minutes), and thus need to be periodically refreshed by using a refresh token. Therefore, and as depicted as step6, the native application202periodically exchanges a current refresh token for a new access token and a new refresh token with the authorization server201. That is, a refresh token is single-use, so a new refresh token replaces an old/current refresh token. In the request from the native application202for the authorization server201to generate a new refresh token, the old refresh token is signed with a private key by the native application202, and the signature is sent to the authorization server201as part of the token refresh flow. Thus, the authorization server201only generates and returns a new access token and a new refresh token if the signature for the old refresh token is verified with the public key that was received in step3a. With reference now toFIG.3, a high-level flow-chart of one or more operations performed in one or more embodiments of the present invention is presented. 
After initiator block301, an authorization server receives, from a native application on a device, an initial authorization grant, a public key of a private/public key pair generated on the device, and an attestation of authenticity of the native application, as shown in block303and described in detail inFIG.2. As shown in block305and described in detail inFIG.2, the authorization server receives, from the native application on the device, a refresh token and a digital signature of the refresh token that is created with the private key. In one or more embodiments of the present invention, the authorization server recognizes the refresh token only if the refresh token is verified with the public key that has been previously registered. As shown in block307and described in detail inFIG.2, the authorization server then validates the digital signature of the refresh token. As shown in block309and described in detail inFIG.2, in response to validating the refresh token from the native application on the device, the authorization server transmits a new access token and a new refresh token from the authorization server to the native application on the device. This new access token allows the native application on the device to access the computer resource. The flow-chart ends at terminator block311. In one or more embodiments of the present invention, and as described herein, the public/private key pair is generated on the device, and the private key is protected from unauthorized use or extraction by being bound to the device. That is, the private key cannot be “hacked” by a malicious actor, since it is only accessible to the client device that generated the public/private key pair. In one or more embodiments of the present invention, the authorization server and the client are components of an Open Authorization (OAuth) architecture (e.g., modified Open Authorization (OAuth) system206shown inFIG.2), which has been modified by the present invention. 
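The server-side refresh rotation (step 6 above, blocks 305 through 309) can be sketched as follows, under the same standard-library caveat: Python's standard library cannot verify a real asymmetric signature, so in this sketch the key registered at step 3a doubles as an HMAC key, where a production server would instead verify the signature with the device's public key. The class and method names are illustrative.

```python
import hashlib
import hmac
import secrets

class AuthorizationServer:
    def __init__(self):
        self.registered_keys = {}   # client id -> key registered at step 3a
        self.live_refresh = {}      # client id -> the single-use refresh token

    def register(self, client_id, key):
        # Step 4: issue the initial access token and refresh token.
        self.registered_keys[client_id] = key
        access = secrets.token_hex(16)
        refresh = secrets.token_hex(16)
        self.live_refresh[client_id] = refresh
        return access, refresh

    def refresh(self, client_id, old_refresh, signature):
        # Blocks 305-309: verify the signature over the old refresh token, and
        # honor the token only if it is the current (single-use) one.
        key = self.registered_keys.get(client_id)
        if key is None:
            return None
        expected = hmac.new(key, old_refresh.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            return None
        if self.live_refresh.get(client_id) != old_refresh:
            return None
        new_access = secrets.token_hex(16)
        new_refresh = secrets.token_hex(16)
        self.live_refresh[client_id] = new_refresh  # the old token is now dead
        return new_access, new_refresh
```

Because the server replaces the stored refresh token on every successful exchange, a replayed (already-used) refresh token is rejected even if its signature verifies.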
In one or more embodiments of the present invention, a refresh authorization grant is verified to come from the device to which it was issued using data in an application payload from the client, without use of a Transport Layer Security (TLS) protocol extension, as described above. In one or more embodiments of the present invention, one or more processors (e.g., within the authorization server201shown inFIG.2), confirm an identity of the device by applying a Fast Identity Online (FIDO) authentication as part of refresh authorization grant processing. Details of FIDO and/or FIDO2 are described above. In one or more embodiments of the present invention, the identity of the device is determined and/or confirmed by an identifier of that particular device, such as an Internet Protocol (IP) address used by that particular device and identified by a lookup table that associates that particular device with a particular IP address; a Media Access Control (MAC) address that is assigned to a network interface controller for use by that particular device as a network address in communications within a network segment; a Universally Unique Identifier (UUID) that has been generated for and is specific to that particular device; etc. As such, and in one or more embodiments of the present invention, the authorization server validates the refresh token from the native application on the device only if the refresh token came from a particular device that created the initial authorization grant and the public key of a private/public key pair. 
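The device check described above can be sketched as a simple lookup that compares the identifier recorded when the grant was created (an IP address, MAC address, UUID, etc.) against the identifier carried in the refresh request's header; the header name and class below are hypothetical, not taken from the patent.

```python
import uuid

class DeviceBindingCheck:
    """Accept a refresh only from the device that created the grant."""

    def __init__(self):
        self.grant_device = {}  # grant id -> IP/MAC/UUID seen at registration

    def register_grant(self, grant_id, device_id):
        self.grant_device[grant_id] = device_id

    def refresh_allowed(self, grant_id, request_headers):
        # The check ignores which application sent the request; only the
        # originating device's identifier matters.
        return self.grant_device.get(grant_id) == request_headers.get("X-Device-Id")

check = DeviceBindingCheck()
device_id = str(uuid.uuid4())
check.register_grant("grant-1", device_id)
```

A request whose header identifier does not match the registered device, or that references an unknown grant, is simply refused.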
The authorization server, in one or more embodiments of the present invention, identifies that particular device by the IP address currently being used by the particular device, the MAC address currently being used by the particular device, the UUID of the particular device, etc. This identifier is part of a header that the particular device sends to the authorization server along with the initial authorization grant, the public key of a private/public key pair generated on the device, the attestation of authenticity of the native application, and the refresh token. Therefore, in one or more embodiments of the present invention, the authorization server validates the refresh token from the native application on the device only if the refresh token came from a particular device that created the initial authorization grant and the public key of a private/public key pair, without regard to which application, within that device or within another device, sent the refresh token, the initial authorization grant, and/or the public key of a private/public key pair to the authorization server. In one or more embodiments of the present invention, a blockchain system is used to ensure that data/elements/factors used by the authorization server201shown inFIG.2to generate access tokens and/or refresh tokens are valid by storing such data/elements/factors in a blockchain ledger. Thus, with reference toFIG.4, in one or more embodiments of the present invention a blockchain network459(analogous to blockchain network159shown inFIG.1) is used to provide the infrastructure (e.g., execution of the chaincodes) and services (e.g., membership services such as identity management) for securely and transparently storing, tracking and managing transactions on a “single point of truth”. The blockchain network459maintains a verifiable record (of the single point of truth) of every single transaction ever made within the system. 
Once data are entered onto the blockchain, they can never be erased (immutability) or changed. That is, a change to a record would be regarded as issuing/introducing a new transaction. This prohibition thus ensures the auditability and verifiability of the data. The blockchain network459(also known as a “blockchain fabric”, a “blockchain system”, an “open blockchain”, or a “hyperledger fabric”) is based on a distributed database of records of all transactions or digital events that have been executed and shared among participating parties. An individual transaction in the blockchain is validated or verified through a consensus mechanism incorporating a majority of the participants in the system. This allows the participating entities to know for certain that a digital event happened by creating an irrefutable record in a permissioned public ledger. When a transaction is executed, its corresponding chaincode is executed by several validating peers of the system. For example, as shown inFIG.4, peers418a-418d establish the validity of the transaction parameters and, once they reach consensus, a new block is generated and appended onto the blockchain network. That is, an application process408(e.g., the OAuth process performed in authentication server201, as described inFIG.2) running on a supervisory computer401(e.g., authentication server201shown inFIG.2and/or server101shown inFIG.1) executes an application such as the depicted App403(e.g., an OAuth application), causing a software development kit (SDK)410to communicate using general remote procedure calls (grpc) to membership services409that support the peer-to-peer network404, which supports the blockchain416using the peers418a-418d. With reference now toFIG.5, an exemplary blockchain ledger500within blockchain416as utilized in one or more embodiments of the present invention is depicted. 
In one or more embodiments of the present invention, blockchain ledger500includes an identifier of the supervisory computer (shown in block502), such as authentication server201shown inFIG.2and/or server101shown inFIG.1, that also supports and/or utilizes the peer-to-peer network404shown inFIG.4. For example, in one or more embodiments of the present invention, block502includes an internet protocol (IP) address, a uniform resource locator (URL), etc. of the supervisory computer. This information is used by peers in the peer-to-peer network404shown inFIG.4to receive transactions related to the process flow described herein. In one or more embodiments of the present invention, blockchain ledger500also includes a copy of the initial authorization grant (block504), the public key (block506), the attestation of authenticity of the native application (block508), the refresh token (block510), and the digital signature of the refresh token (block512) that are utilized by the authentication server to generate a first new access token and a first new refresh token. Thereafter, the information from the blockchain ledger500is retrieved, in a form that is ensured to be unchanged since it was initially presented to (although not necessarily used by) the authentication server. That is, as the initial authorization grant, public key of a private/public key pair generated on the device, and attestation of authenticity of the native application are created, they are sent in parallel to both the authorization server and the blockchain. However, there is a chance that the initial authorization grant, public key of a private/public key pair generated on the device, and attestation of authenticity of the native application are corrupted before they are used by the authorization server. 
As such, an uncorrupted version of the initial authorization grant, public key of a private/public key pair generated on the device, and attestation of authenticity of the native application is stored on the blockchain, in a blockchain system that prevents any data corruption. Exemplary operation of the blockchain network459shown inFIG.4is presented inFIG.6. As described in step601, a computing device (e.g., supervisory computer401shown inFIG.4, which in one or more embodiments is the server101shown inFIG.1and/or the authorization server201shown inFIG.2) performs a transaction (e.g., receiving, from a native application on a device, an initial authorization grant, a public key of a private/public key pair generated on the device, an attestation of authenticity of the native application, a refresh token, a digital signature of the refresh token that is created with the private key, etc., as described above inFIG.2). As shown in step603, the supervisory computer401hashes the transaction with a hash algorithm, such as Secure Hash Algorithm (SHA-2), and then signs the hash with a digital signature. This signature is then broadcast to the peer-to-peer network404shown inFIG.4, as described in step605. A peer in the peer-to-peer network404(e.g., peer418a) aggregates the transaction(s) into blockchain416shown inFIG.4, as shown in step607. As shown in block609, each block contains a link to a previous block. The newly-revised blockchain416is validated by one or more of the other peers in peers418a-418dand/or by other peers from other authorized blockchain systems (step611). The validated block is then broadcast to the peers418b-418d, as described in step613. These peers418b-418d listen for and receive the new blocks and merge them into their copies of blockchain416(step615). 
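Steps 601 through 609 (hash the transaction with SHA-256 and append a block linked to its predecessor's hash) can be illustrated with a toy ledger; consensus and peer broadcast (steps 611 through 615) are omitted, and the block layout is a simplification of this sketch, not the patent's chaincode.

```python
import hashlib
import json

def block_hash(block):
    # Step 603: hash the block's contents with SHA-256 over a canonical encoding.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    def __init__(self):
        self.chain = [{"index": 0, "transaction": "genesis", "prev_hash": "0" * 64}]

    def append(self, transaction):
        # Steps 607-609: each new block carries the hash of the previous block.
        block = {
            "index": len(self.chain),
            "transaction": transaction,
            "prev_hash": block_hash(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def valid(self):
        # Tampering with any earlier block breaks every later link, which is
        # what makes the recorded transactions effectively immutable.
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )
```

Altering any stored transaction after the fact changes that block's hash, so every subsequent block's stored `prev_hash` no longer matches and validation fails.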
Thus,FIG.4throughFIG.6describe a blockchain deployment topology that provides a distributed ledger, which persists and manages digital events, called transactions, shared among several participants, each having a stake in these events. The ledger can only be updated by consensus among the participants. Furthermore, once transactions are recorded, they can never be altered (they are immutable). Every such recorded transaction is cryptographically verifiable with proof of agreement from the participants, thus providing a robust provenance mechanism tracking their origination. As such, a blockchain fabric uses a distributed network to maintain a digital ledger of events, thus providing excellent security for the digital ledger, since the blockchain stored in each peer is dependent upon earlier blocks, which provide protected data for subsequent blocks in the blockchain. That is, the blockchain fabric described herein provides a decentralized system in which every node in a decentralized system has a copy of the blockchain. This avoids the need to have a centralized database managed by a trusted third party. Transactions are broadcast to the network using software applications. Network nodes can validate transactions, add them to their copy and then broadcast these additions to other nodes. However, as noted above, the blockchain is nonetheless highly secure, since each new block is protected (e.g., hashed) based on one or more previous blocks. Thus, in one or more embodiments of the present invention, assume that the new access token described above inFIG.3is identified as a first new access token and the new refresh token is identified as a first new refresh token. 
In one or more embodiments of the present invention, a blockchain architecture (e.g., server101shown inFIG.1and/or authorization server201shown inFIG.2and/or supervisory computer401and/or blockchain network459shown inFIG.4) stores a stored blockchain that includes the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token that are utilized by the authentication server to generate the first new access token and the first new refresh token. The supervisory computer (e.g., authorization server201) retrieves the stored blockchain from the blockchain architecture, and generates a second new access token and a second new refresh token by using the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token from the stored blockchain. One or more processors compare the first new access token and the first new refresh token to the second new access token and the second new refresh token in order to determine that the first new access token and the first new refresh token match the second new access token and the second new refresh token. In response to determining that the first new access token and the first new refresh token match the second new access token and the second new refresh token, one or more processors authorize the first new access token and the first new refresh token generated by the authorization server to be transmitted from the authorization server to the native application. 
In one or more embodiments of the present invention, the new access token is a first new access token and the new refresh token is a first new refresh token; the blockchain just described includes the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token that are utilized by the authentication server to generate the first new access token and the first new refresh token; the stored blockchain is retrieved from the blockchain architecture; the authentication server generates a second new access token and a second new refresh token by using the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token from the stored blockchain. However, in these one or more embodiments, one or more processors compare the first new access token and the first new refresh token to the second new access token and the second new refresh token in order to determine that the first new access token and the first new refresh token do not match the second new access token and the second new refresh token. As such, and in response to determining that the first new access token and the first new refresh token do not match the second new access token and the second new refresh token, one or more processors block transmission of the first new access token and the first new refresh token generated by the authorization server from the authorization server to the native application. 
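The cross-check in these two embodiments presumes that the token pair can be regenerated from the same inputs, once from the live request and once from the blockchain's copy. The sketch below makes that concrete with a deterministic, keyed derivation; the derivation function itself is an assumption of this illustration, not a scheme the patent specifies.

```python
import hashlib
import hmac

def derive_tokens(server_secret, grant, public_key, attestation, refresh_token, signature):
    # Deterministically derive an (access token, refresh token) pair from the
    # registration inputs, so identical inputs always yield an identical pair.
    material = "|".join([grant, public_key, attestation, refresh_token, signature]).encode()
    access = hmac.new(server_secret, b"access|" + material, hashlib.sha256).hexdigest()
    new_refresh = hmac.new(server_secret, b"refresh|" + material, hashlib.sha256).hexdigest()
    return access, new_refresh

def authorize_release(first_pair, second_pair):
    # Transmit the first pair only when it matches the pair regenerated from
    # the blockchain's copy of the inputs; otherwise block transmission.
    return first_pair == second_pair
```

If any input was corrupted before the authorization server used it, the pair derived from the live inputs differs from the pair derived from the blockchain's stored copy, and transmission is blocked.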
Thus, the contents of the blockchain are used to confirm that the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token that are utilized by the authentication server to generate the first new access token and the first new refresh token are valid, as guaranteed by their earlier storage in the blockchain. In one or more embodiments of the present invention, artificial intelligence (e.g., a neural network) is used to validate the access token and the refresh token that the authorization server generates. With reference then toFIG.7, an exemplary artificial intelligence system used in one or more embodiments of the present invention, a deep neural network (DNN)757(analogous to artificial intelligence system157shown inFIG.1), is presented. The nodes within DNN757represent hardware processors, virtual processors, software algorithms, or a combination of hardware processors, virtual processors, and/or software algorithms. In one or more embodiments of the present invention, DNN757has been trained to use authorization server data700(e.g., the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token that are utilized by the authentication server to generate the new access token and the new refresh token) to generate a neural-network-generated access token and/or a neural-network-generated refresh token, as shown in block702. 
DNN757is trained by using various known types of initial authorization grants, types of public keys, types of attestations of authenticity of the native application, types of refresh tokens, and types of digital signatures of the refresh token used to generate known neural-network-generated access tokens and/or neural-network-generated refresh tokens. DNN757is an exemplary type of neural network used in one or more embodiments of the present invention. Other neural networks that can be used in one or more embodiments of the present invention include convolutional neural networks (CNNs) and neural networks that use other forms of deep learning. A neural network, as the name implies, is roughly modeled after a biological neural network (e.g., a human brain). A biological neural network is made up of a series of interconnected neurons, which affect one another. For example, a first neuron can be electrically connected by a synapse to a second neuron through the release of neurotransmitters (from the first neuron) which are received by the second neuron. These neurotransmitters can cause the second neuron to become excited or inhibited. A pattern of excited/inhibited interconnected neurons eventually leads to a biological result, including thoughts, muscle movement, memory retrieval, etc. While this description of a biological neural network is highly simplified, the high-level overview is that one or more biological neurons affect the operation of one or more other bio-electrically connected biological neurons. An electronic neural network similarly is made up of electronic neurons. However, unlike biological neurons, electronic neurons are never technically “inhibitory”, but are only “excitatory” to varying degrees. The electronic neurons (also referred to herein simply as “neurons” or “nodes”) in DNN757are arranged in layers, known as an input layer703, hidden layers705, and an output layer707. 
The input layer703includes neurons/nodes that take input data, and send it to a series of hidden layers of neurons (e.g., hidden layers705), in which neurons from one layer in the hidden layers are interconnected with all neurons in a next layer in the hidden layers705. The final layer in the hidden layers705then outputs a computational result to the output layer707, which is often a single node for holding vector information. As just mentioned, each node in the depicted DNN757represents an electronic neuron, such as the depicted neuron709. As shown in block711, each neuron (including neuron709) functionally includes at least four features: an algorithm, an output value, a weight, and a bias value. The algorithm is a mathematical formula for processing data from one or more upstream neurons. For example, assume that one or more of the neurons depicted in the middle hidden layers705send data values to neuron709. Neuron709then processes these data values by executing the mathematical function shown in block711, in order to create one or more output values, which are then sent to another neuron, such as another neuron within the hidden layers705or a neuron in the output layer707. Each neuron also has a weight that is specific for that neuron and/or for other connected neurons. Furthermore, the output value(s) are added to bias value(s), which increase or decrease the output value, allowing the DNN757to be further “fine-tuned”. For example, assume that neuron713is sending the results of its analysis of a piece of data to neuron709. Neuron709has a first weight that defines how important data coming specifically from neuron713is. If the data is important, then data coming from neuron713is weighted heavily, and/or increased by the bias value, thus causing the mathematical function(s) within neuron709to generate a higher output, which will have a heavier impact on neurons in the output layer707. 
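The per-neuron computation just described (weighted inputs, a bias, and a mathematical function) can be written out directly; the sigmoid activation and the sample values below are illustrative choices for this sketch, not taken from the patent.

```python
import math

def neuron_output(inputs, weights, bias):
    # Weight each upstream value, add the bias, then apply the activation
    # function (a sigmoid here) to produce the value sent downstream.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

def layer_output(inputs, weight_rows, biases):
    # A fully connected layer: every neuron sees every value from the layer
    # above, each with its own row of weights and its own bias.
    return [neuron_output(inputs, row, b) for row, b in zip(weight_rows, biases)]

hidden = layer_output([0.5, -0.2], [[0.8, 0.1], [-0.4, 0.9]], [0.0, 0.1])
```

Increasing a weight or bias pushes the weighted sum up and thus the neuron's output toward 1.0, which is the "heavier impact on downstream neurons" described above; training adjusts exactly these values.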
Similarly, if neuron713has been determined to be significant to the operations of neuron709, then the weight in neuron713will be increased, such that neuron709receives a higher value for the output of the mathematical function in the neuron713. Alternatively, the output of neuron709can be minimized by decreasing the weight and/or bias used to affect the output of neuron709. These weights/biases are adjustable for one, some, or all of the neurons in the DNN757, such that a reliable output will result from output layer707. In one or more embodiments of the present invention, finding the values of weights and bias values is done automatically by training the neural network. In one or more embodiments of the present invention, manual adjustments are applied to tune hyperparameters such as learning rate, dropout, regularization factor and so on. As such, training a neural network involves running forward propagation and backward propagation on multiple data sets until the optimal weights and bias values are achieved to minimize a loss function. The loss function measures the difference in the predicted values by the neural network and the actual labels for the different inputs. When manually adjusted during the training of DNN757, the weights are adjusted in a repeated manner until the output from output layer707matches expectations. When automatically adjusted, the weights (and/or mathematical functions) are adjusted using “back propagation”, in which weight values of the neurons are adjusted by using a “gradient descent” method that determines which direction each weight value should be adjusted to. This gradient descent process moves the weight in each neuron in a certain direction until the output from output layer707improves (e.g., accurately generates appropriate access tokens and refresh tokens). 
As shown inFIG.7, various layers of neurons are shaded differently, indicating that, in one or more embodiments of the present invention, they are specifically trained for recognizing different aspects of information used by an authentication server when generating refresh tokens. Thus, in one or more embodiments of the present invention, within the hidden layers705are layer706, which contains neurons that are designed to evaluate types of authorization grants; layer708, which contains neurons that are designed to evaluate types of public keys; and layer710, which contains neurons that are designed to evaluate types of attestation of authenticity of native applications. The outputs of neurons from layer710then control the value found in output layer707. Thus, one or more embodiments of the present invention input, into a neural network, the initial authorization grant, the public key, the attestation of authenticity of the native application, the refresh token, and the digital signature of the refresh token, where the neural network is trained to generate a neural-network-generated access token and a neural-network-generated refresh token. One or more processors then compare the new access token and the new refresh token that are generated by the authorization server to the neural-network-generated access token and the neural-network-generated refresh token in order to validate the new access token and new refresh token that are generated by the authorization server. In one or more embodiments of the present invention, one or more processors determine that the new access token and new refresh token match the neural-network-generated access token and the neural-network-generated refresh token. In response to determining that the new access token and new refresh token match the neural-network-generated access token and the neural-network-generated refresh token, one or more processors transmit the new access token and the new refresh token to the native application. 
In one or more embodiments of the present invention, one or more processors determine that the new access token and new refresh token do not match the neural-network-generated access token and the neural-network-generated refresh token. In response to determining that the new access token and new refresh token do not match the neural-network-generated access token and the neural-network-generated refresh token, one or more processors block transmission of the new access token and the new refresh token to the native application. Thus, in one or more embodiments of the present invention, the neural network either validates or invalidates the work performed by the authentication server described above. As such, and as described in various embodiments presented herein, the attestation of authenticity of the native application guarantees the integrity and authenticity of the native application and device. In one or more embodiments, the present invention is implemented using cloud computing. Nonetheless, it is understood in advance that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model includes at least five characteristics, at least three service models, and at least four deployment models. 
Characteristics are as follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but still is able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service. Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). 
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows: Private cloud: the cloud infrastructure is operated solely for an organization. In one or more embodiments, it is managed by the organization or a third party and/or exists on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). In one or more embodiments, it is managed by the organizations or a third party and/or exists on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. 
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now toFIG.8, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50comprises one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N communicate. Furthermore, nodes10communicate with one another. In one embodiment, these nodes are grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-54N shown inFIG.8are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.9, a set of functional abstraction layers provided by cloud computing environment50(FIG.8) is shown. 
It should be understood in advance that the components, layers, and functions shown inFIG.9are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities are provided in one or more embodiments: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80provides the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. 
Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment is utilized in one or more embodiments. Examples of workloads and functions which are provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and secure computer resource access processing96, which performs one or more of the features of the present invention described herein. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. 
It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of various embodiments of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiment was chosen and described in order to best explain the principles of the present invention and the practical application, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated. In one or more embodiments of the present invention, any methods described in the present disclosure are implemented through the use of a VHDL (VHSIC Hardware Description Language) program and a VHDL chip. VHDL is an exemplary design-entry language for Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), and other similar electronic devices. Thus, in one or more embodiments of the present invention any software-implemented method described herein is emulated by a hardware-based VHDL program, which is then applied to a VHDL chip, such as a FPGA. 
Having thus described embodiments of the present invention of the present application in detail and by reference to illustrative embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the present invention defined in the appended claims.
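The token-attestation check described at the start of this section (blocking transmission when the newly issued tokens do not match the neural-network-generated tokens) can be sketched as follows. This is an illustrative sketch only; the function names and the dictionary-based result format are hypothetical, not taken from the patent.

```python
import hmac

def tokens_match(new_access, new_refresh, nn_access, nn_refresh):
    """Compare the newly issued tokens against the neural-network-generated
    reference tokens using constant-time comparison."""
    return (hmac.compare_digest(new_access, nn_access)
            and hmac.compare_digest(new_refresh, nn_refresh))

def transmit_tokens(new_access, new_refresh, nn_access, nn_refresh):
    """Forward the tokens to the native application only when they match the
    neural network's prediction; otherwise block the transmission."""
    if tokens_match(new_access, new_refresh, nn_access, nn_refresh):
        return {"status": "transmitted", "access": new_access,
                "refresh": new_refresh}
    return {"status": "blocked"}
```

A constant-time comparison is used here as a conventional hardening choice for token equality checks; the patent itself does not specify the comparison mechanism.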
11943371 | DETAILED DESCRIPTION OF DRAWINGS The following description illustrates example embodiments of a mechanism for indicating, to a user, available actions for applications on a computer device. The example mechanism is simple and convenient for a user, and is relatively lightweight to implement. Further, the example mechanism will uphold the security of computer devices while enabling applications to be installed by users themselves and with minimal support or supervision. Many other advantages and improvements will be appreciated from the discussion herein. Overview Aspects of the present disclosure allow for a user of a computing device to be presented with visual indications of available actions that can be taken for a particular file or folder based on aspects or characteristics of policies associated therewith. The visual indications may be visually presented on the computing device via badges, or the like, on a GUI presented on the computing device display. In certain embodiments, the badges may visually communicate to the user of the computing device which applications or application directories have available actions for the current user account. In one embodiment, a badge may include an icon, or the like, in which a first icon may indicate that the application is installable, a second icon may indicate that the application is not installable. An agent running on a computing device can apply one or more policies to a selected file or folder to determine allowable actions therefore. The agent can determine the allowable actions upon being notified that a user selected a particular folder or file, the selection causing the computing device to render the particular folder or file on a display. The agent can cause the rendering of badges or other visual elements on the display based on one or more allowable actions determined by the agent by applying policies to the selected file or folder. 
The policies can be included in policy files that are stored on the computing device or that are received from a server. In one example, folders and files within a selected folder may be presented within the “Finder” application in macOS, which provides a GUI for organizing and manipulating the file system. The agent can determine allowable actions for each folder and file, and can also determine and cause rendering of a badge associated with the determined actions for each folder and file. Furthermore, an application programming interface (API) may be utilized, such as “Finder Sync Extension,” that provides for extensibility and allows for developers to access or hook into a context menu presented when selecting an item in the file system. In the example, the context menu can be presented in response to selecting one of the files or folders in the GUI. The selection can be a “secondary click” on a mouse or trackpad, or may be another predetermined input. The rendered context menu can include the determined actions for the folder or file, such as opening an application, installing an application, searching the application package contents, etc. In one embodiment, the policy file may include configuration data for determining which applications may present various actions to specific user accounts or groups of user accounts. For example, these actions may be presented within the context menu in response to a user initiating display of the context menu via a secondary click, or the like. In particular embodiments, the Finder Sync API may provide a method for receiving notifications relating to when a Finder item is made visible in the Finder. In response to receiving the notification, the extension may query the policy server for a list of actions available for the item. In various embodiments, these items may be marked with badges in the context menu or Finder GUI, allowing for the user to know which items may utilize the functionality of the extension. 
In one embodiment, either a single badge image may be presented, or different badges may be presented according to the available functionality (e.g., a badge for Installable items, a badge for Deletable items, a badge for items that are both Installable and Deletable, etc.). According to various aspects of the present disclosure, in response to the user accessing a particular file or folder, information relating to the accessed item may be passed/transmitted to an agent, which may be a daemon. In one embodiment, the agent may transmit details of the request to a policy server along with other information such as an identifier of a current user account, a hash or signature of one or more requested items, security information, and other information. The policy server may determine one or more actions based on the current policy configuration. The policy server may return the result to the agent. In some embodiments, the policies may be stored locally and the agent or extension may determine available actions without consulting a policy server. In one or more embodiments, the agent performs the determination of the one or more actions by applying the policies, which are retrieved from storage or memory, or are received from the policy server. In one embodiment, if gated access is configured, the agent may initiate the display of one or more message dialogs to the user for requesting information from the user (e.g., via Defendpoint). In various embodiments, the requests may include reason dialog, challenge-response, authentication dialog, and block messages. In a particular embodiment, a dialog response from the user may be communicated back to the agent. In various embodiments, a valid response may initiate the agent's determining a particular action to be allowable, or causing the agent to facilitate execution of a selected action. 
In some embodiments, an invalid response may prevent the action from being allowable and thus prevent the action from being presented as an option to the user. In various embodiments, an invalid response can cause the agent to notify the user of an action not being allowed. In some embodiments, the policy server may provide the agent with answers to the requests such that the agent may authorize or prevent the action from occurring based on the response without consulting a policy server. EXEMPLARY EMBODIMENTS Referring now to the figures, for the purposes of example and explanation of the fundamental processes and components of the disclosed systems and methods, reference is made toFIG.1, which illustrates an exemplary networked environment100for performing various functions described herein. As will be understood and appreciated, the exemplary networked environment100and associated elements shown inFIG.1represents merely one approach or embodiment of the present system, and other aspects are used according to various embodiments of the present system. The networked environment100can include a computing environment101in communication with one or more computing devices103via a network102. The computing environment101can include one or more processors105for processing transmissions from the one or more computing devices103, and can include one or more servers107for receiving the transmissions via the network102. In various embodiments, the computing environment101includes a data store109including one or more databases for storing various information described herein. In at least one embodiment, the data store109stores information including, but not limited to, policies110, applications112, and user accounts114. The network102can be a private network, a virtual private network, an intranet, a cloud, the Internet, or other network schemas. Each computing device103may take any suitable form factor. 
As examples, the device103might be a desktop computer, a portable computing device, laptop, tablet, smartphone, wearable device, or an emulated virtual device on any appropriate host hardware. The computer device103includes hardware104, which suitably includes memory117, processors113(CPU central processor units), I/O input/output interfaces119(e.g. NIC network interface cards, USB universal serial bus interfaces, etc.), storage121(e.g. solid-state non-volatile storage or hard disk drive), and other suitable elements for performing the various processes described herein. The storage121can store one or more files122, including applications, etc., and one or more of files122can be organized into folders. In at least one embodiment, one or more, or all of the hardware104are implemented virtually, for example, in a cloud computing infrastructure. An operating system111can run on the hardware104to provide a runtime environment for execution of user processes, such as actions executed with respect to one or more files122stored in the memory117, in the storage121, or in one or more locations accessible to the computing device103via the network102. The runtime environment can provide resources such as installed software, system services, drivers, and files. In one example, a file includes an application for an email client that is used to send and receive email messages. Many other types of files for various software applications are available and can be provided according to the needs of the user of each computing device103. The computer device103can include an agent123. The agent123may include one or more software and/or hardware modules, such as executables, dynamic libraries (dylib in macOS), plug-ins, add-ins, add-ons or extensions. The agent123may operate as a daemon, which runs as a background process on the computer device. Alternately, when considering the Windows family of operating systems, the agent123may be a Windows service. 
The agent123is configured to operate in cooperation with the operating system111and the files122. In particular, the agent123may provide and coordinate core capabilities for the security of the computer device. The agent123suitably performs functions for implementing privilege management and application control. The operating system111can apply a security model wherein access privileges are based on the agent123and policies110. The operating system111may define privilege levels appropriate to different classes of users, or groups of users, and then apply the privileges of the relevant class or group to the particular agent associated with the logged-in user account114(e.g. ordinary user, super-user, local administrator, system administrator, and so on). The agent123is configured in a boot sequence of the computer device103and can be associated with a user identity and password. The user credentials may be validated locally or via a remote service such as a domain controller. The agent123thus acts, in coordination with a policy server107, as a security principal in the security model. The operating system111can grant appropriate privileges to the agent123to manage and control privileges based on policies110for files122(e.g., including processes and applications) which execute in the security context of the agent123. When considering privilege management, it is desirable to implement a least-privilege access security model, whereby each user is granted only a minimal set of access privileges. However, many applications require a relatively high privilege level, such as a local administrator-level, in order to install and operate correctly. Hence, in practice, there is a widespread tendency to grant additional privilege rights, such as the local administrator level, or a system administrator level, to all members of a relevant user group, and thus allow access to almost all of the resources of the computer device. 
This level of access may be greater than is desirable or appropriate from a security viewpoint. For example, there is a possibility of accidental tampering with the computer device and software thereon, leading to errors or corruption within the computer device. Further, an infection or malware may access the computer device with the deliberate intention of subverting security or causing damage, such as by encrypting important data content and then demanding a ransom. The risk of this malicious activity can be reduced or prevented by assigning a relatively low privilege level to the user accounts114that access the computing device, such as a primary user account114. The agent123can coordinate with the operating system111and selectively enable access to higher privilege levels (e.g. a local administrator-level, when needed to perform certain tasks). Conversely, the agent123in some examples is also able to downgrade the privilege level for one or more actions, so that certain tasks are carried out at a privilege level lower than that of the current user account114. For execution control, the agent123can be arranged to ensure that only authorized files122are executed on the computer device103. For example, the agent123can be governed by policies110that can be based on trusted file types, digital signatures, certification, and/or other factors, thereby automatically stopping unapproved files122from running, or being installed, uninstalled, or otherwise modified. There may be a sophisticated set of policies110which define rules and conditions under which each file122may operate, in relation to the intended host computing device103and the relevant user account114. Thus, in one example, the file122will only be allowed to execute on the computer device103if permitted by the policies110as retrieved by the agent123from the data store109via a server107. 
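The execution-control decision just described can be sketched as a policy check over file metadata. This is a minimal sketch under assumptions: the policy layout (a set of trusted file types plus an optional signature requirement) and all field names are illustrative, not taken from the patent.

```python
def is_execution_allowed(file_meta, policy):
    """Return True only when the file's type is trusted by the policy and,
    where the policy demands it, the file carries a valid digital signature."""
    if file_meta.get("type") not in policy["trusted_types"]:
        return False  # unapproved file type: stop it from running
    if policy.get("require_signature") and not file_meta.get("signed"):
        return False  # signature required by policy but absent or invalid
    return True
```

In practice the agent would retrieve the policy set from the data store via a policy server and combine further factors such as certification and hashes; the sketch shows only the shape of the allow/block decision.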
A server107can be configured as a dedicated policy server107that coordinates policy110querying with the agent123and/or the transmission module115. In one example, the agent123can access a policy file stored in the data store109. The policy file stores a set of policies110which define permissions and responses of the agent123to requested actions or tasks (e.g., such as a request by a user to install an application). A policy server107may be provided to make policy decisions based on the policies110. The policy server107may operate by receiving a policy request message, concerning a requested action and related meta-information, and returning a policy result based thereon. Alternatively, the policy server107may provide the policies110to the computing device103and the agent123can determine a policy result based thereon. In one example, the agent123is configured to capture a set of identities, and may then provide these identities as part of the policy request. The agent123can utilize the identities to make a decision using the policies as to whether to perform one or more actions. Such identities may include a user identity (UID) of the relevant user account, a group identity (GID) of a group to which that user account belongs, a process identity (PID) of a current process which has initiated the action or task in question, and/or a process identity of a parent process (PPID). Suitably, the policy server107or the computing device103determines an outcome for the request based on the provided set of identities relevant to the current policy request. In one example, the agent123can store the policies110as a structured file, such as an extensible mark-up language (XML) file. The policies110can be suitably held locally on the computing device103, ideally in a secure system location which is accessible to the agent123. The secure system location may be otherwise inaccessible by the user account114. 
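The identity capture, XML policy storage, and policy request message described above can be sketched with standard library calls. The XML schema and the request layout below are assumptions for illustration; the patent specifies only that the policy file is structured (e.g., XML) and that the request carries meta-information such as the UID, GID, PID, PPID, and a hash of the requested item.

```python
import hashlib
import os
import xml.etree.ElementTree as ET

def capture_identities():
    """Capture the identity set attached to a policy request: user (UID),
    group (GID), current process (PID), and parent process (PPID)."""
    return {"uid": os.getuid(), "gid": os.getgid(),
            "pid": os.getpid(), "ppid": os.getppid()}

def load_policy(xml_text):
    """Parse a minimal, hypothetical XML policy into {action: permitted UIDs}."""
    root = ET.fromstring(xml_text)
    return {rule.get("action"): {int(u) for u in rule.get("uids").split(",")}
            for rule in root.findall("rule")}

def build_policy_request(item_bytes, action):
    """Assemble a policy request message: the requested action, a hash of the
    requested item, and the captured identities."""
    return {"action": action,
            "item_hash": hashlib.sha256(item_bytes).hexdigest(),
            "identities": capture_identities()}
```

Either the policy server or the agent itself would evaluate such a request against the parsed policies, per the two deployment styles described above.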
Updates to the policies110in the policy file may be generated elsewhere on the network102. In one example, a management console on one of the servers107is used to push or pull policies110to and from each instance of the agent123on each computing device103. The policies110can be readily updated and maintained, ensuring consistency for all computing devices103across the network. In this way, the agent123can be robust and manageable for a large-scale organization with many thousands of individual computing devices103. Also, the agent123can leverage policies110that have been developed in relation to application control, such as defining user groups or user roles and related application permissions. The agent123can extend those same rules to privilege management and vice versa. The agent123can cause the computing device103to render badges401(seeFIG.4) and context menus501A,501B (seeFIGS.6A-B) on a display125. The badges401can indicate executable actions that can be performed with respect to a particular file122based on one or more policies110associated therewith. Similarly, the context menus501A,501B can include menu options for executable actions that can be performed based on the one or more policies110, and may include other executable actions associated with other software running on the computing device103. The user can select actions from the context menus501A,501B. In one example, prior to rendering a user-selected folder and files122therein, the agent123can determine whether to render one or more badges for each file122or folder in the folder. The agent123can coordinate with a policy server107to retrieve policies110associated with the selected folder. The agent123can render one or more badges on an icon for each file122to indicate actions available for the file122. As an example, the agent123can determine whether the file122is executable based on policy data associated therewith. 
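The badge rendering described above can be sketched as a mapping from the set of actions determined for an item to a single badge identifier, in the spirit of the single-badge versus per-functionality badging discussed earlier. The badge names and action strings are hypothetical.

```python
def pick_badge(actions):
    """Map the allowable actions determined for an item to one badge.

    Distinct badges mark items that are installable, deletable, or both;
    items with no available actions receive no badge at all.
    """
    installable = "install" in actions
    deletable = "delete" in actions
    if installable and deletable:
        return "badge-install-delete"
    if installable:
        return "badge-install"
    if deletable:
        return "badge-delete"
    return None
```

In a macOS implementation the equivalent logic would live in a Finder Sync extension, which sets a badge image for each item as it becomes visible in the Finder.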
In a similar example, prior to rendering a context menu in response to a user selecting a particular file122, the agent123can determine one or more executable actions for the particular file122based on policies110associated therewith. In some examples, the agent123is configured to perform custom messaging. In particular, agent123, whether acting directly or via a cooperating proxy or plugin, may present a message dialog to the user. This message dialog may be presented in a terminal or window from which a current action of interest was invoked by or on behalf of the user account114. Thus, the custom messaging may be presented on a display125of the computing device103for interaction with the user in control of the user account114. Input from the user account114may be returned to the agent123for evaluation. Hence, the agent123is able to interact with the user with a rich set of customizable messages. In one example, the custom messaging may include at least one of: a confirmation, a challenge-response, a query for information, and a reason. In more detail, the confirmation may present a dialog that receives input, such as a binary yes/no type response, allowing the user to confirm that they do indeed wish to proceed and providing an opportunity to double-check the intended action. The custom messaging conveniently allows specific text, e.g. as set by policies110, to be included in the dialog, such as reminding the user that their request will be logged and audited. As another option, the custom messaging may provide specific block messages, explaining to the user why their request has been blocked, thus enabling improved interaction with the user. The custom messaging can include one or more input options, such as a text input, a button, and other inputs. In one example, the custom messaging may require additional authentication to be presented by the user in order to proceed with the requested action.
As an example, the additional authentication may require the user to again enter their username and password credentials or may involve one or more of the many other forms of authentication (e.g. a biometric fingerprint or retinal scan) as will be appreciated by those skilled in the art. The challenge-response also allows alternate forms of authentication to be employed, such as a multi-factor authentication. In one example, the challenge-response requires entry of a validation code, which might be provided such as from a second device or an IT helpdesk. In one example, the reason allows the user to provide feedback concerning the motivation for their request, e.g. by selecting amongst menu choices or entering free text. Logging the reasons from a large set of users allows the system to be administered more efficiently in future, such as by setting additional rules in the policies110to meet the evolving needs of a large user population. Notably, custom messaging allows the agent123to provide a rich and informative set of interactions with the users. Each of these individual custom messaging actions may be defined in the policies110. The custom messaging may eventually result in a decision to allow or block the requested action. An appropriate allow or block operation is then carried out as required. The agent123may perform auditing in relation to all requests or at least certain requests. The auditing may include recording the customized messaging, and may include recording an outcome of the request. Audit reports may be extracted or uploaded from each computing device103via the servers107at any suitable frequency. Each of these auditing functions may be defined in the policies110. In some examples, the agent123can be configured to perform passive handling of a request. The request can be presented to the originally intended recipient, which may occur within the operating system111, and any responses may be returned transparently.
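The validation-code form of challenge-response described above might, for example, derive a short code from a shared secret and the challenge, as in this sketch; the HMAC-based scheme and six-digit length are assumptions, not the patent's actual mechanism:

```python
import hashlib
import hmac

def expected_code(secret, challenge):
    """Derive a short validation code from a shared secret and a challenge.
    An IT helpdesk or second device holding the same secret can compute the
    same code. Truncated HMAC-SHA256 is an illustrative choice."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()[:6]

def challenge_passes(secret, challenge, user_code):
    """Constant-time comparison of the code entered by the user."""
    return hmac.compare_digest(expected_code(secret, challenge), user_code)
```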
In one example, passive handling is defined by the policies110. The agent123can meanwhile audit the requests which were handled passively, again consistent with the policies110. Notably, this passive handling function allows the action to proceed while the requesting user process or application is unaware of the agent123as intermediary. Advantageously, default behavior of the system is maintained for those actions that the agent123determines should have passive handling. Also, there is now a fail-safe option, in that the system will maintain an expected behavior for actions that are passively handled. This passive handling is useful particularly in the event that a particular user account or request is not specified in the policies110because default behavior is still enacted. Hence, the system can now quickly and safely supersede the original behavior for specific situations, allowing rapid responses and network-wide consistency when needed, while still enabling existing legacy functionality and behavior to continue in place for other actions, users and/or devices, as appropriate. The file122can include the resources that are required in order for execution on the computing device103, using the runtime environment provided by the operating system111. In one example, the file122can be included in a folder201(file directory), which allows one or more related files122to be grouped together. The folder201can be stored in a file system of the storage121, such as a disk image on the computing device. In one example, the file system is a disk image on the computing device, the disk image being a distinct portion of the storage121. The disk image can be a file122representing the structure and contents of a storage device, similar to a physical storage device such as a hard disk drive, optical disk (DVD) or solid-state storage (USB flash drive).
When such a physical storage device is coupled to the computing device103, then the device is mounted by the operating system111to become available to other components within the computing device103. The agent123can capture meta-data related to the disk image, which may include any one or more of: a file name of the disk image, a hash of the disk image and a signature. The agent123can utilize the meta-data when determining whether to perform an action on a file stored on the disk image based on one or more policies. In some examples, the disk image can be signed, e.g. using a code signing identity, to reliably establish an identity of the author or source of the disk image and content therein. Other metadata may also be used, such as a current path where the file122or folder201is located, and information relevant to the current session (UID, etc.) and the current computer device (host machine details). Once a folder on a disk image is opened, the agent123may suitably examine contents of the folder201. If a folder201and/or files122are found to be contained in the opened folder201, then the agent123determines actions via one or more policies for the folders201/files122, such as, for example, via policy server107. Again, the agent123may gather appropriate metadata relating to the identified folder201and/or files122, such as a signature, the identified current user, machine, etc. In some examples, the operating system may provide API function calls that return the desired information, such as Bundle Name, Bundle Creator, Bundle Type, Version, Bundle Executable. Also, if the bundle224is signed, then these functions may also allow the certificate and hash to be retrieved. Alternatively, the agent123may itself generate a hash of the content of the folder201and/or files122, which can then be matched against records in the policies110of hashes for known and trusted folders201and files122. 
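The hash-matching check described above can be sketched as follows; hashing every file in a stable order is one plausible construction, shown in Python for illustration:

```python
import hashlib
from pathlib import Path

def folder_hash(folder):
    """Hash every file in a folder in a stable (sorted) order so the result
    can be matched against known-good hashes recorded in the policies. The
    exact digest construction here is an illustrative assumption."""
    digest = hashlib.sha256()
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())   # bind file names into the hash
            digest.update(path.read_bytes())    # and file contents
    return digest.hexdigest()
```

A lookup of the resulting digest against a trusted-hash table would then decide whether the folder's contents are known and trusted.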
For example, third-party functions such as QCryptographicHash allow a hash to be generated by the agent123, suitably by iterating through all files122in the folder201. Also, the agent123may employ custom messaging to obtain additional information, or provide additional information, in relation to the folder201and/or files122, prior to determining whether or not to proceed with an action. For example, the agent123may prompt the user to confirm that they wish to proceed with installing or deleting the identified file122. If an action of the file122is approved, the agent123may initiate the process of performing the action for the file122, such as by installing the file122. It can be appreciated that the file122can correspond to a plurality of compressed or packaged files122. In one example, applications are intended to reside in the system folder “/Applications” in macOS. Some actions may require enhanced privileges. As an example, copying the file122into a system location may require privileges associated with the administrator group. If the current user account114is a member of the administrator group then the copy can be performed directly, such as by selecting a context menu item. However, it is desirable for the current user account114not to be a member of the administrator group, consistent with the least privilege principle. That is, the current user account114is excluded from a privilege level which is required to perform many actions or operations, such as install or delete applications. However, the agent123(e.g., a daemon) may have appropriate (higher) privileges and thus is able to cause the copy operation to be successfully performed by the operating system111. In some embodiments, the agent123operates as a daemon with escalated privileges to perform the action. In other embodiments, the agent123communicates with a daemon that has escalated privileges to perform the action.
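The escalate-perform-de-escalate pattern for the privileged copy can be sketched as below; the structure (raise privileges, act, always drop them) is the point of the sketch, and the seteuid(0) call is left commented out because it requires a root-owned daemon:

```python
import os
import shutil

def copy_with_escalation(src, dst):
    """Sketch of the escalate-perform-de-escalate pattern. A root-owned
    daemon would raise its effective UID for the copy and drop it
    immediately afterwards; the try/finally guarantees the drop happens
    even if the copy fails. This is an illustration, not the patent's
    actual daemon implementation."""
    saved_euid = os.geteuid()
    try:
        # os.seteuid(0)  # escalate (only possible for a root-owned daemon)
        shutil.copy2(src, dst)
    finally:
        os.seteuid(saved_euid)  # de-escalate immediately after the action
```

The copy itself could equally be delegated to a native facility such as the NSFileManager or QFile::copy functions mentioned above.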
In some embodiments, the privileges of the agent123or daemon are escalated only for the purpose of performing the action and then de-escalated again immediately following. For example, the agent123may cause the permissions of a daemon to be escalated to perform the action. In some examples, the agent123is able to use file copy functions that are provided natively by the operating system111(e.g. NSFileManager in macOS including NSFileManager copyItemAtPath: toPath: error:), or appropriate third-party functions (e.g. Qt's QFile::copy). Hence, with this mechanism, standard users are now able to cause an install action to occur via the agent123. In particular embodiments, rather than immediately causing a selected action, the agent123may query the policy server107for configuration data from the policies110relating to the selected action. In some embodiments, the selected action may not be authorized for the particular user account based on the policies110, while in other embodiments the action may be performed in response to the user providing additional information, performing a two-step authentication, etc. In various embodiments, before causing the selected action, the agent123can determine that the particular file122associated therewith is notarized by a trusted digital authority. As an example, the agent123can verify that the trusted digital authority signed the file using a public key of the trusted digital authority. FIG.3is a sequence diagram200which illustrates an exemplary event sequence and example interactions of the agent123with the user account114and the operating system111. In this detailed example, when the agent123is started (usually at boot), it requests notifications from the operating system111(e.g. macOS) of whenever disk images have been requested to be mounted and when folders201and/or files122are selected. At some later point in time, the user account114selects or opens a folder201, which causes the agent123to be notified by the operating system111.
The agent123consults the policy server107for executable actions for files122in the selected folder201. It can be appreciated that the policy server107may deploy the one or more policies110to the agent123on initialization or at an earlier time, and the agent123can determine whether the action is authorized using the policies110without communicating or consulting with the policy server. As can be appreciated, when a user clicks to open a folder, delay in providing the list of files in the folder would disrupt a user experience. To reduce delay, the agent123may send a message with an identifier associated with a version or a signature for the set of policies110to verify that the policies110are the most current version. If the identifiers or signatures match, the agent123can proceed without further communications thereby minimizing data transmissions between the agent123and the policy server107. Based on policies110, the agent123can determine actions available for each file122in the selected folder201. Based on the available actions, the agent123can cause the computing device103to render one or more badges401on a display125based on the determined actions. As an example, the agent123may communicate with an extension to cause the badges to be rendered. The agent123can process a selection from the user account114for a particular file122in the folder201. The selected file122may or may not have a particular badge. The agent123can utilize the previously acquired policies110or query the policy server107for the policies110associated with the selected file122. The agent can verify or determine the one or more available actions for the selected file122. In some embodiments, the policies110may have a varying degree of validation or verification requirements for 1) rendering badges, 2) adding an option for an action to a context menu, and 3) performing the action from a selected option. The validation requirements may be tailored based on time to perform. 
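The version/signature comparison that lets the agent123skip redundant policy downloads can be sketched as follows; using a SHA-256 digest of the serialized policies110as the identifier is an assumption made for illustration:

```python
import hashlib
import json

def policy_signature(policies):
    """Stable identifier for a policy set; any digest that changes whenever
    the policies change would serve. Serialization via sorted JSON keeps
    the signature independent of dict ordering."""
    blob = json.dumps(policies, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_refresh(local_policies, server_signature):
    """True only when the server reports a signature the agent does not
    already hold, so matching signatures avoid any further transfer."""
    return policy_signature(local_policies) != server_signature
```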
For example, performing RSA analysis to verify signatures of hundreds of files in order to show the proper badges while showing a folder's contents is resource-intensive. To solve this problem, the policies110may 1) specify that rendering a badge on a spreadsheet application file122that indicates a particular action is available requires a first level of validation, 2) specify that creating a context menu item to perform the action once the file122is selected requires a second level of validation, and 3) specify that performing the action based on a selection of the context menu item requires a third level of validation. The first level may include verifying the vendor and application name, the second level may include verifying a checksum of the application, and a third level may include verifying a digital signature of the file122. Similarly, each querying of the policy server107may be performed at a particular level of granularity. For example, when the agent123is determining actions for the rendering of badges401in a selected folder201, the agent123may inspect only high-granularity policies110(e.g., “are any files from this source allowable?”). In another example, when the agent123is determining actions for the rendering of a context menu, the agent123may inspect more granular policies110for a selected file122. In another example, when the agent123is determining whether a selected action is executable, the agent123may inspect all policies110for the selected action and file122. Because the networked system100can include hundreds to thousands of computing devices103and millions of files122and folders201, a conservative hierarchy of policy inspection allows for minimization of computing resources required to perform each step in the processes described herein. The agent123can cause the computing device103to render a context menu501A (or another context menu) based on the determined actions.
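The three validation levels described above can be summarized as tiers of checks, for example (the tier names and the contents of each tier are illustrative):

```python
# Illustrative tiers: each later step requires strictly more validation,
# so the cheapest checks gate badge rendering and the most expensive
# (signature verification) gates actual execution.

VALIDATION_TIERS = {
    "badge":   ["vendor_name", "application_name"],               # first level
    "menu":    ["vendor_name", "application_name", "checksum"],   # second level
    "execute": ["vendor_name", "application_name", "checksum",
                "signature"],                                     # third level
}

def checks_for(step):
    """Return the validation checks required before the given step."""
    return VALIDATION_TIERS[step]
```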
The agent123can process a selection for a particular action for the user account114. The agent123can determine whether the action is authorized based on the policies110. The agent123may query the policy server107for one or more policies110associated with the action or utilize the policies110stored locally. Based on rules of one or more policies110, the agent123can perform additional verification/validation steps if necessary. For example, the agent123can prompt the user account for additional information or actions. The agent123can cause the computing device103to render a challenge and response on the display125. In another example, the agent123can initiate a multi-factor authentication process. In another example, the agent123can require the user to provide login information, such as a username and password, before executing the selected action. As determined by the policies110, the agent123can cause performance (e.g., execution) of the selected action based on privileges of the agent123(e.g., as opposed to lower privileges of the user account114, which may be insufficient for controlling such processes). With reference toFIG.3, shown is an exemplary file execution control process300. At step302, an agent123registers with an operating system111. The agent123can register during a boot sequence of the operating system on the computing device103and/or in response to the computing device103receiving login credentials for a user account114associated with the computing device103. During registration, the agent123may call one or more operating system functions or application programming interfaces (APIs) to be notified of file and folder accesses. In some embodiments, the agent123may register an extension with the operating system to receive callbacks, interrupts, messages, notifications, or the like. The agent123may be initialized as a daemon or spawn a separate daemon during initialization. 
In some embodiments, the agent123can query the policy server107to download current policies for the user account during or shortly after registration and initialization. At step304, the computing device103receives a selection for a folder201. In one example, the computing device103renders a folder GUI on a display125including one or more folders201. The selection can be received through one or more I/O interfaces119. The selection can include receiving a touch, such as a click, or multiple touches. At step306, the agent123determines available actions for one or more files122within the opened folder201. To determine the available actions, the agent123can analyze and apply the policies110for the computing device103. The policies may be associated with the user account, the computing device103, the opened or selected folder201, the files122, or a combination thereof. In some embodiments, the agent123may query the policy server107for the policies110when the folder201is opened. The step306may occur prior to the computing device103rendering a display of one or more files122included in the selected folder201. In some embodiments, the rendering of the contents of the folder201may be delayed until the agent123determines the available actions and assigns a badge to each file122. In other embodiments, the contents of the folder201can be rendered without the badges initially, and the agent123can update the badges once the agent123determines the available actions and assigns a badge to each file122. The agent123can store a history of badges assigned to files122. In some embodiments, the agent123can initially apply a last used badge from the history to the files122until the agent123determines the available actions and assigns an updated badge to each file122. The agent123can verify a source of the files122when determining the available action. 
The operating system111or agent123may maintain a data store that contains the source of all downloaded or loaded files122into the storage121in a secure place. The agent123can query the data store to determine where the file122originated from. The agent123may authorize one or more actions according to the policies110if the file122originated from a trusted source. Conversely, the agent123may deny one or more actions according to the policies110if the file122originated from an untrusted or unknown source. In some embodiments, the agent123may quarantine the file122if the source of the file122is untrusted. For example, the agent123may determine that a source of a particular file122should be from a particular vendor according to the policies110, but that the particular file122originated from a source known to distribute malicious content. Based on the identified source and the policies110, the agent123may quarantine the file122. In one embodiment, the agent123can perform a background process upon loading of the policies110by applying the policies110to each file122in a file system of the storage121to generate a default badge for each file122. Further, the agent123can perform the background analysis of the files122in the storage121upon receiving one or more updated policies110from the policy server107. In some embodiments, the background process can be limited in computing resources to prevent the computing performance from being affected during ordinary use. In some embodiments, the results of the background analysis by the agent123can be utilized for selecting badges when a folder201is loaded (which may or may not be replaced by a subsequent analysis performed upon loading of the folder201) but not for generating context menu options. 
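The source-based allow/deny/quarantine decision described above can be sketched as follows; the source lists and outcome names are illustrative assumptions:

```python
# Illustrative source lists; real values would come from the policies.
TRUSTED_SOURCES = {"vendor.example.com"}
MALICIOUS_SOURCES = {"bad-mirror.example.net"}

def disposition(source):
    """Mirror the source check described above: allow trusted sources,
    quarantine known-bad ones, and deny anything unknown (fail safe)."""
    if source in TRUSTED_SOURCES:
        return "allow"
    if source in MALICIOUS_SOURCES:
        return "quarantine"
    return "deny"
```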
As such, the agent123may utilize potentially stale data (e.g., historical badges or background identified badges for files122in the opened folder201) to determine badges but require current analysis of policies to provide context menu items. Similarly, the agent123may utilize the potentially stale data (e.g., historical available actions or background identified available actions for a selected file122) to determine context menu items but require current analysis of policies to execute or perform a selected action. The determining of the available actions can include the agent123performing one or more operations. For example, the agent123may determine one or more files122included in the folder201, and determine one or more file metadata corresponding to each of the plurality of files122. The agent123can apply the policies to the files122and folders201based on the file metadata. The agent123can send the file metadata and other information to the policy server107to determine the available actions. The policy server107or the agent123can identify and retrieve the requested policies110based on the file metadata. The policy server107can determine the available actions or transmit the policies110to the agent123for further processing. The agent123can evaluate the policies110to determine one or more executable actions for each of the files122or utilize identified available actions received from the policy server107. The agent123can also evaluate the policies110to identify one or more properties or actions as well as one or more badges401that correspond to each identified property/action. The agent123can evaluate each file122based on the one or more properties, actions, and badges, and assign a specific one of the badges401to the file122. For example, the agent123may determine that a particular file122has install, copy, uninstall, and modify actions available based on the policies110.
The agent123may determine that badges401associated with the following are available: 1) install; 2) copy; 3) uninstall; 4) modify; 5) install and uninstall; 6) install and modify; and 7) install, copy, and uninstall. The agent123may select the badge401for 7) as a best-fit candidate even though other badges may also apply. The agent123can score each of the available badges to determine the best-fit badge401. The scoring may be weighted based on predefined criteria and based on likely actions the user may want to take. As an example, the agent123may determine that a particular file122is already installed and provide a greater weight to an uninstall available action and a modify available action while providing a lesser weight to an install available action. The determined one or more executable actions can be temporarily stored as file122metadata in memory117. In various embodiments, one or more operations of the agent123are appropriately performed by various elements of the computing device103, the performance being caused by the agent123instructing the various elements. In some embodiments, the agent123can cache the metadata for future evaluations. The cached metadata can be associated with a timeout parameter such that the agent123can deem the data is stale once a predefined threshold time passes from when the data is collected. In one example, based on one or more policies110, the agent123causes the computing device103to determine that a particular file is notarized by a trusted authority. The agent123may verify the signature or notarization via a third-party service, such as an Apple service, or may verify by using a trusted party's public certificate. In other embodiments, the agent123may communicate with the trusted party to verify the integrity of the signature.
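The weighted best-fit badge selection described above can be sketched as follows; the badge names and weight values are illustrative assumptions:

```python
def best_badge(available_actions, installed):
    """Score candidate badges: each badge covers a set of actions, actions
    are weighted by likely user intent (uninstall outranks install once the
    file is already installed), and the highest-scoring badge wins."""
    badges = {
        "install": {"install"},
        "uninstall": {"uninstall"},
        "install+uninstall": {"install", "uninstall"},
        "install+copy+uninstall": {"install", "copy", "uninstall"},
    }
    weights = {
        "install": 1 if installed else 3,
        "copy": 1,
        "uninstall": 3 if installed else 1,
        "modify": 2,
    }
    def score(actions):
        # Only actions actually available for the file contribute.
        return sum(weights[a] for a in actions & set(available_actions))
    return max(badges, key=lambda name: score(badges[name]))
```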
In another example, the agent123can determine that an action is available based on one or more factors, such as available resources of the computing device103, available resources of a network, source of the file122, or other aspects. The agent123may upload one or more of the files122to a remote service for review by an administrator. In one example, the agent123may determine that an install action is not available and upload the file to the remote service for consideration by the administrator. The remote service may maintain a queue of requested files122along with a frequency and count of requests for each file. The queue may be ordered based on statistics associated with each file122. For example, if a threshold number of user accounts request installation of a particular file122, the particular file122may be ranked higher in the queue. The queue ranking can also factor in an author and source of each uploaded file122. The remote service can provide a user interface for the administrator to launch a sandbox to perform the actions on the uploaded files122and test a result. The administrator can modify one or more policies110based on a review of the uploaded files122. The updated policies110can be pushed to the agents123on various computing devices103so that subsequent badges and context menu items reflect the updated policies110. In another example, the agent123can generate a hash of each file122and transmit the hashes to the remote service. The agent123can compare the generated hash to one or more of the policies110to determine executable and/or available actions. According to one embodiment, at step306, the agent123can evaluate the policies110and determine executable and/or available actions based thereon. The agent123may perform this determination of actions at a high level of granularity compared to similar policy110evaluations performed later in the process300.
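The review queue maintained by the remote service might be ordered as in this sketch; the threshold value and field names are assumptions made for illustration:

```python
def rank_queue(requests, threshold=10):
    """Order uploaded files for administrator review: files requested by at
    least `threshold` distinct accounts float to the top, then entries sort
    by total request count. Author/source could be added as further keys."""
    return sorted(
        requests,
        key=lambda r: (r["accounts"] >= threshold, r["count"]),
        reverse=True,
    )
```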
The granularity-controlled approach may include evaluating only a particular subset of the policies110associated with each file122to minimize computing resources used to perform the evaluation. Thus, in at least one embodiment, the policies110can be assigned to granularity tiers, and the tiers may increase in an average computing cost required to enforce the policies110therein. At step308, the agent123can cause the computing device103to render a user interface on the display125including an icon for each file122and folder201. The agent can cause the computing device103to render a badge401on each icon, the badge401being selected based on the determined action or combination of actions associated with each file122. At step310, the computing device103receives a selection for a file122. The selection can be received as a long touch (or click), a right-click, or other input selecting the icon representing the file122on the generated user interface. As can be appreciated, the selection can correspond to any type of indication that would cause an underlying operating system to generate a context menu for the selected file122. For example, the selection may correspond to a non-traditional input device, such as smart-glasses where the user may look at the file122for a threshold time to generate the context menu. As another example, the input device may correspond to a pointer device that may point at the file122for a threshold time to generate a context menu. At step312, the agent123determines the available actions for the selected file122. In some embodiments, the agent123may utilize the available actions determined when badges401were determined at step306. The agent123may perform a more resource-intensive analysis of the selected file122in comparison to step306. In one embodiment, the agent123can query the policy server107for policies110associated with metadata of the selected file122or for a determination of the available actions.
The agent123can determine one or more executable and/or available actions for the selected file122based on applying the policies110to the selected file122. According to one embodiment, the policy110evaluation of step312is performed at a higher level of granularity than similar operations performed at step306. The determined one or more executable actions can be temporarily stored as file122metadata. At step314, the agent123causes the computing device103to render a context menu501on the display125. The context menu501can include an item for each of the determined one or more available actions from step312and, in some embodiments, one or more actions associated with other software installed and/or running on the computing device103. The one or more actions can be rendered as menu entries such as text strings, icons, or combinations thereof. In some embodiments, the one or more actions are added to the context menu501via an application programming interface (API) of the operating system111and/or via an extension executed by the computing device103. According to one embodiment, steps316-324are performed if the user selects one of the context menu items corresponding to the determined actions rendered on the context menu501, as opposed to other actions associated with other software running on the computing device103. For example, if an available action for install was added to the context menu, but a user selects a “Properties” option from the context menu that was generated by the operating system, steps316-324may not be performed. However, if the added context menu item for install is selected, steps316-324may be performed. At step316, the computing device103receives a selection for a determined action included in the context menu501. The selection can be received similarly to other selections made by a user to provide selections throughout the process300.
The selection may be communicated by the operating system to the agent123, which may or may not be via an extension. The agent123may register to receive a callback for the context menu item. In some embodiments, the API to add the context menu item may include one or more parameters for details of how the agent123may receive input when the context menu item is selected. As an example, the agent123may provide a callback function or some other means as a parameter to the API. At step318, the agent123can verify whether the selected action is allowed to be performed according to the policies110. The agent123may query the policy server107for permission to perform the selected action, for policies110that may be associated with the selected action or user account, or for other information. The agent123can verify the selected action is executable and/or available for the user account based on applying the policies110. According to one embodiment, the evaluation of the policy110in step318can be performed at a higher level of granularity than similar operations performed at steps306and312. At step320, the agent123determines, based on policies110, whether additional information or input is required before the agent123can cause the selected action to be executed on the computing device103. If the agent123determines that additional information or input is required, the process300proceeds to step322. If the agent123determines that additional information is not required, the process300proceeds to step326. At step322, the agent123causes the computing device103to prompt the user for additional information. In one example, the agent123can render a login window on the display125, thereby requiring the user to submit credentials for the user account114associated with the agent123. In the same example, the submitted credentials are verified by the agent123and a server107, for example, by comparing the submitted credentials against credentials associated with metadata of the user account114.
In another example, the agent123can initiate a multi-factor authentication process requiring the user to provide additional forms of authentication, such as responding to a text and/or accessing a particular website on the computing device103(or another device). In another example, the agent123can cause the computing device103to render a challenge-response in response to receiving the selection, and execution of the selection may only occur upon the user providing an appropriate response (e.g., as verified by the agent123and/or the server107). In another example, the agent123can transmit the selected action to a remote service for review and authorization by an administrator, the administrator's authorization causing the action to be executed based on a privilege level of the agent123. In the event that an administrator is unavailable, the remote service can notify or email a user account to try the action again once approved by an administrator. When a subsequent request by the agent is submitted to perform the action a subsequent time, the remote service can automatically grant the request based on a previous administrative authorization. In some embodiments, the server107can perform the functionality of the remote service. In one example, the selected action is a request to install the selected file122. The agent123can determine that an instance of the selected file122is already installed on the computing device103and automatically prompt the user to confirm that they wish to overwrite or repair the installed instance. In another example, the selected action is a request to delete the selected file122. The agent123can determine one or more other files122that require the selected file122and automatically prompt the user to confirm that they still wish to delete the selected file122. 
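The remote review service described above, including its auto-grant of a repeated request after a previous administrative authorization, can be sketched as a small state machine. The class name, method name, and return strings are assumptions for illustration only.

```python
# Sketch of the remote review service: a first request may wait for an
# administrator; a subsequent request for the same user/action pair is
# granted automatically based on the earlier authorization.

class ApprovalService:
    def __init__(self):
        self._authorized = set()   # (user, action) pairs an admin approved

    def request(self, user, action, admin_approves=False):
        key = (user, action)
        if key in self._authorized:
            return "granted"       # auto-grant on prior authorization
        if admin_approves:
            self._authorized.add(key)
            return "granted"
        return "pending"           # admin unavailable: notify user to retry
```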
In another example, the agent 123 may not prompt the user, but may automatically determine, based on the policies 110, whether the computing device 103 is within a predetermined range of a near field communication device, such as a radio frequency identification (RFID) tag or antenna. If the computing device 103 is within the predetermined range, the agent 123 may authorize the action, but otherwise deny the action. In another example, the agent 123 can determine that the policies 110 specify that execution of the action can only occur during a predetermined time window (e.g., as configured by the administrator to prevent unauthorized activities outside of certain windows), and the agent 123 can enforce the time window based on a current time of the computing device 103. In some embodiments, the agent 123 may communicate with one or more identity providers or mobile device management providers to verify the request is allowed to be executed. At step 324, the agent 123 receives a response from the user and approves the execution of the selected action based on determining that the response satisfies predetermined criteria as established by the policies 110. The agent 123 can apply the policies 110 to the received response to determine compliance therewith, and can also communicate with the computing environment 101, for example, to determine that a particular user action or input occurred and/or is valid. At step 326, the agent 123 causes the computing device 103 to execute the selected action at the privilege level of the agent 123 or a daemon without changing the privilege level of the user or user account 114. 
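The time-window enforcement described above reduces to checking the device's current time against the administrator-configured window. A minimal sketch, assuming the window is stored as start/end times of day and may wrap past midnight:

```python
from datetime import time

def within_window(now: time, start: time, end: time) -> bool:
    """True when `now` falls inside the administrator-set window."""
    if start <= end:
        return start <= now <= end
    # Window wraps past midnight, e.g. 22:00-06:00.
    return now >= start or now <= end
```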
The selected action can include one or more actions including, but not limited to: 1) installing the selected file 122; 2) uninstalling the selected file 122; 3) suspending one or more processes associated with the selected file 122; 4) commencing one or more processes associated with the selected file 122; 5) copying the selected file 122 to a particular folder 201 or other location on the computing device 103; 6) transmitting the selected file 122 to a particular destination; 7) providing another user account 114 access to the selected file 122; 8) downloading other related files; and 9) other actions. With reference to FIG. 4, shown is an exemplary user interface 400 including icons 503A, 503B, 503C, 503D, and 503E that represent a folder 201 and one or more files 122. The user interface 400 can be rendered on a display 125 in response to the user selecting a folder 201 in a higher-level instance of the user interface 400. The icons 503A-E can each include a badge 401A, 401B, or 401C (or other badge 401). Each badge 401 can include a particular pattern, color, indicia, or a combination thereof. The badge 401 rendered with the icon 503 can be determined based on one or more executable and/or available actions associated with the particular folder 201 or file 122 that the icon 503 represents. The executable and/or available actions are determined via the agent 123 as discussed herein. Combinations of executable and/or available actions can be represented by badges 401 that are different from the badges 401 that represent each action individually. The rendered badges 401 allow the user to quickly and readily identify the actions that can be performed with each folder 201 or file 122 via the agent 123. In one example, badge 401A corresponds to an uninstall action, thereby indicating that the user may select an uninstall action causing uninstallation of the file 122A via the agent 123 and a privilege level associated therewith. 
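One way to realize the badge selection described above is a lookup keyed on the set of available actions, so that a combination of actions maps to its own badge distinct from the badges for the individual actions. The badge identifiers below are invented for illustration; only the 401A/401B/401C correspondences loosely follow the examples in the text.

```python
# Hypothetical mapping from the set of available actions to a badge.
BADGES = {
    frozenset({"uninstall"}): "401A",
    frozenset({"install"}): "401B-green",
    frozenset({"copy"}): "401C-striped",
    frozenset(): "401C-red",                  # no actions at agent privilege
    frozenset({"install", "copy"}): "401D-combo",
}

def badge_for(actions):
    """Pick the badge rendered with an icon, given its available actions."""
    return BADGES.get(frozenset(actions), "default")
```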
In another example, the badge401B includes a green color, thereby indicating that the user can select an install action causing installation of the file122B via the agent123. In another example, badge401C includes a right-striped pattern, thereby indicating that the user can select a copy action causing the file122C to be copied to one or more locations on the computing device103. In an alternate example, the badge401C includes a red color, thereby indicating that the user cannot select any actions to be executed on the file122C at the privilege level of the agent123(e.g., without prejudice to other actions associated with a privilege level equal to or less than the user's privilege level). In some embodiments, the user interface400includes a legend, table, or other visual interpretation that indicates the relationship between each badge color, symbol, pattern, etc. and executable actions. The legend can be initially hidden from the user and rendered upon the user selecting a “display legend” button or providing a functionally equivalent input to the computing device103. The relationships between the badges401and actions can be stored on the computing device103or in the computing environment101. In some embodiments, the legend or a specific action can be rendered in a popup window when a cursor hovers over a badge401. With reference toFIG.6A, shown is an exemplary user interface500A including a context menu501A. In at least one embodiment, the context menu501A is rendered via the agent123and/or an extension in response to receiving a selection from a user for a particular folder201or file122. The context menu501A can be rendered on a user interface400, or another user interface. The context menu501A can include selectable and executable actions503A,503B,503C, and503D that are rendered in response to determinations made via the agent123based on policies110associated with the particular folder201or file122. 
The selectable actions 503A, 503B, 503C, and 503D can include text, icons, or combinations thereof. The text and/or icons can describe the particular action. The context menu 501A can include other actions 505 that are not associated with a privilege level. In one example, the selectable actions 503 include “move to trash,” “scan with a first software application,” “install,” and “move to location,” and the other action 505A includes “show contents,” among other actions associated with a privilege level of the user or user account 114. Upon the user selecting one of the actions 503, the agent 123 can cause or not cause the computing device 103 to execute the selected action (e.g., based on policies 110 and/or additional user prompts). With reference to FIG. 6B, shown is a context menu 501B including selectable and executable actions 503C and 503D. The context menu 501B can further include other actions 505, such as the action 505B shown. The example mechanism has many benefits and advantages, as will now be appreciated from the discussion herein. In particular, installation of an application for each computer device in the network is managed more efficiently and with enhanced functionality. Application control typically determines whether or not to allow execution of an installed application, whereas the present mechanism takes control further upstream, including the initial action of mounting the disk image. Thus, the mechanism better avoids downstream problems, such as mounting unauthorized disk images. Resultant issues are also addressed, such as unnecessary consumption of storage space on the computer device by the mounting of disk images containing unauthorized applications. At least some of the example embodiments described herein may be constructed, partially or wholly, using dedicated special-purpose hardware. 
Terms such as ‘component’, ‘module’ or ‘unit’ used herein may include, but are not limited to, a hardware device, such as circuitry in the form of discrete or integrated components, a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks or provides the associated functionality. In some embodiments, the described elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processor circuits. These functional elements may in some embodiments include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Although the example embodiments have been described with reference to the components, modules and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements. Various combinations of optional features have been described herein, and it will be appreciated that described features may be combined in any suitable combination. In particular, the features of any one example embodiment may be combined with features of any other embodiment, as appropriate, except where such combinations are mutually exclusive. Throughout this specification, the term “comprising” or “comprises” may mean including the component(s) specified but is not intended to exclude the presence of other components. Although a few example embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims. 
From the foregoing, it will be understood that various aspects of the processes described herein are software processes that execute on computer systems that form parts of the system. Accordingly, it will be understood that various embodiments of the system described herein are generally implemented as specially-configured computers including various computer hardware components and, in many cases, significant additional features as compared to conventional or known computers, processes, or the like, as discussed in greater detail herein. Embodiments within the scope of the present disclosure also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media which can be accessed by a computer, or downloadable through communication networks. By way of example, and not limitation, such computer-readable media can comprise various forms of data storage devices or media such as RAM, ROM, flash memory, EEPROM, CD-ROM, DVD, or other optical disk storage, magnetic disk storage, solid state drives (SSDs) or other data storage devices, any type of removable non-volatile memories such as secure digital (SD), flash memory, memory stick, etc., or any other medium which can be used to carry or store computer program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose computer, special purpose computer, specially-configured computer, mobile device, etc. The example embodiments are discussed in detail in relation to computer devices using UNIX or Unix-like operating systems, including particularly the ‘macOS’ family of operating systems (known previously as “OS X” and before that “Mac OS X”) provided by Apple, Inc. of Cupertino, California, USA. 
As will be familiar to those skilled in the art, Unix-like operating systems include those meeting the Single UNIX Specification (‘SUS’), along with similar systems such as implementations of Linux, BSD and several others. Hence, the teachings, principles and techniques as discussed below are also applicable in other specific example embodiments. In particular, the described examples are useful in many computer devices having a security model that employs discretionary access control. According to various aspects of the present disclosure, the example embodiments discussed herein may also be implemented on computer devices using the “Windows” operating systems, or other appropriate operating systems. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such a connection is properly termed and considered a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device such as a mobile device processor to perform one specific function or a group of functions. Those skilled in the art will understand the features and aspects of a suitable computing environment in which aspects of the disclosure may be implemented. Although not required, some of the embodiments of the claimed inventions may be described in the context of computer-executable instructions, such as program modules or engines, as described earlier, being executed by computers in networked environments. 
Such program modules are often reflected and illustrated by flow charts, sequence diagrams, exemplary screen displays, and other techniques used by those skilled in the art to communicate how to make and use such computer program modules. Generally, program modules include routines, programs, functions, objects, components, data structures, application programming interface (API) calls to other computers whether local or remote, etc. that perform particular tasks or implement particular defined data types, within the computer. Computer-executable instructions, associated data structures and/or schemas, and program modules represent examples of the program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps. Those skilled in the art will also appreciate that the claimed and/or described systems and methods may be practiced in network computing environments with many types of computer system configurations, including personal computers, smartphones, tablets, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like. Embodiments of the claimed invention are practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. 
An exemplary system for implementing various aspects of the described operations, which is not illustrated, includes a computing device including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The computer will typically include one or more data storage devices for reading data from and writing data to. The data storage devices provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer. Computer program code that implements the functionality described herein typically comprises one or more program modules that may be stored on a data storage device. This program code, as is known to those skilled in the art, usually includes an operating system, one or more application programs, other program modules, and program data. A user may enter commands and information into the computer through keyboard, touch screen, pointing device, a script containing computer program code written in a scripting language or other input devices (not shown), such as a microphone, etc. These and other input devices are often connected to the processing unit through known electrical, optical, or wireless connections. The computer that affects many aspects of the described processes will typically operate in a networked environment using logical connections to one or more remote computers or data sources, which are described further below. Remote computers may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically include many or all of the elements described above relative to the main computer system in which the inventions are embodied. The logical connections between computers include a local area network (LAN), a wide area network (WAN), virtual networks (WAN or LAN), and wireless LANs (WLAN) that are presented here by way of example and not limitation. 
Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets, and the Internet. When used in a LAN or WLAN networking environment, a computer system implementing aspects of the invention is connected to the local network through a network interface or adapter. When used in a WAN or WLAN networking environment, the computer may include a modem, a wireless link, or other mechanisms for establishing communications over the wide area network, such as the Internet. In a networked environment, program modules depicted relative to the computer, or portions thereof, may be stored in a remote data storage device. It will be appreciated that the network connections described or shown are exemplary and other mechanisms of establishing communications over wide area networks or the Internet may be used. While various aspects have been described in the context of a preferred embodiment, additional aspects, features, and methodologies of the claimed inventions will be readily discernible from the description herein, by those of ordinary skill in the art. Many embodiments and adaptations of the disclosure and claimed inventions other than those herein described, as well as many variations, modifications, and equivalent arrangements and methodologies, will be apparent from or reasonably suggested by the disclosure and the foregoing description thereof, without departing from the substance or scope of the claims. Furthermore, any sequence(s) and/or temporal order of steps of various processes described and claimed herein are those considered to be the best mode contemplated for carrying out the claimed inventions. 
It should also be understood that, although steps of various processes may be shown and described as being in a preferred sequence or temporal order, the steps of any such processes are not limited to being carried out in any particular sequence or order, absent a specific indication of such to achieve a particular intended result. In most cases, the steps of such processes may be carried out in a variety of different sequences and orders, while still falling within the scope of the claimed inventions. In addition, some steps may be carried out simultaneously, contemporaneously, or in synchronization with other steps. The embodiments were chosen and described in order to explain the principles of the claimed inventions and their practical application so as to enable others skilled in the art to utilize the inventions and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the claimed inventions pertain without departing from their spirit and scope. Accordingly, the scope of the claimed inventions is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein.
11943372 | DETAILED DESCRIPTION Hereinafter, an embodiment of the present invention will be described with reference to the accompanying drawings. Note that the same elements are denoted by the same reference signs in the descriptions of the drawings, and duplicate descriptions will be omitted. In addition, dimensional ratios of the drawings are exaggerated for purposes of illustration, and may be different from actual ratios. <Configuration of Use Right Information Processing System> FIG. 1 is a diagram illustrating a schematic configuration of a use right information processing system to which a use right information processing apparatus according to an embodiment of the present invention is applied. As illustrated in FIG. 1, the use right information processing system includes a user terminal 100, an information processing apparatus 200 that functions as a use right information processing apparatus, a control target device 300, and a server 400 that serves as a node to be connected to a peer-to-peer network. The use right information processing system is a system for securely managing a use right of the control target device 300 without using an authentication server and appropriately allowing a valid user to use the control target device 300. The user terminal 100 is mutually communicably connected to the information processing apparatus 200, the control target device 300, the server 400, and the like via a network such as the Internet, various types of wireless communication, and the like. The information processing apparatus 200 is mutually communicably connected to the user terminal 100, the control target device 300, the server 400, and the like via a network such as the Internet, various types of wireless communication, and the like. The control target device 300 is mutually communicably connected to the user terminal 100 and the information processing apparatus 200 via various types of wireless communication or the like. 
The server400is mutually communicably connected to the user terminal100and the information processing apparatus200via a network such as the Internet, various types of wireless communication, and the like. Furthermore, the server400is mutually communicably connected to other servers400a,400b, and the like via a peer-to-peer network. Note that connection between the respective configurations is not limited to the examples described above, and the respective configurations may be connected using any method. Hereinafter, each of the configurations will be described in detail. <User Terminal100> The user terminal100is a mobile terminal such as a smartphone or the like, or an information terminal such as a laptop PC, a desktop PC, or the like, which is to be used by the user. FIG.2is a block diagram illustrating a schematic configuration of the user terminal. As illustrated inFIG.2, the user terminal100includes a CPU (Central Processing Unit)110, a ROM (Read Only Memory)120, a RAM (Random Access Memory)130, a storage140, a communication interface150, and an operation display unit160. The respective configurations are communicably connected to each other via a bus170. The CPU110performs control of the respective configurations described above and various kinds of arithmetic processing in accordance with programs stored in the ROM120and the storage140. The ROM120stores various programs and various data. The RAM130temporarily stores programs and data as a work area. The storage140stores various programs including an operating system, and various data. For example, an application for transmitting/receiving various types of information to/from another device such as the information processing apparatus200or the like via a network and displaying various types of information provided from another device is installed in the storage140. 
The communication interface150is an interface for communicating with an external device such as the information processing apparatus200or the like. For example, a standard such as 3G, 4G, or the like for mobile telephony, a standard such as Wi-Fi (registered trademark), or the like is used as the communication interface150. A standard for short-range wireless communication such as Bluetooth (registered trademark) or the like may be used as the communication interface150to perform communication with a nearby short-range wireless communication device. The operation display unit160is, for example, a touch panel display, which displays various types of information and receives various inputs from the user. <Information Processing Apparatus200> The information processing apparatus200is, for example, a computer such as a server or the like, which functions as a use right information processing apparatus in the present embodiment. FIG.3is a block diagram illustrating a schematic configuration of the information processing apparatus. As illustrated inFIG.3, the information processing apparatus200includes a CPU210, a ROM220, a RAM230, a storage240, a communication interface250, and an operation display unit260. The respective configurations are communicably connected to each other via a bus270. Note that the CPU210, the ROM220, the RAM230, the storage240, the communication interface250, and the operation display unit260have functions similar to those of the corresponding configurations of the user terminal100, and thus duplicate descriptions thereof will be omitted. Programs and data for performing various kinds of processing are installed in the storage240. In addition, the storage240stores various kinds of identification information and the like for performing communication with the user terminal100, the control target device300, and the server400. 
Furthermore, the storage240stores a program for generating authentication data, a program for verifying signature data, a program for generating an access token, and the like. The CPU210functions as a receiving unit, a deriving unit, an acquisition unit, an authentication data generation unit, a determination unit, a token generation unit, and a validity confirmation unit in the present embodiment. Each of the functions to be implemented by the CPU210will be detailed later. Furthermore, the storage240functions as a storage unit in the present embodiment. <Control Target Device300> The control target device300is a device to be controlled by the user by obtaining a use right, and is, for example, an electronically controllable car key in car sharing, an electronically controllable key of a building in an office sharing service, or the like. The control target device300is not limited to the key as mentioned above, and may be any device as long as it is an electronically controllable device to be used by the user. FIG.4is a block diagram illustrating a schematic configuration of the control target device. As illustrated inFIG.4, the control target device300includes a CPU310, a ROM320, a RAM330, a storage340, a communication interface350, and an operation display unit360. The respective configurations are communicably connected to each other via a bus370. Note that the CPU310, the ROM320, the RAM330, the storage340, the communication interface350, and the operation display unit360have functions similar to those of the corresponding configurations of the user terminal100, and thus duplicate descriptions thereof will be omitted. The storage340stores various programs and data for executing a control instruction and operating the control target device300. <Server400> The server400is a node connected to a peer-to-peer network, and mutually communicates with the servers400a,400b, and the like, which are other nodes, via the peer-to-peer network. 
Each node such as the server400or the like stores a common blockchain, and executes a code recorded on the blockchain to execute a smart contract. The server400may be a full node that retains all pieces of information in the blockchain, or may be a light node that retains information such as a hash value for proving each data and the like. FIG.5is a block diagram illustrating a schematic configuration of the server. As illustrated inFIG.5, the server400includes a CPU410, a ROM420, a RAM430, a storage440, a communication interface450, and an operation display unit460. The respective configurations are communicably connected to each other via a bus470. Note that the CPU410, the ROM420, the RAM430, the communication interface450, and the operation display unit460have functions similar to those of the corresponding configurations of the user terminal100, and thus duplicate descriptions thereof will be omitted. The storage440stores a blockchain500in which a code, a setting value, and the like for executing a smart contract are recorded. The blockchain500may be configured by a publicly known blockchain platform such as Ethereum capable of executing a smart contract or the like, and detailed descriptions thereof will be omitted. Furthermore, a method of electronic signature using a secret key and a public key is also a publicly known technique, and detailed descriptions thereof will be omitted. For example, in a case where Ethereum is used as a blockchain platform for implementing a smart contract, the user has his/her own Ethereum account, and the account is associated with a secret key of the user, a public key of the user, and an address of the user. The secret key and the public key of the user are defined in association with each other in an encryption scheme used in an electronic signature algorithm adopted by the blockchain platform. 
The address of the user is information for identifying the user, and for example, information that can be uniquely derived from the public key of the user is used. The blockchain platform for implementing the smart contract is not limited to Ethereum, and various platforms such as EOS, NEO, Zilliqa, an independently constructed platform, and the like are used. <Function of Information Processing Apparatus 200> FIG. 6 is a block diagram illustrating a functional configuration of the CPU of the information processing apparatus. As illustrated in FIG. 6, the CPU 210 reads the programs stored in the storage 240 and executes processing, whereby the information processing apparatus 200 functions as a receiving unit 211, a deriving unit 212, an acquisition unit 213, an authentication data generation unit 214, a determination unit 215, a token generation unit 216, and a validity confirmation unit 217. The receiving unit 211 receives, from the user terminal 100, signature data generated in the user terminal 100 by signing the authentication data with a predetermined signature algorithm using the secret key corresponding to the user. The receiving unit 211 may further receive a control instruction to the control target device 300 from the user terminal 100. The deriving unit 212 derives the public key corresponding to the secret key from the authentication data and the signature data received by the receiving unit 211 using a predetermined signature algorithm. The acquisition unit 213 obtains information regarding the use right of the control target device 300 of the user recorded in advance in association with the public key or the identification information in the smart contract using the public key derived by the deriving unit 212 or the identification information corresponding to the public key. 
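The "information that can be uniquely derived from the public key" can be illustrated with the Ethereum convention, where the address is the last 20 bytes of the Keccak-256 hash of the public key. The Python standard library does not ship Keccak, so SHA3-256 stands in below purely for illustration; the deriving unit 212's public-key recovery from the signature itself (ECDSA recovery) would additionally require a cryptographic library and is not shown.

```python
import hashlib

def address_from_public_key(pubkey: bytes) -> str:
    """Illustrative address derivation: hash the public key and keep the
    last 20 bytes. (Ethereum proper uses Keccak-256, not SHA3-256.)"""
    digest = hashlib.sha3_256(pubkey).digest()
    return "0x" + digest[-20:].hex()
```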
The authentication data generation unit 214 generates authentication data on the basis of a request from the user terminal 100 used by the user, and transmits the generated authentication data to the user terminal 100, thereby causing the authentication data to be shared between the information processing apparatus 200 and the user terminal 100. The determination unit 215 determines availability of the control target device 300 for the user on the basis of the information regarding the use right obtained by the acquisition unit 213. The token generation unit 216 generates an access token and transmits it to the user terminal 100 in a case where it is determined, on the basis of the information regarding the use right obtained by the acquisition unit 213, that the control target device 300 can be used by the user. The validity confirmation unit 217 receives the access token from the control target device 300 that has received the access token together with the control instruction from the user terminal 100, determines whether or not the access token received from the control target device 300 is the same as the access token generated by the token generation unit 216, and, in a case of being the same, transmits information indicating that the access token is valid to the control target device 300.

<Outline of Process in Use Right Information Processing System>

Hereinafter, an exemplary process to be executed in the use right information processing system will be described. First, a process of registering (purchasing) a use right will be described. FIG. 7 is a sequence chart illustrating a procedure of the use right registration (purchase) process to be executed in the use right information processing system. FIG. 8 is a diagram illustrating an example of use right information for each address recorded on the blockchain. The processing of each device illustrated in the sequence chart of FIG. 7 is stored in the storage of each device as a program, and is executed by the CPU of each device controlling each unit.
As illustrated in FIG. 7, the user terminal 100 receives an instruction to use the control target device 300 from the user, and generates transaction data (TX data) including a smart contract for purchasing a use right of the control target device 300 (step S101). For example, in a case of using the control target device 300 via a business operator providing a service that enables a purchase of the use right for a predetermined period of time by transmitting a predetermined amount of cryptocurrency to a predetermined destination address, the TX data includes the user address, the destination address (address of the smart contract), the amount of cryptocurrency, information indicating the intention to purchase the use right, information indicating the use start time, and the like. For example, a method (function) indicating the purchase defined in the smart contract may be used as the information indicating the intention to purchase the use right, and the information indicating the user address or the use start time may be set as an argument of the method. Subsequently, the user terminal 100 generates an electronic signature for the generated TX data using the secret key of the user stored in advance in the storage 140 (step S102). Subsequently, the user terminal 100 transmits the TX data together with the generated electronic signature to the server 400 (step S103). The server 400 broadcasts the received TX data to the peer-to-peer network (step S104). When mining of the blockchain 500 is executed by a node such as the server connected to the peer-to-peer network to generate a block, the smart contract included in the TX data is executed (step S105). As a result, use right information for each user address as illustrated in FIG. 8 is recorded on the blockchain 500. In the example of FIG. 8, an address of a user, the use start time of the user, and the use end time are recorded in association with each other.
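The TX data described for step S101 can be sketched as a simple structure. The field and method names below (`from`, `to`, `value`, `purchaseUseRight`, etc.) are illustrative assumptions modeled loosely on an Ethereum-style transaction, not the exact format of any particular platform.

```python
# Hypothetical TX data for purchasing a use right; field names are
# assumptions, loosely modeled on an Ethereum-style transaction.
def build_purchase_tx(user_address: str, contract_address: str,
                      amount: int, use_start_time: int) -> dict:
    return {
        "from": user_address,       # address of the purchasing user
        "to": contract_address,     # destination: the smart contract
        "value": amount,            # amount of cryptocurrency sent
        # The method name expresses the intention to purchase; the user
        # address and use start time are passed as its arguments.
        "method": "purchaseUseRight",
        "args": {"user": user_address, "start_time": use_start_time},
    }
```

In an actual deployment this structure would then be signed with the user's secret key (step S102) before being sent to the server 400.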
In that case, when the user address and the current time are known, it is possible to easily determine whether or not the user has the use right at that point in time. The form of the use right information to be recorded on the blockchain 500 is not limited to the example of FIG. 8, and may be implemented in any form. For example, information indicating a status such as availability may be recorded in association with the address as the use right information. Furthermore, as the use right information, information regarding any use condition may be recorded, such as information regarding the number of available hours, information regarding the number of uses, available days of the week and periods of time, available locations, areas, and ranges, available types of the control target device 300, combinations thereof, and the like. Furthermore, not only the available condition but also information regarding an unavailable condition (exclusion condition) may be recorded as the use right information. In other words, information for expressing various conditions for defining the use right may be appropriately set as the use right information. The server 400 notifies the user terminal 100 of the result of the execution of the smart contract based on the TX data (step S106), and the user terminal 100 displays the notified result on the operation display unit 160 (step S107). Note that, while the descriptions have been given assuming that the user terminal 100 generates the TX data and transmits it to the server 400 in the example above, those processes may be executed by a server or the like operated by a service provider, that is, what is called an exchange. Next, a process of authenticating the user, confirming the use right, and authorizing use by the user will be described. FIG. 9 is a sequence chart illustrating a procedure of the authentication and authorization process to be executed in the use right information processing system.
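The check described above (given a user address and the current time, decide whether the use right is held) can be sketched as follows. The in-memory dictionary stands in for the use right information recorded on the blockchain 500, and the record layout follows FIG. 8 (address, use start time, use end time).

```python
# Stand-in for the per-address use right information of FIG. 8.
# In the actual system this table lives in the smart contract's
# storage on the blockchain 500; the sample record is hypothetical.
use_rights = {
    "0xuser1": {"start": 1_000, "end": 2_000},
}

def has_use_right(address: str, current_time: int) -> bool:
    """Return True if the address holds a use right at current_time."""
    record = use_rights.get(address)
    if record is None:
        return False  # no use right recorded for this address
    return record["start"] <= current_time < record["end"]
```

Richer conditions (number of uses, days of the week, locations, exclusion conditions) would extend the record and the predicate in the same way.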
The processing of each device illustrated in the sequence chart of FIG. 9 is stored in the storage of each device as a program, and is executed by the CPU of each device controlling each unit. As illustrated in FIG. 9, the user terminal 100 generates a device control request related to control of the control target device 300 on the basis of an instruction from the user or the like (step S201), and transmits it to the information processing apparatus 200 (step S202). When the information processing apparatus 200 receives the device control request from the user terminal 100, it generates authentication data (step S203) and transmits it to the user terminal 100 (step S204). The authentication data is information for confirming the validity of the user terminal 100 and the user; it is information to be shared between the user terminal 100 and the information processing apparatus 200 and has a different value for each process. In the present embodiment, the authentication data is randomly generated by the information processing apparatus 200 so as to have a different value each time. The information processing apparatus 200 stores, in the storage 240, the authentication data transmitted to the user terminal 100 in association with information regarding the user terminal 100. The user terminal 100 gives an electronic signature to the authentication data received from the information processing apparatus 200 using the secret key corresponding to the user with a predetermined signature algorithm to generate signature data (step S205), and transmits it to the information processing apparatus 200 (step S206). In the present embodiment, an elliptic curve digital signature algorithm capable of deriving the public key corresponding to the secret key from the authentication data and the signature data obtained by signing the authentication data with the secret key is used as the predetermined signature algorithm.
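Random per-request authentication data of this kind is typically produced with a cryptographically secure generator and remembered per terminal, along the lines of the sketch below (the dictionary keyed by a terminal identifier is an assumption for illustration, standing in for the storage 240).

```python
import secrets

# Authentication data issued per request, keyed by a terminal
# identifier (hypothetical); stands in for the storage 240.
issued_auth_data: dict[str, bytes] = {}

def generate_authentication_data(terminal_id: str) -> bytes:
    """Generate fresh random authentication data for one request."""
    nonce = secrets.token_bytes(32)        # different value every time
    issued_auth_data[terminal_id] = nonce  # remember for verification
    return nonce
```

Because a fresh value is issued for every request, a captured signature cannot be replayed against a later request, which is the property the embodiment relies on.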
The information processing apparatus 200 verifies the signature data received from the user terminal 100 using the signature algorithm described above (step S207), and derives the public key corresponding to the secret key of the user who has signed the signature data (step S208). Specifically, the information processing apparatus 200 verifies the signature data with the elliptic curve digital signature algorithm using the signature data received from the user terminal 100 and the authentication data transmitted to the user terminal 100 and stored in the storage 240, thereby deriving the public key of the user. The information processing apparatus 200 derives the address, which is the identification information of the user, from the derived public key (step S209). Note that the processing of step S209 may be omitted if the value of the public key is directly used as the address. The information processing apparatus 200 transmits the derived address of the user to the server 400, and requests information regarding the use right of the user associated with the control target device 300 (step S210). The server 400 checks, using the received address of the user, the information regarding the use right of the user associated with the control target device 300 stored in advance in association with the address of the user in the smart contract on the blockchain (step S211). The server 400 transmits the information regarding the use right checked in the processing of step S211 to the information processing apparatus 200 (step S212). The information processing apparatus 200 checks the contents of the information regarding the use right transmitted from the server 400, and, if the user is in a state of holding the use right of the control target device 300, allows the user to control the control target device 300, generates an access token (step S213), and transmits it to the user terminal 100 (step S214).
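The public key recovery in steps S207 and S208 relies on a property of ECDSA: from the signed message, the signature (r, s), and one extra recovery bit, the signer's public key can be recomputed as Q = r⁻¹(sR − zG). The self-contained sketch below demonstrates this on the secp256k1 curve (the curve Ethereum uses); it is a textbook illustration with toy key and nonce handling, not production signing code.

```python
import hashlib

# secp256k1 domain parameters (the curve used by Ethereum)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    """Add two curve points (None is the point at infinity)."""
    if p is None:
        return q
    if q is None:
        return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def ec_mul(k, point):
    """Scalar multiplication by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

def hash_to_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(data: bytes, d: int, k: int):
    """Toy ECDSA signing with caller-supplied nonce k (illustration only)."""
    z = hash_to_int(data)
    R = ec_mul(k, G)
    r = R[0] % N
    s = pow(k, -1, N) * (z + r * d) % N
    return r, s, R[1] & 1  # recovery bit: parity of R.y

def recover_public_key(data: bytes, r: int, s: int, ybit: int):
    """Recover the signer's public key: Q = r^-1 (s*R - z*G)."""
    z = hash_to_int(data)
    # Rebuild R from its x coordinate and the parity of its y coordinate
    y = pow((pow(r, 3, P) + 7) % P, (P + 1) // 4, P)  # sqrt mod P (P = 3 mod 4)
    if y & 1 != ybit:
        y = P - y
    R = (r, y)
    sR = ec_mul(s, R)
    zG = ec_mul(z, G)
    minus_zG = (zG[0], P - zG[1])
    return ec_mul(pow(r, -1, N), ec_add(sR, minus_zG))
```

A user whose public key is Q = dG signs the authentication data; the verifier recovers Q without ever having been given it, which is exactly what lets the information processing apparatus 200 skip pre-sharing the public key.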
The access token is information uniquely generated for each process, and is a random value having a sufficient size or length of, for example, 16 bytes, 32 bytes, or the like. The access token is used as an authentication code when the user controls the control target device 300. The access token includes information regarding a use period of the access token. The information processing apparatus 200 stores, in the storage 240, the access token transmitted to the user terminal 100 in association with information regarding the user terminal 100 or the user who uses the user terminal 100. Note that, if the user is in a state of not holding the use right of the control target device 300, the information processing apparatus 200 notifies the user terminal 100 of that fact without allowing the user to control the control target device 300. The user terminal 100 adds the access token transmitted from the information processing apparatus 200 to a control instruction for controlling the control target device 300 (step S215), and transmits the control instruction to the control target device 300 (step S216). The control target device 300 obtains the control instruction and the access token transmitted from the user terminal 100 (step S217), and transmits the access token to the information processing apparatus 200 together with the information regarding the user terminal 100 or the user (step S218). The information processing apparatus 200 confirms the validity of the access token transmitted from the control target device 300 (step S219). Specifically, the information processing apparatus 200 searches the storage 240 using the information regarding the user terminal 100 or the user transmitted from the control target device 300, and extracts the access token stored in association with the information regarding the user terminal 100 or the user.
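Token issuance and the comparison performed in step S219 can be sketched as below: the token is a fresh 16-byte random value carrying a use period, and the comparison uses a constant-time check. The field names and the per-user table are assumptions for illustration (the table stands in for the storage 240).

```python
import secrets
import hmac

# Issued tokens keyed by a user or terminal identifier (hypothetical);
# each entry carries the random token value and its expiry time.
issued_tokens: dict[str, dict] = {}

def issue_access_token(user_id: str, now: int, lifetime: int = 300) -> dict:
    """Generate a per-process random token with a use period."""
    token = {"value": secrets.token_hex(16),  # 16 random bytes, hex
             "expires_at": now + lifetime}
    issued_tokens[user_id] = token
    return token

def is_token_valid(user_id: str, presented: str, now: int) -> bool:
    """Compare against the stored token and check the use period."""
    stored = issued_tokens.get(user_id)
    if stored is None or now >= stored["expires_at"]:
        return False  # unknown user, or the use period has elapsed
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(stored["value"], presented)
```

Validation fails both when the presented value differs from the stored one and when the use period included with the token has elapsed, matching the two invalidity cases described below.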
Then, the information processing apparatus 200 compares the access token extracted from the storage 240 with the access token transmitted from the control target device 300, and determines that the access token is valid if the two are the same. In that case, the information processing apparatus 200 transmits information indicating that the access token is valid (validity information) to the control target device 300 (step S220). On the other hand, if the two are not the same, it is determined that the access token is not valid. In addition, also in a case where the use period included in the access token has elapsed, it is determined that the access token is not valid. When the access token is determined not to be valid, the information processing apparatus 200 notifies the control target device 300 of that fact, and the control target device 300 notifies the user terminal 100 of that fact. The control target device 300 confirms the validity information transmitted from the information processing apparatus 200, and executes the control instruction received from the user terminal 100 (step S221). Note that the present invention is not limited to the embodiment described above, and various modifications may be made within the scope of the claims. For example, while the example in which the user terminal 100 transmits the control instruction to the control target device 300 in the processing of step S216 in FIG. 9 has been described in the embodiment above, it is not limited thereto. For example, the user terminal 100 may transmit the control instruction to the information processing apparatus 200, and the information processing apparatus 200 may transmit the control instruction to the control target device 300. In that case, the CPU 210 of the information processing apparatus 200 functions as a device control unit. Hereinafter, this example will be specifically described.
FIG. 10 is a sequence chart illustrating another exemplary procedure of the authentication and authorization process to be executed in the use right information processing system. The process in steps S201 to S212 in FIG. 10 is similar to that in FIG. 9, and descriptions thereof will be omitted. As illustrated in FIG. 10, the information processing apparatus 200 checks the contents of the information regarding the use right transmitted from the server 400, and, if the user is in a state of holding the use right of the control target device 300, allows the user to control the control target device 300 (step S313) and notifies the user terminal 100 of that fact (step S314). The user terminal 100 generates a control instruction for controlling the control target device 300 (step S315), and transmits it to the information processing apparatus 200 (step S316). The information processing apparatus 200 confirms the control instruction transmitted from the user terminal 100 (step S317), and transmits it to the control target device 300 (step S318). The control target device 300 confirms and executes the control instruction transmitted from the information processing apparatus 200 (step S319). Note that, although it has been described above that the control instruction is newly generated in the processing of step S315, the control instruction may be included in the device control request generated in the processing of step S201. In that case, when the information processing apparatus 200 allows the device control in the processing of step S313, it may transmit the control instruction included in the device control request that has already been received to the control target device 300. With this arrangement, the process of steps S314 to S317 may be omitted. Furthermore, while the example in which the user terminal 100 signs only the authentication data that is a random value with the secret key of the user has been described in the embodiment above, it is not limited thereto.
The user terminal 100 can add optional information such as a control instruction to the authentication data and sign the combined data with the secret key of the user. Even in this case, the information processing apparatus 200 can derive the public key of the user from the authentication data, the added optional information, and the resulting signature data, and then derive the address. Moreover, while the descriptions have been given assuming that the authentication data is randomly generated by the information processing apparatus 200 so as to have a different value each time, it is not limited thereto. It is sufficient if the authentication data is information shared between the user terminal 100 and the information processing apparatus 200 that has a different value for each process. Therefore, in a case of performing encryption based on the AES scheme or the like between the user terminal 100 and the information processing apparatus 200, a random value (initialization vector) shared between the user terminal 100 and the information processing apparatus 200 is used in the AES-based encryption process, and that value may be used instead of the authentication data. For example, the user terminal 100 generates, as authentication data, information obtained by performing the AES-based encryption on the control instruction, and signs it with the secret key of the user to transmit signature data to the information processing apparatus 200. In that case, the information processing apparatus 200 can derive the public key of the user from the information obtained by performing the AES-based encryption on the control instruction and the signature data.
Furthermore, each configuration of the user terminal 100, the information processing apparatus 200, the control target device 300, and the server 400 included in the use right information processing system may include components other than the components mentioned above, or may not include some of the components mentioned above. Furthermore, each of the user terminal 100, the information processing apparatus 200, the control target device 300, and the server 400 may be configured by a plurality of devices, or may be configured by a single device. In addition, the function of each configuration may be implemented by another configuration. For example, the information processing apparatus 200 and the server 400 may be constructed by one device. Furthermore, the information processing apparatus 200 may be incorporated in the control target device 300, and each process described as being executed by the information processing apparatus 200 may be executed by the control target device 300. Furthermore, the function of the information processing apparatus 200 may be implemented as an application of the user terminal 100, and may be executed by the user terminal 100. Furthermore, the process in the use right information processing system according to the embodiment described above may include steps other than the steps in the flowchart or sequence chart described above, or may not include some of the steps described above. In addition, the order of the steps is not limited to that in the embodiment described above. Moreover, each step may be combined with another step and executed as one step, may be included in another step and executed, or may be divided into a plurality of steps and executed. In addition, some of the steps may be omitted, or other steps may be added.
As described above, according to the information processing apparatus 200 of the present embodiment, the public key corresponding to the secret key is derived from the signature data received from the user terminal 100 and the authentication data transmitted to the user terminal 100. Then, the information processing apparatus 200 obtains the information regarding the use right of the control target device 300 of the user, recorded in advance in the smart contract in association with the public key or the identification information, using the public key or the identification information corresponding to the public key. With this arrangement, it becomes possible to securely manage the use right of the user using the data on the blockchain having high falsification resistance, without using an authentication server that imposes enormous costs and loads on construction and operation, and to allow (authorize) a valid user to use the control target device 300 appropriately. In addition, the information processing apparatus 200 generates authentication data on the basis of a request from the user terminal 100, and transmits the generated authentication data to the user terminal 100, thereby causing the authentication data to be shared with the user terminal 100. With this arrangement, different authentication data is generated and used for the electronic signature each time a request is issued from the user terminal 100, whereby it becomes possible to suppress fraudulent use of the signature data and the like by an invalid user terminal 100, and to manage the use right more securely. In addition, the information processing apparatus 200 determines availability of the control target device 300 for the user on the basis of the obtained information regarding the use right. With this arrangement, it becomes possible to quickly and reliably allow a valid user to use the control target device 300.
Furthermore, the elliptic curve digital signature algorithm capable of deriving a public key from the signature data and the authentication data is used as the predetermined signature algorithm. With this arrangement, the information processing apparatus 200 is enabled to derive the public key from the signature data, which is the result of signing with the secret key, and the authentication data, which is the data to be signed. As a result, it becomes possible to eliminate the need to share the public key for signature verification in advance between the user terminal 100 and the information processing apparatus 200. In order to share the public key in advance, a process of checking whether or not the received public key is valid and saving it would be required. According to the present embodiment, the public key can be derived from the signature data and the authentication data, so that there is no need to share the public key in advance or to check the validity of the public key, whereby it becomes possible to construct and operate the system efficiently. Furthermore, the identification information is address information that can be uniquely derived from the public key. Therefore, there is no need to separately prepare and manage information for identifying the user, and the system can be constructed and operated more efficiently. Furthermore, in the smart contract, information regarding the available time is recorded as information regarding the use right of the device in association with the public key or the identification information. With this arrangement, it becomes possible to provide a service in which the user is entitled to the use right with a specific available time, and to enhance the convenience of the service provider and the user. In addition, in the smart contract, information regarding the number of available times is recorded as the information regarding the use right of the device in association with the public key or the identification information.
With this arrangement, it becomes possible to provide a service in which the user is entitled to the use right with a specific number of available times, and to enhance the convenience of the service provider and the user. Furthermore, the information processing apparatus 200 generates an access token and transmits it to the user terminal 100 in a case where it is determined, on the basis of the obtained information regarding the use right, that the control target device 300 can be used by the user. Then, the access token is received from the control target device 300 that has received the access token together with the control instruction from the user terminal 100, it is determined whether or not the access token received from the control target device 300 is the same as the generated access token, and, in a case of being the same, the information indicating that the access token is valid is transmitted to the control target device 300. With this arrangement, even in a case where the information processing apparatus 200 that has confirmed the use right of the user does not directly control the control target device 300 and another device controls the control target device 300, it is possible to reliably allow a valid user to use the control target device 300. In addition, the access token is data uniquely generated for each process, and includes information regarding a use period. With this arrangement, even in a case where the information processing apparatus 200 that has confirmed the use right of the user does not directly control the control target device 300 and another device controls the control target device 300, it is possible to allow a valid user to use the control target device 300 more reliably.
In addition, the information processing apparatus 200 may further receive the control instruction for the control target device 300 from the user terminal 100, and may transmit the control instruction to the control target device 300 in a case where it is determined, on the basis of the obtained information regarding the use right, that the user is allowed to use the control target device 300. With this arrangement, the information processing apparatus 200 that has confirmed the use right of the user is enabled to directly control the control target device 300, whereby it is possible to quickly, reliably, and efficiently allow a valid user to use the control target device 300. The means and method for performing various processes in the use right information processing system according to the embodiment described above may be achieved by either a dedicated hardware circuit or a programmed computer. The program may be provided by, for example, a computer-readable recording medium such as a flexible disk, a CD-ROM, and the like, or may be provided online via a network such as the Internet and the like. In that case, the program recorded in the computer-readable recording medium is normally transferred to a storage unit, such as a hard disk or the like, and is stored therein. Furthermore, the program may be provided as single application software, or may be incorporated into the software of the device as one function of the use right information processing system.
REFERENCE SIGNS LIST

100 user terminal
200 information processing apparatus
300 control target device
400 server
500 blockchain
110, 210, 310, 410 CPU
120, 220, 320, 420 ROM
130, 230, 330, 430 RAM
140, 240, 340, 440 storage
150, 250, 350, 450 communication interface
160, 260, 360, 460 operation display unit

FIG. 1: 200 INFORMATION PROCESSING APPARATUS; 300 CONTROL TARGET DEVICE; 400 SERVER (NODE); 400a SERVER (NODE); 400b SERVER (NODE); PEER-TO-PEER NW

FIG. 2: 140 STORAGE; 150 COMMUNICATION INTERFACE; 160 OPERATION DISPLAY UNIT

FIG. 3: 240 STORAGE; 250 COMMUNICATION INTERFACE; 260 OPERATION DISPLAY UNIT

FIG. 4: 340 STORAGE; 350 COMMUNICATION INTERFACE; 360 OPERATION UNIT; 370 DISPLAY UNIT

FIG. 5: 440 STORAGE; 450 COMMUNICATION INTERFACE; 460 OPERATION DISPLAY UNIT; 500 BLOCKCHAIN (SMART CONTRACT)

FIG. 6: 211 RECEIVING UNIT; 212 DERIVING UNIT; 213 ACQUISITION UNIT; 214 AUTHENTICATION DATA GENERATION UNIT; 215 DETERMINATION UNIT; 216 TOKEN GENERATION UNIT; 217 VALIDITY CONFIRMATION UNIT

FIG. 7: USER TERMINAL 100; INFORMATION PROCESSING APPARATUS 200; CONTROL TARGET DEVICE 300; SERVER 400; S101 GENERATE TX DATA INCLUDING SMART CONTRACT FOR PURCHASING USE RIGHT; S102 ELECTRONICALLY SIGN TX DATA; S103 TX DATA; S104 BROADCAST TX DATA TO PEER-TO-PEER NW; S105 BLOCK IS GENERATED BY MINING AND SMART CONTRACT INCLUDED IN TX DATA IS EXECUTED; S106 RESULT NOTIFICATION; S107 DISPLAY RESULT

FIG. 8: ADDRESS; USE START TIME; USE END TIME

FIG. 9: USER TERMINAL 100; INFORMATION PROCESSING APPARATUS 200; CONTROL TARGET DEVICE 300; SERVER 400; S201 GENERATE DEVICE CONTROL REQUEST; S202 DEVICE CONTROL REQUEST; S203 GENERATE AUTHENTICATION DATA; S204 AUTHENTICATION DATA; S205 ELECTRONICALLY SIGN AUTHENTICATION DATA; S206 SIGNATURE DATA; S207 VERIFY SIGNATURE DATA; S208 DERIVE PUBLIC KEY; S209 DERIVE ADDRESS; S210 USE RIGHT INFORMATION REQUEST; S211 CHECK USE RIGHT INFORMATION RECORDED ON BLOCKCHAIN; S212 USE RIGHT INFORMATION RESPONSE; S213 PERMIT DEVICE CONTROL AND GENERATE ACCESS TOKEN; S214 ACCESS TOKEN; S215 ADD ACCESS TOKEN TO CONTROL INSTRUCTION; S216 CONTROL INSTRUCTION AND ACCESS TOKEN; S217 OBTAIN CONTROL INSTRUCTION AND ACCESS TOKEN; S218 ACCESS TOKEN; S219 CONFIRM VALIDITY OF ACCESS TOKEN; S220 VALIDITY INFORMATION; S221 CONFIRM VALIDITY INFORMATION AND EXECUTE CONTROL INSTRUCTION

FIG. 10: USER TERMINAL 100; INFORMATION PROCESSING APPARATUS 200; CONTROL TARGET DEVICE 300; SERVER 400; S201 GENERATE DEVICE CONTROL REQUEST; S202 DEVICE CONTROL REQUEST; S203 GENERATE AUTHENTICATION DATA; S204 AUTHENTICATION DATA; S205 ELECTRONICALLY SIGN AUTHENTICATION DATA; S206 SIGNATURE DATA; S207 VERIFY SIGNATURE DATA; S208 DERIVE PUBLIC KEY; S209 DERIVE ADDRESS; S210 USE RIGHT INFORMATION REQUEST; S211 CHECK USE RIGHT INFORMATION RECORDED ON BLOCKCHAIN; S212 USE RIGHT INFORMATION RESPONSE; S313 PERMIT DEVICE CONTROL; S314 PERMISSION NOTIFICATION; S315 GENERATE CONTROL INSTRUCTION; S316 CONTROL INSTRUCTION; S317 CONFIRM CONTROL INSTRUCTION; S318 CONTROL INSTRUCTION; S319 EXECUTE CONTROL INSTRUCTION